Apr 30 03:29:54.005561 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 23:03:20 -00 2025
Apr 30 03:29:54.005601 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:29:54.005619 kernel: BIOS-provided physical RAM map:
Apr 30 03:29:54.005632 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 30 03:29:54.005644 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 30 03:29:54.005656 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 30 03:29:54.005671 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Apr 30 03:29:54.005683 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Apr 30 03:29:54.005698 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 30 03:29:54.005711 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 30 03:29:54.005723 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 30 03:29:54.005735 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 30 03:29:54.005748 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 30 03:29:54.005761 kernel: NX (Execute Disable) protection: active
Apr 30 03:29:54.005779 kernel: APIC: Static calls initialized
Apr 30 03:29:54.005793 kernel: SMBIOS 3.0.0 present.
Apr 30 03:29:54.005806 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Apr 30 03:29:54.005820 kernel: Hypervisor detected: KVM
Apr 30 03:29:54.005833 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 30 03:29:54.005847 kernel: kvm-clock: using sched offset of 3513779974 cycles
Apr 30 03:29:54.005861 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 30 03:29:54.005875 kernel: tsc: Detected 2495.310 MHz processor
Apr 30 03:29:54.005890 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 03:29:54.005907 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 03:29:54.005921 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Apr 30 03:29:54.005935 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 30 03:29:54.006041 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 03:29:54.006055 kernel: Using GB pages for direct mapping
Apr 30 03:29:54.006069 kernel: ACPI: Early table checksum verification disabled
Apr 30 03:29:54.006083 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
Apr 30 03:29:54.006097 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:29:54.006111 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:29:54.006129 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:29:54.006163 kernel: ACPI: FACS 0x000000007CFE0000 000040
Apr 30 03:29:54.006177 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:29:54.006191 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:29:54.006205 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:29:54.006219 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 03:29:54.006232 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Apr 30 03:29:54.006247 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Apr 30 03:29:54.006269 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Apr 30 03:29:54.006283 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Apr 30 03:29:54.006297 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Apr 30 03:29:54.006312 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Apr 30 03:29:54.006327 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Apr 30 03:29:54.006341 kernel: No NUMA configuration found
Apr 30 03:29:54.006355 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Apr 30 03:29:54.006373 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Apr 30 03:29:54.006388 kernel: Zone ranges:
Apr 30 03:29:54.006402 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 03:29:54.006417 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Apr 30 03:29:54.006431 kernel: Normal empty
Apr 30 03:29:54.006446 kernel: Movable zone start for each node
Apr 30 03:29:54.006460 kernel: Early memory node ranges
Apr 30 03:29:54.006474 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 30 03:29:54.006489 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Apr 30 03:29:54.006506 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Apr 30 03:29:54.006520 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 03:29:54.006535 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 30 03:29:54.006550 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 30 03:29:54.006564 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 30 03:29:54.006579 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 30 03:29:54.006593 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 30 03:29:54.006608 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 30 03:29:54.006622 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 30 03:29:54.006640 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 03:29:54.006654 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 30 03:29:54.006669 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 30 03:29:54.006683 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 03:29:54.006698 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 30 03:29:54.006712 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 30 03:29:54.006727 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 30 03:29:54.006742 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 30 03:29:54.006756 kernel: Booting paravirtualized kernel on KVM
Apr 30 03:29:54.006774 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 03:29:54.006789 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 30 03:29:54.006804 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Apr 30 03:29:54.006819 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Apr 30 03:29:54.006833 kernel: pcpu-alloc: [0] 0 1
Apr 30 03:29:54.006847 kernel: kvm-guest: PV spinlocks disabled, no host support
Apr 30 03:29:54.006864 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:29:54.006880 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 03:29:54.006898 kernel: random: crng init done
Apr 30 03:29:54.006912 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 03:29:54.006927 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 30 03:29:54.006978 kernel: Fallback order for Node 0: 0
Apr 30 03:29:54.006993 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Apr 30 03:29:54.007007 kernel: Policy zone: DMA32
Apr 30 03:29:54.007021 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 03:29:54.007037 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 125152K reserved, 0K cma-reserved)
Apr 30 03:29:54.007136 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 03:29:54.007176 kernel: ftrace: allocating 37944 entries in 149 pages
Apr 30 03:29:54.007191 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 03:29:54.007214 kernel: Dynamic Preempt: voluntary
Apr 30 03:29:54.007228 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 03:29:54.007244 kernel: rcu: RCU event tracing is enabled.
Apr 30 03:29:54.007260 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 03:29:54.007274 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 03:29:54.007289 kernel: Rude variant of Tasks RCU enabled.
Apr 30 03:29:54.007304 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 03:29:54.007319 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 03:29:54.007336 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 03:29:54.007351 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 30 03:29:54.007366 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 03:29:54.007394 kernel: Console: colour VGA+ 80x25
Apr 30 03:29:54.007408 kernel: printk: console [tty0] enabled
Apr 30 03:29:54.007423 kernel: printk: console [ttyS0] enabled
Apr 30 03:29:54.007437 kernel: ACPI: Core revision 20230628
Apr 30 03:29:54.007453 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 30 03:29:54.007467 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 03:29:54.007485 kernel: x2apic enabled
Apr 30 03:29:54.007499 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 30 03:29:54.007514 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 30 03:29:54.007528 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 30 03:29:54.007543 kernel: Calibrating delay loop (skipped) preset value.. 4990.62 BogoMIPS (lpj=2495310)
Apr 30 03:29:54.007558 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 30 03:29:54.007573 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 30 03:29:54.007588 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 30 03:29:54.007615 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 03:29:54.007630 kernel: Spectre V2 : Mitigation: Retpolines
Apr 30 03:29:54.007645 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 03:29:54.007663 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 30 03:29:54.007678 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Apr 30 03:29:54.007693 kernel: RETBleed: Mitigation: untrained return thunk
Apr 30 03:29:54.007709 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 30 03:29:54.007724 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 30 03:29:54.007740 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 03:29:54.007758 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 03:29:54.007773 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 03:29:54.007789 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 03:29:54.007804 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Apr 30 03:29:54.007820 kernel: Freeing SMP alternatives memory: 32K
Apr 30 03:29:54.007835 kernel: pid_max: default: 32768 minimum: 301
Apr 30 03:29:54.007850 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 03:29:54.007865 kernel: landlock: Up and running.
Apr 30 03:29:54.007883 kernel: SELinux: Initializing.
Apr 30 03:29:54.007899 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 03:29:54.008046 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 30 03:29:54.008065 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Apr 30 03:29:54.008081 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:29:54.008105 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:29:54.008120 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 03:29:54.008136 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 30 03:29:54.008169 kernel: ... version: 0
Apr 30 03:29:54.008190 kernel: ... bit width: 48
Apr 30 03:29:54.008205 kernel: ... generic registers: 6
Apr 30 03:29:54.008220 kernel: ... value mask: 0000ffffffffffff
Apr 30 03:29:54.008236 kernel: ... max period: 00007fffffffffff
Apr 30 03:29:54.008251 kernel: ... fixed-purpose events: 0
Apr 30 03:29:54.008266 kernel: ... event mask: 000000000000003f
Apr 30 03:29:54.008281 kernel: signal: max sigframe size: 1776
Apr 30 03:29:54.008296 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 03:29:54.008312 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 03:29:54.008330 kernel: smp: Bringing up secondary CPUs ...
Apr 30 03:29:54.008345 kernel: smpboot: x86: Booting SMP configuration:
Apr 30 03:29:54.008360 kernel: .... node #0, CPUs: #1
Apr 30 03:29:54.008375 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 03:29:54.008390 kernel: smpboot: Max logical packages: 1
Apr 30 03:29:54.008405 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS)
Apr 30 03:29:54.008420 kernel: devtmpfs: initialized
Apr 30 03:29:54.008435 kernel: x86/mm: Memory block size: 128MB
Apr 30 03:29:54.008451 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 03:29:54.008469 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 03:29:54.008484 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 03:29:54.008500 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 03:29:54.008515 kernel: audit: initializing netlink subsys (disabled)
Apr 30 03:29:54.008530 kernel: audit: type=2000 audit(1745983793.003:1): state=initialized audit_enabled=0 res=1
Apr 30 03:29:54.008545 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 03:29:54.008560 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 30 03:29:54.008575 kernel: cpuidle: using governor menu
Apr 30 03:29:54.008590 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 03:29:54.008608 kernel: dca service started, version 1.12.1
Apr 30 03:29:54.008624 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 30 03:29:54.008639 kernel: PCI: Using configuration type 1 for base access
Apr 30 03:29:54.008655 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 30 03:29:54.008670 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 03:29:54.008686 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 03:29:54.008701 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 03:29:54.008716 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 03:29:54.008731 kernel: ACPI: Added _OSI(Module Device)
Apr 30 03:29:54.008749 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 03:29:54.008764 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 03:29:54.008779 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 03:29:54.008794 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 03:29:54.008809 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 30 03:29:54.008824 kernel: ACPI: Interpreter enabled
Apr 30 03:29:54.008839 kernel: ACPI: PM: (supports S0 S5)
Apr 30 03:29:54.008854 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 03:29:54.008869 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 03:29:54.008887 kernel: PCI: Using E820 reservations for host bridge windows
Apr 30 03:29:54.008902 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 30 03:29:54.008917 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 03:29:54.009219 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 03:29:54.009387 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 30 03:29:54.009637 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 30 03:29:54.009659 kernel: PCI host bridge to bus 0000:00
Apr 30 03:29:54.009823 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 30 03:29:54.011240 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 30 03:29:54.011406 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 30 03:29:54.011544 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Apr 30 03:29:54.011679 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 30 03:29:54.011813 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 30 03:29:54.011976 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 03:29:54.012180 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 30 03:29:54.013155 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Apr 30 03:29:54.013333 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Apr 30 03:29:54.013489 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Apr 30 03:29:54.013643 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Apr 30 03:29:54.013799 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Apr 30 03:29:54.015103 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 30 03:29:54.015282 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 30 03:29:54.015407 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Apr 30 03:29:54.015540 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 30 03:29:54.015661 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Apr 30 03:29:54.015797 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 30 03:29:54.015922 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Apr 30 03:29:54.017133 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 30 03:29:54.017283 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Apr 30 03:29:54.017414 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 30 03:29:54.017536 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Apr 30 03:29:54.017663 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 30 03:29:54.017784 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Apr 30 03:29:54.017918 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 30 03:29:54.020178 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Apr 30 03:29:54.020320 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 30 03:29:54.020444 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Apr 30 03:29:54.020571 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 30 03:29:54.020692 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Apr 30 03:29:54.020826 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 30 03:29:54.020994 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 30 03:29:54.021125 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 30 03:29:54.021261 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Apr 30 03:29:54.021379 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Apr 30 03:29:54.021514 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 30 03:29:54.021639 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 30 03:29:54.021775 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 30 03:29:54.021903 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Apr 30 03:29:54.023093 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Apr 30 03:29:54.023331 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Apr 30 03:29:54.023460 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 30 03:29:54.023582 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Apr 30 03:29:54.023704 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Apr 30 03:29:54.023847 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 30 03:29:54.025062 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Apr 30 03:29:54.025214 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 30 03:29:54.025337 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Apr 30 03:29:54.025456 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Apr 30 03:29:54.025592 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 30 03:29:54.025725 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Apr 30 03:29:54.025850 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Apr 30 03:29:54.027032 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 30 03:29:54.027189 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Apr 30 03:29:54.027312 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Apr 30 03:29:54.027448 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 30 03:29:54.027576 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Apr 30 03:29:54.027803 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 30 03:29:54.027927 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Apr 30 03:29:54.028071 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Apr 30 03:29:54.028227 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 30 03:29:54.028354 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
Apr 30 03:29:54.028483 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Apr 30 03:29:54.028606 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 30 03:29:54.028733 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Apr 30 03:29:54.028856 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Apr 30 03:29:54.031042 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 30 03:29:54.032046 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Apr 30 03:29:54.032200 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Apr 30 03:29:54.032327 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 30 03:29:54.032532 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Apr 30 03:29:54.032660 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Apr 30 03:29:54.032676 kernel: acpiphp: Slot [0] registered
Apr 30 03:29:54.032811 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 30 03:29:54.034968 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Apr 30 03:29:54.035155 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Apr 30 03:29:54.035286 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Apr 30 03:29:54.035411 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 30 03:29:54.035532 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Apr 30 03:29:54.035659 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Apr 30 03:29:54.035675 kernel: acpiphp: Slot [0-2] registered
Apr 30 03:29:54.035795 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 30 03:29:54.035915 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Apr 30 03:29:54.041123 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Apr 30 03:29:54.041165 kernel: acpiphp: Slot [0-3] registered
Apr 30 03:29:54.041383 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 30 03:29:54.041513 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Apr 30 03:29:54.041642 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Apr 30 03:29:54.041658 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 30 03:29:54.041671 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 30 03:29:54.041684 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 30 03:29:54.041696 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 30 03:29:54.041709 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 30 03:29:54.041721 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 30 03:29:54.041733 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 30 03:29:54.041745 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 30 03:29:54.041760 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 30 03:29:54.041772 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 30 03:29:54.041784 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 30 03:29:54.041796 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 30 03:29:54.041808 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 30 03:29:54.041821 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 30 03:29:54.041833 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 30 03:29:54.041845 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 30 03:29:54.041857 kernel: iommu: Default domain type: Translated
Apr 30 03:29:54.041872 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 30 03:29:54.041884 kernel: PCI: Using ACPI for IRQ routing
Apr 30 03:29:54.041896 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 30 03:29:54.042011 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 30 03:29:54.042027 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Apr 30 03:29:54.042211 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 30 03:29:54.042336 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 30 03:29:54.042455 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 30 03:29:54.042470 kernel: vgaarb: loaded
Apr 30 03:29:54.042489 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 30 03:29:54.042501 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 30 03:29:54.042514 kernel: clocksource: Switched to clocksource kvm-clock
Apr 30 03:29:54.042526 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 03:29:54.042539 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 03:29:54.042551 kernel: pnp: PnP ACPI init
Apr 30 03:29:54.042689 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 30 03:29:54.042707 kernel: pnp: PnP ACPI: found 5 devices
Apr 30 03:29:54.042723 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 30 03:29:54.042736 kernel: NET: Registered PF_INET protocol family
Apr 30 03:29:54.042748 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 03:29:54.042761 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 30 03:29:54.042773 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 03:29:54.042785 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 30 03:29:54.042798 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 30 03:29:54.042810 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 30 03:29:54.042822 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 03:29:54.042837 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 30 03:29:54.042850 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 03:29:54.042862 kernel: NET: Registered PF_XDP protocol family
Apr 30 03:29:54.044190 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 30 03:29:54.044324 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 30 03:29:54.044447 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 30 03:29:54.044571 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Apr 30 03:29:54.044700 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Apr 30 03:29:54.044821 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Apr 30 03:29:54.046026 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 30 03:29:54.046215 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Apr 30 03:29:54.046295 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Apr 30 03:29:54.046374 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 30 03:29:54.046447 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Apr 30 03:29:54.046528 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Apr 30 03:29:54.046622 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 30 03:29:54.046705 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Apr 30 03:29:54.046775 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Apr 30 03:29:54.046851 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 30 03:29:54.046924 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Apr 30 03:29:54.047077 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Apr 30 03:29:54.047165 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 30 03:29:54.047240 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Apr 30 03:29:54.047324 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Apr 30 03:29:54.047414 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 30 03:29:54.047497 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Apr 30 03:29:54.047568 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Apr 30 03:29:54.047660 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 30 03:29:54.047753 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Apr 30 03:29:54.047843 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Apr 30 03:29:54.047924 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Apr 30 03:29:54.048035 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 30 03:29:54.048115 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Apr 30 03:29:54.048221 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Apr 30 03:29:54.048299 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Apr 30 03:29:54.048379 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 30 03:29:54.048460 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Apr 30 03:29:54.048537 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Apr 30 03:29:54.048618 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Apr 30 03:29:54.048698 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 30 03:29:54.048769 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 30 03:29:54.048845 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 30 03:29:54.048913 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Apr 30 03:29:54.049001 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 30 03:29:54.049069 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 30 03:29:54.049167 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Apr 30 03:29:54.049258 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Apr 30 03:29:54.049350 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Apr 30 03:29:54.049425 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Apr 30 03:29:54.049509 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Apr 30 03:29:54.049582 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Apr 30 03:29:54.049665 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Apr 30 03:29:54.049737 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Apr 30 03:29:54.049813 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Apr 30 03:29:54.049883 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Apr 30 03:29:54.049987 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Apr 30 03:29:54.050061 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Apr 30 03:29:54.050150 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Apr 30 03:29:54.050224 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Apr 30 03:29:54.050303 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Apr 30 03:29:54.050395 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Apr 30 03:29:54.050471 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Apr 30 03:29:54.050559 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Apr 30 03:29:54.050651 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Apr 30 03:29:54.050735 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Apr 30 03:29:54.050819 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Apr 30 03:29:54.050831 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 30 03:29:54.050839 kernel: PCI: CLS 0 bytes, default 64
Apr 30 03:29:54.050847 kernel: Initialise system trusted keyrings
Apr 30 03:29:54.050857 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 30 03:29:54.050871 kernel: Key type asymmetric registered
Apr 30 03:29:54.050878 kernel: Asymmetric key parser 'x509' registered
Apr 30 03:29:54.050886 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 30 03:29:54.050896 kernel: io scheduler mq-deadline registered
Apr 30 03:29:54.050905 kernel: io scheduler kyber registered
Apr 30 03:29:54.050913 kernel: io scheduler bfq registered
Apr 30 03:29:54.051079 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Apr 30 03:29:54.051185 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Apr 30 03:29:54.051264 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Apr 30 03:29:54.051353 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Apr 30 03:29:54.051430 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Apr 30 03:29:54.051506 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Apr 30 03:29:54.051582 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Apr 30 03:29:54.051659 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Apr 30 03:29:54.051737 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Apr 30 03:29:54.051812 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Apr 30 03:29:54.051888 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Apr 30 03:29:54.051983 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Apr 30 03:29:54.052061 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Apr 30 03:29:54.052150 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Apr 30 03:29:54.052229 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Apr 30 03:29:54.052305 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Apr 30 03:29:54.052316 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 30 03:29:54.052392 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Apr 30 03:29:54.052474 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Apr 30 03:29:54.052488 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 30 03:29:54.052496 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Apr 30 03:29:54.052504 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 03:29:54.052512 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 30 03:29:54.052519 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 30 03:29:54.052527 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 30 03:29:54.052535 kernel: serio: i8042 AUX
port at 0x60,0x64 irq 12 Apr 30 03:29:54.052633 kernel: rtc_cmos 00:03: RTC can wake from S4 Apr 30 03:29:54.052653 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 30 03:29:54.052729 kernel: rtc_cmos 00:03: registered as rtc0 Apr 30 03:29:54.052809 kernel: rtc_cmos 00:03: setting system clock to 2025-04-30T03:29:53 UTC (1745983793) Apr 30 03:29:54.052892 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 30 03:29:54.052905 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Apr 30 03:29:54.052914 kernel: NET: Registered PF_INET6 protocol family Apr 30 03:29:54.052924 kernel: Segment Routing with IPv6 Apr 30 03:29:54.052933 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 03:29:54.052979 kernel: NET: Registered PF_PACKET protocol family Apr 30 03:29:54.052990 kernel: Key type dns_resolver registered Apr 30 03:29:54.052997 kernel: IPI shorthand broadcast: enabled Apr 30 03:29:54.053005 kernel: sched_clock: Marking stable (1286012773, 147926184)->(1504262595, -70323638) Apr 30 03:29:54.053012 kernel: registered taskstats version 1 Apr 30 03:29:54.053020 kernel: Loading compiled-in X.509 certificates Apr 30 03:29:54.053028 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4a2605119c3649b55d5796c3fe312b2581bff37b' Apr 30 03:29:54.053037 kernel: Key type .fscrypt registered Apr 30 03:29:54.053045 kernel: Key type fscrypt-provisioning registered Apr 30 03:29:54.053052 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 30 03:29:54.053061 kernel: ima: Allocated hash algorithm: sha1
Apr 30 03:29:54.053069 kernel: ima: No architecture policies found
Apr 30 03:29:54.053076 kernel: clk: Disabling unused clocks
Apr 30 03:29:54.053084 kernel: Freeing unused kernel image (initmem) memory: 42864K
Apr 30 03:29:54.053092 kernel: Write protecting the kernel read-only data: 36864k
Apr 30 03:29:54.053100 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
Apr 30 03:29:54.053108 kernel: Run /init as init process
Apr 30 03:29:54.053117 kernel: with arguments:
Apr 30 03:29:54.053128 kernel: /init
Apr 30 03:29:54.053136 kernel: with environment:
Apr 30 03:29:54.053156 kernel: HOME=/
Apr 30 03:29:54.053165 kernel: TERM=linux
Apr 30 03:29:54.053174 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 03:29:54.053186 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:29:54.053197 systemd[1]: Detected virtualization kvm.
Apr 30 03:29:54.053205 systemd[1]: Detected architecture x86-64.
Apr 30 03:29:54.053215 systemd[1]: Running in initrd.
Apr 30 03:29:54.053223 systemd[1]: No hostname configured, using default hostname.
Apr 30 03:29:54.053231 systemd[1]: Hostname set to .
Apr 30 03:29:54.053239 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 03:29:54.053247 systemd[1]: Queued start job for default target initrd.target.
Apr 30 03:29:54.053255 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:29:54.053263 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:29:54.053272 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 03:29:54.053281 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:29:54.053289 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 03:29:54.053297 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 03:29:54.053307 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 03:29:54.053315 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 03:29:54.053323 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:29:54.053331 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:29:54.053341 systemd[1]: Reached target paths.target - Path Units.
Apr 30 03:29:54.053349 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:29:54.053357 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:29:54.053365 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 03:29:54.053373 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:29:54.053381 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:29:54.053389 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 03:29:54.053397 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 03:29:54.053406 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:29:54.053415 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:29:54.053423 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:29:54.053431 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 03:29:54.053439 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 03:29:54.053447 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:29:54.053455 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 03:29:54.053463 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 03:29:54.053471 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:29:54.053481 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:29:54.053490 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:29:54.053498 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 03:29:54.053525 systemd-journald[187]: Collecting audit messages is disabled.
Apr 30 03:29:54.053548 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:29:54.053556 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 03:29:54.053565 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 03:29:54.053574 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 03:29:54.053583 kernel: Bridge firewalling registered
Apr 30 03:29:54.053591 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:29:54.053600 systemd-journald[187]: Journal started
Apr 30 03:29:54.053620 systemd-journald[187]: Runtime Journal (/run/log/journal/e64cc05acd1f4aa69a4994e77aab6a5a) is 4.8M, max 38.4M, 33.6M free.
Apr 30 03:29:54.003770 systemd-modules-load[188]: Inserted module 'overlay'
Apr 30 03:29:54.042155 systemd-modules-load[188]: Inserted module 'br_netfilter'
Apr 30 03:29:54.088994 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:29:54.089543 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:29:54.090235 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:29:54.097051 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:29:54.098842 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:29:54.101059 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:29:54.104081 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 03:29:54.119008 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:29:54.122299 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:29:54.126115 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:29:54.132251 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 03:29:54.133208 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:29:54.139079 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 03:29:54.143876 dracut-cmdline[219]: dracut-dracut-053
Apr 30 03:29:54.146662 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c687c1f8aad1bd5ea19c342ca6f52efb69b4807a131e3bd7f3f07b950e1ec39d
Apr 30 03:29:54.173238 systemd-resolved[223]: Positive Trust Anchors:
Apr 30 03:29:54.174050 systemd-resolved[223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 03:29:54.174084 systemd-resolved[223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 03:29:54.182784 systemd-resolved[223]: Defaulting to hostname 'linux'.
Apr 30 03:29:54.183865 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 03:29:54.184641 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:29:54.203992 kernel: SCSI subsystem initialized
Apr 30 03:29:54.213993 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 03:29:54.223989 kernel: iscsi: registered transport (tcp)
Apr 30 03:29:54.247739 kernel: iscsi: registered transport (qla4xxx)
Apr 30 03:29:54.247861 kernel: QLogic iSCSI HBA Driver
Apr 30 03:29:54.312063 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:29:54.322231 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 03:29:54.360784 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 03:29:54.360932 kernel: device-mapper: uevent: version 1.0.3
Apr 30 03:29:54.360979 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 03:29:54.419040 kernel: raid6: avx2x4 gen() 22460 MB/s
Apr 30 03:29:54.435996 kernel: raid6: avx2x2 gen() 30142 MB/s
Apr 30 03:29:54.453228 kernel: raid6: avx2x1 gen() 26257 MB/s
Apr 30 03:29:54.453345 kernel: raid6: using algorithm avx2x2 gen() 30142 MB/s
Apr 30 03:29:54.472052 kernel: raid6: .... xor() 20444 MB/s, rmw enabled
Apr 30 03:29:54.472220 kernel: raid6: using avx2x2 recovery algorithm
Apr 30 03:29:54.491005 kernel: xor: automatically using best checksumming function avx
Apr 30 03:29:54.644010 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 03:29:54.660835 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:29:54.669294 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:29:54.680611 systemd-udevd[405]: Using default interface naming scheme 'v255'.
Apr 30 03:29:54.684621 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:29:54.693220 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 03:29:54.717586 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation
Apr 30 03:29:54.754158 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:29:54.761329 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:29:54.805028 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:29:54.817306 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 03:29:54.850459 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:29:54.853558 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:29:54.855857 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:29:54.857302 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:29:54.865216 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 03:29:54.880842 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:29:54.906825 kernel: cryptd: max_cpu_qlen set to 1000
Apr 30 03:29:54.906925 kernel: scsi host0: Virtio SCSI HBA
Apr 30 03:29:54.924269 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:29:54.926083 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 30 03:29:54.924550 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:29:54.927382 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:29:54.927826 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:29:54.927929 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:29:54.930240 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:29:54.935022 kernel: ACPI: bus type USB registered
Apr 30 03:29:54.935046 kernel: usbcore: registered new interface driver usbfs
Apr 30 03:29:54.936768 kernel: usbcore: registered new interface driver hub
Apr 30 03:29:54.938237 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:29:54.943034 kernel: usbcore: registered new device driver usb
Apr 30 03:29:54.986996 kernel: libata version 3.00 loaded.
Apr 30 03:29:55.021972 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 30 03:29:55.022056 kernel: AES CTR mode by8 optimization enabled
Apr 30 03:29:55.024972 kernel: ahci 0000:00:1f.2: version 3.0
Apr 30 03:29:55.055016 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 30 03:29:55.055067 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 30 03:29:55.055219 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 30 03:29:55.055308 kernel: scsi host1: ahci
Apr 30 03:29:55.055607 kernel: scsi host2: ahci
Apr 30 03:29:55.057979 kernel: scsi host3: ahci
Apr 30 03:29:55.058103 kernel: scsi host4: ahci
Apr 30 03:29:55.058220 kernel: scsi host5: ahci
Apr 30 03:29:55.058316 kernel: scsi host6: ahci
Apr 30 03:29:55.058415 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48
Apr 30 03:29:55.058425 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48
Apr 30 03:29:55.058435 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48
Apr 30 03:29:55.058444 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48
Apr 30 03:29:55.058454 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48
Apr 30 03:29:55.058463 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48
Apr 30 03:29:55.069963 kernel: sd 0:0:0:0: Power-on or device reset occurred
Apr 30 03:29:55.074479 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Apr 30 03:29:55.074604 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 30 03:29:55.074715 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Apr 30 03:29:55.074819 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 30 03:29:55.074925 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 03:29:55.074958 kernel: GPT:17805311 != 80003071
Apr 30 03:29:55.074969 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 03:29:55.074984 kernel: GPT:17805311 != 80003071
Apr 30 03:29:55.074993 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 03:29:55.075002 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 03:29:55.075012 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 30 03:29:55.108613 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:29:55.114078 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 03:29:55.134420 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:29:55.367967 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 30 03:29:55.368102 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Apr 30 03:29:55.368124 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 30 03:29:55.370251 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 30 03:29:55.373871 kernel: ata1.00: applying bridge limits
Apr 30 03:29:55.379401 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 30 03:29:55.379446 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 30 03:29:55.379466 kernel: ata1.00: configured for UDMA/100
Apr 30 03:29:55.384319 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 30 03:29:55.385978 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 30 03:29:55.423024 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 30 03:29:55.484836 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Apr 30 03:29:55.485075 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Apr 30 03:29:55.485262 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 30 03:29:55.485417 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Apr 30 03:29:55.485563 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Apr 30 03:29:55.485713 kernel: hub 1-0:1.0: USB hub found
Apr 30 03:29:55.485887 kernel: hub 1-0:1.0: 4 ports detected
Apr 30 03:29:55.487021 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Apr 30 03:29:55.487298 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 30 03:29:55.491664 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 30 03:29:55.491685 kernel: hub 2-0:1.0: USB hub found
Apr 30 03:29:55.491873 kernel: hub 2-0:1.0: 4 ports detected
Apr 30 03:29:55.492068 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Apr 30 03:29:55.506251 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 30 03:29:55.511072 kernel: BTRFS: device fsid 24af5149-14c0-4f50-b6d3-2f5c9259df26 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (465)
Apr 30 03:29:55.519972 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (464)
Apr 30 03:29:55.532445 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 30 03:29:55.537388 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 30 03:29:55.538684 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 30 03:29:55.544350 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 30 03:29:55.554116 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 03:29:55.558611 disk-uuid[580]: Primary Header is updated.
Apr 30 03:29:55.558611 disk-uuid[580]: Secondary Entries is updated.
Apr 30 03:29:55.558611 disk-uuid[580]: Secondary Header is updated.
Apr 30 03:29:55.567958 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 03:29:55.703976 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Apr 30 03:29:55.851038 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 30 03:29:55.860100 kernel: usbcore: registered new interface driver usbhid
Apr 30 03:29:55.860185 kernel: usbhid: USB HID core driver
Apr 30 03:29:55.872619 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3
Apr 30 03:29:55.872709 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Apr 30 03:29:56.587981 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 03:29:56.588566 disk-uuid[581]: The operation has completed successfully.
Apr 30 03:29:56.678521 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 03:29:56.678679 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 03:29:56.699162 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 03:29:56.712790 sh[599]: Success
Apr 30 03:29:56.738498 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 30 03:29:56.826284 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 03:29:56.839157 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 03:29:56.844052 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 03:29:56.881547 kernel: BTRFS info (device dm-0): first mount of filesystem 24af5149-14c0-4f50-b6d3-2f5c9259df26
Apr 30 03:29:56.881612 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:29:56.885731 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 03:29:56.889365 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 03:29:56.892154 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 03:29:56.926007 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 30 03:29:56.930196 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 03:29:56.932613 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 03:29:56.939349 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 03:29:56.943792 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 03:29:56.967212 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:29:56.967302 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:29:56.969515 kernel: BTRFS info (device sda6): using free space tree
Apr 30 03:29:56.976092 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 30 03:29:56.976259 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 03:29:56.992026 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 03:29:56.995189 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:29:57.003056 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 03:29:57.011184 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 03:29:57.065327 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:29:57.084338 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 03:29:57.105560 systemd-networkd[780]: lo: Link UP
Apr 30 03:29:57.105574 systemd-networkd[780]: lo: Gained carrier
Apr 30 03:29:57.107384 systemd-networkd[780]: Enumeration completed
Apr 30 03:29:57.107699 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 03:29:57.108787 systemd[1]: Reached target network.target - Network.
Apr 30 03:29:57.109295 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:29:57.109299 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:29:57.116589 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:29:57.116593 systemd-networkd[780]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:29:57.117530 systemd-networkd[780]: eth0: Link UP
Apr 30 03:29:57.117533 systemd-networkd[780]: eth0: Gained carrier
Apr 30 03:29:57.117539 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:29:57.127311 systemd-networkd[780]: eth1: Link UP
Apr 30 03:29:57.127317 systemd-networkd[780]: eth1: Gained carrier
Apr 30 03:29:57.127326 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:29:57.173502 ignition[719]: Ignition 2.19.0
Apr 30 03:29:57.173517 ignition[719]: Stage: fetch-offline
Apr 30 03:29:57.175059 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:29:57.173570 ignition[719]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:29:57.173578 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 03:29:57.173694 ignition[719]: parsed url from cmdline: ""
Apr 30 03:29:57.173697 ignition[719]: no config URL provided
Apr 30 03:29:57.173702 ignition[719]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:29:57.173712 ignition[719]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:29:57.173718 ignition[719]: failed to fetch config: resource requires networking
Apr 30 03:29:57.173920 ignition[719]: Ignition finished successfully
Apr 30 03:29:57.181165 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 03:29:57.187127 systemd-networkd[780]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 30 03:29:57.196033 systemd-networkd[780]: eth0: DHCPv4 address 157.180.66.130/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 30 03:29:57.202007 ignition[787]: Ignition 2.19.0
Apr 30 03:29:57.202019 ignition[787]: Stage: fetch
Apr 30 03:29:57.202229 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:29:57.202245 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 03:29:57.202342 ignition[787]: parsed url from cmdline: ""
Apr 30 03:29:57.202346 ignition[787]: no config URL provided
Apr 30 03:29:57.202350 ignition[787]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 03:29:57.202358 ignition[787]: no config at "/usr/lib/ignition/user.ign"
Apr 30 03:29:57.202376 ignition[787]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 30 03:29:57.212453 ignition[787]: GET result: OK
Apr 30 03:29:57.212533 ignition[787]: parsing config with SHA512: 556ae2dfbc728064dc467e3b825713b3baed70b6f73631186aec1487f1142a07c0df7af1066f65d9c7aab8f745c2620739cb23178f36b8368220ea7aeace217a
Apr 30 03:29:57.217777 unknown[787]: fetched base config from "system"
Apr 30 03:29:57.217802 unknown[787]: fetched base config from "system"
Apr 30 03:29:57.218344 ignition[787]: fetch: fetch complete
Apr 30 03:29:57.217827 unknown[787]: fetched user config from "hetzner"
Apr 30 03:29:57.218352 ignition[787]: fetch: fetch passed
Apr 30 03:29:57.218425 ignition[787]: Ignition finished successfully
Apr 30 03:29:57.221134 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 03:29:57.226119 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 03:29:57.246681 ignition[795]: Ignition 2.19.0
Apr 30 03:29:57.246700 ignition[795]: Stage: kargs
Apr 30 03:29:57.246923 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:29:57.246955 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 03:29:57.249888 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 03:29:57.248520 ignition[795]: kargs: kargs passed
Apr 30 03:29:57.248580 ignition[795]: Ignition finished successfully
Apr 30 03:29:57.257285 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 03:29:57.272169 ignition[801]: Ignition 2.19.0
Apr 30 03:29:57.272184 ignition[801]: Stage: disks
Apr 30 03:29:57.272385 ignition[801]: no configs at "/usr/lib/ignition/base.d"
Apr 30 03:29:57.272397 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 03:29:57.274922 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 03:29:57.273646 ignition[801]: disks: disks passed
Apr 30 03:29:57.276246 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 03:29:57.273698 ignition[801]: Ignition finished successfully
Apr 30 03:29:57.282759 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 03:29:57.283904 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:29:57.285014 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 03:29:57.286512 systemd[1]: Reached target basic.target - Basic System.
Apr 30 03:29:57.297193 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 03:29:57.317627 systemd-fsck[810]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 30 03:29:57.322652 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 03:29:57.329186 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 03:29:57.445342 kernel: EXT4-fs (sda9): mounted filesystem c246962b-d3a7-4703-a2cb-a633fbca1b76 r/w with ordered data mode. Quota mode: none.
Apr 30 03:29:57.446579 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 03:29:57.448505 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 03:29:57.456031 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:29:57.470193 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 03:29:57.473102 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 30 03:29:57.477776 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 03:29:57.479493 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:29:57.485105 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 03:29:57.498113 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 03:29:57.518343 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (818)
Apr 30 03:29:57.518383 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:29:57.518413 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:29:57.518430 kernel: BTRFS info (device sda6): using free space tree
Apr 30 03:29:57.518446 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 30 03:29:57.518462 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 03:29:57.521292 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:29:57.570363 coreos-metadata[820]: Apr 30 03:29:57.570 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 30 03:29:57.572527 coreos-metadata[820]: Apr 30 03:29:57.572 INFO Fetch successful
Apr 30 03:29:57.575000 coreos-metadata[820]: Apr 30 03:29:57.573 INFO wrote hostname ci-4081-3-3-b-f8d40824c9 to /sysroot/etc/hostname
Apr 30 03:29:57.577878 initrd-setup-root[846]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 03:29:57.580407 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 03:29:57.586128 initrd-setup-root[854]: cut: /sysroot/etc/group: No such file or directory
Apr 30 03:29:57.590388 initrd-setup-root[861]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 03:29:57.594449 initrd-setup-root[868]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 03:29:57.709225 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 03:29:57.716125 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 03:29:57.719361 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 03:29:57.738100 kernel: BTRFS info (device sda6): last unmount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:29:57.782890 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 03:29:57.790721 ignition[940]: INFO : Ignition 2.19.0
Apr 30 03:29:57.791577 ignition[940]: INFO : Stage: mount
Apr 30 03:29:57.792018 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:29:57.792018 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 03:29:57.793318 ignition[940]: INFO : mount: mount passed
Apr 30 03:29:57.793318 ignition[940]: INFO : Ignition finished successfully
Apr 30 03:29:57.794481 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 03:29:57.801062 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 03:29:57.879060 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 03:29:57.893551 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 03:29:57.912040 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (951)
Apr 30 03:29:57.918255 kernel: BTRFS info (device sda6): first mount of filesystem dea0d870-fd31-489b-84db-7261ba2c88d5
Apr 30 03:29:57.918304 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 03:29:57.921375 kernel: BTRFS info (device sda6): using free space tree
Apr 30 03:29:57.940254 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 30 03:29:57.940381 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 03:29:57.947612 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 03:29:57.985189 ignition[969]: INFO : Ignition 2.19.0
Apr 30 03:29:57.985189 ignition[969]: INFO : Stage: files
Apr 30 03:29:57.987729 ignition[969]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:29:57.987729 ignition[969]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 03:29:57.987729 ignition[969]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 03:29:57.992265 ignition[969]: INFO : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Apr 30 03:29:57.992265 ignition[969]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 03:29:57.996156 ignition[969]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 03:29:57.996156 ignition[969]: INFO : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Apr 30 03:29:57.996156 ignition[969]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 03:29:57.994965 unknown[969]: wrote ssh authorized keys file for user: core
Apr 30 03:29:58.002371 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 03:29:58.002371 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Apr 30 03:29:58.234625 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 30 03:29:58.259216 systemd-networkd[780]: eth0: Gained IPv6LL
Apr 30 03:29:58.261007 systemd-networkd[780]: eth1: Gained IPv6LL
Apr 30 03:29:58.598454 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 03:29:58.598454 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 03:29:58.602444 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 30 03:29:59.254854 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 30 03:29:59.334432 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 03:29:59.334432 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/install.sh"
Apr 30 03:29:59.338323 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 03:29:59.338323 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:29:59.338323 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 03:29:59.338323 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:29:59.338323 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 03:29:59.338323 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:29:59.338323 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 03:29:59.338323 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:29:59.338323 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 03:29:59.338323 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:29:59.338323 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:29:59.338323 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:29:59.338323 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Apr 30 03:29:59.917814 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 30 03:30:00.211066 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 03:30:00.211066 ignition[969]: INFO : files: op(c): [started]  processing unit "prepare-helm.service"
Apr 30 03:30:00.214973 ignition[969]: INFO : files: op(c): op(d): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 03:30:00.214973 ignition[969]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 03:30:00.214973 ignition[969]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 30 03:30:00.214973 ignition[969]: INFO : files: op(e): [started]  processing unit "coreos-metadata.service"
Apr 30 03:30:00.214973 ignition[969]: INFO : files: op(e): op(f): [started]  writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 30 03:30:00.214973 ignition[969]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 30 03:30:00.214973 ignition[969]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 30 03:30:00.214973 ignition[969]: INFO : files: op(10): [started]  setting preset to enabled for "prepare-helm.service"
Apr 30 03:30:00.214973 ignition[969]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 03:30:00.214973 ignition[969]: INFO : files: createResultFile: createFiles: op(11): [started]  writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:30:00.214973 ignition[969]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 03:30:00.214973 ignition[969]: INFO : files: files passed
Apr 30 03:30:00.214973 ignition[969]: INFO : Ignition finished successfully
Apr 30 03:30:00.214787 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 03:30:00.224310 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 03:30:00.229321 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 03:30:00.234419 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 03:30:00.234518 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 03:30:00.245868 initrd-setup-root-after-ignition[997]: grep:
Apr 30 03:30:00.246854 initrd-setup-root-after-ignition[1001]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:30:00.248197 initrd-setup-root-after-ignition[997]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:30:00.248197 initrd-setup-root-after-ignition[997]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 03:30:00.249231 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:30:00.250410 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 03:30:00.257311 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 03:30:00.280271 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 03:30:00.280401 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 03:30:00.282433 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 03:30:00.283500 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 03:30:00.285214 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 03:30:00.291332 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 03:30:00.303508 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:30:00.309133 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 03:30:00.330051 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:30:00.331315 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:30:00.333001 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 03:30:00.334725 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 03:30:00.334894 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 03:30:00.336737 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 03:30:00.338046 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 03:30:00.339779 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 03:30:00.341984 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 03:30:00.343538 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 03:30:00.345423 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 03:30:00.347044 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 03:30:00.348547 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 03:30:00.349967 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 03:30:00.351436 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 03:30:00.352982 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 03:30:00.353170 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 03:30:00.355129 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:30:00.356638 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:30:00.358054 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 03:30:00.358351 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:30:00.359351 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 03:30:00.359511 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 03:30:00.361444 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 03:30:00.361641 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 03:30:00.362982 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 03:30:00.363205 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 03:30:00.364123 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 30 03:30:00.364273 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 03:30:00.374602 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 03:30:00.377220 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 03:30:00.378372 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 03:30:00.380063 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:30:00.381127 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 03:30:00.381273 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 03:30:00.393654 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 03:30:00.393794 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 03:30:00.398616 ignition[1021]: INFO : Ignition 2.19.0
Apr 30 03:30:00.398616 ignition[1021]: INFO : Stage: umount
Apr 30 03:30:00.398616 ignition[1021]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 03:30:00.398616 ignition[1021]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 03:30:00.398616 ignition[1021]: INFO : umount: umount passed
Apr 30 03:30:00.398616 ignition[1021]: INFO : Ignition finished successfully
Apr 30 03:30:00.401072 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 03:30:00.401233 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 03:30:00.410788 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 03:30:00.410860 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 03:30:00.411746 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 03:30:00.411790 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 03:30:00.414030 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 30 03:30:00.414082 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 30 03:30:00.416221 systemd[1]: Stopped target network.target - Network.
Apr 30 03:30:00.417355 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 03:30:00.417441 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 03:30:00.418562 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 03:30:00.419654 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 03:30:00.424991 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:30:00.425847 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 03:30:00.427393 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 03:30:00.429661 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 03:30:00.429704 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 03:30:00.430654 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 03:30:00.430691 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 03:30:00.432907 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 03:30:00.433018 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 03:30:00.434676 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 03:30:00.434715 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 03:30:00.436888 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 03:30:00.437990 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 03:30:00.441781 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 03:30:00.443989 systemd-networkd[780]: eth0: DHCPv6 lease lost
Apr 30 03:30:00.447086 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 03:30:00.447223 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 03:30:00.448997 systemd-networkd[780]: eth1: DHCPv6 lease lost
Apr 30 03:30:00.450770 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 03:30:00.450829 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:30:00.451884 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 03:30:00.452005 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 03:30:00.453888 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 03:30:00.453921 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:30:00.463588 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 03:30:00.465530 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 03:30:00.465602 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 03:30:00.467052 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 03:30:00.467119 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:30:00.469473 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 03:30:00.469508 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:30:00.470791 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:30:00.486388 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 03:30:00.486536 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 03:30:00.489514 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 03:30:00.489655 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:30:00.490816 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 03:30:00.490871 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:30:00.491629 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 03:30:00.491656 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:30:00.492182 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 03:30:00.492220 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 03:30:00.494081 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 03:30:00.494117 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 03:30:00.495447 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 03:30:00.495485 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 03:30:00.502172 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 03:30:00.503455 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 03:30:00.503535 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:30:00.504148 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 30 03:30:00.504194 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:30:00.504756 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 03:30:00.504792 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:30:00.507346 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:30:00.507386 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:30:00.508575 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 03:30:00.509233 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 03:30:00.510543 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 03:30:00.510616 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 03:30:00.511512 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 03:30:00.511625 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 03:30:00.513096 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 03:30:00.521181 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 03:30:00.527761 systemd[1]: Switching root.
Apr 30 03:30:00.566226 systemd-journald[187]: Journal stopped
Apr 30 03:30:01.667212 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Apr 30 03:30:01.667286 kernel: SELinux:  policy capability network_peer_controls=1
Apr 30 03:30:01.667305 kernel: SELinux:  policy capability open_perms=1
Apr 30 03:30:01.667323 kernel: SELinux:  policy capability extended_socket_class=1
Apr 30 03:30:01.667336 kernel: SELinux:  policy capability always_check_network=0
Apr 30 03:30:01.667348 kernel: SELinux:  policy capability cgroup_seclabel=1
Apr 30 03:30:01.667361 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Apr 30 03:30:01.667378 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Apr 30 03:30:01.667388 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Apr 30 03:30:01.667397 kernel: audit: type=1403 audit(1745983800.744:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 03:30:01.667410 systemd[1]: Successfully loaded SELinux policy in 58.433ms.
Apr 30 03:30:01.667431 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.589ms.
Apr 30 03:30:01.667443 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 03:30:01.667454 systemd[1]: Detected virtualization kvm.
Apr 30 03:30:01.667464 systemd[1]: Detected architecture x86-64.
Apr 30 03:30:01.667475 systemd[1]: Detected first boot.
Apr 30 03:30:01.667485 systemd[1]: Hostname set to .
Apr 30 03:30:01.667495 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 03:30:01.667505 zram_generator::config[1063]: No configuration found.
Apr 30 03:30:01.667518 systemd[1]: Populated /etc with preset unit settings.
Apr 30 03:30:01.667529 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 30 03:30:01.667539 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 30 03:30:01.667549 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 30 03:30:01.667562 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 03:30:01.667572 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 03:30:01.667582 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 03:30:01.667594 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 03:30:01.667605 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 03:30:01.667615 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 03:30:01.667624 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 03:30:01.667634 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 03:30:01.667644 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 03:30:01.667656 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 03:30:01.667666 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 03:30:01.667677 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 03:30:01.667687 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 03:30:01.667698 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 03:30:01.667707 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 30 03:30:01.667717 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 03:30:01.667727 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 30 03:30:01.667739 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 30 03:30:01.667753 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 30 03:30:01.667763 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 03:30:01.667773 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 03:30:01.667784 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 03:30:01.667796 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 03:30:01.667806 systemd[1]: Reached target swap.target - Swaps.
Apr 30 03:30:01.667815 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 03:30:01.667827 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 03:30:01.667841 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 03:30:01.667854 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 03:30:01.667865 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 03:30:01.667876 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 03:30:01.667886 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 03:30:01.667897 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 03:30:01.667912 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 03:30:01.667925 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:30:01.671073 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 03:30:01.671116 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 03:30:01.671128 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 03:30:01.671164 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 03:30:01.671174 systemd[1]: Reached target machines.target - Containers.
Apr 30 03:30:01.671189 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 03:30:01.671201 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:30:01.671211 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 03:30:01.671221 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 03:30:01.671231 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:30:01.671241 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 03:30:01.671251 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:30:01.671260 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 03:30:01.671273 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:30:01.671283 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 03:30:01.671294 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 30 03:30:01.671303 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 30 03:30:01.671314 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 30 03:30:01.671323 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 30 03:30:01.671333 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 03:30:01.671342 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 03:30:01.671352 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 03:30:01.671363 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 03:30:01.671375 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 03:30:01.671389 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 30 03:30:01.671402 systemd[1]: Stopped verity-setup.service.
Apr 30 03:30:01.671416 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:30:01.671430 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 03:30:01.671441 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 03:30:01.671451 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 03:30:01.671464 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 03:30:01.671474 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 03:30:01.671483 kernel: fuse: init (API version 7.39)
Apr 30 03:30:01.671496 kernel: loop: module loaded
Apr 30 03:30:01.671510 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 03:30:01.671530 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 03:30:01.671542 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 03:30:01.671553 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 03:30:01.671564 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:30:01.671574 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:30:01.671588 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:30:01.671604 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:30:01.671618 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 03:30:01.671632 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 03:30:01.671646 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 03:30:01.671659 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 03:30:01.671672 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 03:30:01.671686 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 03:30:01.671700 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 03:30:01.671714 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 03:30:01.671774 systemd-journald[1146]: Collecting audit messages is disabled.
Apr 30 03:30:01.671804 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 03:30:01.671813 kernel: ACPI: bus type drm_connector registered
Apr 30 03:30:01.671824 systemd-journald[1146]: Journal started
Apr 30 03:30:01.671844 systemd-journald[1146]: Runtime Journal (/run/log/journal/e64cc05acd1f4aa69a4994e77aab6a5a) is 4.8M, max 38.4M, 33.6M free.
Apr 30 03:30:01.675271 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 03:30:01.318550 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 03:30:01.346854 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 30 03:30:01.347221 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 30 03:30:01.694227 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 03:30:01.694273 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 03:30:01.695511 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 03:30:01.698973 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 30 03:30:01.713104 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 30 03:30:01.721029 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 30 03:30:01.724966 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:30:01.729995 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 03:30:01.734027 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 03:30:01.740478 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 03:30:01.746004 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 03:30:01.756809 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 03:30:01.777977 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 03:30:01.794410 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 03:30:01.797017 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 03:30:01.799463 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 03:30:01.799715 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 03:30:01.802080 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 03:30:01.803715 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 03:30:01.804994 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 03:30:01.805725 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 03:30:01.811325 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 03:30:01.814977 kernel: loop0: detected capacity change from 0 to 140768
Apr 30 03:30:01.824172 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 03:30:01.846792 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 03:30:01.856419 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 03:30:01.867283 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 30 03:30:01.871443 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 30 03:30:01.876775 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 03:30:01.881787 systemd-journald[1146]: Time spent on flushing to /var/log/journal/e64cc05acd1f4aa69a4994e77aab6a5a is 27.297ms for 1144 entries.
Apr 30 03:30:01.881787 systemd-journald[1146]: System Journal (/var/log/journal/e64cc05acd1f4aa69a4994e77aab6a5a) is 8.0M, max 584.8M, 576.8M free.
Apr 30 03:30:01.937041 systemd-journald[1146]: Received client request to flush runtime journal.
Apr 30 03:30:01.937092 kernel: loop1: detected capacity change from 0 to 8
Apr 30 03:30:01.937119 kernel: loop2: detected capacity change from 0 to 210664
Apr 30 03:30:01.897883 udevadm[1197]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 30 03:30:01.898343 systemd-tmpfiles[1166]: ACLs are not supported, ignoring.
Apr 30 03:30:01.898356 systemd-tmpfiles[1166]: ACLs are not supported, ignoring.
Apr 30 03:30:01.913575 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 03:30:01.933848 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 03:30:01.939717 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 03:30:01.956688 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 03:30:01.958224 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 30 03:30:01.985780 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 03:30:01.995578 kernel: loop3: detected capacity change from 0 to 142488
Apr 30 03:30:01.998510 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 03:30:02.018316 systemd-tmpfiles[1207]: ACLs are not supported, ignoring.
Apr 30 03:30:02.018342 systemd-tmpfiles[1207]: ACLs are not supported, ignoring.
Apr 30 03:30:02.023441 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 03:30:02.052278 kernel: loop4: detected capacity change from 0 to 140768
Apr 30 03:30:02.084984 kernel: loop5: detected capacity change from 0 to 8
Apr 30 03:30:02.088453 kernel: loop6: detected capacity change from 0 to 210664
Apr 30 03:30:02.118998 kernel: loop7: detected capacity change from 0 to 142488
Apr 30 03:30:02.151616 (sd-merge)[1211]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Apr 30 03:30:02.152334 (sd-merge)[1211]: Merged extensions into '/usr'.
Apr 30 03:30:02.160582 systemd[1]: Reloading requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 30 03:30:02.160595 systemd[1]: Reloading...
Apr 30 03:30:02.273974 zram_generator::config[1240]: No configuration found.
Apr 30 03:30:02.389979 ldconfig[1161]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 30 03:30:02.404380 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:30:02.469345 systemd[1]: Reloading finished in 308 ms.
Apr 30 03:30:02.492240 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 30 03:30:02.496238 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 30 03:30:02.504131 systemd[1]: Starting ensure-sysext.service...
Apr 30 03:30:02.506693 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 03:30:02.519262 systemd[1]: Reloading requested from client PID 1280 ('systemctl') (unit ensure-sysext.service)...
Apr 30 03:30:02.519271 systemd[1]: Reloading...
Apr 30 03:30:02.541649 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 30 03:30:02.541964 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 30 03:30:02.542774 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 30 03:30:02.543927 systemd-tmpfiles[1282]: ACLs are not supported, ignoring.
Apr 30 03:30:02.544080 systemd-tmpfiles[1282]: ACLs are not supported, ignoring.
Apr 30 03:30:02.546931 systemd-tmpfiles[1282]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 03:30:02.547029 systemd-tmpfiles[1282]: Skipping /boot
Apr 30 03:30:02.560319 systemd-tmpfiles[1282]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 03:30:02.560335 systemd-tmpfiles[1282]: Skipping /boot
Apr 30 03:30:02.607603 zram_generator::config[1308]: No configuration found.
Apr 30 03:30:02.725479 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:30:02.783578 systemd[1]: Reloading finished in 264 ms.
Apr 30 03:30:02.800697 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 03:30:02.801590 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 03:30:02.814446 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 03:30:02.822222 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 30 03:30:02.827053 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 30 03:30:02.830100 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 03:30:02.840365 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 03:30:02.848151 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 30 03:30:02.854594 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:30:02.854767 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:30:02.862090 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:30:02.869312 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:30:02.875001 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:30:02.875765 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:30:02.875879 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:30:02.876759 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:30:02.877246 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:30:02.884903 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:30:02.887157 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:30:02.895689 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:30:02.896887 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:30:02.899563 systemd-udevd[1358]: Using default interface naming scheme 'v255'.
Apr 30 03:30:02.901814 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 03:30:02.902617 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:30:02.904794 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:30:02.905088 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:30:02.916225 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:30:02.916767 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:30:02.920591 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 03:30:02.921099 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 03:30:02.924490 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 30 03:30:02.926895 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:30:02.928160 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:30:02.934220 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:30:02.937072 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 03:30:02.937675 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:30:02.937737 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 03:30:02.937790 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:30:02.938353 systemd[1]: Finished ensure-sysext.service.
Apr 30 03:30:02.940900 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 30 03:30:02.953567 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 30 03:30:02.956701 augenrules[1389]: No rules
Apr 30 03:30:02.964232 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 30 03:30:02.965361 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 30 03:30:02.966912 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:30:02.967158 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:30:02.975112 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 03:30:02.975306 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 03:30:02.977715 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 03:30:02.979858 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 03:30:02.989101 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 03:30:03.005232 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 30 03:30:03.007472 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 30 03:30:03.020835 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 30 03:30:03.022793 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 03:30:03.118605 systemd-networkd[1400]: lo: Link UP
Apr 30 03:30:03.118966 systemd-networkd[1400]: lo: Gained carrier
Apr 30 03:30:03.119844 systemd-networkd[1400]: Enumeration completed
Apr 30 03:30:03.120019 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 03:30:03.130158 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 30 03:30:03.133786 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 30 03:30:03.135079 systemd[1]: Reached target time-set.target - System Time Set.
Apr 30 03:30:03.160469 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 30 03:30:03.161717 systemd-resolved[1357]: Positive Trust Anchors:
Apr 30 03:30:03.162000 systemd-resolved[1357]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 03:30:03.162066 systemd-resolved[1357]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 03:30:03.168456 systemd-resolved[1357]: Using system hostname 'ci-4081-3-3-b-f8d40824c9'.
Apr 30 03:30:03.169836 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 03:30:03.171123 systemd[1]: Reached target network.target - Network.
Apr 30 03:30:03.171967 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 03:30:03.217980 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1414)
Apr 30 03:30:03.239978 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Apr 30 03:30:03.244873 systemd-networkd[1400]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:30:03.245972 systemd-networkd[1400]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:30:03.247801 systemd-networkd[1400]: eth1: Link UP
Apr 30 03:30:03.247867 systemd-networkd[1400]: eth1: Gained carrier
Apr 30 03:30:03.247914 systemd-networkd[1400]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:30:03.251973 kernel: mousedev: PS/2 mouse device common for all mice
Apr 30 03:30:03.265734 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:30:03.268470 systemd-networkd[1400]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 03:30:03.270984 kernel: ACPI: button: Power Button [PWRF]
Apr 30 03:30:03.271476 systemd-networkd[1400]: eth0: Link UP
Apr 30 03:30:03.271526 systemd-networkd[1400]: eth0: Gained carrier
Apr 30 03:30:03.271577 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 03:30:03.292444 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Apr 30 03:30:03.294039 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:30:03.294187 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 03:30:03.298426 systemd-networkd[1400]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 30 03:30:03.301056 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection.
Apr 30 03:30:03.301407 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 03:30:03.311465 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 03:30:03.320960 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
Apr 30 03:30:03.317203 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 03:30:03.321388 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 03:30:03.321440 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 03:30:03.321453 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 03:30:03.329954 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 03:30:03.330450 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 03:30:03.332450 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 03:30:03.332792 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 03:30:03.338017 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 30 03:30:03.340190 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 30 03:30:03.346278 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 30 03:30:03.346395 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Apr 30 03:30:03.337993 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 03:30:03.338189 systemd-networkd[1400]: eth0: DHCPv4 address 157.180.66.130/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 30 03:30:03.340900 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection.
Apr 30 03:30:03.347352 kernel: EDAC MC: Ver: 3.0.0
Apr 30 03:30:03.347383 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Apr 30 03:30:03.350068 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 03:30:03.351040 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 03:30:03.352507 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 03:30:03.360744 kernel: Console: switching to colour dummy device 80x25
Apr 30 03:30:03.360818 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 30 03:30:03.360829 kernel: [drm] features: -context_init
Apr 30 03:30:03.362265 kernel: [drm] number of scanouts: 1
Apr 30 03:30:03.362969 kernel: [drm] number of cap sets: 0
Apr 30 03:30:03.368973 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Apr 30 03:30:03.374600 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 30 03:30:03.378075 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Apr 30 03:30:03.378116 kernel: Console: switching to colour frame buffer device 160x50
Apr 30 03:30:03.388665 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 30 03:30:03.390575 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 30 03:30:03.396586 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:30:03.412243 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:30:03.412420 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:30:03.421242 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:30:03.424327 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 03:30:03.424484 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:30:03.425698 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 30 03:30:03.435123 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 03:30:03.512314 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 03:30:03.562418 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 30 03:30:03.569287 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 30 03:30:03.598392 lvm[1466]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 03:30:03.639190 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 30 03:30:03.639646 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 03:30:03.639818 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 03:30:03.640206 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 30 03:30:03.640421 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 30 03:30:03.641446 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 30 03:30:03.642902 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 30 03:30:03.643105 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 30 03:30:03.643232 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 30 03:30:03.643274 systemd[1]: Reached target paths.target - Path Units.
Apr 30 03:30:03.643369 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 03:30:03.646366 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 30 03:30:03.649389 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 30 03:30:03.665392 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 30 03:30:03.670227 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 30 03:30:03.676794 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 30 03:30:03.679701 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 03:30:03.684788 systemd[1]: Reached target basic.target - Basic System.
Apr 30 03:30:03.686187 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 30 03:30:03.686236 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 30 03:30:03.692254 lvm[1470]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 03:30:03.697024 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 30 03:30:03.707440 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 30 03:30:03.715392 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 30 03:30:03.721409 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 30 03:30:03.733258 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 30 03:30:03.734223 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 30 03:30:03.739201 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 30 03:30:03.749351 jq[1476]: false
Apr 30 03:30:03.754120 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 30 03:30:03.767235 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Apr 30 03:30:03.768929 coreos-metadata[1472]: Apr 30 03:30:03.768 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Apr 30 03:30:03.771390 coreos-metadata[1472]: Apr 30 03:30:03.771 INFO Fetch successful
Apr 30 03:30:03.773526 coreos-metadata[1472]: Apr 30 03:30:03.772 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Apr 30 03:30:03.774082 coreos-metadata[1472]: Apr 30 03:30:03.774 INFO Fetch successful
Apr 30 03:30:03.774395 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 30 03:30:03.786413 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 30 03:30:03.800209 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 30 03:30:03.803378 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 30 03:30:03.804698 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 30 03:30:03.806842 systemd[1]: Starting update-engine.service - Update Engine...
Apr 30 03:30:03.819085 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 30 03:30:03.823209 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 30 03:30:03.827778 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 30 03:30:03.827955 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 30 03:30:03.842436 dbus-daemon[1473]: [system] SELinux support is enabled
Apr 30 03:30:03.849163 jq[1493]: true
Apr 30 03:30:03.842612 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 30 03:30:03.856660 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 30 03:30:03.856830 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 30 03:30:03.860974 systemd[1]: motdgen.service: Deactivated successfully.
Apr 30 03:30:03.864281 extend-filesystems[1477]: Found loop4
Apr 30 03:30:03.864281 extend-filesystems[1477]: Found loop5
Apr 30 03:30:03.864281 extend-filesystems[1477]: Found loop6
Apr 30 03:30:03.864281 extend-filesystems[1477]: Found loop7
Apr 30 03:30:03.864281 extend-filesystems[1477]: Found sda
Apr 30 03:30:03.861106 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 30 03:30:03.883492 update_engine[1490]: I20250430 03:30:03.868755 1490 main.cc:92] Flatcar Update Engine starting Apr 30 03:30:03.883656 extend-filesystems[1477]: Found sda1 Apr 30 03:30:03.883656 extend-filesystems[1477]: Found sda2 Apr 30 03:30:03.883656 extend-filesystems[1477]: Found sda3 Apr 30 03:30:03.883656 extend-filesystems[1477]: Found usr Apr 30 03:30:03.883656 extend-filesystems[1477]: Found sda4 Apr 30 03:30:03.883656 extend-filesystems[1477]: Found sda6 Apr 30 03:30:03.883656 extend-filesystems[1477]: Found sda7 Apr 30 03:30:03.883656 extend-filesystems[1477]: Found sda9 Apr 30 03:30:03.883656 extend-filesystems[1477]: Checking size of /dev/sda9 Apr 30 03:30:03.940872 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Apr 30 03:30:03.878792 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 03:30:03.940994 jq[1503]: true Apr 30 03:30:03.941127 update_engine[1490]: I20250430 03:30:03.903764 1490 update_check_scheduler.cc:74] Next update check in 8m51s Apr 30 03:30:03.941171 extend-filesystems[1477]: Resized partition /dev/sda9 Apr 30 03:30:03.878818 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 03:30:03.941544 tar[1496]: linux-amd64/helm Apr 30 03:30:03.941750 extend-filesystems[1515]: resize2fs 1.47.1 (20-May-2024) Apr 30 03:30:03.881759 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 03:30:03.881774 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 03:30:03.907269 systemd[1]: Started update-engine.service - Update Engine. 
Apr 30 03:30:03.910313 (ntainerd)[1507]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 03:30:03.922927 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 03:30:03.962041 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1407) Apr 30 03:30:04.006925 systemd-logind[1487]: New seat seat0. Apr 30 03:30:04.023480 systemd-logind[1487]: Watching system buttons on /dev/input/event2 (Power Button) Apr 30 03:30:04.023509 systemd-logind[1487]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 30 03:30:04.023767 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 03:30:04.072534 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 03:30:04.073389 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 03:30:04.102405 locksmithd[1521]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 03:30:04.135529 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 03:30:04.140690 sshd_keygen[1512]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 03:30:04.140799 bash[1542]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:30:04.147270 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 03:30:04.150561 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 03:30:04.164201 systemd[1]: Starting sshkeys.service... Apr 30 03:30:04.178292 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 03:30:04.178490 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 03:30:04.188323 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Apr 30 03:30:04.201331 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 30 03:30:04.213468 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 30 03:30:04.217726 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 03:30:04.231427 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 03:30:04.243801 containerd[1507]: time="2025-04-30T03:30:04.240301769Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 03:30:04.244877 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 03:30:04.246435 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 03:30:04.260975 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Apr 30 03:30:04.292239 coreos-metadata[1569]: Apr 30 03:30:04.271 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Apr 30 03:30:04.292239 coreos-metadata[1569]: Apr 30 03:30:04.272 INFO Fetch successful Apr 30 03:30:04.292484 containerd[1507]: time="2025-04-30T03:30:04.282399277Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:30:04.292484 containerd[1507]: time="2025-04-30T03:30:04.284064881Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:30:04.292484 containerd[1507]: time="2025-04-30T03:30:04.284117339Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 03:30:04.292484 containerd[1507]: time="2025-04-30T03:30:04.284150602Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Apr 30 03:30:04.292565 containerd[1507]: time="2025-04-30T03:30:04.292504931Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 03:30:04.292565 containerd[1507]: time="2025-04-30T03:30:04.292550537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 03:30:04.293285 containerd[1507]: time="2025-04-30T03:30:04.292716138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:30:04.293285 containerd[1507]: time="2025-04-30T03:30:04.292737267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:30:04.293285 containerd[1507]: time="2025-04-30T03:30:04.293000741Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:30:04.293285 containerd[1507]: time="2025-04-30T03:30:04.293016911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 03:30:04.293285 containerd[1507]: time="2025-04-30T03:30:04.293031439Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:30:04.293285 containerd[1507]: time="2025-04-30T03:30:04.293040536Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 03:30:04.293285 containerd[1507]: time="2025-04-30T03:30:04.293117240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Apr 30 03:30:04.294053 containerd[1507]: time="2025-04-30T03:30:04.293579126Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 03:30:04.294053 containerd[1507]: time="2025-04-30T03:30:04.293685425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 03:30:04.294053 containerd[1507]: time="2025-04-30T03:30:04.293729198Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 03:30:04.300961 extend-filesystems[1515]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 30 03:30:04.300961 extend-filesystems[1515]: old_desc_blocks = 1, new_desc_blocks = 5 Apr 30 03:30:04.300961 extend-filesystems[1515]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Apr 30 03:30:04.318772 extend-filesystems[1477]: Resized filesystem in /dev/sda9 Apr 30 03:30:04.318772 extend-filesystems[1477]: Found sr0 Apr 30 03:30:04.328321 containerd[1507]: time="2025-04-30T03:30:04.301004713Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 03:30:04.328321 containerd[1507]: time="2025-04-30T03:30:04.301059587Z" level=info msg="metadata content store policy set" policy=shared Apr 30 03:30:04.328321 containerd[1507]: time="2025-04-30T03:30:04.316619050Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 03:30:04.328321 containerd[1507]: time="2025-04-30T03:30:04.316689872Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Apr 30 03:30:04.328321 containerd[1507]: time="2025-04-30T03:30:04.316708778Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 03:30:04.328321 containerd[1507]: time="2025-04-30T03:30:04.316726301Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 03:30:04.328321 containerd[1507]: time="2025-04-30T03:30:04.316741660Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 03:30:04.328321 containerd[1507]: time="2025-04-30T03:30:04.316897111Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 03:30:04.328321 containerd[1507]: time="2025-04-30T03:30:04.317154234Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 03:30:04.328321 containerd[1507]: time="2025-04-30T03:30:04.317245755Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 03:30:04.328321 containerd[1507]: time="2025-04-30T03:30:04.317260282Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 03:30:04.328321 containerd[1507]: time="2025-04-30T03:30:04.317272946Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 03:30:04.328321 containerd[1507]: time="2025-04-30T03:30:04.317285830Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 03:30:04.328321 containerd[1507]: time="2025-04-30T03:30:04.317296991Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 03:30:04.302232 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Apr 30 03:30:04.328664 containerd[1507]: time="2025-04-30T03:30:04.317308513Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 03:30:04.328664 containerd[1507]: time="2025-04-30T03:30:04.317323712Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 03:30:04.328664 containerd[1507]: time="2025-04-30T03:30:04.317337407Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 03:30:04.328664 containerd[1507]: time="2025-04-30T03:30:04.317348578Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 03:30:04.328664 containerd[1507]: time="2025-04-30T03:30:04.317360651Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 03:30:04.328664 containerd[1507]: time="2025-04-30T03:30:04.317374767Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 03:30:04.328664 containerd[1507]: time="2025-04-30T03:30:04.317397720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 03:30:04.328664 containerd[1507]: time="2025-04-30T03:30:04.317414121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 03:30:04.328664 containerd[1507]: time="2025-04-30T03:30:04.317428528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 03:30:04.328664 containerd[1507]: time="2025-04-30T03:30:04.317443526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 03:30:04.328664 containerd[1507]: time="2025-04-30T03:30:04.317457923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Apr 30 03:30:04.328664 containerd[1507]: time="2025-04-30T03:30:04.317481347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 03:30:04.328664 containerd[1507]: time="2025-04-30T03:30:04.317493119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 03:30:04.328664 containerd[1507]: time="2025-04-30T03:30:04.317504540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 03:30:04.302398 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 03:30:04.333057 containerd[1507]: time="2025-04-30T03:30:04.317516714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 03:30:04.333057 containerd[1507]: time="2025-04-30T03:30:04.317529778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 03:30:04.333057 containerd[1507]: time="2025-04-30T03:30:04.317545698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 03:30:04.333057 containerd[1507]: time="2025-04-30T03:30:04.317558141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 03:30:04.333057 containerd[1507]: time="2025-04-30T03:30:04.317569673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 03:30:04.333057 containerd[1507]: time="2025-04-30T03:30:04.317584470Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 03:30:04.333057 containerd[1507]: time="2025-04-30T03:30:04.317605480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Apr 30 03:30:04.333057 containerd[1507]: time="2025-04-30T03:30:04.317619797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 03:30:04.333057 containerd[1507]: time="2025-04-30T03:30:04.317635426Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 03:30:04.333057 containerd[1507]: time="2025-04-30T03:30:04.317677976Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 03:30:04.333057 containerd[1507]: time="2025-04-30T03:30:04.317699096Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 03:30:04.333057 containerd[1507]: time="2025-04-30T03:30:04.317709635Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 03:30:04.333057 containerd[1507]: time="2025-04-30T03:30:04.317720937Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 03:30:04.305738 unknown[1569]: wrote ssh authorized keys file for user: core Apr 30 03:30:04.334478 containerd[1507]: time="2025-04-30T03:30:04.317730264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 03:30:04.334478 containerd[1507]: time="2025-04-30T03:30:04.317741555Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 03:30:04.334478 containerd[1507]: time="2025-04-30T03:30:04.317755602Z" level=info msg="NRI interface is disabled by configuration." Apr 30 03:30:04.334478 containerd[1507]: time="2025-04-30T03:30:04.317766412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 30 03:30:04.334555 containerd[1507]: time="2025-04-30T03:30:04.326633974Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 03:30:04.334555 containerd[1507]: time="2025-04-30T03:30:04.326784556Z" level=info msg="Connect containerd service" Apr 30 03:30:04.334555 containerd[1507]: time="2025-04-30T03:30:04.326839219Z" level=info msg="using legacy CRI server" Apr 30 03:30:04.334555 containerd[1507]: time="2025-04-30T03:30:04.326847544Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 03:30:04.334555 containerd[1507]: time="2025-04-30T03:30:04.326989911Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 03:30:04.334555 containerd[1507]: time="2025-04-30T03:30:04.331036712Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 03:30:04.334555 containerd[1507]: time="2025-04-30T03:30:04.331831292Z" level=info msg="Start subscribing containerd event" Apr 30 03:30:04.334555 containerd[1507]: time="2025-04-30T03:30:04.331923695Z" level=info msg="Start recovering state" Apr 30 03:30:04.334555 containerd[1507]: time="2025-04-30T03:30:04.334100428Z" level=info msg="Start event monitor" Apr 30 03:30:04.334555 containerd[1507]: time="2025-04-30T03:30:04.334118762Z" level=info msg="Start 
snapshots syncer" Apr 30 03:30:04.334555 containerd[1507]: time="2025-04-30T03:30:04.334130564Z" level=info msg="Start cni network conf syncer for default" Apr 30 03:30:04.334555 containerd[1507]: time="2025-04-30T03:30:04.334155531Z" level=info msg="Start streaming server" Apr 30 03:30:04.335576 containerd[1507]: time="2025-04-30T03:30:04.335395817Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 03:30:04.335576 containerd[1507]: time="2025-04-30T03:30:04.335447965Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 03:30:04.335551 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 03:30:04.340955 containerd[1507]: time="2025-04-30T03:30:04.340800244Z" level=info msg="containerd successfully booted in 0.111245s" Apr 30 03:30:04.346351 update-ssh-keys[1579]: Updated "/home/core/.ssh/authorized_keys" Apr 30 03:30:04.346881 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 30 03:30:04.354007 systemd[1]: Finished sshkeys.service. Apr 30 03:30:04.402167 systemd-networkd[1400]: eth1: Gained IPv6LL Apr 30 03:30:04.403404 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection. Apr 30 03:30:04.407877 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 03:30:04.414819 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 03:30:04.428054 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:30:04.433364 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 03:30:04.480508 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 03:30:04.595461 systemd-networkd[1400]: eth0: Gained IPv6LL Apr 30 03:30:04.595843 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection. 
Apr 30 03:30:04.640307 tar[1496]: linux-amd64/LICENSE Apr 30 03:30:04.640373 tar[1496]: linux-amd64/README.md Apr 30 03:30:04.650822 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 03:30:05.538322 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:30:05.541628 (kubelet)[1601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:30:05.542858 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 03:30:05.551246 systemd[1]: Startup finished in 1.462s (kernel) + 6.977s (initrd) + 4.863s (userspace) = 13.303s. Apr 30 03:30:06.490071 kubelet[1601]: E0430 03:30:06.489992 1601 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:30:06.493848 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:30:06.494010 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:30:06.494293 systemd[1]: kubelet.service: Consumed 1.416s CPU time. Apr 30 03:30:16.745369 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 03:30:16.752689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:30:16.888105 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 03:30:16.891324 (kubelet)[1622]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:30:16.939698 kubelet[1622]: E0430 03:30:16.939614 1622 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:30:16.947492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:30:16.947665 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:30:27.198739 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 03:30:27.207718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:30:27.357788 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:30:27.361478 (kubelet)[1639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:30:27.398218 kubelet[1639]: E0430 03:30:27.398069 1639 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:30:27.401625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:30:27.401780 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:30:35.771594 systemd-resolved[1357]: Clock change detected. Flushing caches. Apr 30 03:30:35.771730 systemd-timesyncd[1387]: Contacted time server 158.101.188.125:123 (2.flatcar.pool.ntp.org). 
Apr 30 03:30:35.771807 systemd-timesyncd[1387]: Initial clock synchronization to Wed 2025-04-30 03:30:35.771439 UTC. Apr 30 03:30:38.409706 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 30 03:30:38.417325 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:30:38.547405 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 03:30:38.557193 (kubelet)[1655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:30:38.594680 kubelet[1655]: E0430 03:30:38.594616 1655 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:30:38.598153 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:30:38.598285 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:30:48.849394 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 30 03:30:48.856238 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:30:48.990834 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 03:30:48.994127 (kubelet)[1671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 03:30:49.037622 kubelet[1671]: E0430 03:30:49.037508 1671 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 03:30:49.039269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 03:30:49.039503 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 03:30:50.050618 update_engine[1490]: I20250430 03:30:50.050446 1490 update_attempter.cc:509] Updating boot flags... Apr 30 03:30:50.101261 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1690) Apr 30 03:30:50.167061 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1693) Apr 30 03:30:50.206009 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1693) Apr 30 03:30:59.065470 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 30 03:30:59.072286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 03:30:59.192348 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 03:30:59.208488 (kubelet)[1710]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 03:30:59.248433 kubelet[1710]: E0430 03:30:59.248367 1710 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 03:30:59.250997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 03:30:59.251172 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 03:31:09.314736 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Apr 30 03:31:09.322773 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:31:09.468316 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:31:09.472022 (kubelet)[1726]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 03:31:09.506920 kubelet[1726]: E0430 03:31:09.506843 1726 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 03:31:09.509458 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 03:31:09.509591 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 03:31:19.564344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Apr 30 03:31:19.570889 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:31:19.715693 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:31:19.718672 (kubelet)[1742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 03:31:19.766839 kubelet[1742]: E0430 03:31:19.766755 1742 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 03:31:19.770494 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 03:31:19.770705 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 03:31:29.814729 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Apr 30 03:31:29.823216 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:31:29.959880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:31:29.963590 (kubelet)[1758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 03:31:30.021099 kubelet[1758]: E0430 03:31:30.020923 1758 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 03:31:30.025449 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 03:31:30.025631 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 03:31:40.064803 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Apr 30 03:31:40.071221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:31:40.201918 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:31:40.205099 (kubelet)[1774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 03:31:40.245783 kubelet[1774]: E0430 03:31:40.245696 1774 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 03:31:40.249433 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 03:31:40.249602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 03:31:48.351447 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 30 03:31:48.358207 systemd[1]: Started sshd@0-157.180.66.130:22-139.178.68.195:47098.service - OpenSSH per-connection server daemon (139.178.68.195:47098).
Apr 30 03:31:49.349032 sshd[1783]: Accepted publickey for core from 139.178.68.195 port 47098 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s
Apr 30 03:31:49.352543 sshd[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:31:49.369975 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 30 03:31:49.376413 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 30 03:31:49.380533 systemd-logind[1487]: New session 1 of user core.
Apr 30 03:31:49.403588 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 30 03:31:49.411418 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 30 03:31:49.430994 (systemd)[1787]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 30 03:31:49.575659 systemd[1787]: Queued start job for default target default.target.
Apr 30 03:31:49.585958 systemd[1787]: Created slice app.slice - User Application Slice.
Apr 30 03:31:49.585982 systemd[1787]: Reached target paths.target - Paths.
Apr 30 03:31:49.585992 systemd[1787]: Reached target timers.target - Timers.
Apr 30 03:31:49.587158 systemd[1787]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 30 03:31:49.610137 systemd[1787]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 30 03:31:49.610303 systemd[1787]: Reached target sockets.target - Sockets.
Apr 30 03:31:49.610325 systemd[1787]: Reached target basic.target - Basic System.
Apr 30 03:31:49.610369 systemd[1787]: Reached target default.target - Main User Target.
Apr 30 03:31:49.610402 systemd[1787]: Startup finished in 169ms.
Apr 30 03:31:49.610744 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 30 03:31:49.622166 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 30 03:31:50.313014 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Apr 30 03:31:50.316038 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:31:50.320080 systemd[1]: Started sshd@1-157.180.66.130:22-139.178.68.195:47110.service - OpenSSH per-connection server daemon (139.178.68.195:47110).
Apr 30 03:31:50.431782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:31:50.434664 (kubelet)[1807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 03:31:50.474911 kubelet[1807]: E0430 03:31:50.474827 1807 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 03:31:50.478699 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 03:31:50.478904 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 03:31:51.308581 sshd[1799]: Accepted publickey for core from 139.178.68.195 port 47110 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s
Apr 30 03:31:51.311293 sshd[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:31:51.319227 systemd-logind[1487]: New session 2 of user core.
Apr 30 03:31:51.330332 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 30 03:31:51.991270 sshd[1799]: pam_unix(sshd:session): session closed for user core
Apr 30 03:31:51.995761 systemd[1]: sshd@1-157.180.66.130:22-139.178.68.195:47110.service: Deactivated successfully.
Apr 30 03:31:51.998866 systemd[1]: session-2.scope: Deactivated successfully.
Apr 30 03:31:52.001558 systemd-logind[1487]: Session 2 logged out. Waiting for processes to exit.
Apr 30 03:31:52.003790 systemd-logind[1487]: Removed session 2.
Apr 30 03:31:52.163404 systemd[1]: Started sshd@2-157.180.66.130:22-139.178.68.195:47116.service - OpenSSH per-connection server daemon (139.178.68.195:47116).
Apr 30 03:31:53.156620 sshd[1822]: Accepted publickey for core from 139.178.68.195 port 47116 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s
Apr 30 03:31:53.159195 sshd[1822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:31:53.164904 systemd-logind[1487]: New session 3 of user core.
Apr 30 03:31:53.173168 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 30 03:31:53.831249 sshd[1822]: pam_unix(sshd:session): session closed for user core
Apr 30 03:31:53.837104 systemd[1]: sshd@2-157.180.66.130:22-139.178.68.195:47116.service: Deactivated successfully.
Apr 30 03:31:53.839902 systemd[1]: session-3.scope: Deactivated successfully.
Apr 30 03:31:53.841091 systemd-logind[1487]: Session 3 logged out. Waiting for processes to exit.
Apr 30 03:31:53.843457 systemd-logind[1487]: Removed session 3.
Apr 30 03:31:54.006418 systemd[1]: Started sshd@3-157.180.66.130:22-139.178.68.195:47120.service - OpenSSH per-connection server daemon (139.178.68.195:47120).
Apr 30 03:31:54.996086 sshd[1829]: Accepted publickey for core from 139.178.68.195 port 47120 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s
Apr 30 03:31:54.998208 sshd[1829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:31:55.005276 systemd-logind[1487]: New session 4 of user core.
Apr 30 03:31:55.014637 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 30 03:31:55.677051 sshd[1829]: pam_unix(sshd:session): session closed for user core
Apr 30 03:31:55.682788 systemd[1]: sshd@3-157.180.66.130:22-139.178.68.195:47120.service: Deactivated successfully.
Apr 30 03:31:55.685247 systemd[1]: session-4.scope: Deactivated successfully.
Apr 30 03:31:55.688513 systemd-logind[1487]: Session 4 logged out. Waiting for processes to exit.
Apr 30 03:31:55.690393 systemd-logind[1487]: Removed session 4.
Apr 30 03:31:55.855981 systemd[1]: Started sshd@4-157.180.66.130:22-139.178.68.195:60654.service - OpenSSH per-connection server daemon (139.178.68.195:60654).
Apr 30 03:31:56.840430 sshd[1836]: Accepted publickey for core from 139.178.68.195 port 60654 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s
Apr 30 03:31:56.842582 sshd[1836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:31:56.850535 systemd-logind[1487]: New session 5 of user core.
Apr 30 03:31:56.860287 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 30 03:31:57.376502 sudo[1839]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 30 03:31:57.376898 sudo[1839]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:31:57.400087 sudo[1839]: pam_unix(sudo:session): session closed for user root
Apr 30 03:31:57.560266 sshd[1836]: pam_unix(sshd:session): session closed for user core
Apr 30 03:31:57.568693 systemd[1]: sshd@4-157.180.66.130:22-139.178.68.195:60654.service: Deactivated successfully.
Apr 30 03:31:57.573640 systemd[1]: session-5.scope: Deactivated successfully.
Apr 30 03:31:57.575498 systemd-logind[1487]: Session 5 logged out. Waiting for processes to exit.
Apr 30 03:31:57.578348 systemd-logind[1487]: Removed session 5.
Apr 30 03:31:57.739409 systemd[1]: Started sshd@5-157.180.66.130:22-139.178.68.195:60658.service - OpenSSH per-connection server daemon (139.178.68.195:60658).
Apr 30 03:31:58.719615 sshd[1844]: Accepted publickey for core from 139.178.68.195 port 60658 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s
Apr 30 03:31:58.722885 sshd[1844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:31:58.733521 systemd-logind[1487]: New session 6 of user core.
Apr 30 03:31:58.746348 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 30 03:31:59.241804 sudo[1848]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 30 03:31:59.242515 sudo[1848]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:31:59.249811 sudo[1848]: pam_unix(sudo:session): session closed for user root
Apr 30 03:31:59.260770 sudo[1847]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 30 03:31:59.261529 sudo[1847]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:31:59.285456 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 30 03:31:59.299251 auditctl[1851]: No rules
Apr 30 03:31:59.299970 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 03:31:59.300377 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 30 03:31:59.308750 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 03:31:59.357619 augenrules[1869]: No rules
Apr 30 03:31:59.359902 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 30 03:31:59.361908 sudo[1847]: pam_unix(sudo:session): session closed for user root
Apr 30 03:31:59.520826 sshd[1844]: pam_unix(sshd:session): session closed for user core
Apr 30 03:31:59.525697 systemd[1]: sshd@5-157.180.66.130:22-139.178.68.195:60658.service: Deactivated successfully.
Apr 30 03:31:59.528800 systemd[1]: session-6.scope: Deactivated successfully.
Apr 30 03:31:59.531418 systemd-logind[1487]: Session 6 logged out. Waiting for processes to exit.
Apr 30 03:31:59.533300 systemd-logind[1487]: Removed session 6.
Apr 30 03:31:59.695820 systemd[1]: Started sshd@6-157.180.66.130:22-139.178.68.195:60662.service - OpenSSH per-connection server daemon (139.178.68.195:60662).
Apr 30 03:32:00.513741 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Apr 30 03:32:00.521208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:32:00.684839 sshd[1877]: Accepted publickey for core from 139.178.68.195 port 60662 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s
Apr 30 03:32:00.684599 sshd[1877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:32:00.690648 systemd-logind[1487]: New session 7 of user core.
Apr 30 03:32:00.695113 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 30 03:32:00.699782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:32:00.710873 (kubelet)[1887]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 03:32:00.778375 kubelet[1887]: E0430 03:32:00.778226 1887 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 03:32:00.782223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 03:32:00.782484 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 03:32:01.205722 sudo[1897]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 30 03:32:01.206444 sudo[1897]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 03:32:01.632123 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 30 03:32:01.641417 (dockerd)[1914]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 30 03:32:02.103511 dockerd[1914]: time="2025-04-30T03:32:02.103215106Z" level=info msg="Starting up"
Apr 30 03:32:02.261653 systemd[1]: var-lib-docker-metacopy\x2dcheck961686552-merged.mount: Deactivated successfully.
Apr 30 03:32:02.307881 dockerd[1914]: time="2025-04-30T03:32:02.307821889Z" level=info msg="Loading containers: start."
Apr 30 03:32:02.450984 kernel: Initializing XFRM netlink socket
Apr 30 03:32:02.530843 systemd-networkd[1400]: docker0: Link UP
Apr 30 03:32:02.554086 dockerd[1914]: time="2025-04-30T03:32:02.554032389Z" level=info msg="Loading containers: done."
Apr 30 03:32:02.577977 dockerd[1914]: time="2025-04-30T03:32:02.577897152Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 30 03:32:02.578207 dockerd[1914]: time="2025-04-30T03:32:02.578022297Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 30 03:32:02.578207 dockerd[1914]: time="2025-04-30T03:32:02.578106024Z" level=info msg="Daemon has completed initialization"
Apr 30 03:32:02.658716 dockerd[1914]: time="2025-04-30T03:32:02.658623622Z" level=info msg="API listen on /run/docker.sock"
Apr 30 03:32:02.659184 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 30 03:32:04.184324 containerd[1507]: time="2025-04-30T03:32:04.184271849Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
Apr 30 03:32:04.857083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount644479662.mount: Deactivated successfully.
Apr 30 03:32:06.280032 containerd[1507]: time="2025-04-30T03:32:06.279968155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:32:06.281152 containerd[1507]: time="2025-04-30T03:32:06.281112571Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674967"
Apr 30 03:32:06.282478 containerd[1507]: time="2025-04-30T03:32:06.282442476Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:32:06.286045 containerd[1507]: time="2025-04-30T03:32:06.285986503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:32:06.287043 containerd[1507]: time="2025-04-30T03:32:06.286872344Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.102561903s"
Apr 30 03:32:06.287043 containerd[1507]: time="2025-04-30T03:32:06.286898794Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\""
Apr 30 03:32:06.314777 containerd[1507]: time="2025-04-30T03:32:06.314393887Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
Apr 30 03:32:08.715957 containerd[1507]: time="2025-04-30T03:32:08.715867725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:32:08.717943 containerd[1507]: time="2025-04-30T03:32:08.717880441Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617556"
Apr 30 03:32:08.720592 containerd[1507]: time="2025-04-30T03:32:08.720546060Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:32:08.723559 containerd[1507]: time="2025-04-30T03:32:08.723520719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:32:08.725358 containerd[1507]: time="2025-04-30T03:32:08.724845885Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 2.410398759s"
Apr 30 03:32:08.725358 containerd[1507]: time="2025-04-30T03:32:08.724896620Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\""
Apr 30 03:32:08.754808 containerd[1507]: time="2025-04-30T03:32:08.754771076Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
Apr 30 03:32:10.203853 containerd[1507]: time="2025-04-30T03:32:10.203775412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:32:10.206182 containerd[1507]: time="2025-04-30T03:32:10.206074905Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903704"
Apr 30 03:32:10.208384 containerd[1507]: time="2025-04-30T03:32:10.208296481Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:32:10.215663 containerd[1507]: time="2025-04-30T03:32:10.213807889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:32:10.215663 containerd[1507]: time="2025-04-30T03:32:10.215496966Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.460413375s"
Apr 30 03:32:10.215663 containerd[1507]: time="2025-04-30T03:32:10.215541490Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\""
Apr 30 03:32:10.243539 containerd[1507]: time="2025-04-30T03:32:10.243486997Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
Apr 30 03:32:10.814389 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Apr 30 03:32:10.820625 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:32:10.949491 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:32:10.954699 (kubelet)[2140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 03:32:11.017151 kubelet[2140]: E0430 03:32:11.017018 2140 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 03:32:11.020492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 03:32:11.020666 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 03:32:11.492122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1248741439.mount: Deactivated successfully.
Apr 30 03:32:11.899739 containerd[1507]: time="2025-04-30T03:32:11.899656986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:32:11.901840 containerd[1507]: time="2025-04-30T03:32:11.901743620Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185845"
Apr 30 03:32:11.904190 containerd[1507]: time="2025-04-30T03:32:11.904095351Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:32:11.907732 containerd[1507]: time="2025-04-30T03:32:11.907675436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:32:11.908599 containerd[1507]: time="2025-04-30T03:32:11.908432095Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.664893752s"
Apr 30 03:32:11.908599 containerd[1507]: time="2025-04-30T03:32:11.908470908Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\""
Apr 30 03:32:11.936016 containerd[1507]: time="2025-04-30T03:32:11.935971420Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Apr 30 03:32:12.539081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2576728159.mount: Deactivated successfully.
Apr 30 03:32:13.522801 containerd[1507]: time="2025-04-30T03:32:13.522721226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:32:13.524747 containerd[1507]: time="2025-04-30T03:32:13.524683826Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185843"
Apr 30 03:32:13.526297 containerd[1507]: time="2025-04-30T03:32:13.526236838Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:32:13.530833 containerd[1507]: time="2025-04-30T03:32:13.530798555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:32:13.531843 containerd[1507]: time="2025-04-30T03:32:13.531811907Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.595794249s"
Apr 30 03:32:13.531894 containerd[1507]: time="2025-04-30T03:32:13.531844399Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Apr 30 03:32:13.553553 containerd[1507]: time="2025-04-30T03:32:13.553484240Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Apr 30 03:32:14.087577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1582844933.mount: Deactivated successfully.
Apr 30 03:32:14.100426 containerd[1507]: time="2025-04-30T03:32:14.100305827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:32:14.102520 containerd[1507]: time="2025-04-30T03:32:14.102425367Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322312"
Apr 30 03:32:14.106475 containerd[1507]: time="2025-04-30T03:32:14.106434091Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:32:14.112372 containerd[1507]: time="2025-04-30T03:32:14.112262581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:32:14.113685 containerd[1507]: time="2025-04-30T03:32:14.113633115Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 560.078949ms"
Apr 30 03:32:14.113777 containerd[1507]: time="2025-04-30T03:32:14.113686870Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Apr 30 03:32:14.151325 containerd[1507]: time="2025-04-30T03:32:14.151281753Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Apr 30 03:32:14.757152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount800849140.mount: Deactivated successfully.
Apr 30 03:32:18.626279 containerd[1507]: time="2025-04-30T03:32:18.626162681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:32:18.629044 containerd[1507]: time="2025-04-30T03:32:18.628948887Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238653"
Apr 30 03:32:18.631597 containerd[1507]: time="2025-04-30T03:32:18.631476793Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:32:18.641342 containerd[1507]: time="2025-04-30T03:32:18.641279057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 03:32:18.643351 containerd[1507]: time="2025-04-30T03:32:18.642952759Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.491405661s"
Apr 30 03:32:18.643351 containerd[1507]: time="2025-04-30T03:32:18.642991634Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Apr 30 03:32:21.065169 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Apr 30 03:32:21.076127 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:32:21.283164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:32:21.285834 (kubelet)[2331]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 03:32:21.332034 kubelet[2331]: E0430 03:32:21.331623 2331 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 03:32:21.334313 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 03:32:21.334454 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 03:32:22.385044 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:32:22.394130 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:32:22.439797 systemd[1]: Reloading requested from client PID 2346 ('systemctl') (unit session-7.scope)...
Apr 30 03:32:22.439814 systemd[1]: Reloading...
Apr 30 03:32:22.553010 zram_generator::config[2386]: No configuration found.
Apr 30 03:32:22.666485 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:32:22.749294 systemd[1]: Reloading finished in 309 ms.
Apr 30 03:32:22.792704 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 30 03:32:22.792789 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 30 03:32:22.793102 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:32:22.800632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:32:22.929266 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:32:22.937269 (kubelet)[2439]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 03:32:22.981184 kubelet[2439]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 03:32:22.981184 kubelet[2439]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Apr 30 03:32:22.981184 kubelet[2439]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 03:32:22.981790 kubelet[2439]: I0430 03:32:22.981210 2439 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 03:32:23.684767 kubelet[2439]: I0430 03:32:23.684687 2439 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Apr 30 03:32:23.684767 kubelet[2439]: I0430 03:32:23.684733 2439 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 30 03:32:23.685307 kubelet[2439]: I0430 03:32:23.685213 2439 server.go:927] "Client rotation is on, will bootstrap in background"
Apr 30 03:32:23.727161 kubelet[2439]: I0430 03:32:23.726037 2439 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 03:32:23.729475 kubelet[2439]: E0430 03:32:23.728530 2439 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://157.180.66.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 157.180.66.130:6443: connect: connection refused
Apr 30 03:32:23.762845 kubelet[2439]: I0430 03:32:23.762747 2439 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 03:32:23.770488 kubelet[2439]: I0430 03:32:23.770369 2439 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 30 03:32:23.770989 kubelet[2439]: I0430 03:32:23.770480 2439 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-b-f8d40824c9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Apr 30 03:32:23.771193 kubelet[2439]: I0430 03:32:23.771001 2439 topology_manager.go:138] "Creating topology manager with none policy"
Apr
30 03:32:23.771193 kubelet[2439]: I0430 03:32:23.771027 2439 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 03:32:23.771309 kubelet[2439]: I0430 03:32:23.771245 2439 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:32:23.773024 kubelet[2439]: I0430 03:32:23.772978 2439 kubelet.go:400] "Attempting to sync node with API server" Apr 30 03:32:23.773024 kubelet[2439]: I0430 03:32:23.773026 2439 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 03:32:23.773172 kubelet[2439]: I0430 03:32:23.773073 2439 kubelet.go:312] "Adding apiserver pod source" Apr 30 03:32:23.773172 kubelet[2439]: I0430 03:32:23.773110 2439 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 03:32:23.779836 kubelet[2439]: W0430 03:32:23.779137 2439 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://157.180.66.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 157.180.66.130:6443: connect: connection refused Apr 30 03:32:23.779836 kubelet[2439]: E0430 03:32:23.779257 2439 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://157.180.66.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 157.180.66.130:6443: connect: connection refused Apr 30 03:32:23.782348 kubelet[2439]: W0430 03:32:23.782111 2439 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://157.180.66.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-b-f8d40824c9&limit=500&resourceVersion=0": dial tcp 157.180.66.130:6443: connect: connection refused Apr 30 03:32:23.782348 kubelet[2439]: E0430 03:32:23.782169 2439 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://157.180.66.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-b-f8d40824c9&limit=500&resourceVersion=0": dial tcp 157.180.66.130:6443: connect: connection refused Apr 30 03:32:23.784769 kubelet[2439]: I0430 03:32:23.784734 2439 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 03:32:23.789559 kubelet[2439]: I0430 03:32:23.788114 2439 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 03:32:23.789559 kubelet[2439]: W0430 03:32:23.788230 2439 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 30 03:32:23.789559 kubelet[2439]: I0430 03:32:23.789509 2439 server.go:1264] "Started kubelet" Apr 30 03:32:23.791539 kubelet[2439]: I0430 03:32:23.791487 2439 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 03:32:23.795135 kubelet[2439]: I0430 03:32:23.793501 2439 server.go:455] "Adding debug handlers to kubelet server" Apr 30 03:32:23.798987 kubelet[2439]: I0430 03:32:23.798051 2439 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 03:32:23.799200 kubelet[2439]: I0430 03:32:23.799132 2439 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 03:32:23.799588 kubelet[2439]: I0430 03:32:23.799566 2439 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 03:32:23.800677 kubelet[2439]: E0430 03:32:23.800529 2439 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://157.180.66.130:6443/api/v1/namespaces/default/events\": dial tcp 157.180.66.130:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-b-f8d40824c9.183afb25711b75c9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-b-f8d40824c9,UID:ci-4081-3-3-b-f8d40824c9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-b-f8d40824c9,},FirstTimestamp:2025-04-30 03:32:23.789475273 +0000 UTC m=+0.849533991,LastTimestamp:2025-04-30 03:32:23.789475273 +0000 UTC m=+0.849533991,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-b-f8d40824c9,}" Apr 30 03:32:23.808505 kubelet[2439]: E0430 03:32:23.808451 2439 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-3-b-f8d40824c9\" not found" Apr 30 03:32:23.808574 kubelet[2439]: I0430 03:32:23.808551 2439 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 03:32:23.808854 kubelet[2439]: I0430 03:32:23.808748 2439 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 03:32:23.808912 kubelet[2439]: I0430 03:32:23.808864 2439 reconciler.go:26] "Reconciler: start to sync state" Apr 30 03:32:23.810700 kubelet[2439]: W0430 03:32:23.809473 2439 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://157.180.66.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.180.66.130:6443: connect: connection refused Apr 30 03:32:23.810700 kubelet[2439]: E0430 03:32:23.809557 2439 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://157.180.66.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.180.66.130:6443: connect: connection refused Apr 30 03:32:23.810700 kubelet[2439]: E0430 03:32:23.809886 2439 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://157.180.66.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-b-f8d40824c9?timeout=10s\": dial tcp 157.180.66.130:6443: connect: connection refused" interval="200ms" Apr 30 03:32:23.816180 kubelet[2439]: I0430 03:32:23.816158 2439 factory.go:221] Registration of the systemd container factory successfully Apr 30 03:32:23.816398 kubelet[2439]: I0430 03:32:23.816379 2439 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 03:32:23.817745 kubelet[2439]: E0430 03:32:23.817728 2439 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 03:32:23.818364 kubelet[2439]: I0430 03:32:23.818352 2439 factory.go:221] Registration of the containerd container factory successfully Apr 30 03:32:23.842986 kubelet[2439]: I0430 03:32:23.842915 2439 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 03:32:23.842986 kubelet[2439]: I0430 03:32:23.842973 2439 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 03:32:23.842986 kubelet[2439]: I0430 03:32:23.842992 2439 state_mem.go:36] "Initialized new in-memory state store" Apr 30 03:32:23.850698 kubelet[2439]: I0430 03:32:23.850628 2439 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 03:32:23.856002 kubelet[2439]: I0430 03:32:23.852256 2439 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 03:32:23.856002 kubelet[2439]: I0430 03:32:23.852282 2439 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 03:32:23.856002 kubelet[2439]: I0430 03:32:23.852304 2439 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 03:32:23.856002 kubelet[2439]: E0430 03:32:23.852349 2439 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 03:32:23.858140 kubelet[2439]: W0430 03:32:23.858078 2439 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://157.180.66.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.180.66.130:6443: connect: connection refused Apr 30 03:32:23.858210 kubelet[2439]: E0430 03:32:23.858153 2439 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://157.180.66.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.180.66.130:6443: connect: connection refused Apr 30 03:32:23.859741 kubelet[2439]: I0430 03:32:23.859716 2439 policy_none.go:49] "None policy: Start" Apr 30 03:32:23.861535 kubelet[2439]: I0430 03:32:23.861460 2439 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 03:32:23.861535 kubelet[2439]: I0430 03:32:23.861480 2439 state_mem.go:35] "Initializing new in-memory state store" Apr 30 03:32:23.873510 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 03:32:23.881441 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 03:32:23.884428 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 30 03:32:23.894687 kubelet[2439]: I0430 03:32:23.894659 2439 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 03:32:23.895088 kubelet[2439]: I0430 03:32:23.895049 2439 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 03:32:23.895296 kubelet[2439]: I0430 03:32:23.895281 2439 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 03:32:23.897260 kubelet[2439]: E0430 03:32:23.897177 2439 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-b-f8d40824c9\" not found" Apr 30 03:32:23.910969 kubelet[2439]: I0430 03:32:23.910893 2439 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-b-f8d40824c9" Apr 30 03:32:23.911363 kubelet[2439]: E0430 03:32:23.911323 2439 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://157.180.66.130:6443/api/v1/nodes\": dial tcp 157.180.66.130:6443: connect: connection refused" node="ci-4081-3-3-b-f8d40824c9" Apr 30 03:32:23.954980 kubelet[2439]: I0430 03:32:23.952921 2439 topology_manager.go:215] "Topology Admit Handler" podUID="8ab112eb59c9c4210d7e3645255bcbfe" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-3-b-f8d40824c9" Apr 30 03:32:23.957044 kubelet[2439]: I0430 03:32:23.956485 2439 topology_manager.go:215] "Topology Admit Handler" podUID="78b92987fd6c0b6b93238ddf79ccc03f" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-3-b-f8d40824c9" Apr 30 03:32:23.959052 kubelet[2439]: I0430 03:32:23.958998 2439 topology_manager.go:215] "Topology Admit Handler" podUID="57519a1b8082a6aae12704e1abd8078b" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-3-b-f8d40824c9" Apr 30 03:32:23.973707 systemd[1]: Created slice kubepods-burstable-pod8ab112eb59c9c4210d7e3645255bcbfe.slice - libcontainer container 
kubepods-burstable-pod8ab112eb59c9c4210d7e3645255bcbfe.slice. Apr 30 03:32:23.999299 systemd[1]: Created slice kubepods-burstable-pod78b92987fd6c0b6b93238ddf79ccc03f.slice - libcontainer container kubepods-burstable-pod78b92987fd6c0b6b93238ddf79ccc03f.slice. Apr 30 03:32:24.009511 kubelet[2439]: I0430 03:32:24.009382 2439 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8ab112eb59c9c4210d7e3645255bcbfe-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-b-f8d40824c9\" (UID: \"8ab112eb59c9c4210d7e3645255bcbfe\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-b-f8d40824c9" Apr 30 03:32:24.009511 kubelet[2439]: I0430 03:32:24.009441 2439 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57519a1b8082a6aae12704e1abd8078b-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-b-f8d40824c9\" (UID: \"57519a1b8082a6aae12704e1abd8078b\") " pod="kube-system/kube-apiserver-ci-4081-3-3-b-f8d40824c9" Apr 30 03:32:24.009511 kubelet[2439]: I0430 03:32:24.009472 2439 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57519a1b8082a6aae12704e1abd8078b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-b-f8d40824c9\" (UID: \"57519a1b8082a6aae12704e1abd8078b\") " pod="kube-system/kube-apiserver-ci-4081-3-3-b-f8d40824c9" Apr 30 03:32:24.010196 kubelet[2439]: I0430 03:32:24.009547 2439 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8ab112eb59c9c4210d7e3645255bcbfe-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-b-f8d40824c9\" (UID: \"8ab112eb59c9c4210d7e3645255bcbfe\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-b-f8d40824c9" Apr 30 03:32:24.010196 
kubelet[2439]: I0430 03:32:24.009574 2439 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8ab112eb59c9c4210d7e3645255bcbfe-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-b-f8d40824c9\" (UID: \"8ab112eb59c9c4210d7e3645255bcbfe\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-b-f8d40824c9" Apr 30 03:32:24.010196 kubelet[2439]: I0430 03:32:24.009614 2439 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8ab112eb59c9c4210d7e3645255bcbfe-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-b-f8d40824c9\" (UID: \"8ab112eb59c9c4210d7e3645255bcbfe\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-b-f8d40824c9" Apr 30 03:32:24.010196 kubelet[2439]: I0430 03:32:24.009663 2439 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8ab112eb59c9c4210d7e3645255bcbfe-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-b-f8d40824c9\" (UID: \"8ab112eb59c9c4210d7e3645255bcbfe\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-b-f8d40824c9" Apr 30 03:32:24.010196 kubelet[2439]: I0430 03:32:24.009705 2439 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78b92987fd6c0b6b93238ddf79ccc03f-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-b-f8d40824c9\" (UID: \"78b92987fd6c0b6b93238ddf79ccc03f\") " pod="kube-system/kube-scheduler-ci-4081-3-3-b-f8d40824c9" Apr 30 03:32:24.010422 kubelet[2439]: I0430 03:32:24.009772 2439 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57519a1b8082a6aae12704e1abd8078b-k8s-certs\") pod 
\"kube-apiserver-ci-4081-3-3-b-f8d40824c9\" (UID: \"57519a1b8082a6aae12704e1abd8078b\") " pod="kube-system/kube-apiserver-ci-4081-3-3-b-f8d40824c9" Apr 30 03:32:24.010762 kubelet[2439]: E0430 03:32:24.010616 2439 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.66.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-b-f8d40824c9?timeout=10s\": dial tcp 157.180.66.130:6443: connect: connection refused" interval="400ms" Apr 30 03:32:24.019786 systemd[1]: Created slice kubepods-burstable-pod57519a1b8082a6aae12704e1abd8078b.slice - libcontainer container kubepods-burstable-pod57519a1b8082a6aae12704e1abd8078b.slice. Apr 30 03:32:24.114997 kubelet[2439]: I0430 03:32:24.114917 2439 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-b-f8d40824c9" Apr 30 03:32:24.115861 kubelet[2439]: E0430 03:32:24.115762 2439 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://157.180.66.130:6443/api/v1/nodes\": dial tcp 157.180.66.130:6443: connect: connection refused" node="ci-4081-3-3-b-f8d40824c9" Apr 30 03:32:24.295569 containerd[1507]: time="2025-04-30T03:32:24.295383973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-b-f8d40824c9,Uid:8ab112eb59c9c4210d7e3645255bcbfe,Namespace:kube-system,Attempt:0,}" Apr 30 03:32:24.317356 containerd[1507]: time="2025-04-30T03:32:24.317144276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-b-f8d40824c9,Uid:78b92987fd6c0b6b93238ddf79ccc03f,Namespace:kube-system,Attempt:0,}" Apr 30 03:32:24.334000 containerd[1507]: time="2025-04-30T03:32:24.333917936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-b-f8d40824c9,Uid:57519a1b8082a6aae12704e1abd8078b,Namespace:kube-system,Attempt:0,}" Apr 30 03:32:24.411339 kubelet[2439]: E0430 03:32:24.411250 2439 controller.go:145] "Failed to ensure lease exists, 
will retry" err="Get \"https://157.180.66.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-b-f8d40824c9?timeout=10s\": dial tcp 157.180.66.130:6443: connect: connection refused" interval="800ms" Apr 30 03:32:24.518991 kubelet[2439]: I0430 03:32:24.518900 2439 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-b-f8d40824c9" Apr 30 03:32:24.520102 kubelet[2439]: E0430 03:32:24.520005 2439 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://157.180.66.130:6443/api/v1/nodes\": dial tcp 157.180.66.130:6443: connect: connection refused" node="ci-4081-3-3-b-f8d40824c9" Apr 30 03:32:24.703999 kubelet[2439]: W0430 03:32:24.703895 2439 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://157.180.66.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.180.66.130:6443: connect: connection refused Apr 30 03:32:24.704223 kubelet[2439]: E0430 03:32:24.704040 2439 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://157.180.66.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.180.66.130:6443: connect: connection refused Apr 30 03:32:24.707445 kubelet[2439]: W0430 03:32:24.707349 2439 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://157.180.66.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.180.66.130:6443: connect: connection refused Apr 30 03:32:24.707445 kubelet[2439]: E0430 03:32:24.707438 2439 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://157.180.66.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.180.66.130:6443: connect: connection refused Apr 30 
03:32:24.843946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount860659712.mount: Deactivated successfully. Apr 30 03:32:24.876049 containerd[1507]: time="2025-04-30T03:32:24.875852051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:32:24.883176 containerd[1507]: time="2025-04-30T03:32:24.883054574Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Apr 30 03:32:24.888563 kubelet[2439]: W0430 03:32:24.888464 2439 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://157.180.66.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 157.180.66.130:6443: connect: connection refused Apr 30 03:32:24.888664 kubelet[2439]: E0430 03:32:24.888569 2439 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://157.180.66.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 157.180.66.130:6443: connect: connection refused Apr 30 03:32:24.891445 containerd[1507]: time="2025-04-30T03:32:24.891373148Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:32:24.900173 containerd[1507]: time="2025-04-30T03:32:24.900111791Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:32:24.909065 containerd[1507]: time="2025-04-30T03:32:24.908986818Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:32:24.913746 containerd[1507]: time="2025-04-30T03:32:24.913651440Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:32:24.918853 containerd[1507]: time="2025-04-30T03:32:24.918777970Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 03:32:24.925325 containerd[1507]: time="2025-04-30T03:32:24.925237461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 03:32:24.928314 containerd[1507]: time="2025-04-30T03:32:24.927176068Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 631.641735ms" Apr 30 03:32:24.928412 kubelet[2439]: W0430 03:32:24.928017 2439 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://157.180.66.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-b-f8d40824c9&limit=500&resourceVersion=0": dial tcp 157.180.66.130:6443: connect: connection refused Apr 30 03:32:24.928412 kubelet[2439]: E0430 03:32:24.928104 2439 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://157.180.66.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-b-f8d40824c9&limit=500&resourceVersion=0": dial tcp 157.180.66.130:6443: connect: connection refused Apr 30 03:32:24.932343 containerd[1507]: time="2025-04-30T03:32:24.932253134Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" 
with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 614.880488ms" Apr 30 03:32:24.939673 containerd[1507]: time="2025-04-30T03:32:24.939590687Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 605.539013ms" Apr 30 03:32:25.172032 containerd[1507]: time="2025-04-30T03:32:25.171226549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:32:25.172032 containerd[1507]: time="2025-04-30T03:32:25.171303007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:32:25.172032 containerd[1507]: time="2025-04-30T03:32:25.171324698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:25.172032 containerd[1507]: time="2025-04-30T03:32:25.171424741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:25.174527 containerd[1507]: time="2025-04-30T03:32:25.174366297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:32:25.175952 containerd[1507]: time="2025-04-30T03:32:25.174615317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:32:25.175952 containerd[1507]: time="2025-04-30T03:32:25.174781697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:25.175952 containerd[1507]: time="2025-04-30T03:32:25.175256101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:25.187844 containerd[1507]: time="2025-04-30T03:32:25.187553969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:32:25.187844 containerd[1507]: time="2025-04-30T03:32:25.187603965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:32:25.187844 containerd[1507]: time="2025-04-30T03:32:25.187624113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:25.187844 containerd[1507]: time="2025-04-30T03:32:25.187704759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:25.203681 systemd[1]: Started cri-containerd-994fa2749bceb3c96269efda94b44c74addad02c95a7c9fa39ede5746f073d90.scope - libcontainer container 994fa2749bceb3c96269efda94b44c74addad02c95a7c9fa39ede5746f073d90. Apr 30 03:32:25.210119 systemd[1]: Started cri-containerd-7ac954839c37f1a9e2eda991fc25932a14090b78cfb932ce01f672c22b1b7bf1.scope - libcontainer container 7ac954839c37f1a9e2eda991fc25932a14090b78cfb932ce01f672c22b1b7bf1. Apr 30 03:32:25.213751 systemd[1]: Started cri-containerd-ba78e55a756c4f91ca1bad9b4faf2a6efd369f4458a8fb37aba3bbcb305b1625.scope - libcontainer container ba78e55a756c4f91ca1bad9b4faf2a6efd369f4458a8fb37aba3bbcb305b1625. 
Apr 30 03:32:25.215990 kubelet[2439]: E0430 03:32:25.215679 2439 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.66.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-b-f8d40824c9?timeout=10s\": dial tcp 157.180.66.130:6443: connect: connection refused" interval="1.6s"
Apr 30 03:32:25.257296 containerd[1507]: time="2025-04-30T03:32:25.257258007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-b-f8d40824c9,Uid:78b92987fd6c0b6b93238ddf79ccc03f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ac954839c37f1a9e2eda991fc25932a14090b78cfb932ce01f672c22b1b7bf1\""
Apr 30 03:32:25.272674 containerd[1507]: time="2025-04-30T03:32:25.272627861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-b-f8d40824c9,Uid:8ab112eb59c9c4210d7e3645255bcbfe,Namespace:kube-system,Attempt:0,} returns sandbox id \"994fa2749bceb3c96269efda94b44c74addad02c95a7c9fa39ede5746f073d90\""
Apr 30 03:32:25.278759 containerd[1507]: time="2025-04-30T03:32:25.278709133Z" level=info msg="CreateContainer within sandbox \"7ac954839c37f1a9e2eda991fc25932a14090b78cfb932ce01f672c22b1b7bf1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 30 03:32:25.279623 containerd[1507]: time="2025-04-30T03:32:25.279368344Z" level=info msg="CreateContainer within sandbox \"994fa2749bceb3c96269efda94b44c74addad02c95a7c9fa39ede5746f073d90\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 30 03:32:25.291363 containerd[1507]: time="2025-04-30T03:32:25.291324332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-b-f8d40824c9,Uid:57519a1b8082a6aae12704e1abd8078b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba78e55a756c4f91ca1bad9b4faf2a6efd369f4458a8fb37aba3bbcb305b1625\""
Apr 30 03:32:25.294839 containerd[1507]: time="2025-04-30T03:32:25.294820637Z" level=info msg="CreateContainer within sandbox \"ba78e55a756c4f91ca1bad9b4faf2a6efd369f4458a8fb37aba3bbcb305b1625\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 30 03:32:25.322437 kubelet[2439]: I0430 03:32:25.322404 2439 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-b-f8d40824c9"
Apr 30 03:32:25.322806 kubelet[2439]: E0430 03:32:25.322753 2439 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://157.180.66.130:6443/api/v1/nodes\": dial tcp 157.180.66.130:6443: connect: connection refused" node="ci-4081-3-3-b-f8d40824c9"
Apr 30 03:32:25.330543 containerd[1507]: time="2025-04-30T03:32:25.330405958Z" level=info msg="CreateContainer within sandbox \"7ac954839c37f1a9e2eda991fc25932a14090b78cfb932ce01f672c22b1b7bf1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dce657e9fe74050eb29b2c3a8e24ae12a76b2b17947c4761678492c33ef70a60\""
Apr 30 03:32:25.331410 containerd[1507]: time="2025-04-30T03:32:25.331314829Z" level=info msg="StartContainer for \"dce657e9fe74050eb29b2c3a8e24ae12a76b2b17947c4761678492c33ef70a60\""
Apr 30 03:32:25.337496 containerd[1507]: time="2025-04-30T03:32:25.337426080Z" level=info msg="CreateContainer within sandbox \"994fa2749bceb3c96269efda94b44c74addad02c95a7c9fa39ede5746f073d90\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0decd15a8a7360a9f043ec3030a43178890f1091c4a185999211efcbc9157c52\""
Apr 30 03:32:25.338839 containerd[1507]: time="2025-04-30T03:32:25.338005096Z" level=info msg="StartContainer for \"0decd15a8a7360a9f043ec3030a43178890f1091c4a185999211efcbc9157c52\""
Apr 30 03:32:25.342740 containerd[1507]: time="2025-04-30T03:32:25.342710539Z" level=info msg="CreateContainer within sandbox \"ba78e55a756c4f91ca1bad9b4faf2a6efd369f4458a8fb37aba3bbcb305b1625\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"00f1613831718c6b12b179acdad5be46df514590b7865cd8607eb20a104b852c\""
Apr 30 03:32:25.343513 containerd[1507]: time="2025-04-30T03:32:25.343465994Z" level=info msg="StartContainer for \"00f1613831718c6b12b179acdad5be46df514590b7865cd8607eb20a104b852c\""
Apr 30 03:32:25.360137 systemd[1]: Started cri-containerd-dce657e9fe74050eb29b2c3a8e24ae12a76b2b17947c4761678492c33ef70a60.scope - libcontainer container dce657e9fe74050eb29b2c3a8e24ae12a76b2b17947c4761678492c33ef70a60.
Apr 30 03:32:25.376091 systemd[1]: Started cri-containerd-00f1613831718c6b12b179acdad5be46df514590b7865cd8607eb20a104b852c.scope - libcontainer container 00f1613831718c6b12b179acdad5be46df514590b7865cd8607eb20a104b852c.
Apr 30 03:32:25.379205 systemd[1]: Started cri-containerd-0decd15a8a7360a9f043ec3030a43178890f1091c4a185999211efcbc9157c52.scope - libcontainer container 0decd15a8a7360a9f043ec3030a43178890f1091c4a185999211efcbc9157c52.
Apr 30 03:32:25.419743 containerd[1507]: time="2025-04-30T03:32:25.419616969Z" level=info msg="StartContainer for \"dce657e9fe74050eb29b2c3a8e24ae12a76b2b17947c4761678492c33ef70a60\" returns successfully"
Apr 30 03:32:25.441396 containerd[1507]: time="2025-04-30T03:32:25.441286928Z" level=info msg="StartContainer for \"0decd15a8a7360a9f043ec3030a43178890f1091c4a185999211efcbc9157c52\" returns successfully"
Apr 30 03:32:25.457823 containerd[1507]: time="2025-04-30T03:32:25.457772923Z" level=info msg="StartContainer for \"00f1613831718c6b12b179acdad5be46df514590b7865cd8607eb20a104b852c\" returns successfully"
Apr 30 03:32:26.928192 kubelet[2439]: I0430 03:32:26.927492 2439 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-b-f8d40824c9"
Apr 30 03:32:27.220718 kubelet[2439]: E0430 03:32:27.220548 2439 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-3-b-f8d40824c9\" not found" node="ci-4081-3-3-b-f8d40824c9"
Apr 30 03:32:27.328554 kubelet[2439]: I0430 03:32:27.328351 2439 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-3-b-f8d40824c9"
Apr 30 03:32:27.339741 kubelet[2439]: E0430 03:32:27.339708 2439 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-3-b-f8d40824c9\" not found"
Apr 30 03:32:27.441079 kubelet[2439]: E0430 03:32:27.440960 2439 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-3-b-f8d40824c9\" not found"
Apr 30 03:32:27.541801 kubelet[2439]: E0430 03:32:27.541584 2439 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-3-b-f8d40824c9\" not found"
Apr 30 03:32:27.642473 kubelet[2439]: E0430 03:32:27.642374 2439 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-3-b-f8d40824c9\" not found"
Apr 30 03:32:27.743600 kubelet[2439]: E0430 03:32:27.743543 2439 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-3-b-f8d40824c9\" not found"
Apr 30 03:32:27.844723 kubelet[2439]: E0430 03:32:27.844662 2439 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-3-b-f8d40824c9\" not found"
Apr 30 03:32:27.945878 kubelet[2439]: E0430 03:32:27.945796 2439 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-3-b-f8d40824c9\" not found"
Apr 30 03:32:28.046498 kubelet[2439]: E0430 03:32:28.046414 2439 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-3-b-f8d40824c9\" not found"
Apr 30 03:32:28.779923 kubelet[2439]: I0430 03:32:28.779875 2439 apiserver.go:52] "Watching apiserver"
Apr 30 03:32:28.809960 kubelet[2439]: I0430 03:32:28.809892 2439 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Apr 30 03:32:29.405310 systemd[1]: Reloading requested from client PID 2716 ('systemctl') (unit session-7.scope)...
Apr 30 03:32:29.405346 systemd[1]: Reloading...
Apr 30 03:32:29.556031 zram_generator::config[2756]: No configuration found.
Apr 30 03:32:29.680249 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 03:32:29.774340 systemd[1]: Reloading finished in 368 ms.
Apr 30 03:32:29.813842 kubelet[2439]: E0430 03:32:29.812349 2439 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4081-3-3-b-f8d40824c9.183afb25711b75c9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-b-f8d40824c9,UID:ci-4081-3-3-b-f8d40824c9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-b-f8d40824c9,},FirstTimestamp:2025-04-30 03:32:23.789475273 +0000 UTC m=+0.849533991,LastTimestamp:2025-04-30 03:32:23.789475273 +0000 UTC m=+0.849533991,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-b-f8d40824c9,}"
Apr 30 03:32:29.813842 kubelet[2439]: I0430 03:32:29.813307 2439 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 03:32:29.812607 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:32:29.824124 systemd[1]: kubelet.service: Deactivated successfully.
Apr 30 03:32:29.824416 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:32:29.824488 systemd[1]: kubelet.service: Consumed 1.312s CPU time, 110.4M memory peak, 0B memory swap peak.
Apr 30 03:32:29.829271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 03:32:29.939777 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 03:32:29.950351 (kubelet)[2807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 03:32:30.016405 kubelet[2807]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 03:32:30.016802 kubelet[2807]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Apr 30 03:32:30.016802 kubelet[2807]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 03:32:30.019432 kubelet[2807]: I0430 03:32:30.018205 2807 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 03:32:30.023567 kubelet[2807]: I0430 03:32:30.023530 2807 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Apr 30 03:32:30.023567 kubelet[2807]: I0430 03:32:30.023547 2807 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 30 03:32:30.023761 kubelet[2807]: I0430 03:32:30.023697 2807 server.go:927] "Client rotation is on, will bootstrap in background"
Apr 30 03:32:30.025007 kubelet[2807]: I0430 03:32:30.024965 2807 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Apr 30 03:32:30.028054 kubelet[2807]: I0430 03:32:30.027745 2807 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 03:32:30.034320 kubelet[2807]: I0430 03:32:30.034289 2807 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 03:32:30.034482 kubelet[2807]: I0430 03:32:30.034436 2807 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 30 03:32:30.034610 kubelet[2807]: I0430 03:32:30.034456 2807 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-b-f8d40824c9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Apr 30 03:32:30.034610 kubelet[2807]: I0430 03:32:30.034604 2807 topology_manager.go:138] "Creating topology manager with none policy"
Apr 30 03:32:30.034610 kubelet[2807]: I0430 03:32:30.034613 2807 container_manager_linux.go:301] "Creating device plugin manager"
Apr 30 03:32:30.034875 kubelet[2807]: I0430 03:32:30.034649 2807 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 03:32:30.034875 kubelet[2807]: I0430 03:32:30.034751 2807 kubelet.go:400] "Attempting to sync node with API server"
Apr 30 03:32:30.034875 kubelet[2807]: I0430 03:32:30.034764 2807 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 30 03:32:30.034875 kubelet[2807]: I0430 03:32:30.034781 2807 kubelet.go:312] "Adding apiserver pod source"
Apr 30 03:32:30.037183 kubelet[2807]: I0430 03:32:30.036017 2807 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 30 03:32:30.039283 kubelet[2807]: I0430 03:32:30.039235 2807 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 30 03:32:30.039440 kubelet[2807]: I0430 03:32:30.039416 2807 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 30 03:32:30.040147 kubelet[2807]: I0430 03:32:30.040094 2807 server.go:1264] "Started kubelet"
Apr 30 03:32:30.040739 kubelet[2807]: I0430 03:32:30.040637 2807 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Apr 30 03:32:30.041555 kubelet[2807]: I0430 03:32:30.041434 2807 server.go:455] "Adding debug handlers to kubelet server"
Apr 30 03:32:30.047739 kubelet[2807]: I0430 03:32:30.045221 2807 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 30 03:32:30.047739 kubelet[2807]: I0430 03:32:30.046892 2807 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 30 03:32:30.048427 kubelet[2807]: I0430 03:32:30.048409 2807 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 30 03:32:30.057007 kubelet[2807]: E0430 03:32:30.056277 2807 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 30 03:32:30.058741 kubelet[2807]: I0430 03:32:30.058705 2807 volume_manager.go:291] "Starting Kubelet Volume Manager"
Apr 30 03:32:30.061107 kubelet[2807]: I0430 03:32:30.061015 2807 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Apr 30 03:32:30.061206 kubelet[2807]: I0430 03:32:30.061134 2807 reconciler.go:26] "Reconciler: start to sync state"
Apr 30 03:32:30.066362 kubelet[2807]: I0430 03:32:30.066319 2807 factory.go:221] Registration of the containerd container factory successfully
Apr 30 03:32:30.066362 kubelet[2807]: I0430 03:32:30.066337 2807 factory.go:221] Registration of the systemd container factory successfully
Apr 30 03:32:30.066681 kubelet[2807]: I0430 03:32:30.066396 2807 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 30 03:32:30.070854 kubelet[2807]: I0430 03:32:30.070225 2807 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Apr 30 03:32:30.071751 kubelet[2807]: I0430 03:32:30.071691 2807 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Apr 30 03:32:30.071751 kubelet[2807]: I0430 03:32:30.071726 2807 status_manager.go:217] "Starting to sync pod status with apiserver"
Apr 30 03:32:30.071751 kubelet[2807]: I0430 03:32:30.071747 2807 kubelet.go:2337] "Starting kubelet main sync loop"
Apr 30 03:32:30.071847 kubelet[2807]: E0430 03:32:30.071783 2807 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 30 03:32:30.117614 kubelet[2807]: I0430 03:32:30.117580 2807 cpu_manager.go:214] "Starting CPU manager" policy="none"
Apr 30 03:32:30.117614 kubelet[2807]: I0430 03:32:30.117596 2807 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Apr 30 03:32:30.117614 kubelet[2807]: I0430 03:32:30.117611 2807 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 03:32:30.117780 kubelet[2807]: I0430 03:32:30.117730 2807 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 30 03:32:30.117780 kubelet[2807]: I0430 03:32:30.117738 2807 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 30 03:32:30.117780 kubelet[2807]: I0430 03:32:30.117754 2807 policy_none.go:49] "None policy: Start"
Apr 30 03:32:30.118430 kubelet[2807]: I0430 03:32:30.118406 2807 memory_manager.go:170] "Starting memorymanager" policy="None"
Apr 30 03:32:30.118430 kubelet[2807]: I0430 03:32:30.118426 2807 state_mem.go:35] "Initializing new in-memory state store"
Apr 30 03:32:30.118548 kubelet[2807]: I0430 03:32:30.118524 2807 state_mem.go:75] "Updated machine memory state"
Apr 30 03:32:30.123480 kubelet[2807]: I0430 03:32:30.123442 2807 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Apr 30 03:32:30.123745 kubelet[2807]: I0430 03:32:30.123595 2807 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 30 03:32:30.123871 kubelet[2807]: I0430 03:32:30.123856 2807 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 30 03:32:30.162481 kubelet[2807]: I0430 03:32:30.162418 2807 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-b-f8d40824c9"
Apr 30 03:32:30.171919 kubelet[2807]: I0430 03:32:30.171870 2807 topology_manager.go:215] "Topology Admit Handler" podUID="57519a1b8082a6aae12704e1abd8078b" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-3-b-f8d40824c9"
Apr 30 03:32:30.172059 kubelet[2807]: I0430 03:32:30.171967 2807 topology_manager.go:215] "Topology Admit Handler" podUID="8ab112eb59c9c4210d7e3645255bcbfe" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-3-b-f8d40824c9"
Apr 30 03:32:30.172059 kubelet[2807]: I0430 03:32:30.172016 2807 topology_manager.go:215] "Topology Admit Handler" podUID="78b92987fd6c0b6b93238ddf79ccc03f" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-3-b-f8d40824c9"
Apr 30 03:32:30.174972 kubelet[2807]: I0430 03:32:30.174312 2807 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-3-b-f8d40824c9"
Apr 30 03:32:30.174972 kubelet[2807]: I0430 03:32:30.174369 2807 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-3-b-f8d40824c9"
Apr 30 03:32:30.183868 kubelet[2807]: E0430 03:32:30.183821 2807 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-3-b-f8d40824c9\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-3-b-f8d40824c9"
Apr 30 03:32:30.184008 kubelet[2807]: E0430 03:32:30.183985 2807 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-3-3-b-f8d40824c9\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-3-b-f8d40824c9"
Apr 30 03:32:30.262250 kubelet[2807]: I0430 03:32:30.262049 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8ab112eb59c9c4210d7e3645255bcbfe-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-b-f8d40824c9\" (UID: \"8ab112eb59c9c4210d7e3645255bcbfe\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-b-f8d40824c9"
Apr 30 03:32:30.362936 kubelet[2807]: I0430 03:32:30.362856 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8ab112eb59c9c4210d7e3645255bcbfe-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-b-f8d40824c9\" (UID: \"8ab112eb59c9c4210d7e3645255bcbfe\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-b-f8d40824c9"
Apr 30 03:32:30.363069 kubelet[2807]: I0430 03:32:30.362953 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78b92987fd6c0b6b93238ddf79ccc03f-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-b-f8d40824c9\" (UID: \"78b92987fd6c0b6b93238ddf79ccc03f\") " pod="kube-system/kube-scheduler-ci-4081-3-3-b-f8d40824c9"
Apr 30 03:32:30.363069 kubelet[2807]: I0430 03:32:30.362979 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57519a1b8082a6aae12704e1abd8078b-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-b-f8d40824c9\" (UID: \"57519a1b8082a6aae12704e1abd8078b\") " pod="kube-system/kube-apiserver-ci-4081-3-3-b-f8d40824c9"
Apr 30 03:32:30.363069 kubelet[2807]: I0430 03:32:30.363000 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57519a1b8082a6aae12704e1abd8078b-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-b-f8d40824c9\" (UID: \"57519a1b8082a6aae12704e1abd8078b\") " pod="kube-system/kube-apiserver-ci-4081-3-3-b-f8d40824c9"
Apr 30 03:32:30.363069 kubelet[2807]: I0430 03:32:30.363024 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57519a1b8082a6aae12704e1abd8078b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-b-f8d40824c9\" (UID: \"57519a1b8082a6aae12704e1abd8078b\") " pod="kube-system/kube-apiserver-ci-4081-3-3-b-f8d40824c9"
Apr 30 03:32:30.363207 kubelet[2807]: I0430 03:32:30.363069 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8ab112eb59c9c4210d7e3645255bcbfe-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-b-f8d40824c9\" (UID: \"8ab112eb59c9c4210d7e3645255bcbfe\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-b-f8d40824c9"
Apr 30 03:32:30.363207 kubelet[2807]: I0430 03:32:30.363100 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8ab112eb59c9c4210d7e3645255bcbfe-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-b-f8d40824c9\" (UID: \"8ab112eb59c9c4210d7e3645255bcbfe\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-b-f8d40824c9"
Apr 30 03:32:30.363207 kubelet[2807]: I0430 03:32:30.363148 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8ab112eb59c9c4210d7e3645255bcbfe-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-b-f8d40824c9\" (UID: \"8ab112eb59c9c4210d7e3645255bcbfe\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-b-f8d40824c9"
Apr 30 03:32:30.411703 sudo[2838]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Apr 30 03:32:30.412264 sudo[2838]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Apr 30 03:32:30.966479 sudo[2838]: pam_unix(sudo:session): session closed for user root
Apr 30 03:32:31.036755 kubelet[2807]: I0430 03:32:31.036694 2807 apiserver.go:52] "Watching apiserver"
Apr 30 03:32:31.062013 kubelet[2807]: I0430 03:32:31.061952 2807 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Apr 30 03:32:31.142986 kubelet[2807]: I0430 03:32:31.141826 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-b-f8d40824c9" podStartSLOduration=3.1418056 podStartE2EDuration="3.1418056s" podCreationTimestamp="2025-04-30 03:32:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:32:31.128448602 +0000 UTC m=+1.170195477" watchObservedRunningTime="2025-04-30 03:32:31.1418056 +0000 UTC m=+1.183552475"
Apr 30 03:32:31.154544 kubelet[2807]: I0430 03:32:31.154383 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-b-f8d40824c9" podStartSLOduration=3.154340642 podStartE2EDuration="3.154340642s" podCreationTimestamp="2025-04-30 03:32:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:32:31.142049589 +0000 UTC m=+1.183796464" watchObservedRunningTime="2025-04-30 03:32:31.154340642 +0000 UTC m=+1.196087527"
Apr 30 03:32:31.168093 kubelet[2807]: I0430 03:32:31.167678 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-b-f8d40824c9" podStartSLOduration=1.167656421 podStartE2EDuration="1.167656421s" podCreationTimestamp="2025-04-30 03:32:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:32:31.15503363 +0000 UTC m=+1.196780506" watchObservedRunningTime="2025-04-30 03:32:31.167656421 +0000 UTC m=+1.209403306"
Apr 30 03:32:32.806506 sudo[1897]: pam_unix(sudo:session): session closed for user root
Apr 30 03:32:32.966134 sshd[1877]: pam_unix(sshd:session): session closed for user core
Apr 30 03:32:32.973381 systemd[1]: sshd@6-157.180.66.130:22-139.178.68.195:60662.service: Deactivated successfully.
Apr 30 03:32:32.976399 systemd[1]: session-7.scope: Deactivated successfully.
Apr 30 03:32:32.976658 systemd[1]: session-7.scope: Consumed 6.457s CPU time, 190.1M memory peak, 0B memory swap peak.
Apr 30 03:32:32.978189 systemd-logind[1487]: Session 7 logged out. Waiting for processes to exit.
Apr 30 03:32:32.980807 systemd-logind[1487]: Removed session 7.
Apr 30 03:32:44.489137 kubelet[2807]: I0430 03:32:44.489104 2807 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 30 03:32:44.489533 containerd[1507]: time="2025-04-30T03:32:44.489499124Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 30 03:32:44.489708 kubelet[2807]: I0430 03:32:44.489616 2807 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 30 03:32:45.489488 kubelet[2807]: I0430 03:32:45.485845 2807 topology_manager.go:215] "Topology Admit Handler" podUID="c248d004-745e-4042-8043-dd144de849c5" podNamespace="kube-system" podName="cilium-q4d8m"
Apr 30 03:32:45.490702 kubelet[2807]: I0430 03:32:45.490646 2807 topology_manager.go:215] "Topology Admit Handler" podUID="4cdce5db-5420-40e0-9424-9e6e89b48cf6" podNamespace="kube-system" podName="kube-proxy-t96t6"
Apr 30 03:32:45.504005 systemd[1]: Created slice kubepods-burstable-podc248d004_745e_4042_8043_dd144de849c5.slice - libcontainer container kubepods-burstable-podc248d004_745e_4042_8043_dd144de849c5.slice.
Apr 30 03:32:45.514886 systemd[1]: Created slice kubepods-besteffort-pod4cdce5db_5420_40e0_9424_9e6e89b48cf6.slice - libcontainer container kubepods-besteffort-pod4cdce5db_5420_40e0_9424_9e6e89b48cf6.slice.
Apr 30 03:32:45.556821 kubelet[2807]: I0430 03:32:45.556749 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4cdce5db-5420-40e0-9424-9e6e89b48cf6-xtables-lock\") pod \"kube-proxy-t96t6\" (UID: \"4cdce5db-5420-40e0-9424-9e6e89b48cf6\") " pod="kube-system/kube-proxy-t96t6"
Apr 30 03:32:45.556821 kubelet[2807]: I0430 03:32:45.556838 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-cni-path\") pod \"cilium-q4d8m\" (UID: \"c248d004-745e-4042-8043-dd144de849c5\") " pod="kube-system/cilium-q4d8m"
Apr 30 03:32:45.556821 kubelet[2807]: I0430 03:32:45.556872 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4cdce5db-5420-40e0-9424-9e6e89b48cf6-lib-modules\") pod \"kube-proxy-t96t6\" (UID: \"4cdce5db-5420-40e0-9424-9e6e89b48cf6\") " pod="kube-system/kube-proxy-t96t6"
Apr 30 03:32:45.557245 kubelet[2807]: I0430 03:32:45.556908 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-bpf-maps\") pod \"cilium-q4d8m\" (UID: \"c248d004-745e-4042-8043-dd144de849c5\") " pod="kube-system/cilium-q4d8m"
Apr 30 03:32:45.557245 kubelet[2807]: I0430 03:32:45.556955 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c248d004-745e-4042-8043-dd144de849c5-cilium-config-path\") pod \"cilium-q4d8m\" (UID: \"c248d004-745e-4042-8043-dd144de849c5\") " pod="kube-system/cilium-q4d8m"
Apr 30 03:32:45.557245 kubelet[2807]: I0430 03:32:45.556970 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c248d004-745e-4042-8043-dd144de849c5-hubble-tls\") pod \"cilium-q4d8m\" (UID: \"c248d004-745e-4042-8043-dd144de849c5\") " pod="kube-system/cilium-q4d8m"
Apr 30 03:32:45.557245 kubelet[2807]: I0430 03:32:45.556982 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-etc-cni-netd\") pod \"cilium-q4d8m\" (UID: \"c248d004-745e-4042-8043-dd144de849c5\") " pod="kube-system/cilium-q4d8m"
Apr 30 03:32:45.557245 kubelet[2807]: I0430 03:32:45.556996 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-host-proc-sys-kernel\") pod \"cilium-q4d8m\" (UID: \"c248d004-745e-4042-8043-dd144de849c5\") " pod="kube-system/cilium-q4d8m"
Apr 30 03:32:45.557245 kubelet[2807]: I0430 03:32:45.557017 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tww75\" (UniqueName: \"kubernetes.io/projected/c248d004-745e-4042-8043-dd144de849c5-kube-api-access-tww75\") pod \"cilium-q4d8m\" (UID: \"c248d004-745e-4042-8043-dd144de849c5\") " pod="kube-system/cilium-q4d8m"
Apr 30 03:32:45.557487 kubelet[2807]: I0430 03:32:45.557039 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-hostproc\") pod \"cilium-q4d8m\" (UID: \"c248d004-745e-4042-8043-dd144de849c5\") " pod="kube-system/cilium-q4d8m"
Apr 30 03:32:45.557487 kubelet[2807]: I0430 03:32:45.557056 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-lib-modules\") pod \"cilium-q4d8m\" (UID: \"c248d004-745e-4042-8043-dd144de849c5\") " pod="kube-system/cilium-q4d8m"
Apr 30 03:32:45.557487 kubelet[2807]: I0430 03:32:45.557071 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c248d004-745e-4042-8043-dd144de849c5-clustermesh-secrets\") pod \"cilium-q4d8m\" (UID: \"c248d004-745e-4042-8043-dd144de849c5\") " pod="kube-system/cilium-q4d8m"
Apr 30 03:32:45.557487 kubelet[2807]: I0430 03:32:45.557084 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-cilium-cgroup\") pod \"cilium-q4d8m\" (UID: \"c248d004-745e-4042-8043-dd144de849c5\") " pod="kube-system/cilium-q4d8m"
Apr 30 03:32:45.557487 kubelet[2807]: I0430 03:32:45.557096 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-xtables-lock\") pod \"cilium-q4d8m\" (UID: \"c248d004-745e-4042-8043-dd144de849c5\") " pod="kube-system/cilium-q4d8m"
Apr 30 03:32:45.557487 kubelet[2807]: I0430 03:32:45.557109 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4cdce5db-5420-40e0-9424-9e6e89b48cf6-kube-proxy\") pod \"kube-proxy-t96t6\" (UID: \"4cdce5db-5420-40e0-9424-9e6e89b48cf6\") " pod="kube-system/kube-proxy-t96t6"
Apr 30 03:32:45.557762 kubelet[2807]: I0430 03:32:45.557122 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt2x5\" (UniqueName: \"kubernetes.io/projected/4cdce5db-5420-40e0-9424-9e6e89b48cf6-kube-api-access-qt2x5\") pod \"kube-proxy-t96t6\" (UID: \"4cdce5db-5420-40e0-9424-9e6e89b48cf6\") " pod="kube-system/kube-proxy-t96t6"
Apr 30 03:32:45.557762 kubelet[2807]: I0430 03:32:45.557135 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-cilium-run\") pod \"cilium-q4d8m\" (UID: \"c248d004-745e-4042-8043-dd144de849c5\") " pod="kube-system/cilium-q4d8m"
Apr 30 03:32:45.557762 kubelet[2807]: I0430 03:32:45.557147 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-host-proc-sys-net\") pod \"cilium-q4d8m\" (UID: \"c248d004-745e-4042-8043-dd144de849c5\") " pod="kube-system/cilium-q4d8m"
Apr 30 03:32:45.568877 kubelet[2807]: I0430 03:32:45.568818 2807 topology_manager.go:215] "Topology Admit Handler" podUID="94e027fd-84c6-489d-954b-6ae05b7d5370" podNamespace="kube-system" podName="cilium-operator-599987898-8jzbt"
Apr 30 03:32:45.577898 systemd[1]: Created slice kubepods-besteffort-pod94e027fd_84c6_489d_954b_6ae05b7d5370.slice - libcontainer container kubepods-besteffort-pod94e027fd_84c6_489d_954b_6ae05b7d5370.slice.
Apr 30 03:32:45.658297 kubelet[2807]: I0430 03:32:45.658231 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjvlv\" (UniqueName: \"kubernetes.io/projected/94e027fd-84c6-489d-954b-6ae05b7d5370-kube-api-access-kjvlv\") pod \"cilium-operator-599987898-8jzbt\" (UID: \"94e027fd-84c6-489d-954b-6ae05b7d5370\") " pod="kube-system/cilium-operator-599987898-8jzbt" Apr 30 03:32:45.658464 kubelet[2807]: I0430 03:32:45.658408 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94e027fd-84c6-489d-954b-6ae05b7d5370-cilium-config-path\") pod \"cilium-operator-599987898-8jzbt\" (UID: \"94e027fd-84c6-489d-954b-6ae05b7d5370\") " pod="kube-system/cilium-operator-599987898-8jzbt" Apr 30 03:32:45.814418 containerd[1507]: time="2025-04-30T03:32:45.813175464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q4d8m,Uid:c248d004-745e-4042-8043-dd144de849c5,Namespace:kube-system,Attempt:0,}" Apr 30 03:32:45.823118 containerd[1507]: time="2025-04-30T03:32:45.823055546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t96t6,Uid:4cdce5db-5420-40e0-9424-9e6e89b48cf6,Namespace:kube-system,Attempt:0,}" Apr 30 03:32:45.848254 containerd[1507]: time="2025-04-30T03:32:45.847705841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:32:45.848254 containerd[1507]: time="2025-04-30T03:32:45.847773340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:32:45.848254 containerd[1507]: time="2025-04-30T03:32:45.847802355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:45.848254 containerd[1507]: time="2025-04-30T03:32:45.847902546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:45.880206 systemd[1]: Started cri-containerd-94571f2ad1f01876c090fa8dca61476d4094dd22581389f4530b6cfc0a42a2c8.scope - libcontainer container 94571f2ad1f01876c090fa8dca61476d4094dd22581389f4530b6cfc0a42a2c8. Apr 30 03:32:45.882620 containerd[1507]: time="2025-04-30T03:32:45.882210729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:32:45.883461 containerd[1507]: time="2025-04-30T03:32:45.882521331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:32:45.883461 containerd[1507]: time="2025-04-30T03:32:45.883026825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-8jzbt,Uid:94e027fd-84c6-489d-954b-6ae05b7d5370,Namespace:kube-system,Attempt:0,}" Apr 30 03:32:45.883600 containerd[1507]: time="2025-04-30T03:32:45.883177511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:45.883600 containerd[1507]: time="2025-04-30T03:32:45.883357214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:45.915157 systemd[1]: Started cri-containerd-074ff1bc8b59994a0d901a948cf420d0fd1a5976cec96df78fce6ba2a68da394.scope - libcontainer container 074ff1bc8b59994a0d901a948cf420d0fd1a5976cec96df78fce6ba2a68da394. Apr 30 03:32:45.919672 containerd[1507]: time="2025-04-30T03:32:45.919519962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:32:45.919913 containerd[1507]: time="2025-04-30T03:32:45.919863897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:32:45.920059 containerd[1507]: time="2025-04-30T03:32:45.920032129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:45.920447 containerd[1507]: time="2025-04-30T03:32:45.920414597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:32:45.939573 containerd[1507]: time="2025-04-30T03:32:45.939507459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q4d8m,Uid:c248d004-745e-4042-8043-dd144de849c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"94571f2ad1f01876c090fa8dca61476d4094dd22581389f4530b6cfc0a42a2c8\"" Apr 30 03:32:45.949115 systemd[1]: Started cri-containerd-cb8e3d9fd0f86b308c19985afc3c01cd560a363f005bee567ee63db544993fa3.scope - libcontainer container cb8e3d9fd0f86b308c19985afc3c01cd560a363f005bee567ee63db544993fa3. 
Apr 30 03:32:45.955719 containerd[1507]: time="2025-04-30T03:32:45.955517134Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 30 03:32:45.977845 containerd[1507]: time="2025-04-30T03:32:45.977794836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t96t6,Uid:4cdce5db-5420-40e0-9424-9e6e89b48cf6,Namespace:kube-system,Attempt:0,} returns sandbox id \"074ff1bc8b59994a0d901a948cf420d0fd1a5976cec96df78fce6ba2a68da394\"" Apr 30 03:32:45.986874 containerd[1507]: time="2025-04-30T03:32:45.986672448Z" level=info msg="CreateContainer within sandbox \"074ff1bc8b59994a0d901a948cf420d0fd1a5976cec96df78fce6ba2a68da394\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 03:32:46.001974 containerd[1507]: time="2025-04-30T03:32:46.001713015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-8jzbt,Uid:94e027fd-84c6-489d-954b-6ae05b7d5370,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb8e3d9fd0f86b308c19985afc3c01cd560a363f005bee567ee63db544993fa3\"" Apr 30 03:32:46.007822 containerd[1507]: time="2025-04-30T03:32:46.007742373Z" level=info msg="CreateContainer within sandbox \"074ff1bc8b59994a0d901a948cf420d0fd1a5976cec96df78fce6ba2a68da394\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"75e4fe0731f6f75b73d40f3aa261126d94774bd1b7d6c09cbe08fc8dace97e86\"" Apr 30 03:32:46.008500 containerd[1507]: time="2025-04-30T03:32:46.008451434Z" level=info msg="StartContainer for \"75e4fe0731f6f75b73d40f3aa261126d94774bd1b7d6c09cbe08fc8dace97e86\"" Apr 30 03:32:46.033081 systemd[1]: Started cri-containerd-75e4fe0731f6f75b73d40f3aa261126d94774bd1b7d6c09cbe08fc8dace97e86.scope - libcontainer container 75e4fe0731f6f75b73d40f3aa261126d94774bd1b7d6c09cbe08fc8dace97e86. 
Apr 30 03:32:46.060205 containerd[1507]: time="2025-04-30T03:32:46.060149076Z" level=info msg="StartContainer for \"75e4fe0731f6f75b73d40f3aa261126d94774bd1b7d6c09cbe08fc8dace97e86\" returns successfully" Apr 30 03:32:46.163265 kubelet[2807]: I0430 03:32:46.163053 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t96t6" podStartSLOduration=1.161922649 podStartE2EDuration="1.161922649s" podCreationTimestamp="2025-04-30 03:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:32:46.160176873 +0000 UTC m=+16.201923748" watchObservedRunningTime="2025-04-30 03:32:46.161922649 +0000 UTC m=+16.203669524" Apr 30 03:32:53.265789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3655474920.mount: Deactivated successfully. Apr 30 03:32:54.841346 containerd[1507]: time="2025-04-30T03:32:54.841253451Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:54.843828 containerd[1507]: time="2025-04-30T03:32:54.843617464Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 30 03:32:54.843828 containerd[1507]: time="2025-04-30T03:32:54.843723165Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:32:54.846400 containerd[1507]: time="2025-04-30T03:32:54.845750347Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.890181133s" Apr 30 03:32:54.846400 containerd[1507]: time="2025-04-30T03:32:54.845794260Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 30 03:32:54.855034 containerd[1507]: time="2025-04-30T03:32:54.854829572Z" level=info msg="CreateContainer within sandbox \"94571f2ad1f01876c090fa8dca61476d4094dd22581389f4530b6cfc0a42a2c8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 03:32:54.857376 containerd[1507]: time="2025-04-30T03:32:54.856463667Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 30 03:32:54.940069 containerd[1507]: time="2025-04-30T03:32:54.939925149Z" level=info msg="CreateContainer within sandbox \"94571f2ad1f01876c090fa8dca61476d4094dd22581389f4530b6cfc0a42a2c8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4940b71930f4021cb6a9af3db589fb9b14a3a493cb9ecd899f4e891ce04921d7\"" Apr 30 03:32:54.941737 containerd[1507]: time="2025-04-30T03:32:54.941035289Z" level=info msg="StartContainer for \"4940b71930f4021cb6a9af3db589fb9b14a3a493cb9ecd899f4e891ce04921d7\"" Apr 30 03:32:55.070049 systemd[1]: run-containerd-runc-k8s.io-4940b71930f4021cb6a9af3db589fb9b14a3a493cb9ecd899f4e891ce04921d7-runc.9hZt42.mount: Deactivated successfully. Apr 30 03:32:55.079167 systemd[1]: Started cri-containerd-4940b71930f4021cb6a9af3db589fb9b14a3a493cb9ecd899f4e891ce04921d7.scope - libcontainer container 4940b71930f4021cb6a9af3db589fb9b14a3a493cb9ecd899f4e891ce04921d7. 
Apr 30 03:32:55.129305 containerd[1507]: time="2025-04-30T03:32:55.129177970Z" level=info msg="StartContainer for \"4940b71930f4021cb6a9af3db589fb9b14a3a493cb9ecd899f4e891ce04921d7\" returns successfully" Apr 30 03:32:55.137506 systemd[1]: cri-containerd-4940b71930f4021cb6a9af3db589fb9b14a3a493cb9ecd899f4e891ce04921d7.scope: Deactivated successfully. Apr 30 03:32:55.332424 containerd[1507]: time="2025-04-30T03:32:55.305043055Z" level=info msg="shim disconnected" id=4940b71930f4021cb6a9af3db589fb9b14a3a493cb9ecd899f4e891ce04921d7 namespace=k8s.io Apr 30 03:32:55.332424 containerd[1507]: time="2025-04-30T03:32:55.332399269Z" level=warning msg="cleaning up after shim disconnected" id=4940b71930f4021cb6a9af3db589fb9b14a3a493cb9ecd899f4e891ce04921d7 namespace=k8s.io Apr 30 03:32:55.332424 containerd[1507]: time="2025-04-30T03:32:55.332426861Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:32:55.929911 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4940b71930f4021cb6a9af3db589fb9b14a3a493cb9ecd899f4e891ce04921d7-rootfs.mount: Deactivated successfully. Apr 30 03:32:56.183586 containerd[1507]: time="2025-04-30T03:32:56.183215813Z" level=info msg="CreateContainer within sandbox \"94571f2ad1f01876c090fa8dca61476d4094dd22581389f4530b6cfc0a42a2c8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 03:32:56.221652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount60638881.mount: Deactivated successfully. 
Apr 30 03:32:56.226265 containerd[1507]: time="2025-04-30T03:32:56.226189356Z" level=info msg="CreateContainer within sandbox \"94571f2ad1f01876c090fa8dca61476d4094dd22581389f4530b6cfc0a42a2c8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ee0cf5ba92d0e78bee4965f10acc6fbbab332326a20c8d04272d28fdf30eed8d\"" Apr 30 03:32:56.228253 containerd[1507]: time="2025-04-30T03:32:56.228198362Z" level=info msg="StartContainer for \"ee0cf5ba92d0e78bee4965f10acc6fbbab332326a20c8d04272d28fdf30eed8d\"" Apr 30 03:32:56.294148 systemd[1]: Started cri-containerd-ee0cf5ba92d0e78bee4965f10acc6fbbab332326a20c8d04272d28fdf30eed8d.scope - libcontainer container ee0cf5ba92d0e78bee4965f10acc6fbbab332326a20c8d04272d28fdf30eed8d. Apr 30 03:32:56.333260 containerd[1507]: time="2025-04-30T03:32:56.333189559Z" level=info msg="StartContainer for \"ee0cf5ba92d0e78bee4965f10acc6fbbab332326a20c8d04272d28fdf30eed8d\" returns successfully" Apr 30 03:32:56.347246 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 03:32:56.347496 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 03:32:56.348048 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:32:56.353337 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 03:32:56.353520 systemd[1]: cri-containerd-ee0cf5ba92d0e78bee4965f10acc6fbbab332326a20c8d04272d28fdf30eed8d.scope: Deactivated successfully. Apr 30 03:32:56.378652 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Apr 30 03:32:56.379315 containerd[1507]: time="2025-04-30T03:32:56.379122524Z" level=info msg="shim disconnected" id=ee0cf5ba92d0e78bee4965f10acc6fbbab332326a20c8d04272d28fdf30eed8d namespace=k8s.io Apr 30 03:32:56.379315 containerd[1507]: time="2025-04-30T03:32:56.379199579Z" level=warning msg="cleaning up after shim disconnected" id=ee0cf5ba92d0e78bee4965f10acc6fbbab332326a20c8d04272d28fdf30eed8d namespace=k8s.io Apr 30 03:32:56.379315 containerd[1507]: time="2025-04-30T03:32:56.379207936Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:32:56.390429 containerd[1507]: time="2025-04-30T03:32:56.390354948Z" level=warning msg="cleanup warnings time=\"2025-04-30T03:32:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 03:32:56.929712 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee0cf5ba92d0e78bee4965f10acc6fbbab332326a20c8d04272d28fdf30eed8d-rootfs.mount: Deactivated successfully. 
Apr 30 03:32:57.248965 containerd[1507]: time="2025-04-30T03:32:57.248647193Z" level=info msg="CreateContainer within sandbox \"94571f2ad1f01876c090fa8dca61476d4094dd22581389f4530b6cfc0a42a2c8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 03:32:57.284606 containerd[1507]: time="2025-04-30T03:32:57.284531753Z" level=info msg="CreateContainer within sandbox \"94571f2ad1f01876c090fa8dca61476d4094dd22581389f4530b6cfc0a42a2c8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b3a6f1e5659d73b2847e5f0283e76be7161612236cac77c3d787a0c8563f8bbe\"" Apr 30 03:32:57.286969 containerd[1507]: time="2025-04-30T03:32:57.285222665Z" level=info msg="StartContainer for \"b3a6f1e5659d73b2847e5f0283e76be7161612236cac77c3d787a0c8563f8bbe\"" Apr 30 03:32:57.338107 systemd[1]: Started cri-containerd-b3a6f1e5659d73b2847e5f0283e76be7161612236cac77c3d787a0c8563f8bbe.scope - libcontainer container b3a6f1e5659d73b2847e5f0283e76be7161612236cac77c3d787a0c8563f8bbe. Apr 30 03:32:57.375317 containerd[1507]: time="2025-04-30T03:32:57.375145527Z" level=info msg="StartContainer for \"b3a6f1e5659d73b2847e5f0283e76be7161612236cac77c3d787a0c8563f8bbe\" returns successfully" Apr 30 03:32:57.377196 systemd[1]: cri-containerd-b3a6f1e5659d73b2847e5f0283e76be7161612236cac77c3d787a0c8563f8bbe.scope: Deactivated successfully. 
Apr 30 03:32:57.415518 containerd[1507]: time="2025-04-30T03:32:57.415437536Z" level=info msg="shim disconnected" id=b3a6f1e5659d73b2847e5f0283e76be7161612236cac77c3d787a0c8563f8bbe namespace=k8s.io Apr 30 03:32:57.415518 containerd[1507]: time="2025-04-30T03:32:57.415495567Z" level=warning msg="cleaning up after shim disconnected" id=b3a6f1e5659d73b2847e5f0283e76be7161612236cac77c3d787a0c8563f8bbe namespace=k8s.io Apr 30 03:32:57.415518 containerd[1507]: time="2025-04-30T03:32:57.415503201Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:32:57.927997 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3a6f1e5659d73b2847e5f0283e76be7161612236cac77c3d787a0c8563f8bbe-rootfs.mount: Deactivated successfully. Apr 30 03:32:58.194040 containerd[1507]: time="2025-04-30T03:32:58.193848589Z" level=info msg="CreateContainer within sandbox \"94571f2ad1f01876c090fa8dca61476d4094dd22581389f4530b6cfc0a42a2c8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 03:32:58.228075 containerd[1507]: time="2025-04-30T03:32:58.227246903Z" level=info msg="CreateContainer within sandbox \"94571f2ad1f01876c090fa8dca61476d4094dd22581389f4530b6cfc0a42a2c8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d6874378cd875b531c2baa30a517715e7d8da98942d33b582902b381f3b32909\"" Apr 30 03:32:58.232091 containerd[1507]: time="2025-04-30T03:32:58.228960565Z" level=info msg="StartContainer for \"d6874378cd875b531c2baa30a517715e7d8da98942d33b582902b381f3b32909\"" Apr 30 03:32:58.229368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount543431198.mount: Deactivated successfully. Apr 30 03:32:58.280176 systemd[1]: Started cri-containerd-d6874378cd875b531c2baa30a517715e7d8da98942d33b582902b381f3b32909.scope - libcontainer container d6874378cd875b531c2baa30a517715e7d8da98942d33b582902b381f3b32909. 
Apr 30 03:32:58.308008 systemd[1]: cri-containerd-d6874378cd875b531c2baa30a517715e7d8da98942d33b582902b381f3b32909.scope: Deactivated successfully. Apr 30 03:32:58.310270 containerd[1507]: time="2025-04-30T03:32:58.310223394Z" level=info msg="StartContainer for \"d6874378cd875b531c2baa30a517715e7d8da98942d33b582902b381f3b32909\" returns successfully" Apr 30 03:32:58.335272 containerd[1507]: time="2025-04-30T03:32:58.335117012Z" level=info msg="shim disconnected" id=d6874378cd875b531c2baa30a517715e7d8da98942d33b582902b381f3b32909 namespace=k8s.io Apr 30 03:32:58.335272 containerd[1507]: time="2025-04-30T03:32:58.335188097Z" level=warning msg="cleaning up after shim disconnected" id=d6874378cd875b531c2baa30a517715e7d8da98942d33b582902b381f3b32909 namespace=k8s.io Apr 30 03:32:58.335272 containerd[1507]: time="2025-04-30T03:32:58.335200951Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 03:32:58.929834 systemd[1]: run-containerd-runc-k8s.io-d6874378cd875b531c2baa30a517715e7d8da98942d33b582902b381f3b32909-runc.6ruaQV.mount: Deactivated successfully. Apr 30 03:32:58.930091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6874378cd875b531c2baa30a517715e7d8da98942d33b582902b381f3b32909-rootfs.mount: Deactivated successfully. 
Apr 30 03:32:59.203098 containerd[1507]: time="2025-04-30T03:32:59.202837743Z" level=info msg="CreateContainer within sandbox \"94571f2ad1f01876c090fa8dca61476d4094dd22581389f4530b6cfc0a42a2c8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 03:32:59.267265 containerd[1507]: time="2025-04-30T03:32:59.267176052Z" level=info msg="CreateContainer within sandbox \"94571f2ad1f01876c090fa8dca61476d4094dd22581389f4530b6cfc0a42a2c8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bee572e0346e08a9d731d4c233092ec9b6fb1a8cfbeee86f7bf4b4a3f3e5c4cb\"" Apr 30 03:32:59.269208 containerd[1507]: time="2025-04-30T03:32:59.268316877Z" level=info msg="StartContainer for \"bee572e0346e08a9d731d4c233092ec9b6fb1a8cfbeee86f7bf4b4a3f3e5c4cb\"" Apr 30 03:32:59.318219 systemd[1]: Started cri-containerd-bee572e0346e08a9d731d4c233092ec9b6fb1a8cfbeee86f7bf4b4a3f3e5c4cb.scope - libcontainer container bee572e0346e08a9d731d4c233092ec9b6fb1a8cfbeee86f7bf4b4a3f3e5c4cb. 
Apr 30 03:32:59.357121 containerd[1507]: time="2025-04-30T03:32:59.356914529Z" level=info msg="StartContainer for \"bee572e0346e08a9d731d4c233092ec9b6fb1a8cfbeee86f7bf4b4a3f3e5c4cb\" returns successfully" Apr 30 03:32:59.532355 kubelet[2807]: I0430 03:32:59.531597 2807 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 03:32:59.564515 kubelet[2807]: I0430 03:32:59.563388 2807 topology_manager.go:215] "Topology Admit Handler" podUID="dbd3cdfd-8cb2-4571-a3b0-0847d283f8d5" podNamespace="kube-system" podName="coredns-7db6d8ff4d-pvwfl" Apr 30 03:32:59.570394 kubelet[2807]: I0430 03:32:59.567917 2807 topology_manager.go:215] "Topology Admit Handler" podUID="3f8415f9-bd34-4ffd-b599-7c6a3031b4ae" podNamespace="kube-system" podName="coredns-7db6d8ff4d-cqgx2" Apr 30 03:32:59.574987 systemd[1]: Created slice kubepods-burstable-poddbd3cdfd_8cb2_4571_a3b0_0847d283f8d5.slice - libcontainer container kubepods-burstable-poddbd3cdfd_8cb2_4571_a3b0_0847d283f8d5.slice. Apr 30 03:32:59.583988 systemd[1]: Created slice kubepods-burstable-pod3f8415f9_bd34_4ffd_b599_7c6a3031b4ae.slice - libcontainer container kubepods-burstable-pod3f8415f9_bd34_4ffd_b599_7c6a3031b4ae.slice. 
Apr 30 03:32:59.659598 kubelet[2807]: I0430 03:32:59.659553 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbd3cdfd-8cb2-4571-a3b0-0847d283f8d5-config-volume\") pod \"coredns-7db6d8ff4d-pvwfl\" (UID: \"dbd3cdfd-8cb2-4571-a3b0-0847d283f8d5\") " pod="kube-system/coredns-7db6d8ff4d-pvwfl" Apr 30 03:32:59.659598 kubelet[2807]: I0430 03:32:59.659605 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f8415f9-bd34-4ffd-b599-7c6a3031b4ae-config-volume\") pod \"coredns-7db6d8ff4d-cqgx2\" (UID: \"3f8415f9-bd34-4ffd-b599-7c6a3031b4ae\") " pod="kube-system/coredns-7db6d8ff4d-cqgx2" Apr 30 03:32:59.659840 kubelet[2807]: I0430 03:32:59.659624 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxltk\" (UniqueName: \"kubernetes.io/projected/dbd3cdfd-8cb2-4571-a3b0-0847d283f8d5-kube-api-access-mxltk\") pod \"coredns-7db6d8ff4d-pvwfl\" (UID: \"dbd3cdfd-8cb2-4571-a3b0-0847d283f8d5\") " pod="kube-system/coredns-7db6d8ff4d-pvwfl" Apr 30 03:32:59.659840 kubelet[2807]: I0430 03:32:59.659642 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmjkt\" (UniqueName: \"kubernetes.io/projected/3f8415f9-bd34-4ffd-b599-7c6a3031b4ae-kube-api-access-gmjkt\") pod \"coredns-7db6d8ff4d-cqgx2\" (UID: \"3f8415f9-bd34-4ffd-b599-7c6a3031b4ae\") " pod="kube-system/coredns-7db6d8ff4d-cqgx2" Apr 30 03:32:59.887301 containerd[1507]: time="2025-04-30T03:32:59.887234826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pvwfl,Uid:dbd3cdfd-8cb2-4571-a3b0-0847d283f8d5,Namespace:kube-system,Attempt:0,}" Apr 30 03:32:59.890016 containerd[1507]: time="2025-04-30T03:32:59.889965759Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-cqgx2,Uid:3f8415f9-bd34-4ffd-b599-7c6a3031b4ae,Namespace:kube-system,Attempt:0,}" Apr 30 03:33:00.238561 kubelet[2807]: I0430 03:33:00.237711 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q4d8m" podStartSLOduration=6.342514319 podStartE2EDuration="15.237683744s" podCreationTimestamp="2025-04-30 03:32:45 +0000 UTC" firstStartedPulling="2025-04-30 03:32:45.95196293 +0000 UTC m=+15.993709784" lastFinishedPulling="2025-04-30 03:32:54.847132354 +0000 UTC m=+24.888879209" observedRunningTime="2025-04-30 03:33:00.236804026 +0000 UTC m=+30.278550921" watchObservedRunningTime="2025-04-30 03:33:00.237683744 +0000 UTC m=+30.279430630" Apr 30 03:33:01.997202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1414096655.mount: Deactivated successfully. Apr 30 03:33:02.492647 containerd[1507]: time="2025-04-30T03:33:02.492512753Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:33:02.494587 containerd[1507]: time="2025-04-30T03:33:02.494359837Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 30 03:33:02.497529 containerd[1507]: time="2025-04-30T03:33:02.496438980Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 03:33:02.498744 containerd[1507]: time="2025-04-30T03:33:02.498596111Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 7.642087129s" Apr 30 03:33:02.498744 containerd[1507]: time="2025-04-30T03:33:02.498636668Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 30 03:33:02.501272 containerd[1507]: time="2025-04-30T03:33:02.501207083Z" level=info msg="CreateContainer within sandbox \"cb8e3d9fd0f86b308c19985afc3c01cd560a363f005bee567ee63db544993fa3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 30 03:33:02.520709 containerd[1507]: time="2025-04-30T03:33:02.520666811Z" level=info msg="CreateContainer within sandbox \"cb8e3d9fd0f86b308c19985afc3c01cd560a363f005bee567ee63db544993fa3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8f94c9956cc61cba13b20c77b82c086b46f7fe48cc6cdde3aad436bfe706c565\"" Apr 30 03:33:02.522275 containerd[1507]: time="2025-04-30T03:33:02.521213227Z" level=info msg="StartContainer for \"8f94c9956cc61cba13b20c77b82c086b46f7fe48cc6cdde3aad436bfe706c565\"" Apr 30 03:33:02.548428 systemd[1]: Started cri-containerd-8f94c9956cc61cba13b20c77b82c086b46f7fe48cc6cdde3aad436bfe706c565.scope - libcontainer container 8f94c9956cc61cba13b20c77b82c086b46f7fe48cc6cdde3aad436bfe706c565. 
Apr 30 03:33:02.571486 containerd[1507]: time="2025-04-30T03:33:02.571394491Z" level=info msg="StartContainer for \"8f94c9956cc61cba13b20c77b82c086b46f7fe48cc6cdde3aad436bfe706c565\" returns successfully" Apr 30 03:33:06.412079 systemd-networkd[1400]: cilium_host: Link UP Apr 30 03:33:06.412377 systemd-networkd[1400]: cilium_net: Link UP Apr 30 03:33:06.412623 systemd-networkd[1400]: cilium_net: Gained carrier Apr 30 03:33:06.412872 systemd-networkd[1400]: cilium_host: Gained carrier Apr 30 03:33:06.579426 systemd-networkd[1400]: cilium_vxlan: Link UP Apr 30 03:33:06.580012 systemd-networkd[1400]: cilium_vxlan: Gained carrier Apr 30 03:33:06.911883 systemd-networkd[1400]: cilium_host: Gained IPv6LL Apr 30 03:33:07.111216 systemd-networkd[1400]: cilium_net: Gained IPv6LL Apr 30 03:33:07.143013 kernel: NET: Registered PF_ALG protocol family Apr 30 03:33:07.941150 systemd-networkd[1400]: lxc_health: Link UP Apr 30 03:33:07.948332 systemd-networkd[1400]: lxc_health: Gained carrier Apr 30 03:33:08.526892 systemd-networkd[1400]: lxcf55d17475890: Link UP Apr 30 03:33:08.534086 kernel: eth0: renamed from tmpbcd5d Apr 30 03:33:08.558550 systemd-networkd[1400]: lxcf55d17475890: Gained carrier Apr 30 03:33:08.564181 kernel: eth0: renamed from tmp8e802 Apr 30 03:33:08.558688 systemd-networkd[1400]: lxca019abbd53e6: Link UP Apr 30 03:33:08.574622 systemd-networkd[1400]: lxca019abbd53e6: Gained carrier Apr 30 03:33:08.585010 systemd-networkd[1400]: cilium_vxlan: Gained IPv6LL Apr 30 03:33:09.671348 systemd-networkd[1400]: lxc_health: Gained IPv6LL Apr 30 03:33:09.850818 kubelet[2807]: I0430 03:33:09.850701 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-8jzbt" podStartSLOduration=8.35487912 podStartE2EDuration="24.850674235s" podCreationTimestamp="2025-04-30 03:32:45 +0000 UTC" firstStartedPulling="2025-04-30 03:32:46.003794991 +0000 UTC m=+16.045541847" lastFinishedPulling="2025-04-30 03:33:02.499590107 +0000 
UTC m=+32.541336962" observedRunningTime="2025-04-30 03:33:03.289226398 +0000 UTC m=+33.330973252" watchObservedRunningTime="2025-04-30 03:33:09.850674235 +0000 UTC m=+39.892421130" Apr 30 03:33:09.927642 systemd-networkd[1400]: lxcf55d17475890: Gained IPv6LL Apr 30 03:33:10.119482 systemd-networkd[1400]: lxca019abbd53e6: Gained IPv6LL Apr 30 03:33:12.186850 containerd[1507]: time="2025-04-30T03:33:12.186754359Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:33:12.189332 containerd[1507]: time="2025-04-30T03:33:12.186815354Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:33:12.189332 containerd[1507]: time="2025-04-30T03:33:12.186827738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:33:12.189332 containerd[1507]: time="2025-04-30T03:33:12.186883063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:33:12.214067 systemd[1]: Started cri-containerd-8e8024c6e040ea3c4667b632f5bc95041442a5b5b845061b5547801eb3151b05.scope - libcontainer container 8e8024c6e040ea3c4667b632f5bc95041442a5b5b845061b5547801eb3151b05. Apr 30 03:33:12.268281 containerd[1507]: time="2025-04-30T03:33:12.267999828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:33:12.268281 containerd[1507]: time="2025-04-30T03:33:12.268044493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:33:12.268281 containerd[1507]: time="2025-04-30T03:33:12.268053720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:33:12.268281 containerd[1507]: time="2025-04-30T03:33:12.268115888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:33:12.279083 containerd[1507]: time="2025-04-30T03:33:12.278425745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cqgx2,Uid:3f8415f9-bd34-4ffd-b599-7c6a3031b4ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e8024c6e040ea3c4667b632f5bc95041442a5b5b845061b5547801eb3151b05\"" Apr 30 03:33:12.282285 containerd[1507]: time="2025-04-30T03:33:12.282188107Z" level=info msg="CreateContainer within sandbox \"8e8024c6e040ea3c4667b632f5bc95041442a5b5b845061b5547801eb3151b05\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 03:33:12.306763 systemd[1]: Started cri-containerd-bcd5deb201e5cc2874fe003457e92a87fece2a43e37aec068200e4a325e400c6.scope - libcontainer container bcd5deb201e5cc2874fe003457e92a87fece2a43e37aec068200e4a325e400c6. Apr 30 03:33:12.314097 containerd[1507]: time="2025-04-30T03:33:12.313000575Z" level=info msg="CreateContainer within sandbox \"8e8024c6e040ea3c4667b632f5bc95041442a5b5b845061b5547801eb3151b05\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"da9d22d15f1d63ea1bafbc79850958ac345006c3b22a8d5a2716d9b207494722\"" Apr 30 03:33:12.314097 containerd[1507]: time="2025-04-30T03:33:12.313688387Z" level=info msg="StartContainer for \"da9d22d15f1d63ea1bafbc79850958ac345006c3b22a8d5a2716d9b207494722\"" Apr 30 03:33:12.348397 systemd[1]: Started cri-containerd-da9d22d15f1d63ea1bafbc79850958ac345006c3b22a8d5a2716d9b207494722.scope - libcontainer container da9d22d15f1d63ea1bafbc79850958ac345006c3b22a8d5a2716d9b207494722. 
Apr 30 03:33:12.370912 containerd[1507]: time="2025-04-30T03:33:12.370847341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pvwfl,Uid:dbd3cdfd-8cb2-4571-a3b0-0847d283f8d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcd5deb201e5cc2874fe003457e92a87fece2a43e37aec068200e4a325e400c6\""
Apr 30 03:33:12.374188 containerd[1507]: time="2025-04-30T03:33:12.374060203Z" level=info msg="CreateContainer within sandbox \"bcd5deb201e5cc2874fe003457e92a87fece2a43e37aec068200e4a325e400c6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 30 03:33:12.395084 containerd[1507]: time="2025-04-30T03:33:12.394886577Z" level=info msg="CreateContainer within sandbox \"bcd5deb201e5cc2874fe003457e92a87fece2a43e37aec068200e4a325e400c6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"273fb535b2e60992ada39e60af7304d78bc2fb7bf8d96d4853997679ea57731f\""
Apr 30 03:33:12.399136 containerd[1507]: time="2025-04-30T03:33:12.399108489Z" level=info msg="StartContainer for \"273fb535b2e60992ada39e60af7304d78bc2fb7bf8d96d4853997679ea57731f\""
Apr 30 03:33:12.406906 containerd[1507]: time="2025-04-30T03:33:12.406784631Z" level=info msg="StartContainer for \"da9d22d15f1d63ea1bafbc79850958ac345006c3b22a8d5a2716d9b207494722\" returns successfully"
Apr 30 03:33:12.425121 systemd[1]: Started cri-containerd-273fb535b2e60992ada39e60af7304d78bc2fb7bf8d96d4853997679ea57731f.scope - libcontainer container 273fb535b2e60992ada39e60af7304d78bc2fb7bf8d96d4853997679ea57731f.
Apr 30 03:33:12.459045 containerd[1507]: time="2025-04-30T03:33:12.458788374Z" level=info msg="StartContainer for \"273fb535b2e60992ada39e60af7304d78bc2fb7bf8d96d4853997679ea57731f\" returns successfully"
Apr 30 03:33:13.286577 kubelet[2807]: I0430 03:33:13.286273 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-pvwfl" podStartSLOduration=28.286252794 podStartE2EDuration="28.286252794s" podCreationTimestamp="2025-04-30 03:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:33:13.286109183 +0000 UTC m=+43.327856068" watchObservedRunningTime="2025-04-30 03:33:13.286252794 +0000 UTC m=+43.327999659"
Apr 30 03:33:13.326015 kubelet[2807]: I0430 03:33:13.325919 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-cqgx2" podStartSLOduration=28.325900416 podStartE2EDuration="28.325900416s" podCreationTimestamp="2025-04-30 03:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:33:13.325729222 +0000 UTC m=+43.367476097" watchObservedRunningTime="2025-04-30 03:33:13.325900416 +0000 UTC m=+43.367647281"
Apr 30 03:37:07.164089 systemd[1]: Started sshd@7-157.180.66.130:22-139.178.68.195:53622.service - OpenSSH per-connection server daemon (139.178.68.195:53622).
Apr 30 03:37:08.182213 sshd[4201]: Accepted publickey for core from 139.178.68.195 port 53622 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s
Apr 30 03:37:08.186143 sshd[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:37:08.195825 systemd-logind[1487]: New session 8 of user core.
Apr 30 03:37:08.201193 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 30 03:37:09.595437 sshd[4201]: pam_unix(sshd:session): session closed for user core
Apr 30 03:37:09.601435 systemd-logind[1487]: Session 8 logged out. Waiting for processes to exit.
Apr 30 03:37:09.602359 systemd[1]: sshd@7-157.180.66.130:22-139.178.68.195:53622.service: Deactivated successfully.
Apr 30 03:37:09.606793 systemd[1]: session-8.scope: Deactivated successfully.
Apr 30 03:37:09.609785 systemd-logind[1487]: Removed session 8.
Apr 30 03:37:14.767447 systemd[1]: Started sshd@8-157.180.66.130:22-139.178.68.195:53638.service - OpenSSH per-connection server daemon (139.178.68.195:53638).
Apr 30 03:37:15.737383 sshd[4215]: Accepted publickey for core from 139.178.68.195 port 53638 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s
Apr 30 03:37:15.740062 sshd[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:37:15.749312 systemd-logind[1487]: New session 9 of user core.
Apr 30 03:37:15.757267 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 30 03:37:16.550495 sshd[4215]: pam_unix(sshd:session): session closed for user core
Apr 30 03:37:16.557426 systemd[1]: sshd@8-157.180.66.130:22-139.178.68.195:53638.service: Deactivated successfully.
Apr 30 03:37:16.562124 systemd[1]: session-9.scope: Deactivated successfully.
Apr 30 03:37:16.563680 systemd-logind[1487]: Session 9 logged out. Waiting for processes to exit.
Apr 30 03:37:16.565968 systemd-logind[1487]: Removed session 9.
Apr 30 03:37:21.719330 systemd[1]: Started sshd@9-157.180.66.130:22-139.178.68.195:40248.service - OpenSSH per-connection server daemon (139.178.68.195:40248).
Apr 30 03:37:22.709097 sshd[4231]: Accepted publickey for core from 139.178.68.195 port 40248 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s
Apr 30 03:37:22.712200 sshd[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:37:22.718860 systemd-logind[1487]: New session 10 of user core.
Apr 30 03:37:22.725271 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 30 03:37:23.508885 sshd[4231]: pam_unix(sshd:session): session closed for user core
Apr 30 03:37:23.515165 systemd[1]: sshd@9-157.180.66.130:22-139.178.68.195:40248.service: Deactivated successfully.
Apr 30 03:37:23.518461 systemd[1]: session-10.scope: Deactivated successfully.
Apr 30 03:37:23.520255 systemd-logind[1487]: Session 10 logged out. Waiting for processes to exit.
Apr 30 03:37:23.522068 systemd-logind[1487]: Removed session 10.
Apr 30 03:37:23.684511 systemd[1]: Started sshd@10-157.180.66.130:22-139.178.68.195:40262.service - OpenSSH per-connection server daemon (139.178.68.195:40262).
Apr 30 03:37:24.684001 sshd[4245]: Accepted publickey for core from 139.178.68.195 port 40262 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s
Apr 30 03:37:24.686385 sshd[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:37:24.695458 systemd-logind[1487]: New session 11 of user core.
Apr 30 03:37:24.700215 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 30 03:37:25.513313 sshd[4245]: pam_unix(sshd:session): session closed for user core
Apr 30 03:37:25.519267 systemd[1]: sshd@10-157.180.66.130:22-139.178.68.195:40262.service: Deactivated successfully.
Apr 30 03:37:25.523235 systemd[1]: session-11.scope: Deactivated successfully.
Apr 30 03:37:25.527038 systemd-logind[1487]: Session 11 logged out. Waiting for processes to exit.
Apr 30 03:37:25.529599 systemd-logind[1487]: Removed session 11.
Apr 30 03:37:25.693462 systemd[1]: Started sshd@11-157.180.66.130:22-139.178.68.195:42364.service - OpenSSH per-connection server daemon (139.178.68.195:42364).
Apr 30 03:37:26.708836 sshd[4256]: Accepted publickey for core from 139.178.68.195 port 42364 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s
Apr 30 03:37:26.711346 sshd[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:37:26.720279 systemd-logind[1487]: New session 12 of user core.
Apr 30 03:37:26.725204 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 30 03:37:27.518432 sshd[4256]: pam_unix(sshd:session): session closed for user core
Apr 30 03:37:27.522716 systemd[1]: sshd@11-157.180.66.130:22-139.178.68.195:42364.service: Deactivated successfully.
Apr 30 03:37:27.526514 systemd[1]: session-12.scope: Deactivated successfully.
Apr 30 03:37:27.529759 systemd-logind[1487]: Session 12 logged out. Waiting for processes to exit.
Apr 30 03:37:27.531979 systemd-logind[1487]: Removed session 12.
Apr 30 03:37:32.686077 systemd[1]: Started sshd@12-157.180.66.130:22-139.178.68.195:42376.service - OpenSSH per-connection server daemon (139.178.68.195:42376).
Apr 30 03:37:33.671651 sshd[4272]: Accepted publickey for core from 139.178.68.195 port 42376 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s
Apr 30 03:37:33.674330 sshd[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:37:33.684362 systemd-logind[1487]: New session 13 of user core.
Apr 30 03:37:33.691260 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 30 03:37:34.462847 sshd[4272]: pam_unix(sshd:session): session closed for user core
Apr 30 03:37:34.467292 systemd[1]: sshd@12-157.180.66.130:22-139.178.68.195:42376.service: Deactivated successfully.
Apr 30 03:37:34.470963 systemd[1]: session-13.scope: Deactivated successfully.
Apr 30 03:37:34.473374 systemd-logind[1487]: Session 13 logged out. Waiting for processes to exit.
Apr 30 03:37:34.475500 systemd-logind[1487]: Removed session 13.
Apr 30 03:37:34.640359 systemd[1]: Started sshd@13-157.180.66.130:22-139.178.68.195:42382.service - OpenSSH per-connection server daemon (139.178.68.195:42382).
Apr 30 03:37:35.619320 sshd[4285]: Accepted publickey for core from 139.178.68.195 port 42382 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s
Apr 30 03:37:35.622239 sshd[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:37:35.631068 systemd-logind[1487]: New session 14 of user core.
Apr 30 03:37:35.641299 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 30 03:37:36.678358 sshd[4285]: pam_unix(sshd:session): session closed for user core
Apr 30 03:37:36.686990 systemd[1]: sshd@13-157.180.66.130:22-139.178.68.195:42382.service: Deactivated successfully.
Apr 30 03:37:36.690063 systemd[1]: session-14.scope: Deactivated successfully.
Apr 30 03:37:36.692067 systemd-logind[1487]: Session 14 logged out. Waiting for processes to exit.
Apr 30 03:37:36.695337 systemd-logind[1487]: Removed session 14.
Apr 30 03:37:36.853514 systemd[1]: Started sshd@14-157.180.66.130:22-139.178.68.195:33714.service - OpenSSH per-connection server daemon (139.178.68.195:33714).
Apr 30 03:37:37.850121 sshd[4296]: Accepted publickey for core from 139.178.68.195 port 33714 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s
Apr 30 03:37:37.852473 sshd[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:37:37.861485 systemd-logind[1487]: New session 15 of user core.
Apr 30 03:37:37.865197 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 30 03:37:40.371288 sshd[4296]: pam_unix(sshd:session): session closed for user core
Apr 30 03:37:40.376379 systemd[1]: sshd@14-157.180.66.130:22-139.178.68.195:33714.service: Deactivated successfully.
Apr 30 03:37:40.379493 systemd[1]: session-15.scope: Deactivated successfully.
Apr 30 03:37:40.382417 systemd-logind[1487]: Session 15 logged out. Waiting for processes to exit.
Apr 30 03:37:40.384576 systemd-logind[1487]: Removed session 15.
Apr 30 03:37:40.547346 systemd[1]: Started sshd@15-157.180.66.130:22-139.178.68.195:33724.service - OpenSSH per-connection server daemon (139.178.68.195:33724).
Apr 30 03:37:41.539284 sshd[4314]: Accepted publickey for core from 139.178.68.195 port 33724 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s
Apr 30 03:37:41.541721 sshd[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:37:41.549273 systemd-logind[1487]: New session 16 of user core.
Apr 30 03:37:41.555176 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 30 03:37:42.471407 sshd[4314]: pam_unix(sshd:session): session closed for user core
Apr 30 03:37:42.476321 systemd[1]: sshd@15-157.180.66.130:22-139.178.68.195:33724.service: Deactivated successfully.
Apr 30 03:37:42.480232 systemd[1]: session-16.scope: Deactivated successfully.
Apr 30 03:37:42.483048 systemd-logind[1487]: Session 16 logged out. Waiting for processes to exit.
Apr 30 03:37:42.485818 systemd-logind[1487]: Removed session 16.
Apr 30 03:37:42.643321 systemd[1]: Started sshd@16-157.180.66.130:22-139.178.68.195:33730.service - OpenSSH per-connection server daemon (139.178.68.195:33730).
Apr 30 03:37:43.616397 sshd[4324]: Accepted publickey for core from 139.178.68.195 port 33730 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s
Apr 30 03:37:43.618520 sshd[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:37:43.624656 systemd-logind[1487]: New session 17 of user core.
Apr 30 03:37:43.630159 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 30 03:37:44.366890 sshd[4324]: pam_unix(sshd:session): session closed for user core
Apr 30 03:37:44.373596 systemd-logind[1487]: Session 17 logged out. Waiting for processes to exit.
Apr 30 03:37:44.374856 systemd[1]: sshd@16-157.180.66.130:22-139.178.68.195:33730.service: Deactivated successfully.
Apr 30 03:37:44.378197 systemd[1]: session-17.scope: Deactivated successfully.
Apr 30 03:37:44.380136 systemd-logind[1487]: Removed session 17.
Apr 30 03:37:49.545338 systemd[1]: Started sshd@17-157.180.66.130:22-139.178.68.195:44796.service - OpenSSH per-connection server daemon (139.178.68.195:44796).
Apr 30 03:37:50.521163 sshd[4342]: Accepted publickey for core from 139.178.68.195 port 44796 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s
Apr 30 03:37:50.523283 sshd[4342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:37:50.529385 systemd-logind[1487]: New session 18 of user core.
Apr 30 03:37:50.534211 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 30 03:37:51.273361 sshd[4342]: pam_unix(sshd:session): session closed for user core
Apr 30 03:37:51.278803 systemd-logind[1487]: Session 18 logged out. Waiting for processes to exit.
Apr 30 03:37:51.280493 systemd[1]: sshd@17-157.180.66.130:22-139.178.68.195:44796.service: Deactivated successfully.
Apr 30 03:37:51.283896 systemd[1]: session-18.scope: Deactivated successfully.
Apr 30 03:37:51.285624 systemd-logind[1487]: Removed session 18.
Apr 30 03:37:56.449493 systemd[1]: Started sshd@18-157.180.66.130:22-139.178.68.195:42024.service - OpenSSH per-connection server daemon (139.178.68.195:42024).
Apr 30 03:37:57.438414 sshd[4354]: Accepted publickey for core from 139.178.68.195 port 42024 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s
Apr 30 03:37:57.441453 sshd[4354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:37:57.450220 systemd-logind[1487]: New session 19 of user core.
Apr 30 03:37:57.458364 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 30 03:37:58.223302 sshd[4354]: pam_unix(sshd:session): session closed for user core
Apr 30 03:37:58.229516 systemd[1]: sshd@18-157.180.66.130:22-139.178.68.195:42024.service: Deactivated successfully.
Apr 30 03:37:58.233521 systemd[1]: session-19.scope: Deactivated successfully.
Apr 30 03:37:58.235185 systemd-logind[1487]: Session 19 logged out. Waiting for processes to exit.
Apr 30 03:37:58.237692 systemd-logind[1487]: Removed session 19.
Apr 30 03:37:58.399799 systemd[1]: Started sshd@19-157.180.66.130:22-139.178.68.195:42028.service - OpenSSH per-connection server daemon (139.178.68.195:42028).
Apr 30 03:37:59.396413 sshd[4367]: Accepted publickey for core from 139.178.68.195 port 42028 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s
Apr 30 03:37:59.398988 sshd[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:37:59.407231 systemd-logind[1487]: New session 20 of user core.
Apr 30 03:37:59.415407 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 30 03:38:01.756819 containerd[1507]: time="2025-04-30T03:38:01.756770817Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 03:38:01.796366 containerd[1507]: time="2025-04-30T03:38:01.796329080Z" level=info msg="StopContainer for \"bee572e0346e08a9d731d4c233092ec9b6fb1a8cfbeee86f7bf4b4a3f3e5c4cb\" with timeout 2 (s)"
Apr 30 03:38:01.796681 containerd[1507]: time="2025-04-30T03:38:01.796585964Z" level=info msg="StopContainer for \"8f94c9956cc61cba13b20c77b82c086b46f7fe48cc6cdde3aad436bfe706c565\" with timeout 30 (s)"
Apr 30 03:38:01.797157 containerd[1507]: time="2025-04-30T03:38:01.797144256Z" level=info msg="Stop container \"8f94c9956cc61cba13b20c77b82c086b46f7fe48cc6cdde3aad436bfe706c565\" with signal terminated"
Apr 30 03:38:01.797466 containerd[1507]: time="2025-04-30T03:38:01.797427570Z" level=info msg="Stop container \"bee572e0346e08a9d731d4c233092ec9b6fb1a8cfbeee86f7bf4b4a3f3e5c4cb\" with signal terminated"
Apr 30 03:38:01.811727 systemd-networkd[1400]: lxc_health: Link DOWN
Apr 30 03:38:01.811734 systemd-networkd[1400]: lxc_health: Lost carrier
Apr 30 03:38:01.823329 systemd[1]: cri-containerd-8f94c9956cc61cba13b20c77b82c086b46f7fe48cc6cdde3aad436bfe706c565.scope: Deactivated successfully.
Apr 30 03:38:01.844067 systemd[1]: cri-containerd-bee572e0346e08a9d731d4c233092ec9b6fb1a8cfbeee86f7bf4b4a3f3e5c4cb.scope: Deactivated successfully.
Apr 30 03:38:01.844826 systemd[1]: cri-containerd-bee572e0346e08a9d731d4c233092ec9b6fb1a8cfbeee86f7bf4b4a3f3e5c4cb.scope: Consumed 8.255s CPU time.
Apr 30 03:38:01.866002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f94c9956cc61cba13b20c77b82c086b46f7fe48cc6cdde3aad436bfe706c565-rootfs.mount: Deactivated successfully.
Apr 30 03:38:01.869260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bee572e0346e08a9d731d4c233092ec9b6fb1a8cfbeee86f7bf4b4a3f3e5c4cb-rootfs.mount: Deactivated successfully.
Apr 30 03:38:01.884741 containerd[1507]: time="2025-04-30T03:38:01.884529058Z" level=info msg="shim disconnected" id=8f94c9956cc61cba13b20c77b82c086b46f7fe48cc6cdde3aad436bfe706c565 namespace=k8s.io
Apr 30 03:38:01.885142 containerd[1507]: time="2025-04-30T03:38:01.884697926Z" level=warning msg="cleaning up after shim disconnected" id=8f94c9956cc61cba13b20c77b82c086b46f7fe48cc6cdde3aad436bfe706c565 namespace=k8s.io
Apr 30 03:38:01.885142 containerd[1507]: time="2025-04-30T03:38:01.885013070Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:38:01.885142 containerd[1507]: time="2025-04-30T03:38:01.884974868Z" level=info msg="shim disconnected" id=bee572e0346e08a9d731d4c233092ec9b6fb1a8cfbeee86f7bf4b4a3f3e5c4cb namespace=k8s.io
Apr 30 03:38:01.885142 containerd[1507]: time="2025-04-30T03:38:01.885116756Z" level=warning msg="cleaning up after shim disconnected" id=bee572e0346e08a9d731d4c233092ec9b6fb1a8cfbeee86f7bf4b4a3f3e5c4cb namespace=k8s.io
Apr 30 03:38:01.885142 containerd[1507]: time="2025-04-30T03:38:01.885124691Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:38:01.900385 containerd[1507]: time="2025-04-30T03:38:01.900212036Z" level=info msg="StopContainer for \"bee572e0346e08a9d731d4c233092ec9b6fb1a8cfbeee86f7bf4b4a3f3e5c4cb\" returns successfully"
Apr 30 03:38:01.901216 containerd[1507]: time="2025-04-30T03:38:01.901050765Z" level=info msg="StopPodSandbox for \"94571f2ad1f01876c090fa8dca61476d4094dd22581389f4530b6cfc0a42a2c8\""
Apr 30 03:38:01.901216 containerd[1507]: time="2025-04-30T03:38:01.901093046Z" level=info msg="Container to stop \"ee0cf5ba92d0e78bee4965f10acc6fbbab332326a20c8d04272d28fdf30eed8d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 03:38:01.901216 containerd[1507]: time="2025-04-30T03:38:01.901103505Z" level=info msg="Container to stop \"d6874378cd875b531c2baa30a517715e7d8da98942d33b582902b381f3b32909\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 03:38:01.901216 containerd[1507]: time="2025-04-30T03:38:01.901110989Z" level=info msg="Container to stop \"bee572e0346e08a9d731d4c233092ec9b6fb1a8cfbeee86f7bf4b4a3f3e5c4cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 03:38:01.901216 containerd[1507]: time="2025-04-30T03:38:01.901118303Z" level=info msg="Container to stop \"4940b71930f4021cb6a9af3db589fb9b14a3a493cb9ecd899f4e891ce04921d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 03:38:01.901216 containerd[1507]: time="2025-04-30T03:38:01.901126749Z" level=info msg="Container to stop \"b3a6f1e5659d73b2847e5f0283e76be7161612236cac77c3d787a0c8563f8bbe\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 03:38:01.904492 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-94571f2ad1f01876c090fa8dca61476d4094dd22581389f4530b6cfc0a42a2c8-shm.mount: Deactivated successfully.
Apr 30 03:38:01.906472 containerd[1507]: time="2025-04-30T03:38:01.906015526Z" level=info msg="StopContainer for \"8f94c9956cc61cba13b20c77b82c086b46f7fe48cc6cdde3aad436bfe706c565\" returns successfully"
Apr 30 03:38:01.907698 containerd[1507]: time="2025-04-30T03:38:01.907679600Z" level=info msg="StopPodSandbox for \"cb8e3d9fd0f86b308c19985afc3c01cd560a363f005bee567ee63db544993fa3\""
Apr 30 03:38:01.907880 containerd[1507]: time="2025-04-30T03:38:01.907763589Z" level=info msg="Container to stop \"8f94c9956cc61cba13b20c77b82c086b46f7fe48cc6cdde3aad436bfe706c565\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 03:38:01.911343 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cb8e3d9fd0f86b308c19985afc3c01cd560a363f005bee567ee63db544993fa3-shm.mount: Deactivated successfully.
Apr 30 03:38:01.912469 systemd[1]: cri-containerd-94571f2ad1f01876c090fa8dca61476d4094dd22581389f4530b6cfc0a42a2c8.scope: Deactivated successfully.
Apr 30 03:38:01.922861 systemd[1]: cri-containerd-cb8e3d9fd0f86b308c19985afc3c01cd560a363f005bee567ee63db544993fa3.scope: Deactivated successfully.
Apr 30 03:38:01.942912 containerd[1507]: time="2025-04-30T03:38:01.942860591Z" level=info msg="shim disconnected" id=94571f2ad1f01876c090fa8dca61476d4094dd22581389f4530b6cfc0a42a2c8 namespace=k8s.io
Apr 30 03:38:01.943816 containerd[1507]: time="2025-04-30T03:38:01.943793377Z" level=warning msg="cleaning up after shim disconnected" id=94571f2ad1f01876c090fa8dca61476d4094dd22581389f4530b6cfc0a42a2c8 namespace=k8s.io
Apr 30 03:38:01.943816 containerd[1507]: time="2025-04-30T03:38:01.943809097Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:38:01.943884 containerd[1507]: time="2025-04-30T03:38:01.943109059Z" level=info msg="shim disconnected" id=cb8e3d9fd0f86b308c19985afc3c01cd560a363f005bee567ee63db544993fa3 namespace=k8s.io
Apr 30 03:38:01.943884 containerd[1507]: time="2025-04-30T03:38:01.943866245Z" level=warning msg="cleaning up after shim disconnected" id=cb8e3d9fd0f86b308c19985afc3c01cd560a363f005bee567ee63db544993fa3 namespace=k8s.io
Apr 30 03:38:01.943884 containerd[1507]: time="2025-04-30T03:38:01.943871976Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:38:01.959146 containerd[1507]: time="2025-04-30T03:38:01.958802025Z" level=warning msg="cleanup warnings time=\"2025-04-30T03:38:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 30 03:38:01.967400 containerd[1507]: time="2025-04-30T03:38:01.966868648Z" level=info msg="TearDown network for sandbox \"cb8e3d9fd0f86b308c19985afc3c01cd560a363f005bee567ee63db544993fa3\" successfully"
Apr 30 03:38:01.967400 containerd[1507]: time="2025-04-30T03:38:01.966902662Z" level=info msg="StopPodSandbox for \"cb8e3d9fd0f86b308c19985afc3c01cd560a363f005bee567ee63db544993fa3\" returns successfully"
Apr 30 03:38:01.968048 containerd[1507]: time="2025-04-30T03:38:01.967912614Z" level=info msg="TearDown network for sandbox \"94571f2ad1f01876c090fa8dca61476d4094dd22581389f4530b6cfc0a42a2c8\" successfully"
Apr 30 03:38:01.968048 containerd[1507]: time="2025-04-30T03:38:01.967966836Z" level=info msg="StopPodSandbox for \"94571f2ad1f01876c090fa8dca61476d4094dd22581389f4530b6cfc0a42a2c8\" returns successfully"
Apr 30 03:38:02.051885 kubelet[2807]: I0430 03:38:02.051277 2807 scope.go:117] "RemoveContainer" containerID="bee572e0346e08a9d731d4c233092ec9b6fb1a8cfbeee86f7bf4b4a3f3e5c4cb"
Apr 30 03:38:02.064790 kubelet[2807]: I0430 03:38:02.064257 2807 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-xtables-lock\") pod \"c248d004-745e-4042-8043-dd144de849c5\" (UID: \"c248d004-745e-4042-8043-dd144de849c5\") "
Apr 30 03:38:02.064790 kubelet[2807]: I0430 03:38:02.064307 2807 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-cilium-run\") pod \"c248d004-745e-4042-8043-dd144de849c5\" (UID: \"c248d004-745e-4042-8043-dd144de849c5\") "
Apr 30 03:38:02.064790 kubelet[2807]: I0430 03:38:02.064338 2807 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c248d004-745e-4042-8043-dd144de849c5-hubble-tls\") pod \"c248d004-745e-4042-8043-dd144de849c5\" (UID: \"c248d004-745e-4042-8043-dd144de849c5\") "
Apr 30 03:38:02.064790 kubelet[2807]: I0430 03:38:02.064356 2807 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-lib-modules\") pod \"c248d004-745e-4042-8043-dd144de849c5\" (UID: \"c248d004-745e-4042-8043-dd144de849c5\") "
Apr 30 03:38:02.064790 kubelet[2807]: I0430 03:38:02.064377 2807 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c248d004-745e-4042-8043-dd144de849c5-clustermesh-secrets\") pod \"c248d004-745e-4042-8043-dd144de849c5\" (UID: \"c248d004-745e-4042-8043-dd144de849c5\") "
Apr 30 03:38:02.064790 kubelet[2807]: I0430 03:38:02.064398 2807 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjvlv\" (UniqueName: \"kubernetes.io/projected/94e027fd-84c6-489d-954b-6ae05b7d5370-kube-api-access-kjvlv\") pod \"94e027fd-84c6-489d-954b-6ae05b7d5370\" (UID: \"94e027fd-84c6-489d-954b-6ae05b7d5370\") "
Apr 30 03:38:02.065106 kubelet[2807]: I0430 03:38:02.064415 2807 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-host-proc-sys-kernel\") pod \"c248d004-745e-4042-8043-dd144de849c5\" (UID: \"c248d004-745e-4042-8043-dd144de849c5\") "
Apr 30 03:38:02.065106 kubelet[2807]: I0430 03:38:02.064431 2807 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-cilium-cgroup\") pod \"c248d004-745e-4042-8043-dd144de849c5\" (UID: \"c248d004-745e-4042-8043-dd144de849c5\") "
Apr 30 03:38:02.065106 kubelet[2807]: I0430 03:38:02.064450 2807 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tww75\" (UniqueName: \"kubernetes.io/projected/c248d004-745e-4042-8043-dd144de849c5-kube-api-access-tww75\") pod \"c248d004-745e-4042-8043-dd144de849c5\" (UID: \"c248d004-745e-4042-8043-dd144de849c5\") "
Apr 30 03:38:02.065106 kubelet[2807]: I0430 03:38:02.064487 2807 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-host-proc-sys-net\") pod \"c248d004-745e-4042-8043-dd144de849c5\" (UID: \"c248d004-745e-4042-8043-dd144de849c5\") "
Apr 30 03:38:02.065106 kubelet[2807]: I0430 03:38:02.064512 2807 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c248d004-745e-4042-8043-dd144de849c5-cilium-config-path\") pod \"c248d004-745e-4042-8043-dd144de849c5\" (UID: \"c248d004-745e-4042-8043-dd144de849c5\") "
Apr 30 03:38:02.065106 kubelet[2807]: I0430 03:38:02.064531 2807 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-bpf-maps\") pod \"c248d004-745e-4042-8043-dd144de849c5\" (UID: \"c248d004-745e-4042-8043-dd144de849c5\") "
Apr 30 03:38:02.065329 kubelet[2807]: I0430 03:38:02.064547 2807 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-hostproc\") pod \"c248d004-745e-4042-8043-dd144de849c5\" (UID: \"c248d004-745e-4042-8043-dd144de849c5\") "
Apr 30 03:38:02.065329 kubelet[2807]: I0430 03:38:02.064566 2807 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94e027fd-84c6-489d-954b-6ae05b7d5370-cilium-config-path\") pod \"94e027fd-84c6-489d-954b-6ae05b7d5370\" (UID: \"94e027fd-84c6-489d-954b-6ae05b7d5370\") "
Apr 30 03:38:02.065329 kubelet[2807]: I0430 03:38:02.064585 2807 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-cni-path\") pod \"c248d004-745e-4042-8043-dd144de849c5\" (UID: \"c248d004-745e-4042-8043-dd144de849c5\") "
Apr 30 03:38:02.065329 kubelet[2807]: I0430 03:38:02.064601 2807 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-etc-cni-netd\") pod \"c248d004-745e-4042-8043-dd144de849c5\" (UID: \"c248d004-745e-4042-8043-dd144de849c5\") "
Apr 30 03:38:02.068644 kubelet[2807]: I0430 03:38:02.064682 2807 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c248d004-745e-4042-8043-dd144de849c5" (UID: "c248d004-745e-4042-8043-dd144de849c5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 03:38:02.068644 kubelet[2807]: I0430 03:38:02.068398 2807 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c248d004-745e-4042-8043-dd144de849c5" (UID: "c248d004-745e-4042-8043-dd144de849c5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 03:38:02.068644 kubelet[2807]: I0430 03:38:02.068417 2807 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c248d004-745e-4042-8043-dd144de849c5" (UID: "c248d004-745e-4042-8043-dd144de849c5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 03:38:02.070854 containerd[1507]: time="2025-04-30T03:38:02.070128499Z" level=info msg="RemoveContainer for \"bee572e0346e08a9d731d4c233092ec9b6fb1a8cfbeee86f7bf4b4a3f3e5c4cb\""
Apr 30 03:38:02.078279 containerd[1507]: time="2025-04-30T03:38:02.076545905Z" level=info msg="RemoveContainer for \"bee572e0346e08a9d731d4c233092ec9b6fb1a8cfbeee86f7bf4b4a3f3e5c4cb\" returns successfully"
Apr 30 03:38:02.083879 kubelet[2807]: I0430 03:38:02.083841 2807 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c248d004-745e-4042-8043-dd144de849c5" (UID: "c248d004-745e-4042-8043-dd144de849c5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 03:38:02.084739 kubelet[2807]: I0430 03:38:02.084702 2807 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c248d004-745e-4042-8043-dd144de849c5" (UID: "c248d004-745e-4042-8043-dd144de849c5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 03:38:02.086715 kubelet[2807]: I0430 03:38:02.086693 2807 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c248d004-745e-4042-8043-dd144de849c5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c248d004-745e-4042-8043-dd144de849c5" (UID: "c248d004-745e-4042-8043-dd144de849c5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 30 03:38:02.087855 kubelet[2807]: I0430 03:38:02.087822 2807 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c248d004-745e-4042-8043-dd144de849c5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c248d004-745e-4042-8043-dd144de849c5" (UID: "c248d004-745e-4042-8043-dd144de849c5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 30 03:38:02.087919 kubelet[2807]: I0430 03:38:02.087874 2807 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c248d004-745e-4042-8043-dd144de849c5" (UID: "c248d004-745e-4042-8043-dd144de849c5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 03:38:02.087919 kubelet[2807]: I0430 03:38:02.087893 2807 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-hostproc" (OuterVolumeSpecName: "hostproc") pod "c248d004-745e-4042-8043-dd144de849c5" (UID: "c248d004-745e-4042-8043-dd144de849c5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 03:38:02.091258 kubelet[2807]: I0430 03:38:02.090717 2807 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94e027fd-84c6-489d-954b-6ae05b7d5370-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "94e027fd-84c6-489d-954b-6ae05b7d5370" (UID: "94e027fd-84c6-489d-954b-6ae05b7d5370"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 30 03:38:02.091258 kubelet[2807]: I0430 03:38:02.090763 2807 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-cni-path" (OuterVolumeSpecName: "cni-path") pod "c248d004-745e-4042-8043-dd144de849c5" (UID: "c248d004-745e-4042-8043-dd144de849c5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 03:38:02.091258 kubelet[2807]: I0430 03:38:02.091041 2807 scope.go:117] "RemoveContainer" containerID="d6874378cd875b531c2baa30a517715e7d8da98942d33b582902b381f3b32909"
Apr 30 03:38:02.091258 kubelet[2807]: I0430 03:38:02.091135 2807 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c248d004-745e-4042-8043-dd144de849c5" (UID: "c248d004-745e-4042-8043-dd144de849c5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 03:38:02.091258 kubelet[2807]: I0430 03:38:02.091156 2807 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c248d004-745e-4042-8043-dd144de849c5" (UID: "c248d004-745e-4042-8043-dd144de849c5"). InnerVolumeSpecName "host-proc-sys-kernel".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 03:38:02.094646 containerd[1507]: time="2025-04-30T03:38:02.094248818Z" level=info msg="RemoveContainer for \"d6874378cd875b531c2baa30a517715e7d8da98942d33b582902b381f3b32909\"" Apr 30 03:38:02.100669 containerd[1507]: time="2025-04-30T03:38:02.100617913Z" level=info msg="RemoveContainer for \"d6874378cd875b531c2baa30a517715e7d8da98942d33b582902b381f3b32909\" returns successfully" Apr 30 03:38:02.100775 kubelet[2807]: I0430 03:38:02.100658 2807 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94e027fd-84c6-489d-954b-6ae05b7d5370-kube-api-access-kjvlv" (OuterVolumeSpecName: "kube-api-access-kjvlv") pod "94e027fd-84c6-489d-954b-6ae05b7d5370" (UID: "94e027fd-84c6-489d-954b-6ae05b7d5370"). InnerVolumeSpecName "kube-api-access-kjvlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 03:38:02.100775 kubelet[2807]: I0430 03:38:02.100707 2807 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c248d004-745e-4042-8043-dd144de849c5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c248d004-745e-4042-8043-dd144de849c5" (UID: "c248d004-745e-4042-8043-dd144de849c5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 03:38:02.100775 kubelet[2807]: I0430 03:38:02.100722 2807 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c248d004-745e-4042-8043-dd144de849c5-kube-api-access-tww75" (OuterVolumeSpecName: "kube-api-access-tww75") pod "c248d004-745e-4042-8043-dd144de849c5" (UID: "c248d004-745e-4042-8043-dd144de849c5"). InnerVolumeSpecName "kube-api-access-tww75". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 03:38:02.101068 kubelet[2807]: I0430 03:38:02.101028 2807 scope.go:117] "RemoveContainer" containerID="b3a6f1e5659d73b2847e5f0283e76be7161612236cac77c3d787a0c8563f8bbe" Apr 30 03:38:02.106399 containerd[1507]: time="2025-04-30T03:38:02.106199695Z" level=info msg="RemoveContainer for \"b3a6f1e5659d73b2847e5f0283e76be7161612236cac77c3d787a0c8563f8bbe\"" Apr 30 03:38:02.108830 containerd[1507]: time="2025-04-30T03:38:02.108812247Z" level=info msg="RemoveContainer for \"b3a6f1e5659d73b2847e5f0283e76be7161612236cac77c3d787a0c8563f8bbe\" returns successfully" Apr 30 03:38:02.109198 kubelet[2807]: I0430 03:38:02.109025 2807 scope.go:117] "RemoveContainer" containerID="ee0cf5ba92d0e78bee4965f10acc6fbbab332326a20c8d04272d28fdf30eed8d" Apr 30 03:38:02.110206 containerd[1507]: time="2025-04-30T03:38:02.110139776Z" level=info msg="RemoveContainer for \"ee0cf5ba92d0e78bee4965f10acc6fbbab332326a20c8d04272d28fdf30eed8d\"" Apr 30 03:38:02.124116 containerd[1507]: time="2025-04-30T03:38:02.124014659Z" level=info msg="RemoveContainer for \"ee0cf5ba92d0e78bee4965f10acc6fbbab332326a20c8d04272d28fdf30eed8d\" returns successfully" Apr 30 03:38:02.124383 kubelet[2807]: I0430 03:38:02.124274 2807 scope.go:117] "RemoveContainer" containerID="4940b71930f4021cb6a9af3db589fb9b14a3a493cb9ecd899f4e891ce04921d7" Apr 30 03:38:02.125799 containerd[1507]: time="2025-04-30T03:38:02.125449411Z" level=info msg="RemoveContainer for \"4940b71930f4021cb6a9af3db589fb9b14a3a493cb9ecd899f4e891ce04921d7\"" Apr 30 03:38:02.129134 containerd[1507]: time="2025-04-30T03:38:02.129092973Z" level=info msg="RemoveContainer for \"4940b71930f4021cb6a9af3db589fb9b14a3a493cb9ecd899f4e891ce04921d7\" returns successfully" Apr 30 03:38:02.129337 kubelet[2807]: I0430 03:38:02.129262 2807 scope.go:117] "RemoveContainer" containerID="bee572e0346e08a9d731d4c233092ec9b6fb1a8cfbeee86f7bf4b4a3f3e5c4cb" Apr 30 03:38:02.140693 containerd[1507]: 
time="2025-04-30T03:38:02.134234187Z" level=error msg="ContainerStatus for \"bee572e0346e08a9d731d4c233092ec9b6fb1a8cfbeee86f7bf4b4a3f3e5c4cb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bee572e0346e08a9d731d4c233092ec9b6fb1a8cfbeee86f7bf4b4a3f3e5c4cb\": not found" Apr 30 03:38:02.151346 kubelet[2807]: E0430 03:38:02.151294 2807 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bee572e0346e08a9d731d4c233092ec9b6fb1a8cfbeee86f7bf4b4a3f3e5c4cb\": not found" containerID="bee572e0346e08a9d731d4c233092ec9b6fb1a8cfbeee86f7bf4b4a3f3e5c4cb" Apr 30 03:38:02.155620 kubelet[2807]: I0430 03:38:02.155505 2807 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bee572e0346e08a9d731d4c233092ec9b6fb1a8cfbeee86f7bf4b4a3f3e5c4cb"} err="failed to get container status \"bee572e0346e08a9d731d4c233092ec9b6fb1a8cfbeee86f7bf4b4a3f3e5c4cb\": rpc error: code = NotFound desc = an error occurred when try to find container \"bee572e0346e08a9d731d4c233092ec9b6fb1a8cfbeee86f7bf4b4a3f3e5c4cb\": not found" Apr 30 03:38:02.155681 kubelet[2807]: I0430 03:38:02.155625 2807 scope.go:117] "RemoveContainer" containerID="d6874378cd875b531c2baa30a517715e7d8da98942d33b582902b381f3b32909" Apr 30 03:38:02.156029 containerd[1507]: time="2025-04-30T03:38:02.155983069Z" level=error msg="ContainerStatus for \"d6874378cd875b531c2baa30a517715e7d8da98942d33b582902b381f3b32909\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d6874378cd875b531c2baa30a517715e7d8da98942d33b582902b381f3b32909\": not found" Apr 30 03:38:02.156180 kubelet[2807]: E0430 03:38:02.156156 2807 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"d6874378cd875b531c2baa30a517715e7d8da98942d33b582902b381f3b32909\": not found" containerID="d6874378cd875b531c2baa30a517715e7d8da98942d33b582902b381f3b32909" Apr 30 03:38:02.156215 kubelet[2807]: I0430 03:38:02.156187 2807 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d6874378cd875b531c2baa30a517715e7d8da98942d33b582902b381f3b32909"} err="failed to get container status \"d6874378cd875b531c2baa30a517715e7d8da98942d33b582902b381f3b32909\": rpc error: code = NotFound desc = an error occurred when try to find container \"d6874378cd875b531c2baa30a517715e7d8da98942d33b582902b381f3b32909\": not found" Apr 30 03:38:02.156215 kubelet[2807]: I0430 03:38:02.156204 2807 scope.go:117] "RemoveContainer" containerID="b3a6f1e5659d73b2847e5f0283e76be7161612236cac77c3d787a0c8563f8bbe" Apr 30 03:38:02.156468 containerd[1507]: time="2025-04-30T03:38:02.156385035Z" level=error msg="ContainerStatus for \"b3a6f1e5659d73b2847e5f0283e76be7161612236cac77c3d787a0c8563f8bbe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b3a6f1e5659d73b2847e5f0283e76be7161612236cac77c3d787a0c8563f8bbe\": not found" Apr 30 03:38:02.156605 kubelet[2807]: E0430 03:38:02.156575 2807 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b3a6f1e5659d73b2847e5f0283e76be7161612236cac77c3d787a0c8563f8bbe\": not found" containerID="b3a6f1e5659d73b2847e5f0283e76be7161612236cac77c3d787a0c8563f8bbe" Apr 30 03:38:02.156638 kubelet[2807]: I0430 03:38:02.156603 2807 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b3a6f1e5659d73b2847e5f0283e76be7161612236cac77c3d787a0c8563f8bbe"} err="failed to get container status \"b3a6f1e5659d73b2847e5f0283e76be7161612236cac77c3d787a0c8563f8bbe\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"b3a6f1e5659d73b2847e5f0283e76be7161612236cac77c3d787a0c8563f8bbe\": not found" Apr 30 03:38:02.156638 kubelet[2807]: I0430 03:38:02.156620 2807 scope.go:117] "RemoveContainer" containerID="ee0cf5ba92d0e78bee4965f10acc6fbbab332326a20c8d04272d28fdf30eed8d" Apr 30 03:38:02.156829 containerd[1507]: time="2025-04-30T03:38:02.156794267Z" level=error msg="ContainerStatus for \"ee0cf5ba92d0e78bee4965f10acc6fbbab332326a20c8d04272d28fdf30eed8d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ee0cf5ba92d0e78bee4965f10acc6fbbab332326a20c8d04272d28fdf30eed8d\": not found" Apr 30 03:38:02.157044 kubelet[2807]: E0430 03:38:02.156913 2807 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ee0cf5ba92d0e78bee4965f10acc6fbbab332326a20c8d04272d28fdf30eed8d\": not found" containerID="ee0cf5ba92d0e78bee4965f10acc6fbbab332326a20c8d04272d28fdf30eed8d" Apr 30 03:38:02.157044 kubelet[2807]: I0430 03:38:02.156946 2807 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ee0cf5ba92d0e78bee4965f10acc6fbbab332326a20c8d04272d28fdf30eed8d"} err="failed to get container status \"ee0cf5ba92d0e78bee4965f10acc6fbbab332326a20c8d04272d28fdf30eed8d\": rpc error: code = NotFound desc = an error occurred when try to find container \"ee0cf5ba92d0e78bee4965f10acc6fbbab332326a20c8d04272d28fdf30eed8d\": not found" Apr 30 03:38:02.157044 kubelet[2807]: I0430 03:38:02.156979 2807 scope.go:117] "RemoveContainer" containerID="4940b71930f4021cb6a9af3db589fb9b14a3a493cb9ecd899f4e891ce04921d7" Apr 30 03:38:02.157484 kubelet[2807]: E0430 03:38:02.157333 2807 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4940b71930f4021cb6a9af3db589fb9b14a3a493cb9ecd899f4e891ce04921d7\": not found" 
containerID="4940b71930f4021cb6a9af3db589fb9b14a3a493cb9ecd899f4e891ce04921d7" Apr 30 03:38:02.157484 kubelet[2807]: I0430 03:38:02.157348 2807 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4940b71930f4021cb6a9af3db589fb9b14a3a493cb9ecd899f4e891ce04921d7"} err="failed to get container status \"4940b71930f4021cb6a9af3db589fb9b14a3a493cb9ecd899f4e891ce04921d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"4940b71930f4021cb6a9af3db589fb9b14a3a493cb9ecd899f4e891ce04921d7\": not found" Apr 30 03:38:02.157484 kubelet[2807]: I0430 03:38:02.157361 2807 scope.go:117] "RemoveContainer" containerID="8f94c9956cc61cba13b20c77b82c086b46f7fe48cc6cdde3aad436bfe706c565" Apr 30 03:38:02.157565 containerd[1507]: time="2025-04-30T03:38:02.157240947Z" level=error msg="ContainerStatus for \"4940b71930f4021cb6a9af3db589fb9b14a3a493cb9ecd899f4e891ce04921d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4940b71930f4021cb6a9af3db589fb9b14a3a493cb9ecd899f4e891ce04921d7\": not found" Apr 30 03:38:02.158438 containerd[1507]: time="2025-04-30T03:38:02.158353073Z" level=info msg="RemoveContainer for \"8f94c9956cc61cba13b20c77b82c086b46f7fe48cc6cdde3aad436bfe706c565\"" Apr 30 03:38:02.162163 containerd[1507]: time="2025-04-30T03:38:02.162121319Z" level=info msg="RemoveContainer for \"8f94c9956cc61cba13b20c77b82c086b46f7fe48cc6cdde3aad436bfe706c565\" returns successfully" Apr 30 03:38:02.162373 kubelet[2807]: I0430 03:38:02.162349 2807 scope.go:117] "RemoveContainer" containerID="8f94c9956cc61cba13b20c77b82c086b46f7fe48cc6cdde3aad436bfe706c565" Apr 30 03:38:02.162708 containerd[1507]: time="2025-04-30T03:38:02.162645597Z" level=error msg="ContainerStatus for \"8f94c9956cc61cba13b20c77b82c086b46f7fe48cc6cdde3aad436bfe706c565\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"8f94c9956cc61cba13b20c77b82c086b46f7fe48cc6cdde3aad436bfe706c565\": not found" Apr 30 03:38:02.162845 kubelet[2807]: E0430 03:38:02.162815 2807 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8f94c9956cc61cba13b20c77b82c086b46f7fe48cc6cdde3aad436bfe706c565\": not found" containerID="8f94c9956cc61cba13b20c77b82c086b46f7fe48cc6cdde3aad436bfe706c565" Apr 30 03:38:02.162995 kubelet[2807]: I0430 03:38:02.162851 2807 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8f94c9956cc61cba13b20c77b82c086b46f7fe48cc6cdde3aad436bfe706c565"} err="failed to get container status \"8f94c9956cc61cba13b20c77b82c086b46f7fe48cc6cdde3aad436bfe706c565\": rpc error: code = NotFound desc = an error occurred when try to find container \"8f94c9956cc61cba13b20c77b82c086b46f7fe48cc6cdde3aad436bfe706c565\": not found" Apr 30 03:38:02.167917 kubelet[2807]: I0430 03:38:02.167877 2807 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-kjvlv\" (UniqueName: \"kubernetes.io/projected/94e027fd-84c6-489d-954b-6ae05b7d5370-kube-api-access-kjvlv\") on node \"ci-4081-3-3-b-f8d40824c9\" DevicePath \"\"" Apr 30 03:38:02.167917 kubelet[2807]: I0430 03:38:02.167898 2807 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c248d004-745e-4042-8043-dd144de849c5-hubble-tls\") on node \"ci-4081-3-3-b-f8d40824c9\" DevicePath \"\"" Apr 30 03:38:02.167917 kubelet[2807]: I0430 03:38:02.167907 2807 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-lib-modules\") on node \"ci-4081-3-3-b-f8d40824c9\" DevicePath \"\"" Apr 30 03:38:02.167917 kubelet[2807]: I0430 03:38:02.167913 2807 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/c248d004-745e-4042-8043-dd144de849c5-clustermesh-secrets\") on node \"ci-4081-3-3-b-f8d40824c9\" DevicePath \"\"" Apr 30 03:38:02.167917 kubelet[2807]: I0430 03:38:02.167920 2807 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-host-proc-sys-kernel\") on node \"ci-4081-3-3-b-f8d40824c9\" DevicePath \"\"" Apr 30 03:38:02.168157 kubelet[2807]: I0430 03:38:02.167941 2807 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-cilium-cgroup\") on node \"ci-4081-3-3-b-f8d40824c9\" DevicePath \"\"" Apr 30 03:38:02.168157 kubelet[2807]: I0430 03:38:02.167960 2807 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-host-proc-sys-net\") on node \"ci-4081-3-3-b-f8d40824c9\" DevicePath \"\"" Apr 30 03:38:02.168157 kubelet[2807]: I0430 03:38:02.167968 2807 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-tww75\" (UniqueName: \"kubernetes.io/projected/c248d004-745e-4042-8043-dd144de849c5-kube-api-access-tww75\") on node \"ci-4081-3-3-b-f8d40824c9\" DevicePath \"\"" Apr 30 03:38:02.168157 kubelet[2807]: I0430 03:38:02.167974 2807 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c248d004-745e-4042-8043-dd144de849c5-cilium-config-path\") on node \"ci-4081-3-3-b-f8d40824c9\" DevicePath \"\"" Apr 30 03:38:02.168157 kubelet[2807]: I0430 03:38:02.167981 2807 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-bpf-maps\") on node \"ci-4081-3-3-b-f8d40824c9\" DevicePath \"\"" Apr 30 03:38:02.168157 kubelet[2807]: I0430 03:38:02.167987 2807 reconciler_common.go:289] "Volume detached for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-hostproc\") on node \"ci-4081-3-3-b-f8d40824c9\" DevicePath \"\"" Apr 30 03:38:02.168157 kubelet[2807]: I0430 03:38:02.167994 2807 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94e027fd-84c6-489d-954b-6ae05b7d5370-cilium-config-path\") on node \"ci-4081-3-3-b-f8d40824c9\" DevicePath \"\"" Apr 30 03:38:02.168157 kubelet[2807]: I0430 03:38:02.168002 2807 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-cni-path\") on node \"ci-4081-3-3-b-f8d40824c9\" DevicePath \"\"" Apr 30 03:38:02.168436 kubelet[2807]: I0430 03:38:02.168035 2807 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-etc-cni-netd\") on node \"ci-4081-3-3-b-f8d40824c9\" DevicePath \"\"" Apr 30 03:38:02.168436 kubelet[2807]: I0430 03:38:02.168042 2807 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-xtables-lock\") on node \"ci-4081-3-3-b-f8d40824c9\" DevicePath \"\"" Apr 30 03:38:02.168436 kubelet[2807]: I0430 03:38:02.168049 2807 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c248d004-745e-4042-8043-dd144de849c5-cilium-run\") on node \"ci-4081-3-3-b-f8d40824c9\" DevicePath \"\"" Apr 30 03:38:02.362068 systemd[1]: Removed slice kubepods-burstable-podc248d004_745e_4042_8043_dd144de849c5.slice - libcontainer container kubepods-burstable-podc248d004_745e_4042_8043_dd144de849c5.slice. Apr 30 03:38:02.362296 systemd[1]: kubepods-burstable-podc248d004_745e_4042_8043_dd144de849c5.slice: Consumed 8.350s CPU time. 
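The RemoveContainer / ContainerStatus exchange above shows kubelet's idempotent cleanup: after each successful RemoveContainer, a follow-up ContainerStatus probe returns NotFound, and the pod container deletor treats that as "already gone" rather than as a failure. A minimal sketch (hypothetical helper, not kubelet code; the 64-hex container IDs match the format in this log) that checks a log capture for that benign pattern:

```python
import re

# Matches the containerd/kubelet messages seen above; the optional
# backslash accounts for the escaped quotes in the journal capture.
REMOVED = re.compile(r'RemoveContainer for \\?"([0-9a-f]{64})\\?" returns successfully')
NOT_FOUND = re.compile(r'find container \\?"([0-9a-f]{64})\\?": not found')

def deletion_is_idempotent(log: str) -> bool:
    """True when every container later reported NotFound was in fact
    removed earlier in the same log -- the benign race visible above."""
    removed = set(REMOVED.findall(log))
    missing = set(NOT_FOUND.findall(log))
    return missing <= removed
```

Run against the entries above, every NotFound ID (bee572e0…, d6874378…, etc.) pairs with an earlier successful RemoveContainer, so the errors are expected noise, not leaks.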
Apr 30 03:38:02.384682 systemd[1]: Removed slice kubepods-besteffort-pod94e027fd_84c6_489d_954b_6ae05b7d5370.slice - libcontainer container kubepods-besteffort-pod94e027fd_84c6_489d_954b_6ae05b7d5370.slice. Apr 30 03:38:02.729756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb8e3d9fd0f86b308c19985afc3c01cd560a363f005bee567ee63db544993fa3-rootfs.mount: Deactivated successfully. Apr 30 03:38:02.729978 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94571f2ad1f01876c090fa8dca61476d4094dd22581389f4530b6cfc0a42a2c8-rootfs.mount: Deactivated successfully. Apr 30 03:38:02.730090 systemd[1]: var-lib-kubelet-pods-94e027fd\x2d84c6\x2d489d\x2d954b\x2d6ae05b7d5370-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkjvlv.mount: Deactivated successfully. Apr 30 03:38:02.730214 systemd[1]: var-lib-kubelet-pods-c248d004\x2d745e\x2d4042\x2d8043\x2ddd144de849c5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtww75.mount: Deactivated successfully. Apr 30 03:38:02.730326 systemd[1]: var-lib-kubelet-pods-c248d004\x2d745e\x2d4042\x2d8043\x2ddd144de849c5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 30 03:38:02.730459 systemd[1]: var-lib-kubelet-pods-c248d004\x2d745e\x2d4042\x2d8043\x2ddd144de849c5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 30 03:38:03.738286 sshd[4367]: pam_unix(sshd:session): session closed for user core Apr 30 03:38:03.743081 systemd[1]: sshd@19-157.180.66.130:22-139.178.68.195:42028.service: Deactivated successfully. Apr 30 03:38:03.747253 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 03:38:03.747561 systemd[1]: session-20.scope: Consumed 1.122s CPU time. Apr 30 03:38:03.749988 systemd-logind[1487]: Session 20 logged out. Waiting for processes to exit. Apr 30 03:38:03.752991 systemd-logind[1487]: Removed session 20. 
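The `var-lib-kubelet-pods-…\x2d…` mount units being deactivated above are systemd-escaped filesystem paths: `/` becomes `-`, and characters like a literal `-` or `~` inside a path component are hex-encoded (`\x2d`, `\x7e`). A small decoder sketch (equivalent in spirit to `systemd-escape --unescape --path`, assuming well-formed unit names):

```python
import re

def systemd_unescape(unit: str) -> str:
    """Reverse systemd's path escaping for mount unit names:
    '-' separates path components; a literal '-', '~', etc. inside
    a component appears hex-escaped as '\\xNN'."""
    name = unit.removesuffix(".mount")
    parts = name.split("-")  # safe: escaped bytes contain no '-'
    decode = lambda p: re.sub(r"\\x([0-9a-fA-F]{2})",
                              lambda m: chr(int(m.group(1), 16)), p)
    return "/" + "/".join(decode(p) for p in parts)
```

Applied to the clustermesh-secrets unit above, this recovers the kubelet volume directory `/var/lib/kubelet/pods/c248d004-745e-4042-8043-dd144de849c5/volumes/kubernetes.io~secret/clustermesh-secrets`.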
Apr 30 03:38:03.914405 systemd[1]: Started sshd@20-157.180.66.130:22-139.178.68.195:42032.service - OpenSSH per-connection server daemon (139.178.68.195:42032). Apr 30 03:38:04.077041 kubelet[2807]: I0430 03:38:04.075822 2807 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94e027fd-84c6-489d-954b-6ae05b7d5370" path="/var/lib/kubelet/pods/94e027fd-84c6-489d-954b-6ae05b7d5370/volumes" Apr 30 03:38:04.077041 kubelet[2807]: I0430 03:38:04.076596 2807 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c248d004-745e-4042-8043-dd144de849c5" path="/var/lib/kubelet/pods/c248d004-745e-4042-8043-dd144de849c5/volumes" Apr 30 03:38:04.895875 sshd[4526]: Accepted publickey for core from 139.178.68.195 port 42032 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s Apr 30 03:38:04.899560 sshd[4526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 03:38:04.908098 systemd-logind[1487]: New session 21 of user core. Apr 30 03:38:04.915202 systemd[1]: Started session-21.scope - Session 21 of User core. 
Apr 30 03:38:05.253642 kubelet[2807]: E0430 03:38:05.253501 2807 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 03:38:06.210888 kubelet[2807]: I0430 03:38:06.210827 2807 topology_manager.go:215] "Topology Admit Handler" podUID="77e3bc59-4da5-430a-9991-2712b68444b1" podNamespace="kube-system" podName="cilium-swztj" Apr 30 03:38:06.214884 kubelet[2807]: E0430 03:38:06.214815 2807 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c248d004-745e-4042-8043-dd144de849c5" containerName="mount-bpf-fs" Apr 30 03:38:06.214884 kubelet[2807]: E0430 03:38:06.214842 2807 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c248d004-745e-4042-8043-dd144de849c5" containerName="clean-cilium-state" Apr 30 03:38:06.214884 kubelet[2807]: E0430 03:38:06.214848 2807 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c248d004-745e-4042-8043-dd144de849c5" containerName="mount-cgroup" Apr 30 03:38:06.214884 kubelet[2807]: E0430 03:38:06.214853 2807 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c248d004-745e-4042-8043-dd144de849c5" containerName="apply-sysctl-overwrites" Apr 30 03:38:06.214884 kubelet[2807]: E0430 03:38:06.214857 2807 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c248d004-745e-4042-8043-dd144de849c5" containerName="cilium-agent" Apr 30 03:38:06.214884 kubelet[2807]: E0430 03:38:06.214863 2807 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94e027fd-84c6-489d-954b-6ae05b7d5370" containerName="cilium-operator" Apr 30 03:38:06.214884 kubelet[2807]: I0430 03:38:06.214886 2807 memory_manager.go:354] "RemoveStaleState removing state" podUID="c248d004-745e-4042-8043-dd144de849c5" containerName="cilium-agent" Apr 30 03:38:06.214884 kubelet[2807]: I0430 03:38:06.214890 2807 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="94e027fd-84c6-489d-954b-6ae05b7d5370" containerName="cilium-operator" Apr 30 03:38:06.274599 systemd[1]: Created slice kubepods-burstable-pod77e3bc59_4da5_430a_9991_2712b68444b1.slice - libcontainer container kubepods-burstable-pod77e3bc59_4da5_430a_9991_2712b68444b1.slice. Apr 30 03:38:06.388042 sshd[4526]: pam_unix(sshd:session): session closed for user core Apr 30 03:38:06.391543 systemd[1]: sshd@20-157.180.66.130:22-139.178.68.195:42032.service: Deactivated successfully. Apr 30 03:38:06.393336 systemd[1]: session-21.scope: Deactivated successfully. Apr 30 03:38:06.395122 systemd-logind[1487]: Session 21 logged out. Waiting for processes to exit. Apr 30 03:38:06.397320 systemd-logind[1487]: Removed session 21. Apr 30 03:38:06.401615 kubelet[2807]: I0430 03:38:06.401565 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/77e3bc59-4da5-430a-9991-2712b68444b1-clustermesh-secrets\") pod \"cilium-swztj\" (UID: \"77e3bc59-4da5-430a-9991-2712b68444b1\") " pod="kube-system/cilium-swztj" Apr 30 03:38:06.401873 kubelet[2807]: I0430 03:38:06.401643 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/77e3bc59-4da5-430a-9991-2712b68444b1-host-proc-sys-net\") pod \"cilium-swztj\" (UID: \"77e3bc59-4da5-430a-9991-2712b68444b1\") " pod="kube-system/cilium-swztj" Apr 30 03:38:06.401873 kubelet[2807]: I0430 03:38:06.401698 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pcp2\" (UniqueName: \"kubernetes.io/projected/77e3bc59-4da5-430a-9991-2712b68444b1-kube-api-access-8pcp2\") pod \"cilium-swztj\" (UID: \"77e3bc59-4da5-430a-9991-2712b68444b1\") " pod="kube-system/cilium-swztj" Apr 30 03:38:06.401873 kubelet[2807]: I0430 03:38:06.401739 2807 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/77e3bc59-4da5-430a-9991-2712b68444b1-cilium-ipsec-secrets\") pod \"cilium-swztj\" (UID: \"77e3bc59-4da5-430a-9991-2712b68444b1\") " pod="kube-system/cilium-swztj" Apr 30 03:38:06.401873 kubelet[2807]: I0430 03:38:06.401776 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/77e3bc59-4da5-430a-9991-2712b68444b1-hubble-tls\") pod \"cilium-swztj\" (UID: \"77e3bc59-4da5-430a-9991-2712b68444b1\") " pod="kube-system/cilium-swztj" Apr 30 03:38:06.401873 kubelet[2807]: I0430 03:38:06.401808 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/77e3bc59-4da5-430a-9991-2712b68444b1-cilium-cgroup\") pod \"cilium-swztj\" (UID: \"77e3bc59-4da5-430a-9991-2712b68444b1\") " pod="kube-system/cilium-swztj" Apr 30 03:38:06.401873 kubelet[2807]: I0430 03:38:06.401835 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/77e3bc59-4da5-430a-9991-2712b68444b1-cni-path\") pod \"cilium-swztj\" (UID: \"77e3bc59-4da5-430a-9991-2712b68444b1\") " pod="kube-system/cilium-swztj" Apr 30 03:38:06.402121 kubelet[2807]: I0430 03:38:06.401862 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/77e3bc59-4da5-430a-9991-2712b68444b1-host-proc-sys-kernel\") pod \"cilium-swztj\" (UID: \"77e3bc59-4da5-430a-9991-2712b68444b1\") " pod="kube-system/cilium-swztj" Apr 30 03:38:06.402121 kubelet[2807]: I0430 03:38:06.401886 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/77e3bc59-4da5-430a-9991-2712b68444b1-xtables-lock\") pod \"cilium-swztj\" (UID: \"77e3bc59-4da5-430a-9991-2712b68444b1\") " pod="kube-system/cilium-swztj" Apr 30 03:38:06.402121 kubelet[2807]: I0430 03:38:06.401912 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/77e3bc59-4da5-430a-9991-2712b68444b1-cilium-config-path\") pod \"cilium-swztj\" (UID: \"77e3bc59-4da5-430a-9991-2712b68444b1\") " pod="kube-system/cilium-swztj" Apr 30 03:38:06.402121 kubelet[2807]: I0430 03:38:06.401969 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/77e3bc59-4da5-430a-9991-2712b68444b1-cilium-run\") pod \"cilium-swztj\" (UID: \"77e3bc59-4da5-430a-9991-2712b68444b1\") " pod="kube-system/cilium-swztj" Apr 30 03:38:06.402121 kubelet[2807]: I0430 03:38:06.402022 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/77e3bc59-4da5-430a-9991-2712b68444b1-bpf-maps\") pod \"cilium-swztj\" (UID: \"77e3bc59-4da5-430a-9991-2712b68444b1\") " pod="kube-system/cilium-swztj" Apr 30 03:38:06.402121 kubelet[2807]: I0430 03:38:06.402047 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77e3bc59-4da5-430a-9991-2712b68444b1-lib-modules\") pod \"cilium-swztj\" (UID: \"77e3bc59-4da5-430a-9991-2712b68444b1\") " pod="kube-system/cilium-swztj" Apr 30 03:38:06.402302 kubelet[2807]: I0430 03:38:06.402073 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/77e3bc59-4da5-430a-9991-2712b68444b1-hostproc\") pod \"cilium-swztj\" (UID: \"77e3bc59-4da5-430a-9991-2712b68444b1\") " 
pod="kube-system/cilium-swztj" Apr 30 03:38:06.402302 kubelet[2807]: I0430 03:38:06.402097 2807 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77e3bc59-4da5-430a-9991-2712b68444b1-etc-cni-netd\") pod \"cilium-swztj\" (UID: \"77e3bc59-4da5-430a-9991-2712b68444b1\") " pod="kube-system/cilium-swztj" Apr 30 03:38:06.560159 systemd[1]: Started sshd@21-157.180.66.130:22-139.178.68.195:58992.service - OpenSSH per-connection server daemon (139.178.68.195:58992). Apr 30 03:38:06.578186 containerd[1507]: time="2025-04-30T03:38:06.578151356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-swztj,Uid:77e3bc59-4da5-430a-9991-2712b68444b1,Namespace:kube-system,Attempt:0,}" Apr 30 03:38:06.612172 containerd[1507]: time="2025-04-30T03:38:06.611848902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 03:38:06.612172 containerd[1507]: time="2025-04-30T03:38:06.612011739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 03:38:06.612172 containerd[1507]: time="2025-04-30T03:38:06.612041154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:38:06.613061 containerd[1507]: time="2025-04-30T03:38:06.612504527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 03:38:06.634136 systemd[1]: Started cri-containerd-f29d0a13fc667199d611244406535587640887e783c31aa565a1cde4d76bda82.scope - libcontainer container f29d0a13fc667199d611244406535587640887e783c31aa565a1cde4d76bda82. 
Apr 30 03:38:06.661655 containerd[1507]: time="2025-04-30T03:38:06.661438751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-swztj,Uid:77e3bc59-4da5-430a-9991-2712b68444b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"f29d0a13fc667199d611244406535587640887e783c31aa565a1cde4d76bda82\""
Apr 30 03:38:06.678055 containerd[1507]: time="2025-04-30T03:38:06.677855189Z" level=info msg="CreateContainer within sandbox \"f29d0a13fc667199d611244406535587640887e783c31aa565a1cde4d76bda82\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 30 03:38:06.697587 containerd[1507]: time="2025-04-30T03:38:06.697529885Z" level=info msg="CreateContainer within sandbox \"f29d0a13fc667199d611244406535587640887e783c31aa565a1cde4d76bda82\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c36cba03eb2be4e6e19881499dc161aed90676fdd5bdeed55b1757a571916647\""
Apr 30 03:38:06.699178 containerd[1507]: time="2025-04-30T03:38:06.699151459Z" level=info msg="StartContainer for \"c36cba03eb2be4e6e19881499dc161aed90676fdd5bdeed55b1757a571916647\""
Apr 30 03:38:06.723184 systemd[1]: Started cri-containerd-c36cba03eb2be4e6e19881499dc161aed90676fdd5bdeed55b1757a571916647.scope - libcontainer container c36cba03eb2be4e6e19881499dc161aed90676fdd5bdeed55b1757a571916647.
Apr 30 03:38:06.748305 containerd[1507]: time="2025-04-30T03:38:06.748170532Z" level=info msg="StartContainer for \"c36cba03eb2be4e6e19881499dc161aed90676fdd5bdeed55b1757a571916647\" returns successfully"
Apr 30 03:38:06.761373 systemd[1]: cri-containerd-c36cba03eb2be4e6e19881499dc161aed90676fdd5bdeed55b1757a571916647.scope: Deactivated successfully.
Apr 30 03:38:06.796655 containerd[1507]: time="2025-04-30T03:38:06.796599394Z" level=info msg="shim disconnected" id=c36cba03eb2be4e6e19881499dc161aed90676fdd5bdeed55b1757a571916647 namespace=k8s.io
Apr 30 03:38:06.796655 containerd[1507]: time="2025-04-30T03:38:06.796648075Z" level=warning msg="cleaning up after shim disconnected" id=c36cba03eb2be4e6e19881499dc161aed90676fdd5bdeed55b1757a571916647 namespace=k8s.io
Apr 30 03:38:06.796843 containerd[1507]: time="2025-04-30T03:38:06.796667903Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:38:07.048597 containerd[1507]: time="2025-04-30T03:38:07.048427420Z" level=info msg="CreateContainer within sandbox \"f29d0a13fc667199d611244406535587640887e783c31aa565a1cde4d76bda82\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 30 03:38:07.058915 containerd[1507]: time="2025-04-30T03:38:07.058864618Z" level=info msg="CreateContainer within sandbox \"f29d0a13fc667199d611244406535587640887e783c31aa565a1cde4d76bda82\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7695a372267afdccb045297cbe69e6ebd25055ff4ca96dc09f1d4e2c4b9432ec\""
Apr 30 03:38:07.060281 containerd[1507]: time="2025-04-30T03:38:07.059465269Z" level=info msg="StartContainer for \"7695a372267afdccb045297cbe69e6ebd25055ff4ca96dc09f1d4e2c4b9432ec\""
Apr 30 03:38:07.091252 systemd[1]: Started cri-containerd-7695a372267afdccb045297cbe69e6ebd25055ff4ca96dc09f1d4e2c4b9432ec.scope - libcontainer container 7695a372267afdccb045297cbe69e6ebd25055ff4ca96dc09f1d4e2c4b9432ec.
Apr 30 03:38:07.120524 containerd[1507]: time="2025-04-30T03:38:07.120232489Z" level=info msg="StartContainer for \"7695a372267afdccb045297cbe69e6ebd25055ff4ca96dc09f1d4e2c4b9432ec\" returns successfully"
Apr 30 03:38:07.123902 kubelet[2807]: I0430 03:38:07.123010 2807 setters.go:580] "Node became not ready" node="ci-4081-3-3-b-f8d40824c9" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-04-30T03:38:07Z","lastTransitionTime":"2025-04-30T03:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 30 03:38:07.135130 systemd[1]: cri-containerd-7695a372267afdccb045297cbe69e6ebd25055ff4ca96dc09f1d4e2c4b9432ec.scope: Deactivated successfully.
Apr 30 03:38:07.157707 containerd[1507]: time="2025-04-30T03:38:07.157638510Z" level=info msg="shim disconnected" id=7695a372267afdccb045297cbe69e6ebd25055ff4ca96dc09f1d4e2c4b9432ec namespace=k8s.io
Apr 30 03:38:07.157707 containerd[1507]: time="2025-04-30T03:38:07.157688715Z" level=warning msg="cleaning up after shim disconnected" id=7695a372267afdccb045297cbe69e6ebd25055ff4ca96dc09f1d4e2c4b9432ec namespace=k8s.io
Apr 30 03:38:07.157707 containerd[1507]: time="2025-04-30T03:38:07.157695688Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:38:07.530210 sshd[4542]: Accepted publickey for core from 139.178.68.195 port 58992 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s
Apr 30 03:38:07.531980 sshd[4542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:38:07.538097 systemd-logind[1487]: New session 22 of user core.
Apr 30 03:38:07.545158 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 30 03:38:08.059727 containerd[1507]: time="2025-04-30T03:38:08.059619685Z" level=info msg="CreateContainer within sandbox \"f29d0a13fc667199d611244406535587640887e783c31aa565a1cde4d76bda82\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 30 03:38:08.097865 containerd[1507]: time="2025-04-30T03:38:08.096306992Z" level=info msg="CreateContainer within sandbox \"f29d0a13fc667199d611244406535587640887e783c31aa565a1cde4d76bda82\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4691226c3adc2a4298f6b5a19f86c92754f00a4aee3acc96e570444eb688cc6d\""
Apr 30 03:38:08.097865 containerd[1507]: time="2025-04-30T03:38:08.097072123Z" level=info msg="StartContainer for \"4691226c3adc2a4298f6b5a19f86c92754f00a4aee3acc96e570444eb688cc6d\""
Apr 30 03:38:08.098479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4099256728.mount: Deactivated successfully.
Apr 30 03:38:08.145242 systemd[1]: Started cri-containerd-4691226c3adc2a4298f6b5a19f86c92754f00a4aee3acc96e570444eb688cc6d.scope - libcontainer container 4691226c3adc2a4298f6b5a19f86c92754f00a4aee3acc96e570444eb688cc6d.
Apr 30 03:38:08.194886 containerd[1507]: time="2025-04-30T03:38:08.194828710Z" level=info msg="StartContainer for \"4691226c3adc2a4298f6b5a19f86c92754f00a4aee3acc96e570444eb688cc6d\" returns successfully"
Apr 30 03:38:08.198677 systemd[1]: cri-containerd-4691226c3adc2a4298f6b5a19f86c92754f00a4aee3acc96e570444eb688cc6d.scope: Deactivated successfully.
Apr 30 03:38:08.212343 sshd[4542]: pam_unix(sshd:session): session closed for user core
Apr 30 03:38:08.223193 systemd-logind[1487]: Session 22 logged out. Waiting for processes to exit.
Apr 30 03:38:08.224288 systemd[1]: sshd@21-157.180.66.130:22-139.178.68.195:58992.service: Deactivated successfully.
Apr 30 03:38:08.229742 systemd[1]: session-22.scope: Deactivated successfully.
Apr 30 03:38:08.232556 systemd-logind[1487]: Removed session 22.
Apr 30 03:38:08.250237 containerd[1507]: time="2025-04-30T03:38:08.249844927Z" level=info msg="shim disconnected" id=4691226c3adc2a4298f6b5a19f86c92754f00a4aee3acc96e570444eb688cc6d namespace=k8s.io
Apr 30 03:38:08.250237 containerd[1507]: time="2025-04-30T03:38:08.249924077Z" level=warning msg="cleaning up after shim disconnected" id=4691226c3adc2a4298f6b5a19f86c92754f00a4aee3acc96e570444eb688cc6d namespace=k8s.io
Apr 30 03:38:08.250237 containerd[1507]: time="2025-04-30T03:38:08.249954164Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:38:08.383322 systemd[1]: Started sshd@22-157.180.66.130:22-139.178.68.195:58994.service - OpenSSH per-connection server daemon (139.178.68.195:58994).
Apr 30 03:38:08.515461 systemd[1]: run-containerd-runc-k8s.io-4691226c3adc2a4298f6b5a19f86c92754f00a4aee3acc96e570444eb688cc6d-runc.laWzug.mount: Deactivated successfully.
Apr 30 03:38:08.515636 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4691226c3adc2a4298f6b5a19f86c92754f00a4aee3acc96e570444eb688cc6d-rootfs.mount: Deactivated successfully.
Apr 30 03:38:09.071056 containerd[1507]: time="2025-04-30T03:38:09.070989563Z" level=info msg="CreateContainer within sandbox \"f29d0a13fc667199d611244406535587640887e783c31aa565a1cde4d76bda82\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 30 03:38:09.073201 kubelet[2807]: E0430 03:38:09.072982 2807 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-cqgx2" podUID="3f8415f9-bd34-4ffd-b599-7c6a3031b4ae"
Apr 30 03:38:09.087877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4263816505.mount: Deactivated successfully.
Apr 30 03:38:09.092987 containerd[1507]: time="2025-04-30T03:38:09.092502812Z" level=info msg="CreateContainer within sandbox \"f29d0a13fc667199d611244406535587640887e783c31aa565a1cde4d76bda82\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cb96790801650adb221af02134c0176c282a66c1c9f082ce631cd65b8b6e36df\""
Apr 30 03:38:09.094052 containerd[1507]: time="2025-04-30T03:38:09.094012445Z" level=info msg="StartContainer for \"cb96790801650adb221af02134c0176c282a66c1c9f082ce631cd65b8b6e36df\""
Apr 30 03:38:09.121081 systemd[1]: Started cri-containerd-cb96790801650adb221af02134c0176c282a66c1c9f082ce631cd65b8b6e36df.scope - libcontainer container cb96790801650adb221af02134c0176c282a66c1c9f082ce631cd65b8b6e36df.
Apr 30 03:38:09.144557 systemd[1]: cri-containerd-cb96790801650adb221af02134c0176c282a66c1c9f082ce631cd65b8b6e36df.scope: Deactivated successfully.
Apr 30 03:38:09.147692 containerd[1507]: time="2025-04-30T03:38:09.147658474Z" level=info msg="StartContainer for \"cb96790801650adb221af02134c0176c282a66c1c9f082ce631cd65b8b6e36df\" returns successfully"
Apr 30 03:38:09.170071 containerd[1507]: time="2025-04-30T03:38:09.169914130Z" level=info msg="shim disconnected" id=cb96790801650adb221af02134c0176c282a66c1c9f082ce631cd65b8b6e36df namespace=k8s.io
Apr 30 03:38:09.170071 containerd[1507]: time="2025-04-30T03:38:09.170060566Z" level=warning msg="cleaning up after shim disconnected" id=cb96790801650adb221af02134c0176c282a66c1c9f082ce631cd65b8b6e36df namespace=k8s.io
Apr 30 03:38:09.170071 containerd[1507]: time="2025-04-30T03:38:09.170073189Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:38:09.343805 sshd[4776]: Accepted publickey for core from 139.178.68.195 port 58994 ssh2: RSA SHA256:gGXMCF4E/CKFW/UaU7FG2z812oBOSn8bTrcx47QNk0s
Apr 30 03:38:09.346905 sshd[4776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 03:38:09.354590 systemd-logind[1487]: New session 23 of user core.
Apr 30 03:38:09.362189 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 30 03:38:09.515547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb96790801650adb221af02134c0176c282a66c1c9f082ce631cd65b8b6e36df-rootfs.mount: Deactivated successfully.
Apr 30 03:38:10.080463 containerd[1507]: time="2025-04-30T03:38:10.080413939Z" level=info msg="CreateContainer within sandbox \"f29d0a13fc667199d611244406535587640887e783c31aa565a1cde4d76bda82\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 30 03:38:10.115847 containerd[1507]: time="2025-04-30T03:38:10.115753488Z" level=info msg="CreateContainer within sandbox \"f29d0a13fc667199d611244406535587640887e783c31aa565a1cde4d76bda82\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"97a86d758c0ad3f41874215a3d306813a016aa6316f88f0015d76f192cd1d528\""
Apr 30 03:38:10.118530 containerd[1507]: time="2025-04-30T03:38:10.117637295Z" level=info msg="StartContainer for \"97a86d758c0ad3f41874215a3d306813a016aa6316f88f0015d76f192cd1d528\""
Apr 30 03:38:10.163076 systemd[1]: Started cri-containerd-97a86d758c0ad3f41874215a3d306813a016aa6316f88f0015d76f192cd1d528.scope - libcontainer container 97a86d758c0ad3f41874215a3d306813a016aa6316f88f0015d76f192cd1d528.
Apr 30 03:38:10.205777 containerd[1507]: time="2025-04-30T03:38:10.205583956Z" level=info msg="StartContainer for \"97a86d758c0ad3f41874215a3d306813a016aa6316f88f0015d76f192cd1d528\" returns successfully"
Apr 30 03:38:10.256047 kubelet[2807]: E0430 03:38:10.255854 2807 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 30 03:38:10.747123 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 30 03:38:11.073015 kubelet[2807]: E0430 03:38:11.072785 2807 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-cqgx2" podUID="3f8415f9-bd34-4ffd-b599-7c6a3031b4ae"
Apr 30 03:38:11.097017 kubelet[2807]: I0430 03:38:11.096912 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-swztj" podStartSLOduration=5.096884945 podStartE2EDuration="5.096884945s" podCreationTimestamp="2025-04-30 03:38:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 03:38:11.095557515 +0000 UTC m=+341.137304370" watchObservedRunningTime="2025-04-30 03:38:11.096884945 +0000 UTC m=+341.138631841"
Apr 30 03:38:12.330513 systemd[1]: run-containerd-runc-k8s.io-97a86d758c0ad3f41874215a3d306813a016aa6316f88f0015d76f192cd1d528-runc.NL4B3G.mount: Deactivated successfully.
Apr 30 03:38:13.073315 kubelet[2807]: E0430 03:38:13.073246 2807 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-cqgx2" podUID="3f8415f9-bd34-4ffd-b599-7c6a3031b4ae"
Apr 30 03:38:13.683942 systemd-networkd[1400]: lxc_health: Link UP
Apr 30 03:38:13.696013 systemd-networkd[1400]: lxc_health: Gained carrier
Apr 30 03:38:15.073071 kubelet[2807]: E0430 03:38:15.072995 2807 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-cqgx2" podUID="3f8415f9-bd34-4ffd-b599-7c6a3031b4ae"
Apr 30 03:38:15.595124 systemd-networkd[1400]: lxc_health: Gained IPv6LL
Apr 30 03:38:19.075155 sshd[4776]: pam_unix(sshd:session): session closed for user core
Apr 30 03:38:19.080533 systemd[1]: sshd@22-157.180.66.130:22-139.178.68.195:58994.service: Deactivated successfully.
Apr 30 03:38:19.082902 systemd[1]: session-23.scope: Deactivated successfully.
Apr 30 03:38:19.084171 systemd-logind[1487]: Session 23 logged out. Waiting for processes to exit.
Apr 30 03:38:19.085905 systemd-logind[1487]: Removed session 23.
Apr 30 03:38:30.103536 containerd[1507]: time="2025-04-30T03:38:30.103334212Z" level=info msg="StopPodSandbox for \"cb8e3d9fd0f86b308c19985afc3c01cd560a363f005bee567ee63db544993fa3\""
Apr 30 03:38:30.105293 containerd[1507]: time="2025-04-30T03:38:30.103588451Z" level=info msg="TearDown network for sandbox \"cb8e3d9fd0f86b308c19985afc3c01cd560a363f005bee567ee63db544993fa3\" successfully"
Apr 30 03:38:30.105293 containerd[1507]: time="2025-04-30T03:38:30.103614601Z" level=info msg="StopPodSandbox for \"cb8e3d9fd0f86b308c19985afc3c01cd560a363f005bee567ee63db544993fa3\" returns successfully"
Apr 30 03:38:30.122621 containerd[1507]: time="2025-04-30T03:38:30.122358964Z" level=info msg="RemovePodSandbox for \"cb8e3d9fd0f86b308c19985afc3c01cd560a363f005bee567ee63db544993fa3\""
Apr 30 03:38:30.122621 containerd[1507]: time="2025-04-30T03:38:30.122461256Z" level=info msg="Forcibly stopping sandbox \"cb8e3d9fd0f86b308c19985afc3c01cd560a363f005bee567ee63db544993fa3\""
Apr 30 03:38:30.122621 containerd[1507]: time="2025-04-30T03:38:30.122566495Z" level=info msg="TearDown network for sandbox \"cb8e3d9fd0f86b308c19985afc3c01cd560a363f005bee567ee63db544993fa3\" successfully"
Apr 30 03:38:30.133674 containerd[1507]: time="2025-04-30T03:38:30.133385862Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cb8e3d9fd0f86b308c19985afc3c01cd560a363f005bee567ee63db544993fa3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 03:38:30.133674 containerd[1507]: time="2025-04-30T03:38:30.133550452Z" level=info msg="RemovePodSandbox \"cb8e3d9fd0f86b308c19985afc3c01cd560a363f005bee567ee63db544993fa3\" returns successfully"
Apr 30 03:38:30.134515 containerd[1507]: time="2025-04-30T03:38:30.134453603Z" level=info msg="StopPodSandbox for \"94571f2ad1f01876c090fa8dca61476d4094dd22581389f4530b6cfc0a42a2c8\""
Apr 30 03:38:30.135152 containerd[1507]: time="2025-04-30T03:38:30.134543162Z" level=info msg="TearDown network for sandbox \"94571f2ad1f01876c090fa8dca61476d4094dd22581389f4530b6cfc0a42a2c8\" successfully"
Apr 30 03:38:30.135152 containerd[1507]: time="2025-04-30T03:38:30.134555785Z" level=info msg="StopPodSandbox for \"94571f2ad1f01876c090fa8dca61476d4094dd22581389f4530b6cfc0a42a2c8\" returns successfully"
Apr 30 03:38:30.135152 containerd[1507]: time="2025-04-30T03:38:30.135007996Z" level=info msg="RemovePodSandbox for \"94571f2ad1f01876c090fa8dca61476d4094dd22581389f4530b6cfc0a42a2c8\""
Apr 30 03:38:30.135152 containerd[1507]: time="2025-04-30T03:38:30.135022885Z" level=info msg="Forcibly stopping sandbox \"94571f2ad1f01876c090fa8dca61476d4094dd22581389f4530b6cfc0a42a2c8\""
Apr 30 03:38:30.135152 containerd[1507]: time="2025-04-30T03:38:30.135058042Z" level=info msg="TearDown network for sandbox \"94571f2ad1f01876c090fa8dca61476d4094dd22581389f4530b6cfc0a42a2c8\" successfully"
Apr 30 03:38:30.139375 containerd[1507]: time="2025-04-30T03:38:30.139318666Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94571f2ad1f01876c090fa8dca61476d4094dd22581389f4530b6cfc0a42a2c8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 03:38:30.139375 containerd[1507]: time="2025-04-30T03:38:30.139386404Z" level=info msg="RemovePodSandbox \"94571f2ad1f01876c090fa8dca61476d4094dd22581389f4530b6cfc0a42a2c8\" returns successfully"
Apr 30 03:38:35.125809 systemd[1]: cri-containerd-dce657e9fe74050eb29b2c3a8e24ae12a76b2b17947c4761678492c33ef70a60.scope: Deactivated successfully.
Apr 30 03:38:35.127668 systemd[1]: cri-containerd-dce657e9fe74050eb29b2c3a8e24ae12a76b2b17947c4761678492c33ef70a60.scope: Consumed 2.069s CPU time, 17.5M memory peak, 0B memory swap peak.
Apr 30 03:38:35.134373 kubelet[2807]: E0430 03:38:35.133681 2807 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:48700->10.0.0.2:2379: read: connection timed out"
Apr 30 03:38:35.174566 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dce657e9fe74050eb29b2c3a8e24ae12a76b2b17947c4761678492c33ef70a60-rootfs.mount: Deactivated successfully.
Apr 30 03:38:35.189702 containerd[1507]: time="2025-04-30T03:38:35.189411240Z" level=info msg="shim disconnected" id=dce657e9fe74050eb29b2c3a8e24ae12a76b2b17947c4761678492c33ef70a60 namespace=k8s.io
Apr 30 03:38:35.189702 containerd[1507]: time="2025-04-30T03:38:35.189485369Z" level=warning msg="cleaning up after shim disconnected" id=dce657e9fe74050eb29b2c3a8e24ae12a76b2b17947c4761678492c33ef70a60 namespace=k8s.io
Apr 30 03:38:35.189702 containerd[1507]: time="2025-04-30T03:38:35.189502301Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:38:35.501666 systemd[1]: cri-containerd-0decd15a8a7360a9f043ec3030a43178890f1091c4a185999211efcbc9157c52.scope: Deactivated successfully.
Apr 30 03:38:35.502577 systemd[1]: cri-containerd-0decd15a8a7360a9f043ec3030a43178890f1091c4a185999211efcbc9157c52.scope: Consumed 6.650s CPU time, 22.8M memory peak, 0B memory swap peak.
Apr 30 03:38:35.533885 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0decd15a8a7360a9f043ec3030a43178890f1091c4a185999211efcbc9157c52-rootfs.mount: Deactivated successfully.
Apr 30 03:38:35.542721 containerd[1507]: time="2025-04-30T03:38:35.542644715Z" level=info msg="shim disconnected" id=0decd15a8a7360a9f043ec3030a43178890f1091c4a185999211efcbc9157c52 namespace=k8s.io
Apr 30 03:38:35.543080 containerd[1507]: time="2025-04-30T03:38:35.543028818Z" level=warning msg="cleaning up after shim disconnected" id=0decd15a8a7360a9f043ec3030a43178890f1091c4a185999211efcbc9157c52 namespace=k8s.io
Apr 30 03:38:35.543080 containerd[1507]: time="2025-04-30T03:38:35.543058294Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 03:38:36.160522 kubelet[2807]: I0430 03:38:36.159839 2807 scope.go:117] "RemoveContainer" containerID="0decd15a8a7360a9f043ec3030a43178890f1091c4a185999211efcbc9157c52"
Apr 30 03:38:36.165634 kubelet[2807]: I0430 03:38:36.165572 2807 scope.go:117] "RemoveContainer" containerID="dce657e9fe74050eb29b2c3a8e24ae12a76b2b17947c4761678492c33ef70a60"
Apr 30 03:38:36.167885 containerd[1507]: time="2025-04-30T03:38:36.167812835Z" level=info msg="CreateContainer within sandbox \"994fa2749bceb3c96269efda94b44c74addad02c95a7c9fa39ede5746f073d90\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 30 03:38:36.170640 containerd[1507]: time="2025-04-30T03:38:36.170481642Z" level=info msg="CreateContainer within sandbox \"7ac954839c37f1a9e2eda991fc25932a14090b78cfb932ce01f672c22b1b7bf1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 30 03:38:36.189615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4218429751.mount: Deactivated successfully.
Apr 30 03:38:36.199618 containerd[1507]: time="2025-04-30T03:38:36.199531746Z" level=info msg="CreateContainer within sandbox \"7ac954839c37f1a9e2eda991fc25932a14090b78cfb932ce01f672c22b1b7bf1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"11bee226f2914a6242053b4247c493e21d78f0d123a20fa0f4006501b7647444\""
Apr 30 03:38:36.200451 containerd[1507]: time="2025-04-30T03:38:36.200412114Z" level=info msg="StartContainer for \"11bee226f2914a6242053b4247c493e21d78f0d123a20fa0f4006501b7647444\""
Apr 30 03:38:36.202119 containerd[1507]: time="2025-04-30T03:38:36.202079664Z" level=info msg="CreateContainer within sandbox \"994fa2749bceb3c96269efda94b44c74addad02c95a7c9fa39ede5746f073d90\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"03bdc47bf04b5f77156560a395024e0434e4c641889eab9e9e78caa04d3b8f00\""
Apr 30 03:38:36.203299 containerd[1507]: time="2025-04-30T03:38:36.202515676Z" level=info msg="StartContainer for \"03bdc47bf04b5f77156560a395024e0434e4c641889eab9e9e78caa04d3b8f00\""
Apr 30 03:38:36.235406 systemd[1]: Started cri-containerd-03bdc47bf04b5f77156560a395024e0434e4c641889eab9e9e78caa04d3b8f00.scope - libcontainer container 03bdc47bf04b5f77156560a395024e0434e4c641889eab9e9e78caa04d3b8f00.
Apr 30 03:38:36.260150 systemd[1]: Started cri-containerd-11bee226f2914a6242053b4247c493e21d78f0d123a20fa0f4006501b7647444.scope - libcontainer container 11bee226f2914a6242053b4247c493e21d78f0d123a20fa0f4006501b7647444.
Apr 30 03:38:36.306887 containerd[1507]: time="2025-04-30T03:38:36.306791058Z" level=info msg="StartContainer for \"03bdc47bf04b5f77156560a395024e0434e4c641889eab9e9e78caa04d3b8f00\" returns successfully"
Apr 30 03:38:36.332463 containerd[1507]: time="2025-04-30T03:38:36.332406394Z" level=info msg="StartContainer for \"11bee226f2914a6242053b4247c493e21d78f0d123a20fa0f4006501b7647444\" returns successfully"
Apr 30 03:38:38.061334 kubelet[2807]: E0430 03:38:38.056477 2807 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:48460->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-3-b-f8d40824c9.183afb7a25ba8177 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-3-b-f8d40824c9,UID:57519a1b8082a6aae12704e1abd8078b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-b-f8d40824c9,},FirstTimestamp:2025-04-30 03:38:27.597050231 +0000 UTC m=+357.638797117,LastTimestamp:2025-04-30 03:38:27.597050231 +0000 UTC m=+357.638797117,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-b-f8d40824c9,}"