Apr 30 12:52:14.831591 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 22:26:36 -00 2025 Apr 30 12:52:14.831611 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe Apr 30 12:52:14.831621 kernel: BIOS-provided physical RAM map: Apr 30 12:52:14.831626 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Apr 30 12:52:14.831631 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Apr 30 12:52:14.831636 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Apr 30 12:52:14.831641 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable Apr 30 12:52:14.831646 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved Apr 30 12:52:14.831652 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 30 12:52:14.831657 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Apr 30 12:52:14.831662 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Apr 30 12:52:14.831667 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Apr 30 12:52:14.831672 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Apr 30 12:52:14.831677 kernel: NX (Execute Disable) protection: active Apr 30 12:52:14.831684 kernel: APIC: Static calls initialized Apr 30 12:52:14.831689 kernel: SMBIOS 3.0.0 present. 
Apr 30 12:52:14.831694 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 Apr 30 12:52:14.831700 kernel: Hypervisor detected: KVM Apr 30 12:52:14.831705 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 30 12:52:14.831710 kernel: kvm-clock: using sched offset of 3060546058 cycles Apr 30 12:52:14.831715 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 30 12:52:14.831721 kernel: tsc: Detected 2445.404 MHz processor Apr 30 12:52:14.831726 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 30 12:52:14.831733 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 30 12:52:14.831739 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000 Apr 30 12:52:14.831745 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Apr 30 12:52:14.831750 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 30 12:52:14.831755 kernel: Using GB pages for direct mapping Apr 30 12:52:14.831761 kernel: ACPI: Early table checksum verification disabled Apr 30 12:52:14.831766 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS ) Apr 30 12:52:14.831771 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 12:52:14.831777 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 12:52:14.831782 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 12:52:14.831789 kernel: ACPI: FACS 0x000000007CFE0000 000040 Apr 30 12:52:14.831794 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 12:52:14.831799 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 12:52:14.831805 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 12:52:14.831810 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 12:52:14.831816 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576] Apr 30 12:52:14.831821 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482] Apr 30 12:52:14.831830 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f] Apr 30 12:52:14.831835 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6] Apr 30 12:52:14.831841 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e] Apr 30 12:52:14.831847 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a] Apr 30 12:52:14.831852 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692] Apr 30 12:52:14.831858 kernel: No NUMA configuration found Apr 30 12:52:14.831863 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff] Apr 30 12:52:14.831870 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff] Apr 30 12:52:14.831876 kernel: Zone ranges: Apr 30 12:52:14.831882 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 30 12:52:14.831887 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff] Apr 30 12:52:14.831893 kernel: Normal empty Apr 30 12:52:14.831898 kernel: Movable zone start for each node Apr 30 12:52:14.831904 kernel: Early memory node ranges Apr 30 12:52:14.831909 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Apr 30 12:52:14.831915 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff] Apr 30 12:52:14.831937 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000007cfdbfff] Apr 30 12:52:14.831943 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 30 12:52:14.831948 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Apr 30 12:52:14.831954 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Apr 30 12:52:14.831959 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 30 12:52:14.835033 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 30 12:52:14.835042 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 30 12:52:14.835048 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 30 12:52:14.835054 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 30 12:52:14.835065 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 30 12:52:14.835070 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 30 12:52:14.835076 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 30 12:52:14.835082 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 30 12:52:14.835087 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 30 12:52:14.835093 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 30 12:52:14.835099 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 30 12:52:14.835104 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Apr 30 12:52:14.835110 kernel: Booting paravirtualized kernel on KVM Apr 30 12:52:14.835116 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 30 12:52:14.835123 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 30 12:52:14.835128 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 Apr 30 12:52:14.835134 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Apr 30 12:52:14.835140 kernel: pcpu-alloc: [0] 0 1 Apr 30 12:52:14.835145 kernel: kvm-guest: PV spinlocks disabled, no host support Apr 30 12:52:14.835153 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe Apr 30 12:52:14.835159 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 30 12:52:14.835164 kernel: random: crng init done Apr 30 12:52:14.835171 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 12:52:14.835177 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Apr 30 12:52:14.835183 kernel: Fallback order for Node 0: 0 Apr 30 12:52:14.835188 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 503708 Apr 30 12:52:14.835194 kernel: Policy zone: DMA32 Apr 30 12:52:14.835200 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 30 12:52:14.835206 kernel: Memory: 1920004K/2047464K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 127200K reserved, 0K cma-reserved) Apr 30 12:52:14.835212 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 30 12:52:14.835217 kernel: ftrace: allocating 37918 entries in 149 pages Apr 30 12:52:14.835224 kernel: ftrace: allocated 149 pages with 4 groups Apr 30 12:52:14.835230 kernel: Dynamic Preempt: voluntary Apr 30 12:52:14.835235 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 30 12:52:14.835242 kernel: rcu: RCU event tracing is enabled. Apr 30 12:52:14.835248 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 30 12:52:14.835253 kernel: Trampoline variant of Tasks RCU enabled. Apr 30 12:52:14.835259 kernel: Rude variant of Tasks RCU enabled. Apr 30 12:52:14.835265 kernel: Tracing variant of Tasks RCU enabled. Apr 30 12:52:14.835283 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 30 12:52:14.835290 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 30 12:52:14.835296 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Apr 30 12:52:14.835301 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 30 12:52:14.835307 kernel: Console: colour VGA+ 80x25 Apr 30 12:52:14.835313 kernel: printk: console [tty0] enabled Apr 30 12:52:14.835318 kernel: printk: console [ttyS0] enabled Apr 30 12:52:14.835324 kernel: ACPI: Core revision 20230628 Apr 30 12:52:14.835329 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 30 12:52:14.835335 kernel: APIC: Switch to symmetric I/O mode setup Apr 30 12:52:14.835342 kernel: x2apic enabled Apr 30 12:52:14.835359 kernel: APIC: Switched APIC routing to: physical x2apic Apr 30 12:52:14.835365 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 30 12:52:14.835371 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Apr 30 12:52:14.835377 kernel: Calibrating delay loop (skipped) preset value.. 
4890.80 BogoMIPS (lpj=2445404) Apr 30 12:52:14.835382 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 30 12:52:14.835388 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Apr 30 12:52:14.835394 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Apr 30 12:52:14.835404 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 30 12:52:14.835410 kernel: Spectre V2 : Mitigation: Retpolines Apr 30 12:52:14.835417 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Apr 30 12:52:14.835423 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Apr 30 12:52:14.835430 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Apr 30 12:52:14.835436 kernel: RETBleed: Mitigation: untrained return thunk Apr 30 12:52:14.835442 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Apr 30 12:52:14.835448 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Apr 30 12:52:14.835454 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 30 12:52:14.835461 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 30 12:52:14.835467 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 30 12:52:14.835473 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 30 12:52:14.835479 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Apr 30 12:52:14.835485 kernel: Freeing SMP alternatives memory: 32K Apr 30 12:52:14.835490 kernel: pid_max: default: 32768 minimum: 301 Apr 30 12:52:14.835496 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 30 12:52:14.835502 kernel: landlock: Up and running. Apr 30 12:52:14.835509 kernel: SELinux: Initializing. Apr 30 12:52:14.835515 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Apr 30 12:52:14.835521 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Apr 30 12:52:14.835527 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0) Apr 30 12:52:14.835533 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 12:52:14.835539 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 12:52:14.835545 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 12:52:14.835551 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Apr 30 12:52:14.835557 kernel: ... version: 0 Apr 30 12:52:14.835564 kernel: ... bit width: 48 Apr 30 12:52:14.835570 kernel: ... generic registers: 6 Apr 30 12:52:14.835576 kernel: ... value mask: 0000ffffffffffff Apr 30 12:52:14.835582 kernel: ... max period: 00007fffffffffff Apr 30 12:52:14.835587 kernel: ... fixed-purpose events: 0 Apr 30 12:52:14.835593 kernel: ... event mask: 000000000000003f Apr 30 12:52:14.835599 kernel: signal: max sigframe size: 1776 Apr 30 12:52:14.835605 kernel: rcu: Hierarchical SRCU implementation. Apr 30 12:52:14.835611 kernel: rcu: Max phase no-delay instances is 400. Apr 30 12:52:14.835618 kernel: smp: Bringing up secondary CPUs ... Apr 30 12:52:14.835624 kernel: smpboot: x86: Booting SMP configuration: Apr 30 12:52:14.835630 kernel: .... 
node #0, CPUs: #1 Apr 30 12:52:14.835635 kernel: smp: Brought up 1 node, 2 CPUs Apr 30 12:52:14.835641 kernel: smpboot: Max logical packages: 1 Apr 30 12:52:14.835647 kernel: smpboot: Total of 2 processors activated (9781.61 BogoMIPS) Apr 30 12:52:14.835653 kernel: devtmpfs: initialized Apr 30 12:52:14.835659 kernel: x86/mm: Memory block size: 128MB Apr 30 12:52:14.835665 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 30 12:52:14.835671 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 30 12:52:14.835678 kernel: pinctrl core: initialized pinctrl subsystem Apr 30 12:52:14.835684 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 30 12:52:14.835690 kernel: audit: initializing netlink subsys (disabled) Apr 30 12:52:14.835696 kernel: audit: type=2000 audit(1746017534.301:1): state=initialized audit_enabled=0 res=1 Apr 30 12:52:14.835702 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 30 12:52:14.835708 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 30 12:52:14.835714 kernel: cpuidle: using governor menu Apr 30 12:52:14.835719 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 30 12:52:14.835725 kernel: dca service started, version 1.12.1 Apr 30 12:52:14.835733 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 30 12:52:14.835739 kernel: PCI: Using configuration type 1 for base access Apr 30 12:52:14.835745 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Apr 30 12:52:14.835750 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 30 12:52:14.835756 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 30 12:52:14.835762 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 30 12:52:14.835768 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 30 12:52:14.835774 kernel: ACPI: Added _OSI(Module Device) Apr 30 12:52:14.835781 kernel: ACPI: Added _OSI(Processor Device) Apr 30 12:52:14.835787 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 30 12:52:14.835793 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 30 12:52:14.835799 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 30 12:52:14.835805 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 30 12:52:14.835811 kernel: ACPI: Interpreter enabled Apr 30 12:52:14.835817 kernel: ACPI: PM: (supports S0 S5) Apr 30 12:52:14.835822 kernel: ACPI: Using IOAPIC for interrupt routing Apr 30 12:52:14.835828 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 30 12:52:14.835834 kernel: PCI: Using E820 reservations for host bridge windows Apr 30 12:52:14.835842 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 30 12:52:14.835848 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 30 12:52:14.835965 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 30 12:52:14.836081 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 30 12:52:14.836197 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 30 12:52:14.836207 kernel: PCI host bridge to bus 0000:00 Apr 30 12:52:14.836295 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 30 12:52:14.836378 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 30 
12:52:14.836436 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 30 12:52:14.836491 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window] Apr 30 12:52:14.836549 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 30 12:52:14.836605 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Apr 30 12:52:14.836660 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 30 12:52:14.836759 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 30 12:52:14.836867 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 Apr 30 12:52:14.836939 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref] Apr 30 12:52:14.837002 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref] Apr 30 12:52:14.837065 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff] Apr 30 12:52:14.837128 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref] Apr 30 12:52:14.837190 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 30 12:52:14.837283 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Apr 30 12:52:14.837370 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff] Apr 30 12:52:14.837444 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Apr 30 12:52:14.837509 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff] Apr 30 12:52:14.837577 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Apr 30 12:52:14.837641 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff] Apr 30 12:52:14.837717 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Apr 30 12:52:14.837781 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff] Apr 30 12:52:14.837850 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Apr 30 12:52:14.837913 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff] Apr 30 12:52:14.837981 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Apr 30 12:52:14.838081 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff] Apr 30 12:52:14.838164 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Apr 30 12:52:14.838230 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff] Apr 30 12:52:14.838890 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Apr 30 12:52:14.838969 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff] Apr 30 12:52:14.839045 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Apr 30 12:52:14.839111 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff] Apr 30 12:52:14.839188 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 30 12:52:14.839253 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 30 12:52:14.842138 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 30 12:52:14.842212 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f] Apr 30 12:52:14.842751 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff] Apr 30 12:52:14.842840 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 30 12:52:14.842908 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Apr 30 12:52:14.842990 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Apr 30 12:52:14.843060 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff] Apr 30 12:52:14.843186 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Apr 30 
12:52:14.844589 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref] Apr 30 12:52:14.844671 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Apr 30 12:52:14.844739 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Apr 30 12:52:14.844803 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Apr 30 12:52:14.844885 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Apr 30 12:52:14.844952 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit] Apr 30 12:52:14.845017 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Apr 30 12:52:14.845080 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Apr 30 12:52:14.845191 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Apr 30 12:52:14.845304 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Apr 30 12:52:14.845401 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff] Apr 30 12:52:14.845509 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref] Apr 30 12:52:14.846471 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Apr 30 12:52:14.846544 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Apr 30 12:52:14.846609 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Apr 30 12:52:14.846695 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Apr 30 12:52:14.846765 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Apr 30 12:52:14.846835 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Apr 30 12:52:14.846898 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Apr 30 12:52:14.846960 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Apr 30 12:52:14.847035 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Apr 30 12:52:14.847150 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff] Apr 30 12:52:14.847235 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref] Apr 30 12:52:14.848628 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Apr 30 12:52:14.848710 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Apr 30 12:52:14.848777 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Apr 30 12:52:14.848853 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Apr 30 12:52:14.848922 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff] Apr 30 12:52:14.848987 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref] Apr 30 12:52:14.849052 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Apr 30 12:52:14.849115 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Apr 30 12:52:14.849184 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Apr 30 12:52:14.849201 kernel: acpiphp: Slot [0] registered Apr 30 12:52:14.850367 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Apr 30 12:52:14.850476 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff] Apr 30 12:52:14.850596 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref] Apr 30 12:52:14.850669 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref] Apr 30 12:52:14.850735 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Apr 30 12:52:14.850799 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Apr 30 12:52:14.850862 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Apr 30 12:52:14.850875 
kernel: acpiphp: Slot [0-2] registered Apr 30 12:52:14.850939 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Apr 30 12:52:14.851016 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Apr 30 12:52:14.851080 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Apr 30 12:52:14.851088 kernel: acpiphp: Slot [0-3] registered Apr 30 12:52:14.851151 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Apr 30 12:52:14.851214 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Apr 30 12:52:14.851378 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Apr 30 12:52:14.851395 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 30 12:52:14.851401 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 30 12:52:14.851408 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 30 12:52:14.851414 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 30 12:52:14.851420 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 30 12:52:14.851426 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 30 12:52:14.851432 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 30 12:52:14.851438 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 30 12:52:14.851444 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 30 12:52:14.851452 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 30 12:52:14.851458 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 30 12:52:14.851464 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Apr 30 12:52:14.851470 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 30 12:52:14.851476 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 30 12:52:14.851482 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 30 12:52:14.851488 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 30 12:52:14.851494 kernel: iommu: Default domain type: Translated Apr 30 12:52:14.851500 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 30 12:52:14.851511 kernel: PCI: Using ACPI for IRQ routing Apr 30 12:52:14.851526 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 30 12:52:14.851540 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Apr 30 12:52:14.851552 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff] Apr 30 12:52:14.851640 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 30 12:52:14.851707 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 30 12:52:14.851769 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 30 12:52:14.851778 kernel: vgaarb: loaded Apr 30 12:52:14.851785 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 30 12:52:14.851806 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 30 12:52:14.851817 kernel: clocksource: Switched to clocksource kvm-clock Apr 30 12:52:14.851829 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 12:52:14.851841 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 12:52:14.851852 kernel: pnp: PnP ACPI init Apr 30 12:52:14.851960 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 30 12:52:14.851974 kernel: pnp: PnP ACPI: found 5 devices Apr 30 12:52:14.851981 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 30 12:52:14.851992 kernel: NET: Registered PF_INET protocol family Apr 30 
12:52:14.851998 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 30 12:52:14.852004 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Apr 30 12:52:14.852010 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 12:52:14.852018 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 30 12:52:14.852029 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Apr 30 12:52:14.852040 kernel: TCP: Hash tables configured (established 16384 bind 16384) Apr 30 12:52:14.852052 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Apr 30 12:52:14.852064 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Apr 30 12:52:14.852071 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 12:52:14.852077 kernel: NET: Registered PF_XDP protocol family Apr 30 12:52:14.852176 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Apr 30 12:52:14.852966 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Apr 30 12:52:14.853057 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Apr 30 12:52:14.853125 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff] Apr 30 12:52:14.853189 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff] Apr 30 12:52:14.853260 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff] Apr 30 12:52:14.853391 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Apr 30 12:52:14.853458 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Apr 30 12:52:14.853522 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Apr 30 12:52:14.853585 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Apr 30 12:52:14.853648 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Apr 30 12:52:14.853710 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Apr 30 12:52:14.853773 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Apr 30 12:52:14.853858 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Apr 30 12:52:14.853951 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Apr 30 12:52:14.854057 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Apr 30 12:52:14.854245 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Apr 30 12:52:14.856733 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Apr 30 12:52:14.856806 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Apr 30 12:52:14.856878 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Apr 30 12:52:14.856955 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Apr 30 12:52:14.857025 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Apr 30 12:52:14.857092 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Apr 30 12:52:14.857157 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Apr 30 12:52:14.857237 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Apr 30 12:52:14.857331 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Apr 30 12:52:14.857420 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Apr 30 12:52:14.857487 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Apr 30 12:52:14.857552 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Apr 30 12:52:14.857616 
kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Apr 30 12:52:14.857684 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Apr 30 12:52:14.857746 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Apr 30 12:52:14.857808 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Apr 30 12:52:14.857875 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Apr 30 12:52:14.857938 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Apr 30 12:52:14.858004 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Apr 30 12:52:14.858069 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 30 12:52:14.858127 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 30 12:52:14.858187 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 30 12:52:14.858310 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window] Apr 30 12:52:14.858415 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 30 12:52:14.858503 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Apr 30 12:52:14.858579 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff] Apr 30 12:52:14.858642 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref] Apr 30 12:52:14.858709 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff] Apr 30 12:52:14.858771 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Apr 30 12:52:14.858841 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff] Apr 30 12:52:14.858910 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Apr 30 12:52:14.858976 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff] Apr 30 12:52:14.859034 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Apr 30 12:52:14.859099 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff] Apr 30 12:52:14.859158 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Apr 30 12:52:14.859222 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff] Apr 30 12:52:14.859316 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Apr 30 12:52:14.859481 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Apr 30 12:52:14.859580 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff] Apr 30 12:52:14.859644 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Apr 30 12:52:14.859713 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Apr 30 12:52:14.859773 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff] Apr 30 12:52:14.859838 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Apr 30 12:52:14.859903 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Apr 30 12:52:14.859962 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff] Apr 30 12:52:14.860023 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Apr 30 12:52:14.860035 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 30 12:52:14.860048 kernel: PCI: CLS 0 bytes, default 64 Apr 30 12:52:14.860065 kernel: Initialise system trusted keyrings Apr 30 12:52:14.860079 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Apr 30 12:52:14.860097 kernel: Key type asymmetric registered Apr 30 12:52:14.860110 kernel: Asymmetric key parser 'x509' registered Apr 30 12:52:14.860119 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded 
(major 251) Apr 30 12:52:14.860126 kernel: io scheduler mq-deadline registered Apr 30 12:52:14.860132 kernel: io scheduler kyber registered Apr 30 12:52:14.860138 kernel: io scheduler bfq registered Apr 30 12:52:14.860226 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Apr 30 12:52:14.860323 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Apr 30 12:52:14.860432 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Apr 30 12:52:14.860504 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Apr 30 12:52:14.860569 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Apr 30 12:52:14.860632 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Apr 30 12:52:14.860695 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Apr 30 12:52:14.860758 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Apr 30 12:52:14.860822 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Apr 30 12:52:14.860886 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Apr 30 12:52:14.860951 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Apr 30 12:52:14.861018 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Apr 30 12:52:14.861082 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Apr 30 12:52:14.861194 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Apr 30 12:52:14.861378 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Apr 30 12:52:14.861477 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Apr 30 12:52:14.861494 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 30 12:52:14.861583 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Apr 30 12:52:14.861673 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Apr 30 12:52:14.861703 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 30 12:52:14.861714 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Apr 30 12:52:14.861721 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 12:52:14.861728 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 30 12:52:14.861735 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 30 12:52:14.861741 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 30 12:52:14.861750 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 30 12:52:14.861762 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 30 12:52:14.861878 kernel: rtc_cmos 00:03: RTC can wake from S4 Apr 30 12:52:14.861979 kernel: rtc_cmos 00:03: registered as rtc0 Apr 30 12:52:14.862073 kernel: rtc_cmos 00:03: setting system clock to 2025-04-30T12:52:14 UTC (1746017534) Apr 30 12:52:14.862136 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 30 12:52:14.862146 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Apr 30 12:52:14.862153 kernel: NET: Registered PF_INET6 protocol family Apr 30 12:52:14.862160 kernel: Segment Routing with IPv6 Apr 30 12:52:14.862166 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 12:52:14.862172 kernel: NET: Registered PF_PACKET protocol family Apr 30 12:52:14.862185 kernel: Key type dns_resolver registered Apr 30 12:52:14.862192 kernel: IPI shorthand broadcast: enabled Apr 30 12:52:14.862198 kernel: sched_clock: Marking stable (1056010138, 136619660)->(1201118670, -8488872) Apr 30 12:52:14.862205 kernel: registered taskstats version 1 Apr 30 12:52:14.862211 kernel: Loading compiled-in X.509 certificates Apr 30 12:52:14.862218 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module 
signing key for 6.6.88-flatcar: 10d2d341d26c1df942e743344427c053ef3a2a5f' Apr 30 12:52:14.862224 kernel: Key type .fscrypt registered Apr 30 12:52:14.862230 kernel: Key type fscrypt-provisioning registered Apr 30 12:52:14.862237 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 30 12:52:14.862245 kernel: ima: Allocated hash algorithm: sha1 Apr 30 12:52:14.862251 kernel: ima: No architecture policies found Apr 30 12:52:14.862257 kernel: clk: Disabling unused clocks Apr 30 12:52:14.862264 kernel: Freeing unused kernel image (initmem) memory: 43484K Apr 30 12:52:14.862296 kernel: Write protecting the kernel read-only data: 38912k Apr 30 12:52:14.862303 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K Apr 30 12:52:14.862310 kernel: Run /init as init process Apr 30 12:52:14.862316 kernel: with arguments: Apr 30 12:52:14.862323 kernel: /init Apr 30 12:52:14.862332 kernel: with environment: Apr 30 12:52:14.862338 kernel: HOME=/ Apr 30 12:52:14.862358 kernel: TERM=linux Apr 30 12:52:14.862365 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 12:52:14.862373 systemd[1]: Successfully made /usr/ read-only. Apr 30 12:52:14.862383 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 30 12:52:14.862391 systemd[1]: Detected virtualization kvm. Apr 30 12:52:14.862402 systemd[1]: Detected architecture x86-64. Apr 30 12:52:14.862424 systemd[1]: Running in initrd. Apr 30 12:52:14.862436 systemd[1]: No hostname configured, using default hostname. Apr 30 12:52:14.862449 systemd[1]: Hostname set to . Apr 30 12:52:14.862456 systemd[1]: Initializing machine ID from VM UUID. Apr 30 12:52:14.862463 systemd[1]: Queued start job for default target initrd.target. Apr 30 12:52:14.862470 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 12:52:14.862478 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 12:52:14.862509 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 30 12:52:14.862541 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 12:52:14.862549 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 12:52:14.862556 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 12:52:14.862564 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 12:52:14.862571 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 12:52:14.862579 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 12:52:14.862587 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 12:52:14.862594 systemd[1]: Reached target paths.target - Path Units. Apr 30 12:52:14.862601 systemd[1]: Reached target slices.target - Slice Units. Apr 30 12:52:14.862607 systemd[1]: Reached target swap.target - Swaps. Apr 30 12:52:14.862614 systemd[1]: Reached target timers.target - Timer Units. 
Apr 30 12:52:14.862621 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 12:52:14.862628 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 12:52:14.862634 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 12:52:14.862641 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 30 12:52:14.862650 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 12:52:14.862656 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 12:52:14.862663 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 12:52:14.862670 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 12:52:14.862677 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 12:52:14.862684 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 12:52:14.862691 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 12:52:14.862698 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 12:52:14.862705 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 12:52:14.862713 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 12:52:14.862720 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:52:14.862752 systemd-journald[188]: Collecting audit messages is disabled. Apr 30 12:52:14.862772 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 12:52:14.862781 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 12:52:14.862788 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 12:52:14.862795 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 12:52:14.862803 systemd-journald[188]: Journal started Apr 30 12:52:14.862823 systemd-journald[188]: Runtime Journal (/run/log/journal/1f6488b0c9aa4ae9b6b69797c97864d7) is 4.8M, max 38.3M, 33.5M free. Apr 30 12:52:14.835059 systemd-modules-load[189]: Inserted module 'overlay' Apr 30 12:52:14.899409 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 12:52:14.899431 kernel: Bridge firewalling registered Apr 30 12:52:14.868251 systemd-modules-load[189]: Inserted module 'br_netfilter' Apr 30 12:52:14.910296 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 12:52:14.910690 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 12:52:14.911964 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:52:14.912566 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 12:52:14.919457 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 12:52:14.923394 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 12:52:14.925215 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 12:52:14.927414 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 12:52:14.933144 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 30 12:52:14.939745 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 12:52:14.940415 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 12:52:14.940949 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 12:52:14.952742 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 12:52:14.954649 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 12:52:14.960780 dracut-cmdline[222]: dracut-dracut-053 Apr 30 12:52:14.963106 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe Apr 30 12:52:14.978510 systemd-resolved[225]: Positive Trust Anchors: Apr 30 12:52:14.979199 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 12:52:14.979235 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 12:52:14.982017 systemd-resolved[225]: Defaulting to hostname 'linux'. Apr 30 12:52:14.988313 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 12:52:14.989009 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 12:52:15.013333 kernel: SCSI subsystem initialized Apr 30 12:52:15.021301 kernel: Loading iSCSI transport class v2.0-870. Apr 30 12:52:15.030314 kernel: iscsi: registered transport (tcp) Apr 30 12:52:15.047574 kernel: iscsi: registered transport (qla4xxx) Apr 30 12:52:15.047657 kernel: QLogic iSCSI HBA Driver Apr 30 12:52:15.072793 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 12:52:15.079430 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 12:52:15.098529 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 30 12:52:15.098604 kernel: device-mapper: uevent: version 1.0.3 Apr 30 12:52:15.099755 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 12:52:15.136300 kernel: raid6: avx2x4 gen() 35985 MB/s Apr 30 12:52:15.153294 kernel: raid6: avx2x2 gen() 35298 MB/s Apr 30 12:52:15.170478 kernel: raid6: avx2x1 gen() 23874 MB/s Apr 30 12:52:15.170553 kernel: raid6: using algorithm avx2x4 gen() 35985 MB/s Apr 30 12:52:15.188542 kernel: raid6: .... 
xor() 4673 MB/s, rmw enabled Apr 30 12:52:15.188587 kernel: raid6: using avx2x2 recovery algorithm Apr 30 12:52:15.210301 kernel: xor: automatically using best checksumming function avx Apr 30 12:52:15.320329 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 12:52:15.328537 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 12:52:15.334396 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 12:52:15.348201 systemd-udevd[409]: Using default interface naming scheme 'v255'. Apr 30 12:52:15.351959 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 12:52:15.359442 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 12:52:15.368747 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation Apr 30 12:52:15.389196 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 12:52:15.395426 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 12:52:15.442704 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 12:52:15.449460 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 12:52:15.464175 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 12:52:15.465560 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 12:52:15.466729 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 12:52:15.467791 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 12:52:15.473426 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 12:52:15.484327 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 12:52:15.507687 kernel: scsi host0: Virtio SCSI HBA Apr 30 12:52:15.516293 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 30 12:52:15.519329 kernel: cryptd: max_cpu_qlen set to 1000 Apr 30 12:52:15.529196 kernel: ACPI: bus type USB registered Apr 30 12:52:15.529389 kernel: usbcore: registered new interface driver usbfs Apr 30 12:52:15.531675 kernel: usbcore: registered new interface driver hub Apr 30 12:52:15.531704 kernel: usbcore: registered new device driver usb Apr 30 12:52:15.543543 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 12:52:15.543645 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 12:52:15.544708 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 12:52:15.545891 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 12:52:15.545992 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:52:15.574964 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:52:15.585523 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:52:15.603315 kernel: AVX2 version of gcm_enc/dec engaged. Apr 30 12:52:15.610306 kernel: libata version 3.00 loaded. 
Apr 30 12:52:15.615296 kernel: AES CTR mode by8 optimization enabled Apr 30 12:52:15.625476 kernel: ahci 0000:00:1f.2: version 3.0 Apr 30 12:52:15.642455 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 30 12:52:15.642478 kernel: sd 0:0:0:0: Power-on or device reset occurred Apr 30 12:52:15.643036 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Apr 30 12:52:15.643193 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 30 12:52:15.644404 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 30 12:52:15.644548 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 30 12:52:15.644697 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Apr 30 12:52:15.644834 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 30 12:52:15.644981 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 30 12:52:15.644994 kernel: GPT:17805311 != 80003071 Apr 30 12:52:15.645006 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 30 12:52:15.645019 kernel: GPT:17805311 != 80003071 Apr 30 12:52:15.645030 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 30 12:52:15.645041 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 12:52:15.645058 kernel: scsi host1: ahci Apr 30 12:52:15.645203 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 30 12:52:15.645406 kernel: scsi host2: ahci Apr 30 12:52:15.645527 kernel: scsi host3: ahci Apr 30 12:52:15.645641 kernel: scsi host4: ahci Apr 30 12:52:15.645751 kernel: scsi host5: ahci Apr 30 12:52:15.645867 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 30 12:52:15.646004 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Apr 30 12:52:15.646126 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 30 12:52:15.646238 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 30 12:52:15.646442 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Apr 30 12:52:15.646579 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Apr 30 12:52:15.646667 kernel: scsi host6: ahci Apr 30 12:52:15.646753 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48 Apr 30 12:52:15.646765 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48 Apr 30 12:52:15.646773 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48 Apr 30 12:52:15.646781 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48 Apr 30 12:52:15.646789 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48 Apr 30 12:52:15.646796 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48 Apr 30 12:52:15.646803 kernel: hub 1-0:1.0: USB hub found Apr 30 12:52:15.646934 kernel: hub 1-0:1.0: 4 ports detected Apr 30 12:52:15.647018 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Apr 30 12:52:15.647111 kernel: hub 2-0:1.0: USB hub found Apr 30 12:52:15.647247 kernel: hub 2-0:1.0: 4 ports detected Apr 30 12:52:15.702343 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. 
Apr 30 12:52:15.727298 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (460) Apr 30 12:52:15.732292 kernel: BTRFS: device fsid 0778af4c-f6f8-4118-a0d2-fb24d73f5df4 devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (456) Apr 30 12:52:15.735533 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 30 12:52:15.736939 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:52:15.745793 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 30 12:52:15.752806 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 30 12:52:15.754071 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 30 12:52:15.767417 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 12:52:15.769937 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 12:52:15.774337 disk-uuid[552]: Primary Header is updated. Apr 30 12:52:15.774337 disk-uuid[552]: Secondary Entries is updated. Apr 30 12:52:15.774337 disk-uuid[552]: Secondary Header is updated. Apr 30 12:52:15.782968 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 12:52:15.796313 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 12:52:15.884300 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 30 12:52:15.951287 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 30 12:52:15.951377 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 30 12:52:15.951392 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 30 12:52:15.954037 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 30 12:52:15.954279 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 30 12:52:15.956717 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 30 12:52:15.956751 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 30 12:52:15.958685 kernel: ata1.00: applying bridge limits Apr 30 12:52:15.958745 kernel: ata1.00: configured for UDMA/100 Apr 30 12:52:15.960436 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 30 12:52:16.007752 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 30 12:52:16.018840 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 30 12:52:16.018856 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Apr 30 12:52:16.027302 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 30 12:52:16.031430 kernel: usbcore: registered new interface driver usbhid Apr 30 12:52:16.031462 kernel: usbhid: USB HID core driver Apr 30 12:52:16.036801 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Apr 30 12:52:16.036830 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 30 12:52:16.807301 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 12:52:16.807908 disk-uuid[553]: The operation has completed successfully. Apr 30 12:52:16.860523 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 12:52:16.860630 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 12:52:16.886462 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Apr 30 12:52:16.889349 sh[593]: Success Apr 30 12:52:16.901533 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 30 12:52:16.950521 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 12:52:16.958395 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 12:52:16.960768 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 30 12:52:16.979331 kernel: BTRFS info (device dm-0): first mount of filesystem 0778af4c-f6f8-4118-a0d2-fb24d73f5df4 Apr 30 12:52:16.979411 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 30 12:52:16.979427 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 12:52:16.981331 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 12:52:16.982547 kernel: BTRFS info (device dm-0): using free space tree Apr 30 12:52:16.990312 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 30 12:52:16.992580 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 12:52:16.993558 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 12:52:16.998519 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 12:52:17.002512 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 12:52:17.017687 kernel: BTRFS info (device sda6): first mount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 12:52:17.017744 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 12:52:17.017754 kernel: BTRFS info (device sda6): using free space tree Apr 30 12:52:17.025288 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 12:52:17.025350 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 12:52:17.032351 kernel: BTRFS info (device sda6): last unmount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 12:52:17.035301 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 12:52:17.039480 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 12:52:17.081247 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 12:52:17.092964 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 12:52:17.119296 systemd-networkd[771]: lo: Link UP Apr 30 12:52:17.119921 systemd-networkd[771]: lo: Gained carrier Apr 30 12:52:17.120091 ignition[704]: Ignition 2.20.0 Apr 30 12:52:17.121735 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 12:52:17.120099 ignition[704]: Stage: fetch-offline Apr 30 12:52:17.123006 systemd-networkd[771]: Enumeration completed Apr 30 12:52:17.120138 ignition[704]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:52:17.123331 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 12:52:17.120149 ignition[704]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 12:52:17.124036 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:52:17.120239 ignition[704]: parsed url from cmdline: "" Apr 30 12:52:17.124042 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 30 12:52:17.120244 ignition[704]: no config URL provided Apr 30 12:52:17.124863 systemd-networkd[771]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:52:17.120250 ignition[704]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 12:52:17.124867 systemd-networkd[771]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 12:52:17.120259 ignition[704]: no config at "/usr/lib/ignition/user.ign" Apr 30 12:52:17.124922 systemd[1]: Reached target network.target - Network. Apr 30 12:52:17.120265 ignition[704]: failed to fetch config: resource requires networking Apr 30 12:52:17.125448 systemd-networkd[771]: eth0: Link UP Apr 30 12:52:17.120614 ignition[704]: Ignition finished successfully Apr 30 12:52:17.125453 systemd-networkd[771]: eth0: Gained carrier Apr 30 12:52:17.125462 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:52:17.131099 systemd-networkd[771]: eth1: Link UP Apr 30 12:52:17.131104 systemd-networkd[771]: eth1: Gained carrier Apr 30 12:52:17.131115 systemd-networkd[771]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:52:17.137444 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 30 12:52:17.149122 ignition[779]: Ignition 2.20.0 Apr 30 12:52:17.149678 ignition[779]: Stage: fetch Apr 30 12:52:17.149900 ignition[779]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:52:17.149913 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 12:52:17.150008 ignition[779]: parsed url from cmdline: "" Apr 30 12:52:17.150013 ignition[779]: no config URL provided Apr 30 12:52:17.150020 ignition[779]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 12:52:17.150029 ignition[779]: no config at "/usr/lib/ignition/user.ign" Apr 30 12:52:17.150053 ignition[779]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Apr 30 12:52:17.150201 ignition[779]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 30 12:52:17.162345 systemd-networkd[771]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 12:52:17.193351 systemd-networkd[771]: eth0: DHCPv4 address 37.27.3.216/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 30 12:52:17.350478 ignition[779]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Apr 30 12:52:17.356054 ignition[779]: GET result: OK Apr 30 12:52:17.356117 ignition[779]: parsing config with SHA512: cc4c838b92f6d8f7338ee1c4632afbe8ff919d6cf4f036274f3c12321133c1cf841fcf015ce7a04cea1dc3b48d14d3fee2519044f36aa314718a400219b29f6f Apr 30 12:52:17.360294 unknown[779]: fetched base config from "system" Apr 30 12:52:17.360306 unknown[779]: fetched base config from "system" Apr 30 12:52:17.360625 ignition[779]: fetch: fetch complete Apr 30 12:52:17.360312 unknown[779]: fetched user config from "hetzner" Apr 30 12:52:17.360632 ignition[779]: fetch: fetch passed Apr 30 12:52:17.363059 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 30 12:52:17.360669 ignition[779]: Ignition finished successfully Apr 30 12:52:17.368476 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Apr 30 12:52:17.385066 ignition[787]: Ignition 2.20.0 Apr 30 12:52:17.385084 ignition[787]: Stage: kargs Apr 30 12:52:17.385322 ignition[787]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:52:17.385339 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 12:52:17.388251 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 12:52:17.386851 ignition[787]: kargs: kargs passed Apr 30 12:52:17.386921 ignition[787]: Ignition finished successfully Apr 30 12:52:17.402525 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 12:52:17.415797 ignition[794]: Ignition 2.20.0 Apr 30 12:52:17.416353 ignition[794]: Stage: disks Apr 30 12:52:17.419243 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 12:52:17.416649 ignition[794]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:52:17.424522 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 12:52:17.416667 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 12:52:17.425825 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 12:52:17.417892 ignition[794]: disks: disks passed Apr 30 12:52:17.427022 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 12:52:17.417959 ignition[794]: Ignition finished successfully Apr 30 12:52:17.428445 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 12:52:17.430109 systemd[1]: Reached target basic.target - Basic System. Apr 30 12:52:17.445543 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 12:52:17.462008 systemd-fsck[802]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Apr 30 12:52:17.470781 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 12:52:17.476510 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 12:52:17.551300 kernel: EXT4-fs (sda9): mounted filesystem 59d16236-967d-47d1-a9bd-4b055a17ab77 r/w with ordered data mode. Quota mode: none. Apr 30 12:52:17.551522 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 12:52:17.552340 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 12:52:17.558370 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 12:52:17.561357 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 12:52:17.564478 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 30 12:52:17.567215 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 12:52:17.568009 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 12:52:17.570848 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 12:52:17.575876 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (810) Apr 30 12:52:17.575922 kernel: BTRFS info (device sda6): first mount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 12:52:17.578573 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 30 12:52:17.584455 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 12:52:17.584480 kernel: BTRFS info (device sda6): using free space tree Apr 30 12:52:17.589104 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 12:52:17.589139 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 12:52:17.592228 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 12:52:17.625412 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 12:52:17.627407 coreos-metadata[812]: Apr 30 12:52:17.626 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Apr 30 12:52:17.628779 coreos-metadata[812]: Apr 30 12:52:17.628 INFO Fetch successful Apr 30 12:52:17.628779 coreos-metadata[812]: Apr 30 12:52:17.628 INFO wrote hostname ci-4230-1-1-d-a2f51ba0c1 to /sysroot/etc/hostname Apr 30 12:52:17.631183 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Apr 30 12:52:17.632135 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 12:52:17.634015 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 12:52:17.637050 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 12:52:17.699250 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 12:52:17.703395 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 12:52:17.705454 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 12:52:17.712327 kernel: BTRFS info (device sda6): last unmount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 12:52:17.730298 ignition[926]: INFO : Ignition 2.20.0 Apr 30 12:52:17.730298 ignition[926]: INFO : Stage: mount Apr 30 12:52:17.730298 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 12:52:17.730298 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 12:52:17.734351 ignition[926]: INFO : mount: mount passed Apr 30 12:52:17.734351 ignition[926]: INFO : Ignition finished successfully Apr 30 12:52:17.733892 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 12:52:17.739382 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 12:52:17.740463 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 12:52:17.976595 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 12:52:17.981506 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 12:52:17.992297 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (939) Apr 30 12:52:17.995778 kernel: BTRFS info (device sda6): first mount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 12:52:17.995817 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 12:52:17.997505 kernel: BTRFS info (device sda6): using free space tree Apr 30 12:52:18.003300 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 12:52:18.003343 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 12:52:18.007467 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 12:52:18.028295 ignition[955]: INFO : Ignition 2.20.0 Apr 30 12:52:18.028295 ignition[955]: INFO : Stage: files Apr 30 12:52:18.028295 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 12:52:18.028295 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 12:52:18.031977 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Apr 30 12:52:18.031977 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 12:52:18.031977 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 12:52:18.035377 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 12:52:18.036381 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 12:52:18.036381 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 12:52:18.035782 unknown[955]: wrote ssh authorized keys file for user: core Apr 30 12:52:18.038950 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 12:52:18.038950 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Apr 30 12:52:18.318485 systemd-networkd[771]: eth0: Gained IPv6LL Apr 30 12:52:18.395643 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 12:52:18.766481 systemd-networkd[771]: eth1: Gained IPv6LL Apr 30 12:52:22.251045 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 12:52:22.251045 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 30 12:52:22.253953 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Apr 30 12:52:23.007352 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 30 12:52:23.265640 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 30 12:52:23.265640 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 30 12:52:23.267545 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 12:52:23.267545 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 12:52:23.267545 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 12:52:23.267545 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 12:52:23.267545 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 12:52:23.267545 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 12:52:23.267545 ignition[955]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 12:52:23.267545 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 12:52:23.267545 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 12:52:23.267545 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Apr 30 12:52:23.267545 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Apr 30 12:52:23.267545 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Apr 30 12:52:23.267545 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Apr 30 12:52:23.941863 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 30 12:52:25.105681 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Apr 30 12:52:25.105681 ignition[955]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Apr 30 12:52:25.108502 ignition[955]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 12:52:25.108502 ignition[955]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 12:52:25.108502 ignition[955]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Apr 30 12:52:25.108502 ignition[955]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Apr 30 12:52:25.108502 ignition[955]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 30 12:52:25.108502 ignition[955]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 30 12:52:25.108502 ignition[955]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Apr 30 12:52:25.108502 ignition[955]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Apr 30 12:52:25.108502 ignition[955]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 12:52:25.108502 ignition[955]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 12:52:25.108502 ignition[955]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 12:52:25.108502 ignition[955]: INFO : files: files passed Apr 30 12:52:25.108502 ignition[955]: INFO : Ignition finished successfully Apr 30 12:52:25.108821 systemd[1]: Finished ignition-files.service - Ignition (files). 
Apr 30 12:52:25.118544 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 12:52:25.123439 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 30 12:52:25.125142 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 12:52:25.126001 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 30 12:52:25.136033 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 12:52:25.136033 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 12:52:25.138401 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 12:52:25.137836 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 12:52:25.140107 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 12:52:25.153516 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 12:52:25.170493 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 12:52:25.170612 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 12:52:25.172006 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 12:52:25.172962 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 12:52:25.174126 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 12:52:25.176386 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 12:52:25.187402 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 12:52:25.192398 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 12:52:25.201478 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 12:52:25.202890 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 12:52:25.204071 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 12:52:25.205053 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 12:52:25.205143 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 12:52:25.206923 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 12:52:25.207577 systemd[1]: Stopped target basic.target - Basic System. Apr 30 12:52:25.208777 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 12:52:25.209968 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 12:52:25.211306 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 12:52:25.212550 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 12:52:25.213759 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 12:52:25.215026 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 12:52:25.216416 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 12:52:25.217642 systemd[1]: Stopped target swap.target - Swaps. Apr 30 12:52:25.218521 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 12:52:25.218639 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Apr 30 12:52:25.219795 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 12:52:25.220486 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 12:52:25.221644 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 12:52:25.221729 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 12:52:25.222677 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 12:52:25.222756 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 12:52:25.224016 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 12:52:25.224107 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 12:52:25.224744 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 12:52:25.224854 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 12:52:25.225820 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 30 12:52:25.225927 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 12:52:25.232629 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 12:52:25.234767 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 12:52:25.235209 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 12:52:25.235356 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 12:52:25.237434 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 12:52:25.237560 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 12:52:25.241678 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 12:52:25.241748 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 12:52:25.248621 ignition[1009]: INFO : Ignition 2.20.0 Apr 30 12:52:25.248621 ignition[1009]: INFO : Stage: umount Apr 30 12:52:25.248621 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 12:52:25.248621 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 12:52:25.253858 ignition[1009]: INFO : umount: umount passed Apr 30 12:52:25.253858 ignition[1009]: INFO : Ignition finished successfully Apr 30 12:52:25.249807 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 12:52:25.249875 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 12:52:25.251219 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 12:52:25.251254 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 12:52:25.254842 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 12:52:25.254879 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 12:52:25.255323 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 30 12:52:25.255353 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 30 12:52:25.255919 systemd[1]: Stopped target network.target - Network. Apr 30 12:52:25.258260 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 12:52:25.258333 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 12:52:25.259770 systemd[1]: Stopped target paths.target - Path Units. Apr 30 12:52:25.260137 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Apr 30 12:52:25.264338 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 12:52:25.265124 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 12:52:25.266096 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 12:52:25.267034 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 12:52:25.267072 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 12:52:25.268019 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 12:52:25.268056 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 12:52:25.269260 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 12:52:25.269315 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 12:52:25.270133 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 12:52:25.270174 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 12:52:25.271315 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 12:52:25.272146 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 12:52:25.274184 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 12:52:25.274716 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 12:52:25.274777 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 12:52:25.275748 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 12:52:25.275801 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 12:52:25.277904 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 12:52:25.277977 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 12:52:25.280604 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Apr 30 12:52:25.280921 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 12:52:25.280985 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 12:52:25.282441 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Apr 30 12:52:25.284513 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 12:52:25.284589 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 12:52:25.286780 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Apr 30 12:52:25.287026 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 12:52:25.287066 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 12:52:25.293361 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 12:52:25.294537 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 12:52:25.294577 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 12:52:25.295499 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 12:52:25.295532 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 12:52:25.296924 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 12:52:25.296958 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 12:52:25.297643 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Apr 30 12:52:25.299189 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 30 12:52:25.305631 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 12:52:25.305711 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 12:52:25.306816 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 12:52:25.306914 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 12:52:25.308043 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 12:52:25.308092 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 12:52:25.309162 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 12:52:25.309185 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 12:52:25.310115 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 12:52:25.310149 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 12:52:25.311581 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 12:52:25.311622 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 12:52:25.312608 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 12:52:25.312641 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 12:52:25.322424 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 12:52:25.322894 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 12:52:25.322933 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 12:52:25.323511 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 30 12:52:25.323545 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 12:52:25.324024 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 12:52:25.324055 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 12:52:25.325156 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 12:52:25.325190 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:52:25.326685 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 12:52:25.326743 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 12:52:25.328040 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 12:52:25.337424 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 12:52:25.342572 systemd[1]: Switching root. Apr 30 12:52:25.379342 systemd-journald[188]: Journal stopped Apr 30 12:52:26.208425 systemd-journald[188]: Received SIGTERM from PID 1 (systemd). 
Apr 30 12:52:26.208473 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 12:52:26.208489 kernel: SELinux: policy capability open_perms=1 Apr 30 12:52:26.208498 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 12:52:26.208509 kernel: SELinux: policy capability always_check_network=0 Apr 30 12:52:26.208517 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 12:52:26.208527 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 12:52:26.208541 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 12:52:26.208548 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 12:52:26.208557 kernel: audit: type=1403 audit(1746017545.499:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 12:52:26.208568 systemd[1]: Successfully loaded SELinux policy in 45.543ms. Apr 30 12:52:26.208583 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.215ms. Apr 30 12:52:26.208592 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 30 12:52:26.208611 systemd[1]: Detected virtualization kvm. Apr 30 12:52:26.208620 systemd[1]: Detected architecture x86-64. Apr 30 12:52:26.208630 systemd[1]: Detected first boot. Apr 30 12:52:26.208643 systemd[1]: Hostname set to . Apr 30 12:52:26.208651 systemd[1]: Initializing machine ID from VM UUID. Apr 30 12:52:26.208660 zram_generator::config[1056]: No configuration found. Apr 30 12:52:26.208670 kernel: Guest personality initialized and is inactive Apr 30 12:52:26.208683 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Apr 30 12:52:26.208698 kernel: Initialized host personality Apr 30 12:52:26.208707 kernel: NET: Registered PF_VSOCK protocol family Apr 30 12:52:26.208717 systemd[1]: Populated /etc with preset unit settings. Apr 30 12:52:26.208727 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Apr 30 12:52:26.208735 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 30 12:52:26.208743 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 30 12:52:26.208751 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 30 12:52:26.208760 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 12:52:26.208768 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 12:52:26.208777 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 12:52:26.208785 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 12:52:26.208795 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 12:52:26.208804 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 12:52:26.208813 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 12:52:26.208821 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 12:52:26.208830 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 12:52:26.208838 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Apr 30 12:52:26.208847 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 12:52:26.208855 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 30 12:52:26.208866 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 30 12:52:26.208874 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 12:52:26.208883 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 30 12:52:26.208913 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 12:52:26.208922 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 30 12:52:26.208931 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 30 12:52:26.208942 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 30 12:52:26.208950 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 12:52:26.208958 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 12:52:26.208967 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 12:52:26.208975 systemd[1]: Reached target slices.target - Slice Units. Apr 30 12:52:26.208984 systemd[1]: Reached target swap.target - Swaps. Apr 30 12:52:26.208996 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 12:52:26.209006 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 30 12:52:26.209014 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Apr 30 12:52:26.209022 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 12:52:26.209031 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 12:52:26.209039 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 12:52:26.209047 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 12:52:26.209058 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 12:52:26.209066 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 12:52:26.209076 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 12:52:26.209085 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 12:52:26.209094 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 12:52:26.209102 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 12:52:26.209111 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 12:52:26.209119 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 12:52:26.209127 systemd[1]: Reached target machines.target - Containers. Apr 30 12:52:26.209136 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 12:52:26.209145 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 12:52:26.209176 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Apr 30 12:52:26.209186 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 12:52:26.209195 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 12:52:26.209204 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 12:52:26.209217 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 12:52:26.209225 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 12:52:26.209234 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 12:52:26.209242 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 12:52:26.209253 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 30 12:52:26.209261 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 30 12:52:26.209308 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 30 12:52:26.209318 systemd[1]: Stopped systemd-fsck-usr.service. Apr 30 12:52:26.209327 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 30 12:52:26.209336 kernel: loop: module loaded Apr 30 12:52:26.209343 kernel: fuse: init (API version 7.39) Apr 30 12:52:26.209352 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 12:52:26.209361 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 12:52:26.209372 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 12:52:26.209380 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 12:52:26.209388 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Apr 30 12:52:26.209397 kernel: ACPI: bus type drm_connector registered Apr 30 12:52:26.209404 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 12:52:26.209429 systemd[1]: verity-setup.service: Deactivated successfully. Apr 30 12:52:26.209438 systemd[1]: Stopped verity-setup.service. Apr 30 12:52:26.209448 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 12:52:26.209457 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 30 12:52:26.209466 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 12:52:26.209474 systemd[1]: Mounted media.mount - External Media Directory. Apr 30 12:52:26.209484 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 12:52:26.209511 systemd-journald[1137]: Collecting audit messages is disabled. Apr 30 12:52:26.209533 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 30 12:52:26.209543 systemd-journald[1137]: Journal started Apr 30 12:52:26.209562 systemd-journald[1137]: Runtime Journal (/run/log/journal/1f6488b0c9aa4ae9b6b69797c97864d7) is 4.8M, max 38.3M, 33.5M free. Apr 30 12:52:25.942492 systemd[1]: Queued start job for default target multi-user.target. Apr 30 12:52:25.954863 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. 
Apr 30 12:52:25.955316 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 30 12:52:26.211429 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 12:52:26.213002 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 12:52:26.213745 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 12:52:26.214559 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 12:52:26.215345 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 12:52:26.215564 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 12:52:26.216423 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 12:52:26.216617 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 12:52:26.217348 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 12:52:26.217548 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 12:52:26.218199 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 12:52:26.218554 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 12:52:26.219262 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 12:52:26.219523 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 12:52:26.220204 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 12:52:26.220542 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 12:52:26.221487 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 12:52:26.222167 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 12:52:26.222933 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 12:52:26.223656 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Apr 30 12:52:26.232121 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 12:52:26.240343 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 12:52:26.248344 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 12:52:26.249209 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 12:52:26.249324 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 12:52:26.250850 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Apr 30 12:52:26.255874 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 12:52:26.257911 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 12:52:26.259552 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 12:52:26.266148 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 12:52:26.268682 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 12:52:26.270661 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 12:52:26.272519 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Apr 30 12:52:26.275834 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 12:52:26.279324 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 12:52:26.283093 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 12:52:26.297346 systemd-journald[1137]: Time spent on flushing to /var/log/journal/1f6488b0c9aa4ae9b6b69797c97864d7 is 29.521ms for 1144 entries. Apr 30 12:52:26.297346 systemd-journald[1137]: System Journal (/var/log/journal/1f6488b0c9aa4ae9b6b69797c97864d7) is 8M, max 584.8M, 576.8M free. Apr 30 12:52:26.356491 systemd-journald[1137]: Received client request to flush runtime journal. Apr 30 12:52:26.356524 kernel: loop0: detected capacity change from 0 to 147912 Apr 30 12:52:26.356544 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 12:52:26.296079 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 12:52:26.301978 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 12:52:26.302721 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 12:52:26.303234 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 12:52:26.304768 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 12:52:26.309561 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 12:52:26.315844 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 12:52:26.324436 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 30 12:52:26.327520 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 12:52:26.351903 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 12:52:26.363891 kernel: loop1: detected capacity change from 0 to 138176 Apr 30 12:52:26.360151 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 12:52:26.367496 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. Apr 30 12:52:26.367517 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. Apr 30 12:52:26.367545 udevadm[1191]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 30 12:52:26.371527 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Apr 30 12:52:26.374312 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 12:52:26.382064 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 12:52:26.407524 kernel: loop2: detected capacity change from 0 to 8 Apr 30 12:52:26.412676 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 12:52:26.418374 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 12:52:26.429628 kernel: loop3: detected capacity change from 0 to 205544 Apr 30 12:52:26.430195 systemd-tmpfiles[1206]: ACLs are not supported, ignoring. Apr 30 12:52:26.430212 systemd-tmpfiles[1206]: ACLs are not supported, ignoring. Apr 30 12:52:26.434519 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 30 12:52:26.477307 kernel: loop4: detected capacity change from 0 to 147912 Apr 30 12:52:26.497334 kernel: loop5: detected capacity change from 0 to 138176 Apr 30 12:52:26.523544 kernel: loop6: detected capacity change from 0 to 8 Apr 30 12:52:26.525297 kernel: loop7: detected capacity change from 0 to 205544 Apr 30 12:52:26.548634 (sd-merge)[1210]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Apr 30 12:52:26.549021 (sd-merge)[1210]: Merged extensions into '/usr'. Apr 30 12:52:26.555668 systemd[1]: Reload requested from client PID 1182 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 12:52:26.556554 systemd[1]: Reloading... Apr 30 12:52:26.638299 zram_generator::config[1235]: No configuration found. Apr 30 12:52:26.751474 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:52:26.758758 ldconfig[1177]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 12:52:26.809902 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 12:52:26.810217 systemd[1]: Reloading finished in 251 ms. Apr 30 12:52:26.828692 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 12:52:26.829459 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 12:52:26.838463 systemd[1]: Starting ensure-sysext.service... Apr 30 12:52:26.843996 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 12:52:26.858609 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 12:52:26.861254 systemd[1]: Reload requested from client PID 1281 ('systemctl') (unit ensure-sysext.service)... Apr 30 12:52:26.861365 systemd[1]: Reloading... Apr 30 12:52:26.869791 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 12:52:26.870347 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 12:52:26.871027 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 12:52:26.871300 systemd-tmpfiles[1282]: ACLs are not supported, ignoring. Apr 30 12:52:26.871405 systemd-tmpfiles[1282]: ACLs are not supported, ignoring. Apr 30 12:52:26.874339 systemd-tmpfiles[1282]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 12:52:26.874404 systemd-tmpfiles[1282]: Skipping /boot Apr 30 12:52:26.882142 systemd-tmpfiles[1282]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 12:52:26.882237 systemd-tmpfiles[1282]: Skipping /boot Apr 30 12:52:26.913126 zram_generator::config[1308]: No configuration found. Apr 30 12:52:26.997855 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:52:27.050279 systemd[1]: Reloading finished in 188 ms. Apr 30 12:52:27.076133 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 12:52:27.085531 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Apr 30 12:52:27.088522 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 12:52:27.091413 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 12:52:27.099399 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 12:52:27.103520 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 12:52:27.107383 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 12:52:27.111504 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 12:52:27.111689 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 12:52:27.118975 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 12:52:27.131498 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 12:52:27.135521 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 12:52:27.136013 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 12:52:27.136092 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 30 12:52:27.136167 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 12:52:27.137900 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 12:52:27.138327 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 12:52:27.142853 systemd-udevd[1365]: Using default interface naming scheme 'v255'. Apr 30 12:52:27.143798 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 12:52:27.143926 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 12:52:27.149342 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 12:52:27.149523 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 12:52:27.154078 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 12:52:27.156076 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 12:52:27.157103 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 12:52:27.157188 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 30 12:52:27.159493 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 12:52:27.160106 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 12:52:27.161038 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Apr 30 12:52:27.162363 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 12:52:27.163533 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 12:52:27.163654 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 12:52:27.165368 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 12:52:27.165509 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 12:52:27.171524 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 12:52:27.171656 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 12:52:27.177709 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 12:52:27.177912 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 12:52:27.185512 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 12:52:27.188619 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 12:52:27.191328 augenrules[1394]: No rules Apr 30 12:52:27.196436 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 12:52:27.198942 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 12:52:27.199538 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 12:52:27.199621 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 30 12:52:27.204467 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 12:52:27.205200 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 12:52:27.208022 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 12:52:27.209214 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 12:52:27.209762 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 30 12:52:27.212621 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 12:52:27.212734 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 12:52:27.214799 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 12:52:27.216034 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 12:52:27.216985 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 12:52:27.217094 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 12:52:27.217790 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 12:52:27.217890 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 12:52:27.223922 systemd[1]: Finished ensure-sysext.service. Apr 30 12:52:27.241297 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 12:52:27.241772 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Apr 30 12:52:27.241813 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 12:52:27.244414 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 30 12:52:27.245008 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 12:52:27.247462 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 12:52:27.253251 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 12:52:27.254745 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 12:52:27.279309 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 30 12:52:27.353298 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1411) Apr 30 12:52:27.356927 systemd-resolved[1359]: Positive Trust Anchors: Apr 30 12:52:27.356941 systemd-resolved[1359]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 12:52:27.356965 systemd-resolved[1359]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 12:52:27.371149 systemd-resolved[1359]: Using system hostname 'ci-4230-1-1-d-a2f51ba0c1'. Apr 30 12:52:27.380834 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 12:52:27.382406 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 12:52:27.388826 systemd-networkd[1428]: lo: Link UP Apr 30 12:52:27.388836 systemd-networkd[1428]: lo: Gained carrier Apr 30 12:52:27.389992 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 12:52:27.391585 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 12:52:27.394042 systemd-timesyncd[1429]: No network connectivity, watching for changes. Apr 30 12:52:27.395312 systemd-networkd[1428]: Enumeration completed Apr 30 12:52:27.395376 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 12:52:27.396069 systemd-networkd[1428]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:52:27.396078 systemd-networkd[1428]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 12:52:27.396399 systemd[1]: Reached target network.target - Network. Apr 30 12:52:27.397851 systemd-networkd[1428]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:52:27.397860 systemd-networkd[1428]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 30 12:52:27.400746 systemd-networkd[1428]: eth0: Link UP Apr 30 12:52:27.400756 systemd-networkd[1428]: eth0: Gained carrier Apr 30 12:52:27.400767 systemd-networkd[1428]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:52:27.403452 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 30 12:52:27.405243 systemd-networkd[1428]: eth1: Link UP Apr 30 12:52:27.405254 systemd-networkd[1428]: eth1: Gained carrier Apr 30 12:52:27.405282 systemd-networkd[1428]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:52:27.406956 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 12:52:27.411612 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 30 12:52:27.418498 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 12:52:27.422292 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 30 12:52:27.433373 kernel: ACPI: button: Power Button [PWRF] Apr 30 12:52:27.434436 systemd-networkd[1428]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 12:52:27.437182 systemd-timesyncd[1429]: Network configuration changed, trying to establish connection. Apr 30 12:52:27.447337 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 12:52:27.447964 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 30 12:52:27.450803 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 12:52:27.460536 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Apr 30 12:52:27.460608 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 12:52:27.460757 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 12:52:27.466338 systemd-networkd[1428]: eth0: DHCPv4 address 37.27.3.216/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 30 12:52:27.467690 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 12:52:27.469697 systemd-timesyncd[1429]: Network configuration changed, trying to establish connection. Apr 30 12:52:27.474757 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 12:52:27.484238 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 30 12:52:27.484531 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 30 12:52:27.484921 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Apr 30 12:52:27.484936 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 30 12:52:27.485735 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Apr 30 12:52:27.487325 kernel: Console: switching to colour dummy device 80x25 Apr 30 12:52:27.487357 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Apr 30 12:52:27.487373 kernel: [drm] features: -context_init Apr 30 12:52:27.487527 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Apr 30 12:52:27.488756 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 12:52:27.488922 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 30 12:52:27.489002 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 12:52:27.489434 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 12:52:27.489803 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 12:52:27.489929 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 12:52:27.490688 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 12:52:27.492293 kernel: [drm] number of scanouts: 1 Apr 30 12:52:27.492328 kernel: [drm] number of cap sets: 0 Apr 30 12:52:27.492343 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Apr 30 12:52:27.491112 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 12:52:27.495937 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 12:52:27.496253 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 12:52:27.499995 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Apr 30 12:52:27.500036 kernel: Console: switching to colour frame buffer device 160x50 Apr 30 12:52:27.507290 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Apr 30 12:52:27.519919 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 12:52:27.520041 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 12:52:27.522510 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Apr 30 12:52:27.536303 kernel: EDAC MC: Ver: 3.0.0 Apr 30 12:52:27.558255 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:52:27.566689 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 12:52:27.566992 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:52:27.573364 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Apr 30 12:52:27.582471 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:52:27.630624 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:52:27.708231 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 12:52:27.713451 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 12:52:27.723293 lvm[1480]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 12:52:27.751123 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 12:52:27.751785 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Apr 30 12:52:27.751891 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 12:52:27.752044 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 12:52:27.752129 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 12:52:27.752370 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 12:52:27.752536 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 12:52:27.752609 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 12:52:27.752675 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 12:52:27.752705 systemd[1]: Reached target paths.target - Path Units. Apr 30 12:52:27.752755 systemd[1]: Reached target timers.target - Timer Units. Apr 30 12:52:27.755640 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 12:52:27.756926 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 12:52:27.759386 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 30 12:52:27.761135 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Apr 30 12:52:27.761691 systemd[1]: Reached target ssh-access.target - SSH Access Available. Apr 30 12:52:27.764834 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 12:52:27.765694 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 30 12:52:27.767412 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 12:52:27.772104 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 12:52:27.772622 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 12:52:27.773014 systemd[1]: Reached target basic.target - Basic System. Apr 30 12:52:27.774917 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 12:52:27.774952 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 12:52:27.777368 lvm[1484]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 12:52:27.777655 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 12:52:27.786735 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 12:52:27.789436 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 12:52:27.791760 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 12:52:27.795152 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 12:52:27.795701 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 12:52:27.800491 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 12:52:27.805609 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 12:52:27.810345 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Apr 30 12:52:27.818437 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Apr 30 12:52:27.822345 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 12:52:27.828620 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 12:52:27.832076 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 12:52:27.833574 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 12:52:27.841491 coreos-metadata[1486]: Apr 30 12:52:27.834 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Apr 30 12:52:27.841491 coreos-metadata[1486]: Apr 30 12:52:27.840 INFO Fetch successful Apr 30 12:52:27.841491 coreos-metadata[1486]: Apr 30 12:52:27.840 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Apr 30 12:52:27.834449 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 12:52:27.847875 coreos-metadata[1486]: Apr 30 12:52:27.842 INFO Fetch successful Apr 30 12:52:27.842817 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 12:52:27.845743 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 12:52:27.857295 jq[1490]: false Apr 30 12:52:27.855067 dbus-daemon[1487]: [system] SELinux support is enabled Apr 30 12:52:27.859738 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 12:52:27.864371 jq[1499]: true Apr 30 12:52:27.871725 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 12:52:27.871898 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 12:52:27.877636 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 12:52:27.877806 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 12:52:27.884783 extend-filesystems[1491]: Found loop4 Apr 30 12:52:27.885569 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 12:52:27.886644 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 12:52:27.888002 extend-filesystems[1491]: Found loop5 Apr 30 12:52:27.888002 extend-filesystems[1491]: Found loop6 Apr 30 12:52:27.888002 extend-filesystems[1491]: Found loop7 Apr 30 12:52:27.888002 extend-filesystems[1491]: Found sda Apr 30 12:52:27.888002 extend-filesystems[1491]: Found sda1 Apr 30 12:52:27.888002 extend-filesystems[1491]: Found sda2 Apr 30 12:52:27.888002 extend-filesystems[1491]: Found sda3 Apr 30 12:52:27.888002 extend-filesystems[1491]: Found usr Apr 30 12:52:27.888002 extend-filesystems[1491]: Found sda4 Apr 30 12:52:27.888002 extend-filesystems[1491]: Found sda6 Apr 30 12:52:27.888002 extend-filesystems[1491]: Found sda7 Apr 30 12:52:27.888002 extend-filesystems[1491]: Found sda9 Apr 30 12:52:27.888002 extend-filesystems[1491]: Checking size of /dev/sda9 Apr 30 12:52:27.932283 update_engine[1498]: I20250430 12:52:27.929399 1498 main.cc:92] Flatcar Update Engine starting Apr 30 12:52:27.903632 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 12:52:27.903673 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
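The coreos-metadata entries above fetch two Hetzner endpoints and report both as successful. A minimal sketch of the same two requests (only the URLs come from the log; the timeout and output handling are assumptions), which only answer from inside the server itself:

```python
# Sketch: repeat the two metadata requests coreos-metadata logs above.
# Only the URLs appear in the log; everything else is illustrative.
import urllib.request

ENDPOINTS = [
    "http://169.254.169.254/hetzner/v1/metadata",
    "http://169.254.169.254/hetzner/v1/metadata/private-networks",
]

def fetch(url: str, timeout: float = 5.0) -> str:
    # 169.254.169.254 is link-local, so these requests only work on the VM itself.
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read().decode("utf-8", errors="replace")

if __name__ == "__main__":
    for url in ENDPOINTS:
        print(f"# {url}")
        print(fetch(url))
```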
Apr 30 12:52:27.932630 jq[1513]: true Apr 30 12:52:27.909440 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 12:52:27.937686 update_engine[1498]: I20250430 12:52:27.933485 1498 update_check_scheduler.cc:74] Next update check in 8m20s Apr 30 12:52:27.909482 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 12:52:27.927472 (ntainerd)[1515]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 12:52:27.933679 systemd[1]: Started update-engine.service - Update Engine. Apr 30 12:52:27.937438 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 12:52:27.943530 extend-filesystems[1491]: Resized partition /dev/sda9 Apr 30 12:52:27.947210 extend-filesystems[1537]: resize2fs 1.47.1 (20-May-2024) Apr 30 12:52:27.952355 tar[1512]: linux-amd64/helm Apr 30 12:52:27.956633 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Apr 30 12:52:27.997997 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 12:52:28.000109 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 12:52:28.012964 systemd-logind[1497]: New seat seat0. Apr 30 12:52:28.040988 systemd-logind[1497]: Watching system buttons on /dev/input/event2 (Power Button) Apr 30 12:52:28.041962 systemd-logind[1497]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 30 12:52:28.042147 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 12:52:28.059253 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1403) Apr 30 12:52:28.096299 bash[1555]: Updated "/home/core/.ssh/authorized_keys" Apr 30 12:52:28.099527 sshd_keygen[1517]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 12:52:28.100520 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 12:52:28.117534 systemd[1]: Starting sshkeys.service... Apr 30 12:52:28.148686 containerd[1515]: time="2025-04-30T12:52:28.148598655Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Apr 30 12:52:28.159370 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Apr 30 12:52:28.154747 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 30 12:52:28.163510 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 30 12:52:28.173352 locksmithd[1535]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 12:52:28.180107 extend-filesystems[1537]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 30 12:52:28.180107 extend-filesystems[1537]: old_desc_blocks = 1, new_desc_blocks = 5 Apr 30 12:52:28.180107 extend-filesystems[1537]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. 
Apr 30 12:52:28.190128 extend-filesystems[1491]: Resized filesystem in /dev/sda9 Apr 30 12:52:28.190128 extend-filesystems[1491]: Found sr0 Apr 30 12:52:28.194821 coreos-metadata[1575]: Apr 30 12:52:28.189 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Apr 30 12:52:28.194821 coreos-metadata[1575]: Apr 30 12:52:28.191 INFO Fetch successful Apr 30 12:52:28.180546 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 12:52:28.195168 containerd[1515]: time="2025-04-30T12:52:28.186280438Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 12:52:28.195168 containerd[1515]: time="2025-04-30T12:52:28.187366335Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:52:28.195168 containerd[1515]: time="2025-04-30T12:52:28.187386893Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 12:52:28.195168 containerd[1515]: time="2025-04-30T12:52:28.187400218Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 12:52:28.195168 containerd[1515]: time="2025-04-30T12:52:28.187534800Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 12:52:28.195168 containerd[1515]: time="2025-04-30T12:52:28.187549067Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 12:52:28.195168 containerd[1515]: time="2025-04-30T12:52:28.187600534Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:52:28.195168 containerd[1515]: time="2025-04-30T12:52:28.187611224Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 12:52:28.195168 containerd[1515]: time="2025-04-30T12:52:28.187771805Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:52:28.195168 containerd[1515]: time="2025-04-30T12:52:28.187790981Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 12:52:28.195168 containerd[1515]: time="2025-04-30T12:52:28.187801971Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:52:28.180700 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 12:52:28.195534 containerd[1515]: time="2025-04-30T12:52:28.187808774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 12:52:28.195534 containerd[1515]: time="2025-04-30T12:52:28.187866863Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Apr 30 12:52:28.195534 containerd[1515]: time="2025-04-30T12:52:28.188015202Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 12:52:28.195534 containerd[1515]: time="2025-04-30T12:52:28.188113285Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:52:28.195534 containerd[1515]: time="2025-04-30T12:52:28.188123464Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 12:52:28.195534 containerd[1515]: time="2025-04-30T12:52:28.188179720Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 12:52:28.195534 containerd[1515]: time="2025-04-30T12:52:28.188215948Z" level=info msg="metadata content store policy set" policy=shared Apr 30 12:52:28.196588 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 12:52:28.203033 unknown[1575]: wrote ssh authorized keys file for user: core Apr 30 12:52:28.208719 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 12:52:28.212128 containerd[1515]: time="2025-04-30T12:52:28.207685684Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 12:52:28.212128 containerd[1515]: time="2025-04-30T12:52:28.207854140Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 12:52:28.212128 containerd[1515]: time="2025-04-30T12:52:28.207873636Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 12:52:28.212128 containerd[1515]: time="2025-04-30T12:52:28.207887041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 12:52:28.212128 containerd[1515]: time="2025-04-30T12:52:28.208372372Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 12:52:28.216590 containerd[1515]: time="2025-04-30T12:52:28.213456677Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 12:52:28.216590 containerd[1515]: time="2025-04-30T12:52:28.213659477Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 12:52:28.216590 containerd[1515]: time="2025-04-30T12:52:28.213754145Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 12:52:28.216590 containerd[1515]: time="2025-04-30T12:52:28.213767971Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 12:52:28.216590 containerd[1515]: time="2025-04-30T12:52:28.213779172Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 12:52:28.216590 containerd[1515]: time="2025-04-30T12:52:28.213793028Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 12:52:28.216590 containerd[1515]: time="2025-04-30T12:52:28.213803177Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Apr 30 12:52:28.216590 containerd[1515]: time="2025-04-30T12:52:28.213812815Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 12:52:28.216590 containerd[1515]: time="2025-04-30T12:52:28.213824978Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 12:52:28.216590 containerd[1515]: time="2025-04-30T12:52:28.213836870Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 12:52:28.216590 containerd[1515]: time="2025-04-30T12:52:28.213846909Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 12:52:28.216590 containerd[1515]: time="2025-04-30T12:52:28.213856147Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 12:52:28.216590 containerd[1515]: time="2025-04-30T12:52:28.213869101Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 12:52:28.216590 containerd[1515]: time="2025-04-30T12:52:28.213894489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 12:52:28.216815 containerd[1515]: time="2025-04-30T12:52:28.213905239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 12:52:28.216815 containerd[1515]: time="2025-04-30T12:52:28.213915458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 12:52:28.216815 containerd[1515]: time="2025-04-30T12:52:28.213926538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 12:52:28.216815 containerd[1515]: time="2025-04-30T12:52:28.213935475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 12:52:28.216815 containerd[1515]: time="2025-04-30T12:52:28.213954911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 12:52:28.216815 containerd[1515]: time="2025-04-30T12:52:28.213965321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 12:52:28.216815 containerd[1515]: time="2025-04-30T12:52:28.213974599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 12:52:28.216815 containerd[1515]: time="2025-04-30T12:52:28.213990559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 12:52:28.216815 containerd[1515]: time="2025-04-30T12:52:28.214003192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 12:52:28.216815 containerd[1515]: time="2025-04-30T12:52:28.214012890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 12:52:28.216815 containerd[1515]: time="2025-04-30T12:52:28.214021336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 12:52:28.216815 containerd[1515]: time="2025-04-30T12:52:28.214031004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Apr 30 12:52:28.216815 containerd[1515]: time="2025-04-30T12:52:28.214041915Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 12:52:28.216815 containerd[1515]: time="2025-04-30T12:52:28.214059458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 12:52:28.216815 containerd[1515]: time="2025-04-30T12:52:28.214069216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 12:52:28.217048 containerd[1515]: time="2025-04-30T12:52:28.214083503Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 12:52:28.217048 containerd[1515]: time="2025-04-30T12:52:28.214118859Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 12:52:28.217048 containerd[1515]: time="2025-04-30T12:52:28.214136001Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 12:52:28.217048 containerd[1515]: time="2025-04-30T12:52:28.214144266Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 12:52:28.217048 containerd[1515]: time="2025-04-30T12:52:28.214152813Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 12:52:28.217048 containerd[1515]: time="2025-04-30T12:52:28.214159355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 12:52:28.217048 containerd[1515]: time="2025-04-30T12:52:28.214169063Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 12:52:28.217048 containerd[1515]: time="2025-04-30T12:52:28.214176578Z" level=info msg="NRI interface is disabled by configuration." Apr 30 12:52:28.217048 containerd[1515]: time="2025-04-30T12:52:28.214184513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 30 12:52:28.220380 containerd[1515]: time="2025-04-30T12:52:28.219794965Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 12:52:28.220380 containerd[1515]: time="2025-04-30T12:52:28.219862711Z" level=info msg="Connect containerd service" Apr 30 12:52:28.220380 containerd[1515]: time="2025-04-30T12:52:28.219900542Z" level=info msg="using legacy CRI server" Apr 30 12:52:28.220380 containerd[1515]: time="2025-04-30T12:52:28.219907246Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 12:52:28.220380 containerd[1515]: time="2025-04-30T12:52:28.220056616Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 12:52:28.222710 containerd[1515]: time="2025-04-30T12:52:28.220983173Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 12:52:28.222710 
containerd[1515]: time="2025-04-30T12:52:28.221040100Z" level=info msg="Start subscribing containerd event" Apr 30 12:52:28.222710 containerd[1515]: time="2025-04-30T12:52:28.221075757Z" level=info msg="Start recovering state" Apr 30 12:52:28.222710 containerd[1515]: time="2025-04-30T12:52:28.221123817Z" level=info msg="Start event monitor" Apr 30 12:52:28.222710 containerd[1515]: time="2025-04-30T12:52:28.221137292Z" level=info msg="Start snapshots syncer" Apr 30 12:52:28.222710 containerd[1515]: time="2025-04-30T12:52:28.221144796Z" level=info msg="Start cni network conf syncer for default" Apr 30 12:52:28.222710 containerd[1515]: time="2025-04-30T12:52:28.221151288Z" level=info msg="Start streaming server" Apr 30 12:52:28.224998 containerd[1515]: time="2025-04-30T12:52:28.224959911Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 12:52:28.225105 containerd[1515]: time="2025-04-30T12:52:28.225092440Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 12:52:28.225327 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 12:52:28.231604 containerd[1515]: time="2025-04-30T12:52:28.231466295Z" level=info msg="containerd successfully booted in 0.084610s" Apr 30 12:52:28.236622 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 12:52:28.236906 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 12:52:28.245527 update-ssh-keys[1586]: Updated "/home/core/.ssh/authorized_keys" Apr 30 12:52:28.247659 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 12:52:28.248915 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 30 12:52:28.253731 systemd[1]: Finished sshkeys.service. Apr 30 12:52:28.258349 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 12:52:28.268558 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 12:52:28.272257 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 12:52:28.275779 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 12:52:28.525815 tar[1512]: linux-amd64/LICENSE Apr 30 12:52:28.525815 tar[1512]: linux-amd64/README.md Apr 30 12:52:28.535792 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 12:52:28.878456 systemd-networkd[1428]: eth0: Gained IPv6LL Apr 30 12:52:28.879053 systemd-timesyncd[1429]: Network configuration changed, trying to establish connection. Apr 30 12:52:28.880997 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 12:52:28.883625 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 12:52:28.891487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:52:28.895817 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 12:52:28.913524 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 12:52:29.262484 systemd-networkd[1428]: eth1: Gained IPv6LL Apr 30 12:52:29.263046 systemd-timesyncd[1429]: Network configuration changed, trying to establish connection. Apr 30 12:52:29.655819 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:52:29.659767 systemd[1]: Reached target multi-user.target - Multi-User System. 
Apr 30 12:52:29.663183 (kubelet)[1619]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:52:29.664900 systemd[1]: Startup finished in 1.168s (kernel) + 10.827s (initrd) + 4.205s (userspace) = 16.201s. Apr 30 12:52:30.176457 kubelet[1619]: E0430 12:52:30.176363 1619 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:52:30.179510 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:52:30.179637 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:52:30.179881 systemd[1]: kubelet.service: Consumed 831ms CPU time, 235.4M memory peak. Apr 30 12:52:40.430304 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 12:52:40.436929 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:52:40.549534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:52:40.560775 (kubelet)[1638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:52:40.617925 kubelet[1638]: E0430 12:52:40.617843 1638 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:52:40.621962 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:52:40.622161 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:52:40.622588 systemd[1]: kubelet.service: Consumed 166ms CPU time, 98.5M memory peak. Apr 30 12:52:50.873042 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 12:52:50.878815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:52:50.981953 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:52:50.987151 (kubelet)[1653]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:52:51.025301 kubelet[1653]: E0430 12:52:51.025195 1653 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:52:51.027062 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:52:51.027194 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:52:51.027525 systemd[1]: kubelet.service: Consumed 128ms CPU time, 96.3M memory peak. Apr 30 12:53:00.433819 systemd-timesyncd[1429]: Contacted time server 131.188.3.222:123 (2.flatcar.pool.ntp.org). Apr 30 12:53:00.433880 systemd-timesyncd[1429]: Initial clock synchronization to Wed 2025-04-30 12:53:00.433659 UTC. Apr 30 12:53:00.434024 systemd-resolved[1359]: Clock change detected. Flushing caches. 
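The kubelet exit above (and in the restart cycles that follow) is caused only by the missing /var/lib/kubelet/config.yaml; on a node that has not yet been joined to a cluster this file is normally generated by kubeadm init or kubeadm join. A minimal sketch of the file's shape, written by hand purely for illustration (the cgroupDriver value is an assumption chosen to match the SystemdCgroup=true runc option in the containerd configuration earlier in this log):

```python
# Sketch: create the file the kubelet error above is looking for.
# On a real node kubeadm writes /var/lib/kubelet/config.yaml; this stub only
# illustrates the minimal KubeletConfiguration shape.
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path taken from the error message

MINIMAL_CONFIG = """apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
"""

if __name__ == "__main__":
    KUBELET_CONFIG.parent.mkdir(parents=True, exist_ok=True)
    KUBELET_CONFIG.write_text(MINIMAL_CONFIG)
    print(f"wrote {KUBELET_CONFIG}")
```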
Apr 30 12:53:02.051232 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 30 12:53:02.056790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:53:02.132672 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:53:02.135592 (kubelet)[1669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:53:02.165405 kubelet[1669]: E0430 12:53:02.165347 1669 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:53:02.167761 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:53:02.167884 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:53:02.168152 systemd[1]: kubelet.service: Consumed 98ms CPU time, 95.6M memory peak. Apr 30 12:53:12.241371 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 30 12:53:12.246778 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:53:12.326435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:53:12.329808 (kubelet)[1684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:53:12.364705 kubelet[1684]: E0430 12:53:12.364643 1684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:53:12.367085 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:53:12.367267 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:53:12.367570 systemd[1]: kubelet.service: Consumed 111ms CPU time, 95.2M memory peak. Apr 30 12:53:14.266779 update_engine[1498]: I20250430 12:53:14.266692 1498 update_attempter.cc:509] Updating boot flags... Apr 30 12:53:14.304644 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1700) Apr 30 12:53:14.348629 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1700) Apr 30 12:53:14.384677 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1700) Apr 30 12:53:22.491415 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 30 12:53:22.497056 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:53:22.579542 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 12:53:22.584496 (kubelet)[1720]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:53:22.617719 kubelet[1720]: E0430 12:53:22.617593 1720 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:53:22.620688 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:53:22.620889 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:53:22.621328 systemd[1]: kubelet.service: Consumed 120ms CPU time, 97.5M memory peak. Apr 30 12:53:32.741374 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 30 12:53:32.746790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:53:32.826982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:53:32.830497 (kubelet)[1735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:53:32.871787 kubelet[1735]: E0430 12:53:32.871718 1735 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:53:32.874686 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:53:32.874874 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:53:32.875470 systemd[1]: kubelet.service: Consumed 121ms CPU time, 97.9M memory peak. Apr 30 12:53:42.991269 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Apr 30 12:53:42.996795 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:53:43.077181 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:53:43.079811 (kubelet)[1750]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:53:43.117971 kubelet[1750]: E0430 12:53:43.117913 1750 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:53:43.119846 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:53:43.119983 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:53:43.120232 systemd[1]: kubelet.service: Consumed 114ms CPU time, 93.4M memory peak. Apr 30 12:53:53.241515 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Apr 30 12:53:53.249787 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:53:53.351846 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 12:53:53.367183 (kubelet)[1765]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:53:53.405205 kubelet[1765]: E0430 12:53:53.405118 1765 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:53:53.407536 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:53:53.407693 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:53:53.407942 systemd[1]: kubelet.service: Consumed 132ms CPU time, 95.5M memory peak. Apr 30 12:54:03.491431 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Apr 30 12:54:03.496820 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:54:03.582138 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:54:03.593152 (kubelet)[1781]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:54:03.642834 kubelet[1781]: E0430 12:54:03.642773 1781 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:54:03.644843 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:54:03.644964 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:54:03.645254 systemd[1]: kubelet.service: Consumed 128ms CPU time, 97.5M memory peak. Apr 30 12:54:13.741184 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Apr 30 12:54:13.751743 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:54:13.824299 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:54:13.828127 (kubelet)[1797]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:54:13.868279 kubelet[1797]: E0430 12:54:13.868194 1797 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:54:13.870633 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:54:13.870845 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:54:13.871259 systemd[1]: kubelet.service: Consumed 111ms CPU time, 93.6M memory peak. Apr 30 12:54:14.054075 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 12:54:14.060138 systemd[1]: Started sshd@0-37.27.3.216:22-139.178.68.195:36616.service - OpenSSH per-connection server daemon (139.178.68.195:36616). 
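The "Scheduled restart job, restart counter is at N" entries above recur roughly every ten seconds until the node is configured. A small sketch, assuming the journal has been saved to a text file with one entry per line (the file name and that format are assumptions), that extracts those timestamps and prints the interval between consecutive restarts:

```python
# Sketch: measure the kubelet restart cadence from a saved journal (one entry per line),
# matching lines like:
#   Apr 30 12:53:22.491415 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
import re
from datetime import datetime

PATTERN = re.compile(
    r"^(\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+).*Scheduled restart job, restart counter is at (\d+)"
)

def restart_times(path: str = "journal.txt"):
    found = []
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            match = PATTERN.match(line)
            if match:
                # The journal omits the year; strptime's default year cancels out in differences.
                stamp = datetime.strptime(match.group(1), "%b %d %H:%M:%S.%f")
                found.append((int(match.group(2)), stamp))
    return found

if __name__ == "__main__":
    times = restart_times()
    for (c1, t1), (c2, t2) in zip(times, times[1:]):
        print(f"restart {c1} -> {c2}: {(t2 - t1).total_seconds():.1f}s apart")
```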
Apr 30 12:54:15.064043 sshd[1805]: Accepted publickey for core from 139.178.68.195 port 36616 ssh2: RSA SHA256:dV5pBDhQJF3aurfsxX04IrzkXSu11tyU76+45DL2eXQ Apr 30 12:54:15.067171 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:54:15.083415 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 12:54:15.095842 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 12:54:15.098140 systemd-logind[1497]: New session 1 of user core. Apr 30 12:54:15.107807 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 12:54:15.115005 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 12:54:15.120333 (systemd)[1809]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 12:54:15.123968 systemd-logind[1497]: New session c1 of user core. Apr 30 12:54:15.258888 systemd[1809]: Queued start job for default target default.target. Apr 30 12:54:15.269429 systemd[1809]: Created slice app.slice - User Application Slice. Apr 30 12:54:15.269454 systemd[1809]: Reached target paths.target - Paths. Apr 30 12:54:15.269491 systemd[1809]: Reached target timers.target - Timers. Apr 30 12:54:15.270513 systemd[1809]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 12:54:15.280732 systemd[1809]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 12:54:15.280876 systemd[1809]: Reached target sockets.target - Sockets. Apr 30 12:54:15.280985 systemd[1809]: Reached target basic.target - Basic System. Apr 30 12:54:15.281019 systemd[1809]: Reached target default.target - Main User Target. Apr 30 12:54:15.281039 systemd[1809]: Startup finished in 149ms. Apr 30 12:54:15.281075 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 12:54:15.282450 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 12:54:15.966431 systemd[1]: Started sshd@1-37.27.3.216:22-139.178.68.195:49526.service - OpenSSH per-connection server daemon (139.178.68.195:49526). Apr 30 12:54:16.947471 sshd[1820]: Accepted publickey for core from 139.178.68.195 port 49526 ssh2: RSA SHA256:dV5pBDhQJF3aurfsxX04IrzkXSu11tyU76+45DL2eXQ Apr 30 12:54:16.948788 sshd-session[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:54:16.953417 systemd-logind[1497]: New session 2 of user core. Apr 30 12:54:16.956739 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 12:54:17.625970 sshd[1822]: Connection closed by 139.178.68.195 port 49526 Apr 30 12:54:17.626688 sshd-session[1820]: pam_unix(sshd:session): session closed for user core Apr 30 12:54:17.629527 systemd[1]: sshd@1-37.27.3.216:22-139.178.68.195:49526.service: Deactivated successfully. Apr 30 12:54:17.631653 systemd-logind[1497]: Session 2 logged out. Waiting for processes to exit. Apr 30 12:54:17.631928 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 12:54:17.632975 systemd-logind[1497]: Removed session 2. Apr 30 12:54:17.799833 systemd[1]: Started sshd@2-37.27.3.216:22-139.178.68.195:49534.service - OpenSSH per-connection server daemon (139.178.68.195:49534). 
Apr 30 12:54:18.768480 sshd[1828]: Accepted publickey for core from 139.178.68.195 port 49534 ssh2: RSA SHA256:dV5pBDhQJF3aurfsxX04IrzkXSu11tyU76+45DL2eXQ Apr 30 12:54:18.769861 sshd-session[1828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:54:18.773759 systemd-logind[1497]: New session 3 of user core. Apr 30 12:54:18.779717 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 12:54:19.439083 sshd[1830]: Connection closed by 139.178.68.195 port 49534 Apr 30 12:54:19.439701 sshd-session[1828]: pam_unix(sshd:session): session closed for user core Apr 30 12:54:19.442927 systemd[1]: sshd@2-37.27.3.216:22-139.178.68.195:49534.service: Deactivated successfully. Apr 30 12:54:19.444633 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 12:54:19.445406 systemd-logind[1497]: Session 3 logged out. Waiting for processes to exit. Apr 30 12:54:19.446477 systemd-logind[1497]: Removed session 3. Apr 30 12:54:19.618837 systemd[1]: Started sshd@3-37.27.3.216:22-139.178.68.195:49540.service - OpenSSH per-connection server daemon (139.178.68.195:49540). Apr 30 12:54:20.592002 sshd[1836]: Accepted publickey for core from 139.178.68.195 port 49540 ssh2: RSA SHA256:dV5pBDhQJF3aurfsxX04IrzkXSu11tyU76+45DL2eXQ Apr 30 12:54:20.593843 sshd-session[1836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:54:20.598694 systemd-logind[1497]: New session 4 of user core. Apr 30 12:54:20.604917 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 12:54:21.266188 sshd[1838]: Connection closed by 139.178.68.195 port 49540 Apr 30 12:54:21.266803 sshd-session[1836]: pam_unix(sshd:session): session closed for user core Apr 30 12:54:21.269503 systemd[1]: sshd@3-37.27.3.216:22-139.178.68.195:49540.service: Deactivated successfully. Apr 30 12:54:21.271862 systemd-logind[1497]: Session 4 logged out. Waiting for processes to exit. Apr 30 12:54:21.272427 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 12:54:21.273374 systemd-logind[1497]: Removed session 4. Apr 30 12:54:21.443934 systemd[1]: Started sshd@4-37.27.3.216:22-139.178.68.195:49542.service - OpenSSH per-connection server daemon (139.178.68.195:49542). Apr 30 12:54:22.412128 sshd[1844]: Accepted publickey for core from 139.178.68.195 port 49542 ssh2: RSA SHA256:dV5pBDhQJF3aurfsxX04IrzkXSu11tyU76+45DL2eXQ Apr 30 12:54:22.413651 sshd-session[1844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:54:22.418300 systemd-logind[1497]: New session 5 of user core. Apr 30 12:54:22.424749 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 12:54:22.941777 sudo[1847]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 12:54:22.942408 sudo[1847]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 12:54:22.958079 sudo[1847]: pam_unix(sudo:session): session closed for user root Apr 30 12:54:23.115837 sshd[1846]: Connection closed by 139.178.68.195 port 49542 Apr 30 12:54:23.116737 sshd-session[1844]: pam_unix(sshd:session): session closed for user core Apr 30 12:54:23.119713 systemd[1]: sshd@4-37.27.3.216:22-139.178.68.195:49542.service: Deactivated successfully. Apr 30 12:54:23.121808 systemd-logind[1497]: Session 5 logged out. Waiting for processes to exit. Apr 30 12:54:23.122184 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 12:54:23.123398 systemd-logind[1497]: Removed session 5. 
Apr 30 12:54:23.292034 systemd[1]: Started sshd@5-37.27.3.216:22-139.178.68.195:49552.service - OpenSSH per-connection server daemon (139.178.68.195:49552). Apr 30 12:54:23.991253 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Apr 30 12:54:23.995818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:54:24.087857 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:54:24.098915 (kubelet)[1863]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:54:24.139999 kubelet[1863]: E0430 12:54:24.139901 1863 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:54:24.141800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:54:24.141996 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:54:24.142383 systemd[1]: kubelet.service: Consumed 122ms CPU time, 97.7M memory peak. Apr 30 12:54:24.280185 sshd[1853]: Accepted publickey for core from 139.178.68.195 port 49552 ssh2: RSA SHA256:dV5pBDhQJF3aurfsxX04IrzkXSu11tyU76+45DL2eXQ Apr 30 12:54:24.281530 sshd-session[1853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:54:24.286245 systemd-logind[1497]: New session 6 of user core. Apr 30 12:54:24.292757 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 12:54:24.799987 sudo[1872]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 12:54:24.800255 sudo[1872]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 12:54:24.803970 sudo[1872]: pam_unix(sudo:session): session closed for user root Apr 30 12:54:24.809410 sudo[1871]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 30 12:54:24.809747 sudo[1871]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 12:54:24.834967 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 30 12:54:24.855090 augenrules[1894]: No rules Apr 30 12:54:24.855758 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 12:54:24.855955 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 30 12:54:24.856913 sudo[1871]: pam_unix(sudo:session): session closed for user root Apr 30 12:54:25.015078 sshd[1870]: Connection closed by 139.178.68.195 port 49552 Apr 30 12:54:25.015669 sshd-session[1853]: pam_unix(sshd:session): session closed for user core Apr 30 12:54:25.019000 systemd[1]: sshd@5-37.27.3.216:22-139.178.68.195:49552.service: Deactivated successfully. Apr 30 12:54:25.020853 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 12:54:25.021563 systemd-logind[1497]: Session 6 logged out. Waiting for processes to exit. Apr 30 12:54:25.022653 systemd-logind[1497]: Removed session 6. Apr 30 12:54:25.187891 systemd[1]: Started sshd@6-37.27.3.216:22-139.178.68.195:49564.service - OpenSSH per-connection server daemon (139.178.68.195:49564). 
Apr 30 12:54:26.165408 sshd[1903]: Accepted publickey for core from 139.178.68.195 port 49564 ssh2: RSA SHA256:dV5pBDhQJF3aurfsxX04IrzkXSu11tyU76+45DL2eXQ Apr 30 12:54:26.166686 sshd-session[1903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:54:26.172054 systemd-logind[1497]: New session 7 of user core. Apr 30 12:54:26.177775 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 12:54:26.683805 sudo[1906]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 12:54:26.684110 sudo[1906]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 12:54:26.928814 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 12:54:26.928929 (dockerd)[1924]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 12:54:27.171589 dockerd[1924]: time="2025-04-30T12:54:27.171509622Z" level=info msg="Starting up" Apr 30 12:54:27.262913 dockerd[1924]: time="2025-04-30T12:54:27.262842370Z" level=info msg="Loading containers: start." Apr 30 12:54:27.414646 kernel: Initializing XFRM netlink socket Apr 30 12:54:27.509394 systemd-networkd[1428]: docker0: Link UP Apr 30 12:54:27.539625 dockerd[1924]: time="2025-04-30T12:54:27.539538527Z" level=info msg="Loading containers: done." Apr 30 12:54:27.557042 dockerd[1924]: time="2025-04-30T12:54:27.556985614Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 12:54:27.557225 dockerd[1924]: time="2025-04-30T12:54:27.557089639Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Apr 30 12:54:27.557225 dockerd[1924]: time="2025-04-30T12:54:27.557182032Z" level=info msg="Daemon has completed initialization" Apr 30 12:54:27.583779 dockerd[1924]: time="2025-04-30T12:54:27.583689419Z" level=info msg="API listen on /run/docker.sock" Apr 30 12:54:27.584094 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 12:54:28.674540 containerd[1515]: time="2025-04-30T12:54:28.674489881Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" Apr 30 12:54:29.313464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount787633451.mount: Deactivated successfully. 
Apr 30 12:54:31.086391 containerd[1515]: time="2025-04-30T12:54:31.086283217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:54:31.087937 containerd[1515]: time="2025-04-30T12:54:31.087875193Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27961081" Apr 30 12:54:31.088690 containerd[1515]: time="2025-04-30T12:54:31.088590663Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:54:31.092303 containerd[1515]: time="2025-04-30T12:54:31.092203880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:54:31.094121 containerd[1515]: time="2025-04-30T12:54:31.093766570Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 2.419235362s" Apr 30 12:54:31.094121 containerd[1515]: time="2025-04-30T12:54:31.093818236Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" Apr 30 12:54:31.095642 containerd[1515]: time="2025-04-30T12:54:31.095573729Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" Apr 30 12:54:32.729182 containerd[1515]: time="2025-04-30T12:54:32.729125002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:54:32.730745 containerd[1515]: time="2025-04-30T12:54:32.730702991Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713798" Apr 30 12:54:32.732182 containerd[1515]: time="2025-04-30T12:54:32.732145227Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:54:32.734715 containerd[1515]: time="2025-04-30T12:54:32.734647138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:54:32.735460 containerd[1515]: time="2025-04-30T12:54:32.735434093Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 1.639790333s" Apr 30 12:54:32.735498 containerd[1515]: time="2025-04-30T12:54:32.735463098Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" Apr 30 12:54:32.735941 
containerd[1515]: time="2025-04-30T12:54:32.735821861Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" Apr 30 12:54:33.968643 containerd[1515]: time="2025-04-30T12:54:33.968566227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:54:33.969512 containerd[1515]: time="2025-04-30T12:54:33.969465684Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780408" Apr 30 12:54:33.970356 containerd[1515]: time="2025-04-30T12:54:33.970285541Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:54:33.973305 containerd[1515]: time="2025-04-30T12:54:33.973268645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:54:33.974478 containerd[1515]: time="2025-04-30T12:54:33.974327290Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.238482327s" Apr 30 12:54:33.974478 containerd[1515]: time="2025-04-30T12:54:33.974369209Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" Apr 30 12:54:33.975139 containerd[1515]: time="2025-04-30T12:54:33.974978442Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" Apr 30 12:54:34.241654 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Apr 30 12:54:34.249851 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:54:34.339623 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:54:34.348886 (kubelet)[2179]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:54:34.389516 kubelet[2179]: E0430 12:54:34.389460 2179 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:54:34.391815 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:54:34.391944 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:54:34.392212 systemd[1]: kubelet.service: Consumed 125ms CPU time, 95.7M memory peak. Apr 30 12:54:34.902338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1886762188.mount: Deactivated successfully. 
Apr 30 12:54:35.183072 containerd[1515]: time="2025-04-30T12:54:35.182921038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:54:35.184314 containerd[1515]: time="2025-04-30T12:54:35.184260469Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354653" Apr 30 12:54:35.185423 containerd[1515]: time="2025-04-30T12:54:35.185373787Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:54:35.187337 containerd[1515]: time="2025-04-30T12:54:35.187309718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:54:35.188332 containerd[1515]: time="2025-04-30T12:54:35.187894826Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 1.212877882s" Apr 30 12:54:35.188332 containerd[1515]: time="2025-04-30T12:54:35.187930813Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" Apr 30 12:54:35.188632 containerd[1515]: time="2025-04-30T12:54:35.188610848Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 30 12:54:35.676371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount173603761.mount: Deactivated successfully. 
Apr 30 12:54:36.355692 containerd[1515]: time="2025-04-30T12:54:36.355629154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:54:36.357079 containerd[1515]: time="2025-04-30T12:54:36.357030400Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185843" Apr 30 12:54:36.357712 containerd[1515]: time="2025-04-30T12:54:36.357444237Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:54:36.361042 containerd[1515]: time="2025-04-30T12:54:36.361007870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:54:36.362567 containerd[1515]: time="2025-04-30T12:54:36.362417253Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.173717879s" Apr 30 12:54:36.362567 containerd[1515]: time="2025-04-30T12:54:36.362458771Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Apr 30 12:54:36.363340 containerd[1515]: time="2025-04-30T12:54:36.363308294Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 30 12:54:36.825036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1326764583.mount: Deactivated successfully. 
Apr 30 12:54:36.831021 containerd[1515]: time="2025-04-30T12:54:36.830966459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:54:36.831954 containerd[1515]: time="2025-04-30T12:54:36.831901924Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Apr 30 12:54:36.832727 containerd[1515]: time="2025-04-30T12:54:36.832665937Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:54:36.835251 containerd[1515]: time="2025-04-30T12:54:36.835204208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:54:36.835874 containerd[1515]: time="2025-04-30T12:54:36.835843077Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 472.503784ms" Apr 30 12:54:36.835874 containerd[1515]: time="2025-04-30T12:54:36.835873243Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 30 12:54:36.836675 containerd[1515]: time="2025-04-30T12:54:36.836249699Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Apr 30 12:54:37.358310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1864194911.mount: Deactivated successfully. Apr 30 12:54:38.688499 containerd[1515]: time="2025-04-30T12:54:38.688435136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:54:38.689767 containerd[1515]: time="2025-04-30T12:54:38.689735029Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780083" Apr 30 12:54:38.690369 containerd[1515]: time="2025-04-30T12:54:38.690314898Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:54:38.693727 containerd[1515]: time="2025-04-30T12:54:38.693681785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:54:38.695321 containerd[1515]: time="2025-04-30T12:54:38.695193365Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 1.858920242s" Apr 30 12:54:38.695321 containerd[1515]: time="2025-04-30T12:54:38.695224373Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Apr 30 12:54:41.035680 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
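For scale, the pull records above carry both a byte count and a wall-clock duration per image: for example 27957787 bytes for kube-apiserver:v1.31.8 in about 2.42 s (roughly 11 MiB/s) and 56909194 bytes for etcd:3.5.15-0 in about 1.86 s (roughly 29 MiB/s). The sketch below is an illustration, not part of the log: it extracts those figures from containerd journal output and prints a rate per image; the pipeline `journalctl -u containerd | python3 pull_rates.py` and the script name are assumptions.

```python
#!/usr/bin/env python3
# Illustrative sketch only: compute per-image pull rates from containerd's
# 'Pulled image ... size "<bytes>" in <duration>' records, e.g.
#   journalctl -u containerd --no-pager | python3 pull_rates.py
import re
import sys

# As in the excerpt above, quotes inside msg="..." appear backslash-escaped,
# so the pattern matches \"...\" around the image reference and the size.
PULLED = re.compile(
    r'Pulled image \\"(?P<ref>[^\\"]+)\\".*?'
    r'size \\"(?P<size>\d+)\\" in (?P<value>[\d.]+)(?P<unit>ms|s)'
)

def main() -> None:
    text = sys.stdin.read()
    for m in PULLED.finditer(text):
        seconds = float(m.group("value")) / (1000.0 if m.group("unit") == "ms" else 1.0)
        rate = int(m.group("size")) / seconds / (1024 * 1024)
        print(f'{m.group("ref"):55s} {int(m.group("size")):>10} B  '
              f'{seconds:8.3f} s  {rate:6.1f} MiB/s')

if __name__ == "__main__":
    main()
```

Applied to the figures in the excerpt, the larger registry.k8s.io images land in roughly the 11-29 MiB/s range, while the tiny pause image (320368 bytes in about 0.47 s) comes out well under 1 MiB/s, i.e. dominated by request latency rather than bandwidth.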
Apr 30 12:54:41.036009 systemd[1]: kubelet.service: Consumed 125ms CPU time, 95.7M memory peak. Apr 30 12:54:41.046999 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:54:41.081973 systemd[1]: Reload requested from client PID 2324 ('systemctl') (unit session-7.scope)... Apr 30 12:54:41.081996 systemd[1]: Reloading... Apr 30 12:54:41.188636 zram_generator::config[2381]: No configuration found. Apr 30 12:54:41.306249 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:54:41.462306 systemd[1]: Reloading finished in 379 ms. Apr 30 12:54:41.525749 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:54:41.530282 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:54:41.533561 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 12:54:41.533861 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:54:41.533915 systemd[1]: kubelet.service: Consumed 67ms CPU time, 83.2M memory peak. Apr 30 12:54:41.540079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:54:41.643992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:54:41.648023 (kubelet)[2425]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 12:54:41.694078 kubelet[2425]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 12:54:41.694078 kubelet[2425]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 12:54:41.694078 kubelet[2425]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 12:54:41.695418 kubelet[2425]: I0430 12:54:41.695359 2425 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 12:54:41.878005 kubelet[2425]: I0430 12:54:41.877943 2425 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Apr 30 12:54:41.878005 kubelet[2425]: I0430 12:54:41.877986 2425 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 12:54:41.878364 kubelet[2425]: I0430 12:54:41.878328 2425 server.go:929] "Client rotation is on, will bootstrap in background" Apr 30 12:54:41.909643 kubelet[2425]: I0430 12:54:41.909293 2425 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 12:54:41.909643 kubelet[2425]: E0430 12:54:41.909521 2425 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://37.27.3.216:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 37.27.3.216:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:54:41.921007 kubelet[2425]: E0430 12:54:41.920935 2425 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 12:54:41.921007 kubelet[2425]: I0430 12:54:41.920977 2425 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 12:54:41.927912 kubelet[2425]: I0430 12:54:41.927880 2425 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 12:54:41.929136 kubelet[2425]: I0430 12:54:41.929101 2425 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Apr 30 12:54:41.929263 kubelet[2425]: I0430 12:54:41.929218 2425 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 12:54:41.929421 kubelet[2425]: I0430 12:54:41.929252 2425 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-1-1-d-a2f51ba0c1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 12:54:41.929421 kubelet[2425]: I0430 12:54:41.929419 2425 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 12:54:41.929527 kubelet[2425]: I0430 12:54:41.929428 2425 container_manager_linux.go:300] "Creating device plugin manager" Apr 30 12:54:41.929548 kubelet[2425]: I0430 12:54:41.929527 2425 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:54:41.931460 kubelet[2425]: I0430 12:54:41.931271 2425 kubelet.go:408] "Attempting to sync node with API server" Apr 30 12:54:41.931460 kubelet[2425]: I0430 12:54:41.931292 2425 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 12:54:41.931460 kubelet[2425]: I0430 12:54:41.931320 2425 kubelet.go:314] "Adding apiserver pod source" Apr 30 12:54:41.931460 kubelet[2425]: I0430 12:54:41.931332 2425 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 12:54:41.936750 kubelet[2425]: W0430 12:54:41.936692 2425 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://37.27.3.216:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-d-a2f51ba0c1&limit=500&resourceVersion=0": dial tcp 37.27.3.216:6443: connect: connection refused Apr 30 12:54:41.936815 kubelet[2425]: E0430 12:54:41.936774 2425 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://37.27.3.216:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-d-a2f51ba0c1&limit=500&resourceVersion=0\": dial tcp 37.27.3.216:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:54:41.937973 kubelet[2425]: W0430 12:54:41.937883 2425 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://37.27.3.216:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 37.27.3.216:6443: connect: connection refused Apr 30 12:54:41.937973 kubelet[2425]: E0430 12:54:41.937933 2425 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://37.27.3.216:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 37.27.3.216:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:54:41.938121 kubelet[2425]: I0430 12:54:41.938109 2425 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 12:54:41.939864 kubelet[2425]: I0430 12:54:41.939840 2425 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 12:54:41.943345 kubelet[2425]: W0430 12:54:41.943292 2425 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 30 12:54:41.944571 kubelet[2425]: I0430 12:54:41.944457 2425 server.go:1269] "Started kubelet" Apr 30 12:54:41.946507 kubelet[2425]: I0430 12:54:41.946012 2425 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 12:54:41.946570 kubelet[2425]: I0430 12:54:41.946491 2425 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 12:54:41.950831 kubelet[2425]: I0430 12:54:41.950710 2425 server.go:460] "Adding debug handlers to kubelet server" Apr 30 12:54:41.951237 kubelet[2425]: I0430 12:54:41.951192 2425 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 12:54:41.951583 kubelet[2425]: I0430 12:54:41.951549 2425 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 12:54:41.954652 kubelet[2425]: I0430 12:54:41.953462 2425 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 12:54:41.954652 kubelet[2425]: I0430 12:54:41.953866 2425 volume_manager.go:289] "Starting Kubelet Volume Manager" Apr 30 12:54:41.954652 kubelet[2425]: E0430 12:54:41.954066 2425 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-1-1-d-a2f51ba0c1\" not found" Apr 30 12:54:41.954652 kubelet[2425]: I0430 12:54:41.954103 2425 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 30 12:54:41.954652 kubelet[2425]: I0430 12:54:41.954579 2425 reconciler.go:26] "Reconciler: start to sync state" Apr 30 12:54:41.955680 kubelet[2425]: W0430 12:54:41.955638 2425 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://37.27.3.216:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 37.27.3.216:6443: connect: connection refused Apr 30 12:54:41.955752 kubelet[2425]: E0430 12:54:41.955685 2425 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://37.27.3.216:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 37.27.3.216:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:54:41.956014 kubelet[2425]: I0430 12:54:41.955825 2425 factory.go:221] Registration of the systemd container factory successfully Apr 30 12:54:41.956014 kubelet[2425]: I0430 12:54:41.955897 2425 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 12:54:41.956066 kubelet[2425]: E0430 12:54:41.956024 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.3.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-d-a2f51ba0c1?timeout=10s\": dial tcp 37.27.3.216:6443: connect: connection refused" interval="200ms" Apr 30 12:54:41.965750 kubelet[2425]: E0430 12:54:41.963821 2425 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://37.27.3.216:6443/api/v1/namespaces/default/events\": dial tcp 37.27.3.216:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-1-1-d-a2f51ba0c1.183b19d4b7d9399f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-1-1-d-a2f51ba0c1,UID:ci-4230-1-1-d-a2f51ba0c1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-1-1-d-a2f51ba0c1,},FirstTimestamp:2025-04-30 12:54:41.944426911 +0000 UTC m=+0.292834002,LastTimestamp:2025-04-30 12:54:41.944426911 +0000 UTC m=+0.292834002,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-1-1-d-a2f51ba0c1,}" Apr 30 12:54:41.966669 kubelet[2425]: E0430 12:54:41.966060 2425 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 12:54:41.966669 kubelet[2425]: I0430 12:54:41.966174 2425 factory.go:221] Registration of the containerd container factory successfully Apr 30 12:54:41.972844 kubelet[2425]: I0430 12:54:41.972811 2425 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 12:54:41.973798 kubelet[2425]: I0430 12:54:41.973783 2425 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 12:54:41.973870 kubelet[2425]: I0430 12:54:41.973862 2425 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 12:54:41.973920 kubelet[2425]: I0430 12:54:41.973914 2425 kubelet.go:2321] "Starting kubelet main sync loop" Apr 30 12:54:41.973998 kubelet[2425]: E0430 12:54:41.973983 2425 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 12:54:41.979765 kubelet[2425]: W0430 12:54:41.979724 2425 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://37.27.3.216:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 37.27.3.216:6443: connect: connection refused Apr 30 12:54:41.979925 kubelet[2425]: E0430 12:54:41.979869 2425 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://37.27.3.216:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 37.27.3.216:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:54:41.988506 kubelet[2425]: I0430 12:54:41.988471 2425 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 12:54:41.988506 kubelet[2425]: I0430 12:54:41.988504 2425 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 12:54:41.988769 kubelet[2425]: I0430 12:54:41.988533 2425 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:54:41.990871 kubelet[2425]: I0430 12:54:41.990845 2425 policy_none.go:49] "None policy: Start" Apr 30 12:54:41.991658 kubelet[2425]: I0430 12:54:41.991628 2425 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 12:54:41.991658 kubelet[2425]: I0430 12:54:41.991656 2425 state_mem.go:35] "Initializing new in-memory state store" Apr 30 12:54:41.999021 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 12:54:42.008919 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 12:54:42.013229 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 30 12:54:42.023172 kubelet[2425]: I0430 12:54:42.023117 2425 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 12:54:42.023528 kubelet[2425]: I0430 12:54:42.023394 2425 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 12:54:42.023528 kubelet[2425]: I0430 12:54:42.023438 2425 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 12:54:42.024672 kubelet[2425]: I0430 12:54:42.024056 2425 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 12:54:42.026304 kubelet[2425]: E0430 12:54:42.026255 2425 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-1-1-d-a2f51ba0c1\" not found" Apr 30 12:54:42.088192 systemd[1]: Created slice kubepods-burstable-podda147e7910478ae553dd2f74786f8e88.slice - libcontainer container kubepods-burstable-podda147e7910478ae553dd2f74786f8e88.slice. Apr 30 12:54:42.103028 systemd[1]: Created slice kubepods-burstable-pode10bc9fce0da89874bc9dae140b5ad14.slice - libcontainer container kubepods-burstable-pode10bc9fce0da89874bc9dae140b5ad14.slice. 
Apr 30 12:54:42.108199 systemd[1]: Created slice kubepods-burstable-pod481a98559bfdaef02332fc3f3ab35f2d.slice - libcontainer container kubepods-burstable-pod481a98559bfdaef02332fc3f3ab35f2d.slice. Apr 30 12:54:42.125895 kubelet[2425]: I0430 12:54:42.125836 2425 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:42.126457 kubelet[2425]: E0430 12:54:42.126383 2425 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://37.27.3.216:6443/api/v1/nodes\": dial tcp 37.27.3.216:6443: connect: connection refused" node="ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:42.156758 kubelet[2425]: E0430 12:54:42.156699 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.3.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-d-a2f51ba0c1?timeout=10s\": dial tcp 37.27.3.216:6443: connect: connection refused" interval="400ms" Apr 30 12:54:42.256047 kubelet[2425]: I0430 12:54:42.255970 2425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/da147e7910478ae553dd2f74786f8e88-ca-certs\") pod \"kube-apiserver-ci-4230-1-1-d-a2f51ba0c1\" (UID: \"da147e7910478ae553dd2f74786f8e88\") " pod="kube-system/kube-apiserver-ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:42.256047 kubelet[2425]: I0430 12:54:42.256025 2425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e10bc9fce0da89874bc9dae140b5ad14-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-1-1-d-a2f51ba0c1\" (UID: \"e10bc9fce0da89874bc9dae140b5ad14\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:42.256047 kubelet[2425]: I0430 12:54:42.256043 2425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e10bc9fce0da89874bc9dae140b5ad14-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-1-d-a2f51ba0c1\" (UID: \"e10bc9fce0da89874bc9dae140b5ad14\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:42.256246 kubelet[2425]: I0430 12:54:42.256062 2425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e10bc9fce0da89874bc9dae140b5ad14-ca-certs\") pod \"kube-controller-manager-ci-4230-1-1-d-a2f51ba0c1\" (UID: \"e10bc9fce0da89874bc9dae140b5ad14\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:42.256246 kubelet[2425]: I0430 12:54:42.256098 2425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e10bc9fce0da89874bc9dae140b5ad14-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-1-d-a2f51ba0c1\" (UID: \"e10bc9fce0da89874bc9dae140b5ad14\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:42.256246 kubelet[2425]: I0430 12:54:42.256119 2425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e10bc9fce0da89874bc9dae140b5ad14-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-1-1-d-a2f51ba0c1\" (UID: \"e10bc9fce0da89874bc9dae140b5ad14\") " 
pod="kube-system/kube-controller-manager-ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:42.256246 kubelet[2425]: I0430 12:54:42.256136 2425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/481a98559bfdaef02332fc3f3ab35f2d-kubeconfig\") pod \"kube-scheduler-ci-4230-1-1-d-a2f51ba0c1\" (UID: \"481a98559bfdaef02332fc3f3ab35f2d\") " pod="kube-system/kube-scheduler-ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:42.256246 kubelet[2425]: I0430 12:54:42.256150 2425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da147e7910478ae553dd2f74786f8e88-k8s-certs\") pod \"kube-apiserver-ci-4230-1-1-d-a2f51ba0c1\" (UID: \"da147e7910478ae553dd2f74786f8e88\") " pod="kube-system/kube-apiserver-ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:42.256339 kubelet[2425]: I0430 12:54:42.256164 2425 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/da147e7910478ae553dd2f74786f8e88-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-1-d-a2f51ba0c1\" (UID: \"da147e7910478ae553dd2f74786f8e88\") " pod="kube-system/kube-apiserver-ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:42.329370 kubelet[2425]: I0430 12:54:42.329271 2425 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:42.329717 kubelet[2425]: E0430 12:54:42.329674 2425 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://37.27.3.216:6443/api/v1/nodes\": dial tcp 37.27.3.216:6443: connect: connection refused" node="ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:42.398636 containerd[1515]: time="2025-04-30T12:54:42.398559890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-1-d-a2f51ba0c1,Uid:da147e7910478ae553dd2f74786f8e88,Namespace:kube-system,Attempt:0,}" Apr 30 12:54:42.407260 containerd[1515]: time="2025-04-30T12:54:42.407187738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-1-d-a2f51ba0c1,Uid:e10bc9fce0da89874bc9dae140b5ad14,Namespace:kube-system,Attempt:0,}" Apr 30 12:54:42.412573 containerd[1515]: time="2025-04-30T12:54:42.412524203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-1-d-a2f51ba0c1,Uid:481a98559bfdaef02332fc3f3ab35f2d,Namespace:kube-system,Attempt:0,}" Apr 30 12:54:42.557478 kubelet[2425]: E0430 12:54:42.557254 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.3.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-d-a2f51ba0c1?timeout=10s\": dial tcp 37.27.3.216:6443: connect: connection refused" interval="800ms" Apr 30 12:54:42.731783 kubelet[2425]: I0430 12:54:42.731751 2425 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:42.732154 kubelet[2425]: E0430 12:54:42.732120 2425 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://37.27.3.216:6443/api/v1/nodes\": dial tcp 37.27.3.216:6443: connect: connection refused" node="ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:42.832074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount728938565.mount: Deactivated successfully. 
Apr 30 12:54:42.839929 containerd[1515]: time="2025-04-30T12:54:42.839877872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:54:42.841601 containerd[1515]: time="2025-04-30T12:54:42.841556395Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:54:42.843423 containerd[1515]: time="2025-04-30T12:54:42.843354613Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Apr 30 12:54:42.844111 containerd[1515]: time="2025-04-30T12:54:42.844066130Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 12:54:42.845807 containerd[1515]: time="2025-04-30T12:54:42.845769700Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:54:42.847130 containerd[1515]: time="2025-04-30T12:54:42.846947934Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 12:54:42.847130 containerd[1515]: time="2025-04-30T12:54:42.847062569Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:54:42.848211 containerd[1515]: time="2025-04-30T12:54:42.848140384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:54:42.850701 containerd[1515]: time="2025-04-30T12:54:42.849490811Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 450.81319ms" Apr 30 12:54:42.851510 containerd[1515]: time="2025-04-30T12:54:42.851396571Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 438.749788ms" Apr 30 12:54:42.854491 containerd[1515]: time="2025-04-30T12:54:42.854459555Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 447.172921ms" Apr 30 12:54:42.962127 containerd[1515]: time="2025-04-30T12:54:42.960240831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:54:42.962127 containerd[1515]: time="2025-04-30T12:54:42.961917901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:54:42.962127 containerd[1515]: time="2025-04-30T12:54:42.961929613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:54:42.962127 containerd[1515]: time="2025-04-30T12:54:42.962000096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:54:42.965378 containerd[1515]: time="2025-04-30T12:54:42.965036200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:54:42.965378 containerd[1515]: time="2025-04-30T12:54:42.965108696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:54:42.965378 containerd[1515]: time="2025-04-30T12:54:42.965127942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:54:42.965378 containerd[1515]: time="2025-04-30T12:54:42.965210456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:54:42.972703 containerd[1515]: time="2025-04-30T12:54:42.972465285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:54:42.972703 containerd[1515]: time="2025-04-30T12:54:42.972659460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:54:42.972858 containerd[1515]: time="2025-04-30T12:54:42.972726305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:54:42.975736 containerd[1515]: time="2025-04-30T12:54:42.973527511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:54:42.987168 systemd[1]: Started cri-containerd-2d5369b59997d14f4c20b5abe207916c5925c4d9bfc04a078a04e6dab041a4e2.scope - libcontainer container 2d5369b59997d14f4c20b5abe207916c5925c4d9bfc04a078a04e6dab041a4e2. Apr 30 12:54:42.997109 systemd[1]: Started cri-containerd-b949f34c44b0f52950c7e615308fe31c2b38a5c0a8eda9c06120bf0cf497b414.scope - libcontainer container b949f34c44b0f52950c7e615308fe31c2b38a5c0a8eda9c06120bf0cf497b414. Apr 30 12:54:43.001721 systemd[1]: Started cri-containerd-9a2b775bf475b498c6f5b59611a035f9f83a5d6c49e714b5e428642dfd4ed5ff.scope - libcontainer container 9a2b775bf475b498c6f5b59611a035f9f83a5d6c49e714b5e428642dfd4ed5ff. 
Apr 30 12:54:43.065547 containerd[1515]: time="2025-04-30T12:54:43.065506163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-1-d-a2f51ba0c1,Uid:e10bc9fce0da89874bc9dae140b5ad14,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d5369b59997d14f4c20b5abe207916c5925c4d9bfc04a078a04e6dab041a4e2\"" Apr 30 12:54:43.070727 containerd[1515]: time="2025-04-30T12:54:43.070629045Z" level=info msg="CreateContainer within sandbox \"2d5369b59997d14f4c20b5abe207916c5925c4d9bfc04a078a04e6dab041a4e2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 12:54:43.074807 containerd[1515]: time="2025-04-30T12:54:43.074666179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-1-d-a2f51ba0c1,Uid:481a98559bfdaef02332fc3f3ab35f2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a2b775bf475b498c6f5b59611a035f9f83a5d6c49e714b5e428642dfd4ed5ff\"" Apr 30 12:54:43.078139 containerd[1515]: time="2025-04-30T12:54:43.078110869Z" level=info msg="CreateContainer within sandbox \"9a2b775bf475b498c6f5b59611a035f9f83a5d6c49e714b5e428642dfd4ed5ff\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 12:54:43.079103 containerd[1515]: time="2025-04-30T12:54:43.078930881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-1-d-a2f51ba0c1,Uid:da147e7910478ae553dd2f74786f8e88,Namespace:kube-system,Attempt:0,} returns sandbox id \"b949f34c44b0f52950c7e615308fe31c2b38a5c0a8eda9c06120bf0cf497b414\"" Apr 30 12:54:43.081313 containerd[1515]: time="2025-04-30T12:54:43.081285614Z" level=info msg="CreateContainer within sandbox \"b949f34c44b0f52950c7e615308fe31c2b38a5c0a8eda9c06120bf0cf497b414\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 12:54:43.097460 containerd[1515]: time="2025-04-30T12:54:43.097283826Z" level=info msg="CreateContainer within sandbox \"2d5369b59997d14f4c20b5abe207916c5925c4d9bfc04a078a04e6dab041a4e2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"86e4a9ce2b11d1091dfc25f5a4ce5fb949e9d7702663a61ca462c8ed7415fd7c\"" Apr 30 12:54:43.098925 containerd[1515]: time="2025-04-30T12:54:43.098648740Z" level=info msg="StartContainer for \"86e4a9ce2b11d1091dfc25f5a4ce5fb949e9d7702663a61ca462c8ed7415fd7c\"" Apr 30 12:54:43.104352 containerd[1515]: time="2025-04-30T12:54:43.104318099Z" level=info msg="CreateContainer within sandbox \"9a2b775bf475b498c6f5b59611a035f9f83a5d6c49e714b5e428642dfd4ed5ff\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9b61a0b31562b82db38304698e4082b0977c3af41b78abc2c7f8375d5b1f9296\"" Apr 30 12:54:43.105393 containerd[1515]: time="2025-04-30T12:54:43.105369905Z" level=info msg="StartContainer for \"9b61a0b31562b82db38304698e4082b0977c3af41b78abc2c7f8375d5b1f9296\"" Apr 30 12:54:43.105817 containerd[1515]: time="2025-04-30T12:54:43.105792440Z" level=info msg="CreateContainer within sandbox \"b949f34c44b0f52950c7e615308fe31c2b38a5c0a8eda9c06120bf0cf497b414\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"727f522169ebab9ad0f57717aff5d2670eec970f49753c40b0068ec6ef60abe1\"" Apr 30 12:54:43.107543 containerd[1515]: time="2025-04-30T12:54:43.107206817Z" level=info msg="StartContainer for \"727f522169ebab9ad0f57717aff5d2670eec970f49753c40b0068ec6ef60abe1\"" Apr 30 12:54:43.138889 systemd[1]: Started cri-containerd-86e4a9ce2b11d1091dfc25f5a4ce5fb949e9d7702663a61ca462c8ed7415fd7c.scope - libcontainer container 
86e4a9ce2b11d1091dfc25f5a4ce5fb949e9d7702663a61ca462c8ed7415fd7c. Apr 30 12:54:43.148774 systemd[1]: Started cri-containerd-727f522169ebab9ad0f57717aff5d2670eec970f49753c40b0068ec6ef60abe1.scope - libcontainer container 727f522169ebab9ad0f57717aff5d2670eec970f49753c40b0068ec6ef60abe1. Apr 30 12:54:43.156786 systemd[1]: Started cri-containerd-9b61a0b31562b82db38304698e4082b0977c3af41b78abc2c7f8375d5b1f9296.scope - libcontainer container 9b61a0b31562b82db38304698e4082b0977c3af41b78abc2c7f8375d5b1f9296. Apr 30 12:54:43.218626 containerd[1515]: time="2025-04-30T12:54:43.216880335Z" level=info msg="StartContainer for \"86e4a9ce2b11d1091dfc25f5a4ce5fb949e9d7702663a61ca462c8ed7415fd7c\" returns successfully" Apr 30 12:54:43.218773 kubelet[2425]: W0430 12:54:43.217090 2425 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://37.27.3.216:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-d-a2f51ba0c1&limit=500&resourceVersion=0": dial tcp 37.27.3.216:6443: connect: connection refused Apr 30 12:54:43.218773 kubelet[2425]: E0430 12:54:43.217176 2425 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://37.27.3.216:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-d-a2f51ba0c1&limit=500&resourceVersion=0\": dial tcp 37.27.3.216:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:54:43.241047 containerd[1515]: time="2025-04-30T12:54:43.241008640Z" level=info msg="StartContainer for \"727f522169ebab9ad0f57717aff5d2670eec970f49753c40b0068ec6ef60abe1\" returns successfully" Apr 30 12:54:43.252044 containerd[1515]: time="2025-04-30T12:54:43.251997612Z" level=info msg="StartContainer for \"9b61a0b31562b82db38304698e4082b0977c3af41b78abc2c7f8375d5b1f9296\" returns successfully" Apr 30 12:54:43.355662 kubelet[2425]: W0430 12:54:43.353497 2425 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://37.27.3.216:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 37.27.3.216:6443: connect: connection refused Apr 30 12:54:43.355662 kubelet[2425]: E0430 12:54:43.353586 2425 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://37.27.3.216:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 37.27.3.216:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:54:43.357990 kubelet[2425]: E0430 12:54:43.357960 2425 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.3.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-d-a2f51ba0c1?timeout=10s\": dial tcp 37.27.3.216:6443: connect: connection refused" interval="1.6s" Apr 30 12:54:43.424850 kubelet[2425]: W0430 12:54:43.424777 2425 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://37.27.3.216:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 37.27.3.216:6443: connect: connection refused Apr 30 12:54:43.424971 kubelet[2425]: E0430 12:54:43.424866 2425 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://37.27.3.216:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
37.27.3.216:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:54:43.534120 kubelet[2425]: I0430 12:54:43.534091 2425 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:44.703526 kubelet[2425]: I0430 12:54:44.703489 2425 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:44.703526 kubelet[2425]: E0430 12:54:44.703521 2425 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4230-1-1-d-a2f51ba0c1\": node \"ci-4230-1-1-d-a2f51ba0c1\" not found" Apr 30 12:54:44.713248 kubelet[2425]: E0430 12:54:44.713222 2425 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-1-1-d-a2f51ba0c1\" not found" Apr 30 12:54:44.814184 kubelet[2425]: E0430 12:54:44.814111 2425 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-1-1-d-a2f51ba0c1\" not found" Apr 30 12:54:44.915189 kubelet[2425]: E0430 12:54:44.915132 2425 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-1-1-d-a2f51ba0c1\" not found" Apr 30 12:54:45.016384 kubelet[2425]: E0430 12:54:45.016298 2425 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-1-1-d-a2f51ba0c1\" not found" Apr 30 12:54:45.117321 kubelet[2425]: E0430 12:54:45.117260 2425 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-1-1-d-a2f51ba0c1\" not found" Apr 30 12:54:45.218212 kubelet[2425]: E0430 12:54:45.218157 2425 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-1-1-d-a2f51ba0c1\" not found" Apr 30 12:54:45.319066 kubelet[2425]: E0430 12:54:45.318928 2425 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-1-1-d-a2f51ba0c1\" not found" Apr 30 12:54:45.939943 kubelet[2425]: I0430 12:54:45.939908 2425 apiserver.go:52] "Watching apiserver" Apr 30 12:54:45.954824 kubelet[2425]: I0430 12:54:45.954747 2425 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 30 12:54:46.598509 systemd[1]: Reload requested from client PID 2698 ('systemctl') (unit session-7.scope)... Apr 30 12:54:46.598532 systemd[1]: Reloading... Apr 30 12:54:46.682757 zram_generator::config[2742]: No configuration found. Apr 30 12:54:46.784996 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:54:46.887093 systemd[1]: Reloading finished in 288 ms. Apr 30 12:54:46.912473 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
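[Editor's note] Until the kube-apiserver container started above is actually serving on 37.27.3.216:6443, the kubelet's reflectors (nodes, services, CSI drivers) and its lease controller keep failing with "connection refused" and retrying; once the API comes up, the node registration attempted at 12:54:43 succeeds at 12:54:44. A rough stand-alone equivalent of the failing node LIST, written against client-go, is sketched below (the kubeconfig path is an illustrative assumption, not taken from the log):

    // Sketch: reproduce the kubelet reflector's node LIST with a field selector.
    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; adjust for the host in question.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{
            FieldSelector: "metadata.name=ci-4230-1-1-d-a2f51ba0c1",
            Limit:         500,
        })
        if err != nil {
            log.Fatal(err) // "connection refused" while the apiserver is still starting
        }
        fmt.Println("nodes:", len(nodes.Items))
    }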
Apr 30 12:54:46.912924 kubelet[2425]: E0430 12:54:46.912364 2425 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4230-1-1-d-a2f51ba0c1.183b19d4b7d9399f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-1-1-d-a2f51ba0c1,UID:ci-4230-1-1-d-a2f51ba0c1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-1-1-d-a2f51ba0c1,},FirstTimestamp:2025-04-30 12:54:41.944426911 +0000 UTC m=+0.292834002,LastTimestamp:2025-04-30 12:54:41.944426911 +0000 UTC m=+0.292834002,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-1-1-d-a2f51ba0c1,}" Apr 30 12:54:46.912924 kubelet[2425]: I0430 12:54:46.912677 2425 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 12:54:46.919870 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 12:54:46.920056 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:54:46.920098 systemd[1]: kubelet.service: Consumed 622ms CPU time, 115M memory peak. Apr 30 12:54:46.927904 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:54:47.031147 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:54:47.036124 (kubelet)[2794]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 12:54:47.077118 kubelet[2794]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 12:54:47.077118 kubelet[2794]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 12:54:47.077118 kubelet[2794]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 12:54:47.077477 kubelet[2794]: I0430 12:54:47.077169 2794 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 12:54:47.086632 kubelet[2794]: I0430 12:54:47.085871 2794 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Apr 30 12:54:47.086632 kubelet[2794]: I0430 12:54:47.085889 2794 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 12:54:47.086632 kubelet[2794]: I0430 12:54:47.086053 2794 server.go:929] "Client rotation is on, will bootstrap in background" Apr 30 12:54:47.087788 kubelet[2794]: I0430 12:54:47.087758 2794 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Apr 30 12:54:47.090726 kubelet[2794]: I0430 12:54:47.090561 2794 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 12:54:47.095144 kubelet[2794]: E0430 12:54:47.095094 2794 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 12:54:47.095388 kubelet[2794]: I0430 12:54:47.095145 2794 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 12:54:47.099655 kubelet[2794]: I0430 12:54:47.099562 2794 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 12:54:47.099731 kubelet[2794]: I0430 12:54:47.099675 2794 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Apr 30 12:54:47.099778 kubelet[2794]: I0430 12:54:47.099762 2794 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 12:54:47.099932 kubelet[2794]: I0430 12:54:47.099779 2794 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-1-1-d-a2f51ba0c1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 12:54:47.099932 kubelet[2794]: I0430 12:54:47.099922 2794 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 12:54:47.099932 kubelet[2794]: I0430 12:54:47.099930 2794 container_manager_linux.go:300] "Creating device plugin manager" Apr 30 12:54:47.101862 kubelet[2794]: I0430 12:54:47.101836 2794 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:54:47.103019 kubelet[2794]: I0430 12:54:47.102992 2794 kubelet.go:408] "Attempting to sync node with API server" Apr 30 12:54:47.103019 kubelet[2794]: I0430 12:54:47.103010 2794 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 12:54:47.103106 kubelet[2794]: I0430 
12:54:47.103033 2794 kubelet.go:314] "Adding apiserver pod source" Apr 30 12:54:47.103106 kubelet[2794]: I0430 12:54:47.103043 2794 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 12:54:47.114811 kubelet[2794]: I0430 12:54:47.114752 2794 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 12:54:47.115099 kubelet[2794]: I0430 12:54:47.115069 2794 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 12:54:47.121014 kubelet[2794]: I0430 12:54:47.120988 2794 server.go:1269] "Started kubelet" Apr 30 12:54:47.122111 kubelet[2794]: I0430 12:54:47.121150 2794 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 12:54:47.122111 kubelet[2794]: I0430 12:54:47.121879 2794 server.go:460] "Adding debug handlers to kubelet server" Apr 30 12:54:47.122507 kubelet[2794]: I0430 12:54:47.122469 2794 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 12:54:47.122680 kubelet[2794]: I0430 12:54:47.122657 2794 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 12:54:47.123160 kubelet[2794]: I0430 12:54:47.123147 2794 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 12:54:47.125334 kubelet[2794]: I0430 12:54:47.125307 2794 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 12:54:47.126725 kubelet[2794]: I0430 12:54:47.126616 2794 volume_manager.go:289] "Starting Kubelet Volume Manager" Apr 30 12:54:47.126926 kubelet[2794]: I0430 12:54:47.126900 2794 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 30 12:54:47.127018 kubelet[2794]: I0430 12:54:47.126998 2794 reconciler.go:26] "Reconciler: start to sync state" Apr 30 12:54:47.128904 kubelet[2794]: I0430 12:54:47.128784 2794 factory.go:221] Registration of the systemd container factory successfully Apr 30 12:54:47.128904 kubelet[2794]: I0430 12:54:47.128900 2794 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 12:54:47.131682 kubelet[2794]: I0430 12:54:47.131664 2794 factory.go:221] Registration of the containerd container factory successfully Apr 30 12:54:47.136841 kubelet[2794]: I0430 12:54:47.136822 2794 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 12:54:47.137741 kubelet[2794]: I0430 12:54:47.137665 2794 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 12:54:47.137840 kubelet[2794]: I0430 12:54:47.137826 2794 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 12:54:47.138185 kubelet[2794]: I0430 12:54:47.138170 2794 kubelet.go:2321] "Starting kubelet main sync loop" Apr 30 12:54:47.138358 kubelet[2794]: E0430 12:54:47.138293 2794 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 12:54:47.175764 kubelet[2794]: I0430 12:54:47.175719 2794 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 12:54:47.175764 kubelet[2794]: I0430 12:54:47.175741 2794 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 12:54:47.175764 kubelet[2794]: I0430 12:54:47.175761 2794 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:54:47.175913 kubelet[2794]: I0430 12:54:47.175890 2794 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 12:54:47.175913 kubelet[2794]: I0430 12:54:47.175899 2794 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 12:54:47.175951 kubelet[2794]: I0430 12:54:47.175915 2794 policy_none.go:49] "None policy: Start" Apr 30 12:54:47.176405 kubelet[2794]: I0430 12:54:47.176393 2794 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 12:54:47.177411 kubelet[2794]: I0430 12:54:47.176544 2794 state_mem.go:35] "Initializing new in-memory state store" Apr 30 12:54:47.177411 kubelet[2794]: I0430 12:54:47.176765 2794 state_mem.go:75] "Updated machine memory state" Apr 30 12:54:47.180626 kubelet[2794]: I0430 12:54:47.180612 2794 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 12:54:47.181264 kubelet[2794]: I0430 12:54:47.181253 2794 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 12:54:47.181367 kubelet[2794]: I0430 12:54:47.181332 2794 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 12:54:47.181854 kubelet[2794]: I0430 12:54:47.181830 2794 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 12:54:47.245113 kubelet[2794]: E0430 12:54:47.245074 2794 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4230-1-1-d-a2f51ba0c1\" already exists" pod="kube-system/kube-scheduler-ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:47.291321 kubelet[2794]: I0430 12:54:47.291277 2794 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:47.298918 kubelet[2794]: I0430 12:54:47.298844 2794 kubelet_node_status.go:111] "Node was previously registered" node="ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:47.298918 kubelet[2794]: I0430 12:54:47.298923 2794 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:47.428998 kubelet[2794]: I0430 12:54:47.428878 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da147e7910478ae553dd2f74786f8e88-k8s-certs\") pod \"kube-apiserver-ci-4230-1-1-d-a2f51ba0c1\" (UID: \"da147e7910478ae553dd2f74786f8e88\") " pod="kube-system/kube-apiserver-ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:47.428998 kubelet[2794]: I0430 12:54:47.428922 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/e10bc9fce0da89874bc9dae140b5ad14-ca-certs\") pod \"kube-controller-manager-ci-4230-1-1-d-a2f51ba0c1\" (UID: \"e10bc9fce0da89874bc9dae140b5ad14\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:47.428998 kubelet[2794]: I0430 12:54:47.428948 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e10bc9fce0da89874bc9dae140b5ad14-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-1-1-d-a2f51ba0c1\" (UID: \"e10bc9fce0da89874bc9dae140b5ad14\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:47.428998 kubelet[2794]: I0430 12:54:47.428973 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e10bc9fce0da89874bc9dae140b5ad14-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-1-1-d-a2f51ba0c1\" (UID: \"e10bc9fce0da89874bc9dae140b5ad14\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:47.428998 kubelet[2794]: I0430 12:54:47.428997 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/481a98559bfdaef02332fc3f3ab35f2d-kubeconfig\") pod \"kube-scheduler-ci-4230-1-1-d-a2f51ba0c1\" (UID: \"481a98559bfdaef02332fc3f3ab35f2d\") " pod="kube-system/kube-scheduler-ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:47.429193 kubelet[2794]: I0430 12:54:47.429019 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/da147e7910478ae553dd2f74786f8e88-ca-certs\") pod \"kube-apiserver-ci-4230-1-1-d-a2f51ba0c1\" (UID: \"da147e7910478ae553dd2f74786f8e88\") " pod="kube-system/kube-apiserver-ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:47.429193 kubelet[2794]: I0430 12:54:47.429047 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/da147e7910478ae553dd2f74786f8e88-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-1-d-a2f51ba0c1\" (UID: \"da147e7910478ae553dd2f74786f8e88\") " pod="kube-system/kube-apiserver-ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:47.429193 kubelet[2794]: I0430 12:54:47.429067 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e10bc9fce0da89874bc9dae140b5ad14-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-1-d-a2f51ba0c1\" (UID: \"e10bc9fce0da89874bc9dae140b5ad14\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:47.429193 kubelet[2794]: I0430 12:54:47.429088 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e10bc9fce0da89874bc9dae140b5ad14-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-1-d-a2f51ba0c1\" (UID: \"e10bc9fce0da89874bc9dae140b5ad14\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-d-a2f51ba0c1" Apr 30 12:54:47.602735 sudo[2828]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 30 12:54:47.603047 sudo[2828]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 30 12:54:48.081261 sudo[2828]: 
pam_unix(sudo:session): session closed for user root Apr 30 12:54:48.104798 kubelet[2794]: I0430 12:54:48.104296 2794 apiserver.go:52] "Watching apiserver" Apr 30 12:54:48.127181 kubelet[2794]: I0430 12:54:48.127134 2794 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 30 12:54:48.141331 kubelet[2794]: I0430 12:54:48.141206 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-1-1-d-a2f51ba0c1" podStartSLOduration=3.141192881 podStartE2EDuration="3.141192881s" podCreationTimestamp="2025-04-30 12:54:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:54:48.141035145 +0000 UTC m=+1.100818875" watchObservedRunningTime="2025-04-30 12:54:48.141192881 +0000 UTC m=+1.100976611" Apr 30 12:54:48.161890 kubelet[2794]: I0430 12:54:48.160016 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-1-1-d-a2f51ba0c1" podStartSLOduration=1.159996516 podStartE2EDuration="1.159996516s" podCreationTimestamp="2025-04-30 12:54:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:54:48.149854348 +0000 UTC m=+1.109638077" watchObservedRunningTime="2025-04-30 12:54:48.159996516 +0000 UTC m=+1.119780246" Apr 30 12:54:48.161890 kubelet[2794]: I0430 12:54:48.161147 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-1-1-d-a2f51ba0c1" podStartSLOduration=1.161135786 podStartE2EDuration="1.161135786s" podCreationTimestamp="2025-04-30 12:54:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:54:48.158660687 +0000 UTC m=+1.118444417" watchObservedRunningTime="2025-04-30 12:54:48.161135786 +0000 UTC m=+1.120919526" Apr 30 12:54:49.318026 sudo[1906]: pam_unix(sudo:session): session closed for user root Apr 30 12:54:49.476168 sshd[1905]: Connection closed by 139.178.68.195 port 49564 Apr 30 12:54:49.477559 sshd-session[1903]: pam_unix(sshd:session): session closed for user core Apr 30 12:54:49.481013 systemd-logind[1497]: Session 7 logged out. Waiting for processes to exit. Apr 30 12:54:49.481187 systemd[1]: sshd@6-37.27.3.216:22-139.178.68.195:49564.service: Deactivated successfully. Apr 30 12:54:49.482751 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 12:54:49.482979 systemd[1]: session-7.scope: Consumed 3.587s CPU time, 213.7M memory peak. Apr 30 12:54:49.484919 systemd-logind[1497]: Removed session 7. Apr 30 12:54:52.051027 kubelet[2794]: I0430 12:54:52.050973 2794 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 12:54:52.051707 containerd[1515]: time="2025-04-30T12:54:52.051651206Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 12:54:52.052168 kubelet[2794]: I0430 12:54:52.052021 2794 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 12:54:53.138925 systemd[1]: Created slice kubepods-besteffort-pod6946cdaf_d1f9_4fdc_901a_72019c91c8d9.slice - libcontainer container kubepods-besteffort-pod6946cdaf_d1f9_4fdc_901a_72019c91c8d9.slice. 
Apr 30 12:54:53.164326 systemd[1]: Created slice kubepods-burstable-pod7f6287a5_e686_43a9_9b3e_d09836a18e00.slice - libcontainer container kubepods-burstable-pod7f6287a5_e686_43a9_9b3e_d09836a18e00.slice. Apr 30 12:54:53.165879 kubelet[2794]: I0430 12:54:53.165450 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-bpf-maps\") pod \"cilium-82wrp\" (UID: \"7f6287a5-e686-43a9-9b3e-d09836a18e00\") " pod="kube-system/cilium-82wrp" Apr 30 12:54:53.165879 kubelet[2794]: I0430 12:54:53.165478 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-lib-modules\") pod \"cilium-82wrp\" (UID: \"7f6287a5-e686-43a9-9b3e-d09836a18e00\") " pod="kube-system/cilium-82wrp" Apr 30 12:54:53.165879 kubelet[2794]: I0430 12:54:53.165494 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-cni-path\") pod \"cilium-82wrp\" (UID: \"7f6287a5-e686-43a9-9b3e-d09836a18e00\") " pod="kube-system/cilium-82wrp" Apr 30 12:54:53.165879 kubelet[2794]: I0430 12:54:53.165524 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6946cdaf-d1f9-4fdc-901a-72019c91c8d9-xtables-lock\") pod \"kube-proxy-nxss7\" (UID: \"6946cdaf-d1f9-4fdc-901a-72019c91c8d9\") " pod="kube-system/kube-proxy-nxss7" Apr 30 12:54:53.165879 kubelet[2794]: I0430 12:54:53.165539 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsdxf\" (UniqueName: \"kubernetes.io/projected/6946cdaf-d1f9-4fdc-901a-72019c91c8d9-kube-api-access-rsdxf\") pod \"kube-proxy-nxss7\" (UID: \"6946cdaf-d1f9-4fdc-901a-72019c91c8d9\") " pod="kube-system/kube-proxy-nxss7" Apr 30 12:54:53.165879 kubelet[2794]: I0430 12:54:53.165552 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-xtables-lock\") pod \"cilium-82wrp\" (UID: \"7f6287a5-e686-43a9-9b3e-d09836a18e00\") " pod="kube-system/cilium-82wrp" Apr 30 12:54:53.166248 kubelet[2794]: I0430 12:54:53.165564 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-host-proc-sys-kernel\") pod \"cilium-82wrp\" (UID: \"7f6287a5-e686-43a9-9b3e-d09836a18e00\") " pod="kube-system/cilium-82wrp" Apr 30 12:54:53.166248 kubelet[2794]: I0430 12:54:53.165576 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-etc-cni-netd\") pod \"cilium-82wrp\" (UID: \"7f6287a5-e686-43a9-9b3e-d09836a18e00\") " pod="kube-system/cilium-82wrp" Apr 30 12:54:53.166248 kubelet[2794]: I0430 12:54:53.165587 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-host-proc-sys-net\") pod \"cilium-82wrp\" (UID: \"7f6287a5-e686-43a9-9b3e-d09836a18e00\") " 
pod="kube-system/cilium-82wrp" Apr 30 12:54:53.166248 kubelet[2794]: I0430 12:54:53.165627 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7f6287a5-e686-43a9-9b3e-d09836a18e00-hubble-tls\") pod \"cilium-82wrp\" (UID: \"7f6287a5-e686-43a9-9b3e-d09836a18e00\") " pod="kube-system/cilium-82wrp" Apr 30 12:54:53.166248 kubelet[2794]: I0430 12:54:53.165641 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f6287a5-e686-43a9-9b3e-d09836a18e00-cilium-config-path\") pod \"cilium-82wrp\" (UID: \"7f6287a5-e686-43a9-9b3e-d09836a18e00\") " pod="kube-system/cilium-82wrp" Apr 30 12:54:53.166338 kubelet[2794]: I0430 12:54:53.165652 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6946cdaf-d1f9-4fdc-901a-72019c91c8d9-lib-modules\") pod \"kube-proxy-nxss7\" (UID: \"6946cdaf-d1f9-4fdc-901a-72019c91c8d9\") " pod="kube-system/kube-proxy-nxss7" Apr 30 12:54:53.166338 kubelet[2794]: I0430 12:54:53.165664 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-cilium-run\") pod \"cilium-82wrp\" (UID: \"7f6287a5-e686-43a9-9b3e-d09836a18e00\") " pod="kube-system/cilium-82wrp" Apr 30 12:54:53.166338 kubelet[2794]: I0430 12:54:53.165675 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-hostproc\") pod \"cilium-82wrp\" (UID: \"7f6287a5-e686-43a9-9b3e-d09836a18e00\") " pod="kube-system/cilium-82wrp" Apr 30 12:54:53.166338 kubelet[2794]: I0430 12:54:53.165685 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-cilium-cgroup\") pod \"cilium-82wrp\" (UID: \"7f6287a5-e686-43a9-9b3e-d09836a18e00\") " pod="kube-system/cilium-82wrp" Apr 30 12:54:53.166338 kubelet[2794]: I0430 12:54:53.165699 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6946cdaf-d1f9-4fdc-901a-72019c91c8d9-kube-proxy\") pod \"kube-proxy-nxss7\" (UID: \"6946cdaf-d1f9-4fdc-901a-72019c91c8d9\") " pod="kube-system/kube-proxy-nxss7" Apr 30 12:54:53.166338 kubelet[2794]: I0430 12:54:53.165709 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7f6287a5-e686-43a9-9b3e-d09836a18e00-clustermesh-secrets\") pod \"cilium-82wrp\" (UID: \"7f6287a5-e686-43a9-9b3e-d09836a18e00\") " pod="kube-system/cilium-82wrp" Apr 30 12:54:53.166439 kubelet[2794]: I0430 12:54:53.165723 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn8xj\" (UniqueName: \"kubernetes.io/projected/7f6287a5-e686-43a9-9b3e-d09836a18e00-kube-api-access-qn8xj\") pod \"cilium-82wrp\" (UID: \"7f6287a5-e686-43a9-9b3e-d09836a18e00\") " pod="kube-system/cilium-82wrp" Apr 30 12:54:53.245740 systemd[1]: Created slice kubepods-besteffort-podd7fd7bd8_6735_4ab8_8c66_fc22c66e77e1.slice - libcontainer container 
kubepods-besteffort-podd7fd7bd8_6735_4ab8_8c66_fc22c66e77e1.slice. Apr 30 12:54:53.266903 kubelet[2794]: I0430 12:54:53.266858 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn4gb\" (UniqueName: \"kubernetes.io/projected/d7fd7bd8-6735-4ab8-8c66-fc22c66e77e1-kube-api-access-gn4gb\") pod \"cilium-operator-5d85765b45-5mq6v\" (UID: \"d7fd7bd8-6735-4ab8-8c66-fc22c66e77e1\") " pod="kube-system/cilium-operator-5d85765b45-5mq6v" Apr 30 12:54:53.267250 kubelet[2794]: I0430 12:54:53.267225 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d7fd7bd8-6735-4ab8-8c66-fc22c66e77e1-cilium-config-path\") pod \"cilium-operator-5d85765b45-5mq6v\" (UID: \"d7fd7bd8-6735-4ab8-8c66-fc22c66e77e1\") " pod="kube-system/cilium-operator-5d85765b45-5mq6v" Apr 30 12:54:53.454204 containerd[1515]: time="2025-04-30T12:54:53.454062526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nxss7,Uid:6946cdaf-d1f9-4fdc-901a-72019c91c8d9,Namespace:kube-system,Attempt:0,}" Apr 30 12:54:53.470756 containerd[1515]: time="2025-04-30T12:54:53.470314343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-82wrp,Uid:7f6287a5-e686-43a9-9b3e-d09836a18e00,Namespace:kube-system,Attempt:0,}" Apr 30 12:54:53.471974 containerd[1515]: time="2025-04-30T12:54:53.471856099Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:54:53.471974 containerd[1515]: time="2025-04-30T12:54:53.471914227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:54:53.471974 containerd[1515]: time="2025-04-30T12:54:53.471932993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:54:53.472415 containerd[1515]: time="2025-04-30T12:54:53.472004607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:54:53.492992 systemd[1]: Started cri-containerd-68d891c007d831c1665ed358e051e25ebc7f75c25ae668842f48637ea1ac31aa.scope - libcontainer container 68d891c007d831c1665ed358e051e25ebc7f75c25ae668842f48637ea1ac31aa. Apr 30 12:54:53.499859 containerd[1515]: time="2025-04-30T12:54:53.498225163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:54:53.499859 containerd[1515]: time="2025-04-30T12:54:53.499645821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:54:53.502179 containerd[1515]: time="2025-04-30T12:54:53.502029537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:54:53.502476 containerd[1515]: time="2025-04-30T12:54:53.502429598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:54:53.525831 systemd[1]: Started cri-containerd-fb7603010103d777d7f83ee89eda4c793cfbfad8907ed81fbaba67cf9ee3ccb5.scope - libcontainer container fb7603010103d777d7f83ee89eda4c793cfbfad8907ed81fbaba67cf9ee3ccb5. 
Apr 30 12:54:53.530231 containerd[1515]: time="2025-04-30T12:54:53.530170149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nxss7,Uid:6946cdaf-d1f9-4fdc-901a-72019c91c8d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"68d891c007d831c1665ed358e051e25ebc7f75c25ae668842f48637ea1ac31aa\"" Apr 30 12:54:53.534477 containerd[1515]: time="2025-04-30T12:54:53.534321394Z" level=info msg="CreateContainer within sandbox \"68d891c007d831c1665ed358e051e25ebc7f75c25ae668842f48637ea1ac31aa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 12:54:53.550286 containerd[1515]: time="2025-04-30T12:54:53.550170695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-5mq6v,Uid:d7fd7bd8-6735-4ab8-8c66-fc22c66e77e1,Namespace:kube-system,Attempt:0,}" Apr 30 12:54:53.553025 containerd[1515]: time="2025-04-30T12:54:53.552902244Z" level=info msg="CreateContainer within sandbox \"68d891c007d831c1665ed358e051e25ebc7f75c25ae668842f48637ea1ac31aa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"25ab604f1417bf7c56760d72d2a61c92ba3188ccfe72b5adbb02808db9790844\"" Apr 30 12:54:53.553649 containerd[1515]: time="2025-04-30T12:54:53.553522709Z" level=info msg="StartContainer for \"25ab604f1417bf7c56760d72d2a61c92ba3188ccfe72b5adbb02808db9790844\"" Apr 30 12:54:53.574762 containerd[1515]: time="2025-04-30T12:54:53.574699605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-82wrp,Uid:7f6287a5-e686-43a9-9b3e-d09836a18e00,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb7603010103d777d7f83ee89eda4c793cfbfad8907ed81fbaba67cf9ee3ccb5\"" Apr 30 12:54:53.577491 containerd[1515]: time="2025-04-30T12:54:53.577455960Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 30 12:54:53.588829 containerd[1515]: time="2025-04-30T12:54:53.588429917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:54:53.588957 containerd[1515]: time="2025-04-30T12:54:53.588836721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:54:53.588957 containerd[1515]: time="2025-04-30T12:54:53.588862369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:54:53.589090 containerd[1515]: time="2025-04-30T12:54:53.589047126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:54:53.590319 systemd[1]: Started cri-containerd-25ab604f1417bf7c56760d72d2a61c92ba3188ccfe72b5adbb02808db9790844.scope - libcontainer container 25ab604f1417bf7c56760d72d2a61c92ba3188ccfe72b5adbb02808db9790844. Apr 30 12:54:53.603878 systemd[1]: Started cri-containerd-e4fed704bc49f9697679da1ced788717de296c8aa453f85fafbd507eed4e2dc1.scope - libcontainer container e4fed704bc49f9697679da1ced788717de296c8aa453f85fafbd507eed4e2dc1. 
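[Editor's note] The PullImage request for quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a... issued here completes roughly ten seconds later (the "Pulled image ... in 10.699446267s" line further below). Roughly the same pull can be reproduced directly against containerd with its Go client; this is a sketch only, assuming the default socket path and the "k8s.io" namespace the CRI plugin uses:

    // Sketch: pull the same Cilium image by digest through the containerd Go client.
    package main

    import (
        "context"
        "fmt"
        "log"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        ref := "quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
        img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name())
    }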
Apr 30 12:54:53.629380 containerd[1515]: time="2025-04-30T12:54:53.629314820Z" level=info msg="StartContainer for \"25ab604f1417bf7c56760d72d2a61c92ba3188ccfe72b5adbb02808db9790844\" returns successfully" Apr 30 12:54:53.650576 containerd[1515]: time="2025-04-30T12:54:53.650536509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-5mq6v,Uid:d7fd7bd8-6735-4ab8-8c66-fc22c66e77e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4fed704bc49f9697679da1ced788717de296c8aa453f85fafbd507eed4e2dc1\"" Apr 30 12:54:54.203407 kubelet[2794]: I0430 12:54:54.203218 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nxss7" podStartSLOduration=1.203192842 podStartE2EDuration="1.203192842s" podCreationTimestamp="2025-04-30 12:54:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:54:54.202649411 +0000 UTC m=+7.162433182" watchObservedRunningTime="2025-04-30 12:54:54.203192842 +0000 UTC m=+7.162976602" Apr 30 12:55:02.797794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3950324866.mount: Deactivated successfully. Apr 30 12:55:04.274001 containerd[1515]: time="2025-04-30T12:55:04.273911604Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:55:04.275579 containerd[1515]: time="2025-04-30T12:55:04.275537064Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 30 12:55:04.275906 containerd[1515]: time="2025-04-30T12:55:04.275868337Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:55:04.277127 containerd[1515]: time="2025-04-30T12:55:04.277101031Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.699446267s" Apr 30 12:55:04.277175 containerd[1515]: time="2025-04-30T12:55:04.277130346Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 30 12:55:04.285779 containerd[1515]: time="2025-04-30T12:55:04.285735540Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 30 12:55:04.302054 containerd[1515]: time="2025-04-30T12:55:04.301999120Z" level=info msg="CreateContainer within sandbox \"fb7603010103d777d7f83ee89eda4c793cfbfad8907ed81fbaba67cf9ee3ccb5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 12:55:04.359265 containerd[1515]: time="2025-04-30T12:55:04.359208515Z" level=info msg="CreateContainer within sandbox \"fb7603010103d777d7f83ee89eda4c793cfbfad8907ed81fbaba67cf9ee3ccb5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"a628c0ac7d637b10c826b0e762ed73f7cdbf6e2ee760a3143dfc88dd4a0f6202\"" Apr 30 12:55:04.360051 containerd[1515]: time="2025-04-30T12:55:04.359670833Z" level=info msg="StartContainer for \"a628c0ac7d637b10c826b0e762ed73f7cdbf6e2ee760a3143dfc88dd4a0f6202\"" Apr 30 12:55:04.444943 systemd[1]: Started cri-containerd-a628c0ac7d637b10c826b0e762ed73f7cdbf6e2ee760a3143dfc88dd4a0f6202.scope - libcontainer container a628c0ac7d637b10c826b0e762ed73f7cdbf6e2ee760a3143dfc88dd4a0f6202. Apr 30 12:55:04.470671 containerd[1515]: time="2025-04-30T12:55:04.470380971Z" level=info msg="StartContainer for \"a628c0ac7d637b10c826b0e762ed73f7cdbf6e2ee760a3143dfc88dd4a0f6202\" returns successfully" Apr 30 12:55:04.478728 systemd[1]: cri-containerd-a628c0ac7d637b10c826b0e762ed73f7cdbf6e2ee760a3143dfc88dd4a0f6202.scope: Deactivated successfully. Apr 30 12:55:04.517150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a628c0ac7d637b10c826b0e762ed73f7cdbf6e2ee760a3143dfc88dd4a0f6202-rootfs.mount: Deactivated successfully. Apr 30 12:55:04.583520 containerd[1515]: time="2025-04-30T12:55:04.561145445Z" level=info msg="shim disconnected" id=a628c0ac7d637b10c826b0e762ed73f7cdbf6e2ee760a3143dfc88dd4a0f6202 namespace=k8s.io Apr 30 12:55:04.583520 containerd[1515]: time="2025-04-30T12:55:04.583426361Z" level=warning msg="cleaning up after shim disconnected" id=a628c0ac7d637b10c826b0e762ed73f7cdbf6e2ee760a3143dfc88dd4a0f6202 namespace=k8s.io Apr 30 12:55:04.583520 containerd[1515]: time="2025-04-30T12:55:04.583442671Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:55:05.219676 containerd[1515]: time="2025-04-30T12:55:05.219521564Z" level=info msg="CreateContainer within sandbox \"fb7603010103d777d7f83ee89eda4c793cfbfad8907ed81fbaba67cf9ee3ccb5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 12:55:05.242907 containerd[1515]: time="2025-04-30T12:55:05.242855676Z" level=info msg="CreateContainer within sandbox \"fb7603010103d777d7f83ee89eda4c793cfbfad8907ed81fbaba67cf9ee3ccb5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8555e1b299d74d47a02508fb10520084c403ede2d5608557d409738eacb0cd8c\"" Apr 30 12:55:05.244584 containerd[1515]: time="2025-04-30T12:55:05.243520114Z" level=info msg="StartContainer for \"8555e1b299d74d47a02508fb10520084c403ede2d5608557d409738eacb0cd8c\"" Apr 30 12:55:05.275748 systemd[1]: Started cri-containerd-8555e1b299d74d47a02508fb10520084c403ede2d5608557d409738eacb0cd8c.scope - libcontainer container 8555e1b299d74d47a02508fb10520084c403ede2d5608557d409738eacb0cd8c. Apr 30 12:55:05.302975 containerd[1515]: time="2025-04-30T12:55:05.302879562Z" level=info msg="StartContainer for \"8555e1b299d74d47a02508fb10520084c403ede2d5608557d409738eacb0cd8c\" returns successfully" Apr 30 12:55:05.314440 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 12:55:05.315997 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 12:55:05.316313 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 30 12:55:05.325894 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 12:55:05.326180 systemd[1]: cri-containerd-8555e1b299d74d47a02508fb10520084c403ede2d5608557d409738eacb0cd8c.scope: Deactivated successfully. 
Apr 30 12:55:05.345083 containerd[1515]: time="2025-04-30T12:55:05.345032601Z" level=info msg="shim disconnected" id=8555e1b299d74d47a02508fb10520084c403ede2d5608557d409738eacb0cd8c namespace=k8s.io Apr 30 12:55:05.345294 containerd[1515]: time="2025-04-30T12:55:05.345278072Z" level=warning msg="cleaning up after shim disconnected" id=8555e1b299d74d47a02508fb10520084c403ede2d5608557d409738eacb0cd8c namespace=k8s.io Apr 30 12:55:05.345371 containerd[1515]: time="2025-04-30T12:55:05.345360466Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:55:05.349266 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 12:55:05.862473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4177854970.mount: Deactivated successfully. Apr 30 12:55:06.138275 containerd[1515]: time="2025-04-30T12:55:06.138120154Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:55:06.138948 containerd[1515]: time="2025-04-30T12:55:06.138908243Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 30 12:55:06.139673 containerd[1515]: time="2025-04-30T12:55:06.139630208Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:55:06.140638 containerd[1515]: time="2025-04-30T12:55:06.140509991Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.854748422s" Apr 30 12:55:06.140638 containerd[1515]: time="2025-04-30T12:55:06.140533765Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 30 12:55:06.153869 containerd[1515]: time="2025-04-30T12:55:06.153833270Z" level=info msg="CreateContainer within sandbox \"e4fed704bc49f9697679da1ced788717de296c8aa453f85fafbd507eed4e2dc1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 30 12:55:06.164959 containerd[1515]: time="2025-04-30T12:55:06.164919280Z" level=info msg="CreateContainer within sandbox \"e4fed704bc49f9697679da1ced788717de296c8aa453f85fafbd507eed4e2dc1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"15ee44f6cc661b68b79e5d9b41adc4c7a19aad0bf202cc526f0b93057499534a\"" Apr 30 12:55:06.166722 containerd[1515]: time="2025-04-30T12:55:06.165307568Z" level=info msg="StartContainer for \"15ee44f6cc661b68b79e5d9b41adc4c7a19aad0bf202cc526f0b93057499534a\"" Apr 30 12:55:06.189806 systemd[1]: Started cri-containerd-15ee44f6cc661b68b79e5d9b41adc4c7a19aad0bf202cc526f0b93057499534a.scope - libcontainer container 15ee44f6cc661b68b79e5d9b41adc4c7a19aad0bf202cc526f0b93057499534a. 
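[Editor's note] The Cilium init containers run one after another: mount-cgroup and apply-sysctl-overwrites above, mount-bpf-fs and clean-cilium-state below, and finally the long-running cilium-agent. Each init container runs to completion, systemd deactivates its cri-containerd-<id>.scope, and containerd reaps the runc shim ("shim disconnected") before the kubelet starts the next one. Waiting on such an exit can be sketched with the containerd task API (the container ID is taken from the log; socket and namespace are the usual defaults, and the lookup fails once the shim has already been cleaned up):

    // Sketch: wait for an init container's task to exit and read its status.
    package main

    import (
        "context"
        "fmt"
        "log"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        // apply-sysctl-overwrites container ID from the log above.
        c, err := client.LoadContainer(ctx, "8555e1b299d74d47a02508fb10520084c403ede2d5608557d409738eacb0cd8c")
        if err != nil {
            log.Fatal(err) // already deleted once containerd has cleaned up the shim
        }
        task, err := c.Task(ctx, nil)
        if err != nil {
            log.Fatal(err)
        }
        statusC, err := task.Wait(ctx)
        if err != nil {
            log.Fatal(err)
        }
        status := <-statusC
        code, exitedAt, err := status.Result()
        fmt.Println("exit code:", code, "exited at:", exitedAt, "err:", err)
    }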
Apr 30 12:55:06.211945 containerd[1515]: time="2025-04-30T12:55:06.210904070Z" level=info msg="StartContainer for \"15ee44f6cc661b68b79e5d9b41adc4c7a19aad0bf202cc526f0b93057499534a\" returns successfully" Apr 30 12:55:06.244368 containerd[1515]: time="2025-04-30T12:55:06.244330005Z" level=info msg="CreateContainer within sandbox \"fb7603010103d777d7f83ee89eda4c793cfbfad8907ed81fbaba67cf9ee3ccb5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 12:55:06.281365 containerd[1515]: time="2025-04-30T12:55:06.281304181Z" level=info msg="CreateContainer within sandbox \"fb7603010103d777d7f83ee89eda4c793cfbfad8907ed81fbaba67cf9ee3ccb5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"426664a920afaaba8fec77c476a552bdf55eedb5d9b0d013238a6177a5f7aa77\"" Apr 30 12:55:06.287648 containerd[1515]: time="2025-04-30T12:55:06.285164938Z" level=info msg="StartContainer for \"426664a920afaaba8fec77c476a552bdf55eedb5d9b0d013238a6177a5f7aa77\"" Apr 30 12:55:06.288171 kubelet[2794]: I0430 12:55:06.288128 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-5mq6v" podStartSLOduration=0.794017474 podStartE2EDuration="13.288110427s" podCreationTimestamp="2025-04-30 12:54:53 +0000 UTC" firstStartedPulling="2025-04-30 12:54:53.651837282 +0000 UTC m=+6.611621012" lastFinishedPulling="2025-04-30 12:55:06.145930235 +0000 UTC m=+19.105713965" observedRunningTime="2025-04-30 12:55:06.284345179 +0000 UTC m=+19.244128909" watchObservedRunningTime="2025-04-30 12:55:06.288110427 +0000 UTC m=+19.247894157" Apr 30 12:55:06.314799 systemd[1]: Started cri-containerd-426664a920afaaba8fec77c476a552bdf55eedb5d9b0d013238a6177a5f7aa77.scope - libcontainer container 426664a920afaaba8fec77c476a552bdf55eedb5d9b0d013238a6177a5f7aa77. Apr 30 12:55:06.357110 containerd[1515]: time="2025-04-30T12:55:06.357057771Z" level=info msg="StartContainer for \"426664a920afaaba8fec77c476a552bdf55eedb5d9b0d013238a6177a5f7aa77\" returns successfully" Apr 30 12:55:06.378195 systemd[1]: cri-containerd-426664a920afaaba8fec77c476a552bdf55eedb5d9b0d013238a6177a5f7aa77.scope: Deactivated successfully. Apr 30 12:55:06.378515 systemd[1]: cri-containerd-426664a920afaaba8fec77c476a552bdf55eedb5d9b0d013238a6177a5f7aa77.scope: Consumed 18ms CPU time, 5.3M memory peak, 1M read from disk. Apr 30 12:55:06.424232 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-426664a920afaaba8fec77c476a552bdf55eedb5d9b0d013238a6177a5f7aa77-rootfs.mount: Deactivated successfully. 
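[Editor's note] The cilium-operator startup-latency line above decodes as follows: podStartE2EDuration is 13.288110427s (podCreationTimestamp 12:54:53 to the observation time 12:55:06.288110427), and subtracting the image-pull window (firstStartedPulling 12:54:53.651837282 to lastFinishedPulling 12:55:06.145930235, i.e. 12.494092953s) gives the reported podStartSLOduration of 0.794017474s. For the control-plane pods earlier, whose pull timestamps are zero, the two durations are equal. A small sketch of the same arithmetic:

    // Sketch: reproduce the podStartSLOduration reported for cilium-operator
    // from the timestamps in the log (SLO duration excludes image pull time).
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2025-04-30 12:54:53 +0000 UTC")           // podCreationTimestamp
        observed := parse("2025-04-30 12:55:06.288110427 +0000 UTC") // watchObservedRunningTime
        firstPull := parse("2025-04-30 12:54:53.651837282 +0000 UTC")
        lastPull := parse("2025-04-30 12:55:06.145930235 +0000 UTC")

        e2e := observed.Sub(created)         // 13.288110427s (podStartE2EDuration)
        slo := e2e - lastPull.Sub(firstPull) // minus 12.494092953s spent pulling the image
        fmt.Println(e2e, slo)                // 13.288110427s 794.017474ms
    }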
Apr 30 12:55:06.463828 containerd[1515]: time="2025-04-30T12:55:06.463753554Z" level=info msg="shim disconnected" id=426664a920afaaba8fec77c476a552bdf55eedb5d9b0d013238a6177a5f7aa77 namespace=k8s.io Apr 30 12:55:06.463828 containerd[1515]: time="2025-04-30T12:55:06.463804099Z" level=warning msg="cleaning up after shim disconnected" id=426664a920afaaba8fec77c476a552bdf55eedb5d9b0d013238a6177a5f7aa77 namespace=k8s.io Apr 30 12:55:06.463828 containerd[1515]: time="2025-04-30T12:55:06.463810731Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:55:07.249563 containerd[1515]: time="2025-04-30T12:55:07.249383452Z" level=info msg="CreateContainer within sandbox \"fb7603010103d777d7f83ee89eda4c793cfbfad8907ed81fbaba67cf9ee3ccb5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 12:55:07.271068 containerd[1515]: time="2025-04-30T12:55:07.270917543Z" level=info msg="CreateContainer within sandbox \"fb7603010103d777d7f83ee89eda4c793cfbfad8907ed81fbaba67cf9ee3ccb5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4220d49d88d7691dbca1c90513e756251c2d0e96a4e94c631832b24ed8fdac1e\"" Apr 30 12:55:07.276253 containerd[1515]: time="2025-04-30T12:55:07.275093201Z" level=info msg="StartContainer for \"4220d49d88d7691dbca1c90513e756251c2d0e96a4e94c631832b24ed8fdac1e\"" Apr 30 12:55:07.321895 systemd[1]: Started cri-containerd-4220d49d88d7691dbca1c90513e756251c2d0e96a4e94c631832b24ed8fdac1e.scope - libcontainer container 4220d49d88d7691dbca1c90513e756251c2d0e96a4e94c631832b24ed8fdac1e. Apr 30 12:55:07.356470 systemd[1]: cri-containerd-4220d49d88d7691dbca1c90513e756251c2d0e96a4e94c631832b24ed8fdac1e.scope: Deactivated successfully. Apr 30 12:55:07.359186 containerd[1515]: time="2025-04-30T12:55:07.359149263Z" level=info msg="StartContainer for \"4220d49d88d7691dbca1c90513e756251c2d0e96a4e94c631832b24ed8fdac1e\" returns successfully" Apr 30 12:55:07.375496 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4220d49d88d7691dbca1c90513e756251c2d0e96a4e94c631832b24ed8fdac1e-rootfs.mount: Deactivated successfully. 
Apr 30 12:55:07.383162 containerd[1515]: time="2025-04-30T12:55:07.383093790Z" level=info msg="shim disconnected" id=4220d49d88d7691dbca1c90513e756251c2d0e96a4e94c631832b24ed8fdac1e namespace=k8s.io Apr 30 12:55:07.383162 containerd[1515]: time="2025-04-30T12:55:07.383153902Z" level=warning msg="cleaning up after shim disconnected" id=4220d49d88d7691dbca1c90513e756251c2d0e96a4e94c631832b24ed8fdac1e namespace=k8s.io Apr 30 12:55:07.383162 containerd[1515]: time="2025-04-30T12:55:07.383161898Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:55:08.251717 containerd[1515]: time="2025-04-30T12:55:08.251657093Z" level=info msg="CreateContainer within sandbox \"fb7603010103d777d7f83ee89eda4c793cfbfad8907ed81fbaba67cf9ee3ccb5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 12:55:08.278321 containerd[1515]: time="2025-04-30T12:55:08.277968631Z" level=info msg="CreateContainer within sandbox \"fb7603010103d777d7f83ee89eda4c793cfbfad8907ed81fbaba67cf9ee3ccb5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3bb2dd1509bec9b04d596dfebe548ce29a2461b396a9f96fbe2a34e056d892f0\"" Apr 30 12:55:08.280588 containerd[1515]: time="2025-04-30T12:55:08.280555667Z" level=info msg="StartContainer for \"3bb2dd1509bec9b04d596dfebe548ce29a2461b396a9f96fbe2a34e056d892f0\"" Apr 30 12:55:08.320903 systemd[1]: Started cri-containerd-3bb2dd1509bec9b04d596dfebe548ce29a2461b396a9f96fbe2a34e056d892f0.scope - libcontainer container 3bb2dd1509bec9b04d596dfebe548ce29a2461b396a9f96fbe2a34e056d892f0. Apr 30 12:55:08.353856 containerd[1515]: time="2025-04-30T12:55:08.353805211Z" level=info msg="StartContainer for \"3bb2dd1509bec9b04d596dfebe548ce29a2461b396a9f96fbe2a34e056d892f0\" returns successfully" Apr 30 12:55:08.425355 systemd[1]: run-containerd-runc-k8s.io-3bb2dd1509bec9b04d596dfebe548ce29a2461b396a9f96fbe2a34e056d892f0-runc.XjAUtN.mount: Deactivated successfully. Apr 30 12:55:08.583390 kubelet[2794]: I0430 12:55:08.583245 2794 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Apr 30 12:55:08.618802 kubelet[2794]: W0430 12:55:08.618712 2794 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4230-1-1-d-a2f51ba0c1" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-1-1-d-a2f51ba0c1' and this object Apr 30 12:55:08.618802 kubelet[2794]: E0430 12:55:08.618773 2794 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4230-1-1-d-a2f51ba0c1\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-1-1-d-a2f51ba0c1' and this object" logger="UnhandledError" Apr 30 12:55:08.622945 systemd[1]: Created slice kubepods-burstable-pode3315e73_925a_459f_906a_cb29d5da1809.slice - libcontainer container kubepods-burstable-pode3315e73_925a_459f_906a_cb29d5da1809.slice. Apr 30 12:55:08.629255 systemd[1]: Created slice kubepods-burstable-pod36db4aad_bd48_4b0e_815e_4fd3fb367bbd.slice - libcontainer container kubepods-burstable-pod36db4aad_bd48_4b0e_815e_4fd3fb367bbd.slice. 
Apr 30 12:55:08.671320 kubelet[2794]: I0430 12:55:08.671128 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhqrw\" (UniqueName: \"kubernetes.io/projected/36db4aad-bd48-4b0e-815e-4fd3fb367bbd-kube-api-access-nhqrw\") pod \"coredns-6f6b679f8f-rhl9p\" (UID: \"36db4aad-bd48-4b0e-815e-4fd3fb367bbd\") " pod="kube-system/coredns-6f6b679f8f-rhl9p" Apr 30 12:55:08.671320 kubelet[2794]: I0430 12:55:08.671198 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z7z4\" (UniqueName: \"kubernetes.io/projected/e3315e73-925a-459f-906a-cb29d5da1809-kube-api-access-7z7z4\") pod \"coredns-6f6b679f8f-dbbqh\" (UID: \"e3315e73-925a-459f-906a-cb29d5da1809\") " pod="kube-system/coredns-6f6b679f8f-dbbqh" Apr 30 12:55:08.671320 kubelet[2794]: I0430 12:55:08.671227 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3315e73-925a-459f-906a-cb29d5da1809-config-volume\") pod \"coredns-6f6b679f8f-dbbqh\" (UID: \"e3315e73-925a-459f-906a-cb29d5da1809\") " pod="kube-system/coredns-6f6b679f8f-dbbqh" Apr 30 12:55:08.671320 kubelet[2794]: I0430 12:55:08.671253 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36db4aad-bd48-4b0e-815e-4fd3fb367bbd-config-volume\") pod \"coredns-6f6b679f8f-rhl9p\" (UID: \"36db4aad-bd48-4b0e-815e-4fd3fb367bbd\") " pod="kube-system/coredns-6f6b679f8f-rhl9p" Apr 30 12:55:09.772507 kubelet[2794]: E0430 12:55:09.772456 2794 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Apr 30 12:55:09.773096 kubelet[2794]: E0430 12:55:09.772592 2794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/36db4aad-bd48-4b0e-815e-4fd3fb367bbd-config-volume podName:36db4aad-bd48-4b0e-815e-4fd3fb367bbd nodeName:}" failed. No retries permitted until 2025-04-30 12:55:10.272568001 +0000 UTC m=+23.232351732 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/36db4aad-bd48-4b0e-815e-4fd3fb367bbd-config-volume") pod "coredns-6f6b679f8f-rhl9p" (UID: "36db4aad-bd48-4b0e-815e-4fd3fb367bbd") : failed to sync configmap cache: timed out waiting for the condition Apr 30 12:55:09.773096 kubelet[2794]: E0430 12:55:09.772457 2794 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Apr 30 12:55:09.773096 kubelet[2794]: E0430 12:55:09.772948 2794 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3315e73-925a-459f-906a-cb29d5da1809-config-volume podName:e3315e73-925a-459f-906a-cb29d5da1809 nodeName:}" failed. No retries permitted until 2025-04-30 12:55:10.272930742 +0000 UTC m=+23.232714472 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3315e73-925a-459f-906a-cb29d5da1809-config-volume") pod "coredns-6f6b679f8f-dbbqh" (UID: "e3315e73-925a-459f-906a-cb29d5da1809") : failed to sync configmap cache: timed out waiting for the condition Apr 30 12:55:10.427720 containerd[1515]: time="2025-04-30T12:55:10.427673078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dbbqh,Uid:e3315e73-925a-459f-906a-cb29d5da1809,Namespace:kube-system,Attempt:0,}" Apr 30 12:55:10.432586 containerd[1515]: time="2025-04-30T12:55:10.432449754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rhl9p,Uid:36db4aad-bd48-4b0e-815e-4fd3fb367bbd,Namespace:kube-system,Attempt:0,}" Apr 30 12:55:10.564159 systemd-networkd[1428]: cilium_host: Link UP Apr 30 12:55:10.568557 systemd-networkd[1428]: cilium_net: Link UP Apr 30 12:55:10.568834 systemd-networkd[1428]: cilium_net: Gained carrier Apr 30 12:55:10.569034 systemd-networkd[1428]: cilium_host: Gained carrier Apr 30 12:55:10.571691 systemd-networkd[1428]: cilium_net: Gained IPv6LL Apr 30 12:55:10.671216 systemd-networkd[1428]: cilium_vxlan: Link UP Apr 30 12:55:10.671226 systemd-networkd[1428]: cilium_vxlan: Gained carrier Apr 30 12:55:11.023796 kernel: NET: Registered PF_ALG protocol family Apr 30 12:55:11.043783 systemd-networkd[1428]: cilium_host: Gained IPv6LL Apr 30 12:55:11.717527 systemd-networkd[1428]: lxc_health: Link UP Apr 30 12:55:11.722283 systemd-networkd[1428]: lxc_health: Gained carrier Apr 30 12:55:12.005264 systemd-networkd[1428]: lxc6c1be0988d26: Link UP Apr 30 12:55:12.008656 kernel: eth0: renamed from tmp000e1 Apr 30 12:55:12.011948 systemd-networkd[1428]: lxc6c1be0988d26: Gained carrier Apr 30 12:55:12.019452 systemd-networkd[1428]: lxc12ee39eef4be: Link UP Apr 30 12:55:12.025570 kernel: eth0: renamed from tmp1401d Apr 30 12:55:12.036836 systemd-networkd[1428]: lxc12ee39eef4be: Gained carrier Apr 30 12:55:12.659768 systemd-networkd[1428]: cilium_vxlan: Gained IPv6LL Apr 30 12:55:13.235782 systemd-networkd[1428]: lxc12ee39eef4be: Gained IPv6LL Apr 30 12:55:13.497764 kubelet[2794]: I0430 12:55:13.497647 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-82wrp" podStartSLOduration=9.788561683 podStartE2EDuration="20.49762859s" podCreationTimestamp="2025-04-30 12:54:53 +0000 UTC" firstStartedPulling="2025-04-30 12:54:53.576071832 +0000 UTC m=+6.535855562" lastFinishedPulling="2025-04-30 12:55:04.28513874 +0000 UTC m=+17.244922469" observedRunningTime="2025-04-30 12:55:09.273103543 +0000 UTC m=+22.232887303" watchObservedRunningTime="2025-04-30 12:55:13.49762859 +0000 UTC m=+26.457412330" Apr 30 12:55:13.619815 systemd-networkd[1428]: lxc_health: Gained IPv6LL Apr 30 12:55:13.811742 systemd-networkd[1428]: lxc6c1be0988d26: Gained IPv6LL Apr 30 12:55:15.428574 containerd[1515]: time="2025-04-30T12:55:15.428297523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:55:15.428574 containerd[1515]: time="2025-04-30T12:55:15.428340844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:55:15.428574 containerd[1515]: time="2025-04-30T12:55:15.428349810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:55:15.428574 containerd[1515]: time="2025-04-30T12:55:15.428401898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:55:15.453682 containerd[1515]: time="2025-04-30T12:55:15.449916388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:55:15.453682 containerd[1515]: time="2025-04-30T12:55:15.449972774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:55:15.453682 containerd[1515]: time="2025-04-30T12:55:15.449985988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:55:15.453682 containerd[1515]: time="2025-04-30T12:55:15.450045199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:55:15.470741 systemd[1]: Started cri-containerd-000e1cf3ed6fcfd33280c2548751bb73b7c05d17b3043170210fc6af50baac0d.scope - libcontainer container 000e1cf3ed6fcfd33280c2548751bb73b7c05d17b3043170210fc6af50baac0d. Apr 30 12:55:15.481219 systemd[1]: Started cri-containerd-1401d7219de53d4e39dda932080056e1ccfbc5951df318d5ae41e8737d6c9b7b.scope - libcontainer container 1401d7219de53d4e39dda932080056e1ccfbc5951df318d5ae41e8737d6c9b7b. Apr 30 12:55:15.545270 containerd[1515]: time="2025-04-30T12:55:15.544649465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rhl9p,Uid:36db4aad-bd48-4b0e-815e-4fd3fb367bbd,Namespace:kube-system,Attempt:0,} returns sandbox id \"000e1cf3ed6fcfd33280c2548751bb73b7c05d17b3043170210fc6af50baac0d\"" Apr 30 12:55:15.549818 containerd[1515]: time="2025-04-30T12:55:15.549784843Z" level=info msg="CreateContainer within sandbox \"000e1cf3ed6fcfd33280c2548751bb73b7c05d17b3043170210fc6af50baac0d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 12:55:15.572740 containerd[1515]: time="2025-04-30T12:55:15.572627606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dbbqh,Uid:e3315e73-925a-459f-906a-cb29d5da1809,Namespace:kube-system,Attempt:0,} returns sandbox id \"1401d7219de53d4e39dda932080056e1ccfbc5951df318d5ae41e8737d6c9b7b\"" Apr 30 12:55:15.576125 containerd[1515]: time="2025-04-30T12:55:15.576053345Z" level=info msg="CreateContainer within sandbox \"1401d7219de53d4e39dda932080056e1ccfbc5951df318d5ae41e8737d6c9b7b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 12:55:15.579108 containerd[1515]: time="2025-04-30T12:55:15.579069978Z" level=info msg="CreateContainer within sandbox \"000e1cf3ed6fcfd33280c2548751bb73b7c05d17b3043170210fc6af50baac0d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3c4c6aae0a861dbaade10d6281b8b1ad2046efd142b8a7e2ddd21bedd7bd4bf0\"" Apr 30 12:55:15.579741 containerd[1515]: time="2025-04-30T12:55:15.579690732Z" level=info msg="StartContainer for \"3c4c6aae0a861dbaade10d6281b8b1ad2046efd142b8a7e2ddd21bedd7bd4bf0\"" Apr 30 12:55:15.591339 containerd[1515]: time="2025-04-30T12:55:15.591286235Z" level=info msg="CreateContainer within sandbox \"1401d7219de53d4e39dda932080056e1ccfbc5951df318d5ae41e8737d6c9b7b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f8ff2fe9628a2648fee1c1826856f273411d9a85b8ff4152e147af2aadd85ee6\"" Apr 
30 12:55:15.592449 containerd[1515]: time="2025-04-30T12:55:15.592422107Z" level=info msg="StartContainer for \"f8ff2fe9628a2648fee1c1826856f273411d9a85b8ff4152e147af2aadd85ee6\"" Apr 30 12:55:15.612924 systemd[1]: Started cri-containerd-3c4c6aae0a861dbaade10d6281b8b1ad2046efd142b8a7e2ddd21bedd7bd4bf0.scope - libcontainer container 3c4c6aae0a861dbaade10d6281b8b1ad2046efd142b8a7e2ddd21bedd7bd4bf0. Apr 30 12:55:15.626936 systemd[1]: Started cri-containerd-f8ff2fe9628a2648fee1c1826856f273411d9a85b8ff4152e147af2aadd85ee6.scope - libcontainer container f8ff2fe9628a2648fee1c1826856f273411d9a85b8ff4152e147af2aadd85ee6. Apr 30 12:55:15.651366 containerd[1515]: time="2025-04-30T12:55:15.651260748Z" level=info msg="StartContainer for \"3c4c6aae0a861dbaade10d6281b8b1ad2046efd142b8a7e2ddd21bedd7bd4bf0\" returns successfully" Apr 30 12:55:15.657881 containerd[1515]: time="2025-04-30T12:55:15.657852129Z" level=info msg="StartContainer for \"f8ff2fe9628a2648fee1c1826856f273411d9a85b8ff4152e147af2aadd85ee6\" returns successfully" Apr 30 12:55:16.291614 kubelet[2794]: I0430 12:55:16.291474 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-rhl9p" podStartSLOduration=23.291451619 podStartE2EDuration="23.291451619s" podCreationTimestamp="2025-04-30 12:54:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:55:16.289109434 +0000 UTC m=+29.248893163" watchObservedRunningTime="2025-04-30 12:55:16.291451619 +0000 UTC m=+29.251235349" Apr 30 12:55:16.306626 kubelet[2794]: I0430 12:55:16.305380 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-dbbqh" podStartSLOduration=23.305359802 podStartE2EDuration="23.305359802s" podCreationTimestamp="2025-04-30 12:54:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:55:16.30422854 +0000 UTC m=+29.264012290" watchObservedRunningTime="2025-04-30 12:55:16.305359802 +0000 UTC m=+29.265143532" Apr 30 12:55:16.437215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3584522906.mount: Deactivated successfully. Apr 30 12:55:21.396964 kubelet[2794]: I0430 12:55:21.396794 2794 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 12:57:16.385925 systemd[1]: Started sshd@7-37.27.3.216:22-139.178.68.195:49038.service - OpenSSH per-connection server daemon (139.178.68.195:49038). Apr 30 12:57:17.380583 sshd[4187]: Accepted publickey for core from 139.178.68.195 port 49038 ssh2: RSA SHA256:dV5pBDhQJF3aurfsxX04IrzkXSu11tyU76+45DL2eXQ Apr 30 12:57:17.383057 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:57:17.389219 systemd-logind[1497]: New session 8 of user core. Apr 30 12:57:17.397805 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 12:57:18.511306 sshd[4189]: Connection closed by 139.178.68.195 port 49038 Apr 30 12:57:18.511782 sshd-session[4187]: pam_unix(sshd:session): session closed for user core Apr 30 12:57:18.515424 systemd[1]: sshd@7-37.27.3.216:22-139.178.68.195:49038.service: Deactivated successfully. Apr 30 12:57:18.517334 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 12:57:18.519193 systemd-logind[1497]: Session 8 logged out. Waiting for processes to exit. Apr 30 12:57:18.520419 systemd-logind[1497]: Removed session 8. 
Apr 30 12:57:23.685979 systemd[1]: Started sshd@8-37.27.3.216:22-139.178.68.195:49042.service - OpenSSH per-connection server daemon (139.178.68.195:49042). Apr 30 12:57:24.658920 sshd[4202]: Accepted publickey for core from 139.178.68.195 port 49042 ssh2: RSA SHA256:dV5pBDhQJF3aurfsxX04IrzkXSu11tyU76+45DL2eXQ Apr 30 12:57:24.660481 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:57:24.665370 systemd-logind[1497]: New session 9 of user core. Apr 30 12:57:24.671780 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 12:57:25.409519 sshd[4206]: Connection closed by 139.178.68.195 port 49042 Apr 30 12:57:25.410305 sshd-session[4202]: pam_unix(sshd:session): session closed for user core Apr 30 12:57:25.414835 systemd[1]: sshd@8-37.27.3.216:22-139.178.68.195:49042.service: Deactivated successfully. Apr 30 12:57:25.418066 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 12:57:25.419030 systemd-logind[1497]: Session 9 logged out. Waiting for processes to exit. Apr 30 12:57:25.420681 systemd-logind[1497]: Removed session 9. Apr 30 12:57:30.586023 systemd[1]: Started sshd@9-37.27.3.216:22-139.178.68.195:37622.service - OpenSSH per-connection server daemon (139.178.68.195:37622). Apr 30 12:57:31.558781 sshd[4220]: Accepted publickey for core from 139.178.68.195 port 37622 ssh2: RSA SHA256:dV5pBDhQJF3aurfsxX04IrzkXSu11tyU76+45DL2eXQ Apr 30 12:57:31.560492 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:57:31.566579 systemd-logind[1497]: New session 10 of user core. Apr 30 12:57:31.574866 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 12:57:32.307554 sshd[4225]: Connection closed by 139.178.68.195 port 37622 Apr 30 12:57:32.308270 sshd-session[4220]: pam_unix(sshd:session): session closed for user core Apr 30 12:57:32.312358 systemd-logind[1497]: Session 10 logged out. Waiting for processes to exit. Apr 30 12:57:32.313068 systemd[1]: sshd@9-37.27.3.216:22-139.178.68.195:37622.service: Deactivated successfully. Apr 30 12:57:32.315430 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 12:57:32.316916 systemd-logind[1497]: Removed session 10. Apr 30 12:57:37.481950 systemd[1]: Started sshd@10-37.27.3.216:22-139.178.68.195:38218.service - OpenSSH per-connection server daemon (139.178.68.195:38218). Apr 30 12:57:38.466780 sshd[4239]: Accepted publickey for core from 139.178.68.195 port 38218 ssh2: RSA SHA256:dV5pBDhQJF3aurfsxX04IrzkXSu11tyU76+45DL2eXQ Apr 30 12:57:38.468323 sshd-session[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:57:38.473346 systemd-logind[1497]: New session 11 of user core. Apr 30 12:57:38.480793 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 12:57:39.215800 sshd[4241]: Connection closed by 139.178.68.195 port 38218 Apr 30 12:57:39.216467 sshd-session[4239]: pam_unix(sshd:session): session closed for user core Apr 30 12:57:39.219316 systemd[1]: sshd@10-37.27.3.216:22-139.178.68.195:38218.service: Deactivated successfully. Apr 30 12:57:39.221116 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 12:57:39.223022 systemd-logind[1497]: Session 11 logged out. Waiting for processes to exit. Apr 30 12:57:39.224475 systemd-logind[1497]: Removed session 11. Apr 30 12:57:39.386847 systemd[1]: Started sshd@11-37.27.3.216:22-139.178.68.195:38234.service - OpenSSH per-connection server daemon (139.178.68.195:38234). 
Apr 30 12:57:40.358261 sshd[4254]: Accepted publickey for core from 139.178.68.195 port 38234 ssh2: RSA SHA256:dV5pBDhQJF3aurfsxX04IrzkXSu11tyU76+45DL2eXQ Apr 30 12:57:40.359705 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:57:40.364120 systemd-logind[1497]: New session 12 of user core. Apr 30 12:57:40.366784 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 12:57:41.122442 sshd[4256]: Connection closed by 139.178.68.195 port 38234 Apr 30 12:57:41.123382 sshd-session[4254]: pam_unix(sshd:session): session closed for user core Apr 30 12:57:41.126443 systemd[1]: sshd@11-37.27.3.216:22-139.178.68.195:38234.service: Deactivated successfully. Apr 30 12:57:41.127932 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 12:57:41.129113 systemd-logind[1497]: Session 12 logged out. Waiting for processes to exit. Apr 30 12:57:41.130223 systemd-logind[1497]: Removed session 12. Apr 30 12:57:41.293842 systemd[1]: Started sshd@12-37.27.3.216:22-139.178.68.195:38244.service - OpenSSH per-connection server daemon (139.178.68.195:38244). Apr 30 12:57:42.270495 sshd[4266]: Accepted publickey for core from 139.178.68.195 port 38244 ssh2: RSA SHA256:dV5pBDhQJF3aurfsxX04IrzkXSu11tyU76+45DL2eXQ Apr 30 12:57:42.272920 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:57:42.280305 systemd-logind[1497]: New session 13 of user core. Apr 30 12:57:42.288891 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 12:57:43.000281 sshd[4268]: Connection closed by 139.178.68.195 port 38244 Apr 30 12:57:43.000986 sshd-session[4266]: pam_unix(sshd:session): session closed for user core Apr 30 12:57:43.003813 systemd[1]: sshd@12-37.27.3.216:22-139.178.68.195:38244.service: Deactivated successfully. Apr 30 12:57:43.005577 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 12:57:43.007274 systemd-logind[1497]: Session 13 logged out. Waiting for processes to exit. Apr 30 12:57:43.008209 systemd-logind[1497]: Removed session 13. Apr 30 12:57:48.183967 systemd[1]: Started sshd@13-37.27.3.216:22-139.178.68.195:38370.service - OpenSSH per-connection server daemon (139.178.68.195:38370). Apr 30 12:57:49.160207 sshd[4282]: Accepted publickey for core from 139.178.68.195 port 38370 ssh2: RSA SHA256:dV5pBDhQJF3aurfsxX04IrzkXSu11tyU76+45DL2eXQ Apr 30 12:57:49.161810 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:57:49.166402 systemd-logind[1497]: New session 14 of user core. Apr 30 12:57:49.172745 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 12:57:49.913333 sshd[4284]: Connection closed by 139.178.68.195 port 38370 Apr 30 12:57:49.914025 sshd-session[4282]: pam_unix(sshd:session): session closed for user core Apr 30 12:57:49.917443 systemd-logind[1497]: Session 14 logged out. Waiting for processes to exit. Apr 30 12:57:49.917861 systemd[1]: sshd@13-37.27.3.216:22-139.178.68.195:38370.service: Deactivated successfully. Apr 30 12:57:49.919806 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 12:57:49.920920 systemd-logind[1497]: Removed session 14. Apr 30 12:57:50.086923 systemd[1]: Started sshd@14-37.27.3.216:22-139.178.68.195:38378.service - OpenSSH per-connection server daemon (139.178.68.195:38378). 
Apr 30 12:57:51.064583 sshd[4296]: Accepted publickey for core from 139.178.68.195 port 38378 ssh2: RSA SHA256:dV5pBDhQJF3aurfsxX04IrzkXSu11tyU76+45DL2eXQ Apr 30 12:57:51.065879 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:57:51.070452 systemd-logind[1497]: New session 15 of user core. Apr 30 12:57:51.076754 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 12:57:51.980696 sshd[4298]: Connection closed by 139.178.68.195 port 38378 Apr 30 12:57:51.981509 sshd-session[4296]: pam_unix(sshd:session): session closed for user core Apr 30 12:57:51.988073 systemd[1]: sshd@14-37.27.3.216:22-139.178.68.195:38378.service: Deactivated successfully. Apr 30 12:57:51.990056 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 12:57:51.991342 systemd-logind[1497]: Session 15 logged out. Waiting for processes to exit. Apr 30 12:57:51.992996 systemd-logind[1497]: Removed session 15. Apr 30 12:57:52.152859 systemd[1]: Started sshd@15-37.27.3.216:22-139.178.68.195:38392.service - OpenSSH per-connection server daemon (139.178.68.195:38392). Apr 30 12:57:53.127950 sshd[4308]: Accepted publickey for core from 139.178.68.195 port 38392 ssh2: RSA SHA256:dV5pBDhQJF3aurfsxX04IrzkXSu11tyU76+45DL2eXQ Apr 30 12:57:53.129396 sshd-session[4308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:57:53.135282 systemd-logind[1497]: New session 16 of user core. Apr 30 12:57:53.139789 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 12:57:55.549177 sshd[4310]: Connection closed by 139.178.68.195 port 38392 Apr 30 12:57:55.550797 sshd-session[4308]: pam_unix(sshd:session): session closed for user core Apr 30 12:57:55.554568 systemd[1]: sshd@15-37.27.3.216:22-139.178.68.195:38392.service: Deactivated successfully. Apr 30 12:57:55.556869 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 12:57:55.558381 systemd-logind[1497]: Session 16 logged out. Waiting for processes to exit. Apr 30 12:57:55.560430 systemd-logind[1497]: Removed session 16. Apr 30 12:57:55.719832 systemd[1]: Started sshd@16-37.27.3.216:22-139.178.68.195:57922.service - OpenSSH per-connection server daemon (139.178.68.195:57922). Apr 30 12:57:56.701718 sshd[4329]: Accepted publickey for core from 139.178.68.195 port 57922 ssh2: RSA SHA256:dV5pBDhQJF3aurfsxX04IrzkXSu11tyU76+45DL2eXQ Apr 30 12:57:56.703125 sshd-session[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:57:56.707502 systemd-logind[1497]: New session 17 of user core. Apr 30 12:57:56.712761 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 12:57:57.564383 sshd[4331]: Connection closed by 139.178.68.195 port 57922 Apr 30 12:57:57.565038 sshd-session[4329]: pam_unix(sshd:session): session closed for user core Apr 30 12:57:57.568455 systemd-logind[1497]: Session 17 logged out. Waiting for processes to exit. Apr 30 12:57:57.568628 systemd[1]: sshd@16-37.27.3.216:22-139.178.68.195:57922.service: Deactivated successfully. Apr 30 12:57:57.570427 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 12:57:57.571450 systemd-logind[1497]: Removed session 17. Apr 30 12:57:57.740947 systemd[1]: Started sshd@17-37.27.3.216:22-139.178.68.195:57936.service - OpenSSH per-connection server daemon (139.178.68.195:57936). 
Apr 30 12:57:58.706517 sshd[4340]: Accepted publickey for core from 139.178.68.195 port 57936 ssh2: RSA SHA256:dV5pBDhQJF3aurfsxX04IrzkXSu11tyU76+45DL2eXQ Apr 30 12:57:58.708138 sshd-session[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:57:58.713841 systemd-logind[1497]: New session 18 of user core. Apr 30 12:57:58.717758 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 12:57:59.441049 sshd[4342]: Connection closed by 139.178.68.195 port 57936 Apr 30 12:57:59.441635 sshd-session[4340]: pam_unix(sshd:session): session closed for user core Apr 30 12:57:59.445031 systemd-logind[1497]: Session 18 logged out. Waiting for processes to exit. Apr 30 12:57:59.445675 systemd[1]: sshd@17-37.27.3.216:22-139.178.68.195:57936.service: Deactivated successfully. Apr 30 12:57:59.447530 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 12:57:59.448570 systemd-logind[1497]: Removed session 18. Apr 30 12:58:04.611965 systemd[1]: Started sshd@18-37.27.3.216:22-139.178.68.195:57948.service - OpenSSH per-connection server daemon (139.178.68.195:57948). Apr 30 12:58:05.577679 sshd[4357]: Accepted publickey for core from 139.178.68.195 port 57948 ssh2: RSA SHA256:dV5pBDhQJF3aurfsxX04IrzkXSu11tyU76+45DL2eXQ Apr 30 12:58:05.579503 sshd-session[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:58:05.584726 systemd-logind[1497]: New session 19 of user core. Apr 30 12:58:05.589865 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 12:58:06.308280 sshd[4359]: Connection closed by 139.178.68.195 port 57948 Apr 30 12:58:06.309239 sshd-session[4357]: pam_unix(sshd:session): session closed for user core Apr 30 12:58:06.313548 systemd[1]: sshd@18-37.27.3.216:22-139.178.68.195:57948.service: Deactivated successfully. Apr 30 12:58:06.317014 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 12:58:06.319259 systemd-logind[1497]: Session 19 logged out. Waiting for processes to exit. Apr 30 12:58:06.320368 systemd-logind[1497]: Removed session 19. Apr 30 12:58:11.477799 systemd[1]: Started sshd@19-37.27.3.216:22-139.178.68.195:35506.service - OpenSSH per-connection server daemon (139.178.68.195:35506). Apr 30 12:58:12.452021 sshd[4371]: Accepted publickey for core from 139.178.68.195 port 35506 ssh2: RSA SHA256:dV5pBDhQJF3aurfsxX04IrzkXSu11tyU76+45DL2eXQ Apr 30 12:58:12.453385 sshd-session[4371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:58:12.458303 systemd-logind[1497]: New session 20 of user core. Apr 30 12:58:12.467834 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 30 12:58:13.186123 sshd[4373]: Connection closed by 139.178.68.195 port 35506 Apr 30 12:58:13.186775 sshd-session[4371]: pam_unix(sshd:session): session closed for user core Apr 30 12:58:13.190192 systemd-logind[1497]: Session 20 logged out. Waiting for processes to exit. Apr 30 12:58:13.191014 systemd[1]: sshd@19-37.27.3.216:22-139.178.68.195:35506.service: Deactivated successfully. Apr 30 12:58:13.193434 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 12:58:13.194911 systemd-logind[1497]: Removed session 20. Apr 30 12:58:13.359052 systemd[1]: Started sshd@20-37.27.3.216:22-139.178.68.195:35512.service - OpenSSH per-connection server daemon (139.178.68.195:35512). 
Apr 30 12:58:14.327627 sshd[4385]: Accepted publickey for core from 139.178.68.195 port 35512 ssh2: RSA SHA256:dV5pBDhQJF3aurfsxX04IrzkXSu11tyU76+45DL2eXQ Apr 30 12:58:14.329072 sshd-session[4385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:58:14.333587 systemd-logind[1497]: New session 21 of user core. Apr 30 12:58:14.340784 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 30 12:58:16.199154 containerd[1515]: time="2025-04-30T12:58:16.199109604Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 12:58:16.238517 containerd[1515]: time="2025-04-30T12:58:16.238479369Z" level=info msg="StopContainer for \"3bb2dd1509bec9b04d596dfebe548ce29a2461b396a9f96fbe2a34e056d892f0\" with timeout 2 (s)" Apr 30 12:58:16.238820 containerd[1515]: time="2025-04-30T12:58:16.238479509Z" level=info msg="StopContainer for \"15ee44f6cc661b68b79e5d9b41adc4c7a19aad0bf202cc526f0b93057499534a\" with timeout 30 (s)" Apr 30 12:58:16.242193 containerd[1515]: time="2025-04-30T12:58:16.242107063Z" level=info msg="Stop container \"3bb2dd1509bec9b04d596dfebe548ce29a2461b396a9f96fbe2a34e056d892f0\" with signal terminated" Apr 30 12:58:16.242404 containerd[1515]: time="2025-04-30T12:58:16.242371890Z" level=info msg="Stop container \"15ee44f6cc661b68b79e5d9b41adc4c7a19aad0bf202cc526f0b93057499534a\" with signal terminated" Apr 30 12:58:16.251406 systemd-networkd[1428]: lxc_health: Link DOWN Apr 30 12:58:16.251413 systemd-networkd[1428]: lxc_health: Lost carrier Apr 30 12:58:16.258899 systemd[1]: cri-containerd-15ee44f6cc661b68b79e5d9b41adc4c7a19aad0bf202cc526f0b93057499534a.scope: Deactivated successfully. Apr 30 12:58:16.281356 systemd[1]: cri-containerd-3bb2dd1509bec9b04d596dfebe548ce29a2461b396a9f96fbe2a34e056d892f0.scope: Deactivated successfully. Apr 30 12:58:16.282008 systemd[1]: cri-containerd-3bb2dd1509bec9b04d596dfebe548ce29a2461b396a9f96fbe2a34e056d892f0.scope: Consumed 6.865s CPU time, 191.4M memory peak, 71.1M read from disk, 13.3M written to disk. Apr 30 12:58:16.296154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15ee44f6cc661b68b79e5d9b41adc4c7a19aad0bf202cc526f0b93057499534a-rootfs.mount: Deactivated successfully. Apr 30 12:58:16.304787 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bb2dd1509bec9b04d596dfebe548ce29a2461b396a9f96fbe2a34e056d892f0-rootfs.mount: Deactivated successfully. 
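The "StopContainer ... with timeout 30 (s)" / "with timeout 2 (s)" entries above, followed by "Stop container ... with signal terminated", correspond to the usual graceful-stop pattern: deliver SIGTERM, wait up to the per-container grace period, and escalate to SIGKILL if the task has not exited. The sketch below shows that pattern with the containerd Go client; it is illustrative rather than the CRI plugin's real implementation, and the socket path, namespace, and placeholder container ID are assumptions.

```go
// Rough sketch of the graceful-stop flow behind "StopContainer ... with timeout N":
// SIGTERM the task, wait up to the timeout, then SIGKILL. Illustrative only; the
// socket path and the "k8s.io" namespace are the usual containerd defaults.
package main

import (
	"context"
	"log"
	"syscall"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func stopWithTimeout(ctx context.Context, client *containerd.Client, id string, timeout time.Duration) error {
	container, err := client.LoadContainer(ctx, id)
	if err != nil {
		return err
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		return err
	}
	exitCh, err := task.Wait(ctx)
	if err != nil {
		return err
	}
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		return err
	}
	select {
	case <-exitCh: // exited within the grace period
	case <-time.After(timeout):
		if err := task.Kill(ctx, syscall.SIGKILL); err != nil {
			return err
		}
		<-exitCh
	}
	_, err = task.Delete(ctx)
	return err
}

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" namespace, as the
	// shim log lines above also show. The container ID is a placeholder.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	if err := stopWithTimeout(ctx, client, "<container-id>", 2*time.Second); err != nil {
		log.Fatal(err)
	}
}
```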
Apr 30 12:58:16.306247 containerd[1515]: time="2025-04-30T12:58:16.305774428Z" level=info msg="shim disconnected" id=15ee44f6cc661b68b79e5d9b41adc4c7a19aad0bf202cc526f0b93057499534a namespace=k8s.io Apr 30 12:58:16.306247 containerd[1515]: time="2025-04-30T12:58:16.306150825Z" level=warning msg="cleaning up after shim disconnected" id=15ee44f6cc661b68b79e5d9b41adc4c7a19aad0bf202cc526f0b93057499534a namespace=k8s.io Apr 30 12:58:16.306247 containerd[1515]: time="2025-04-30T12:58:16.306165091Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:58:16.310874 containerd[1515]: time="2025-04-30T12:58:16.310831885Z" level=info msg="shim disconnected" id=3bb2dd1509bec9b04d596dfebe548ce29a2461b396a9f96fbe2a34e056d892f0 namespace=k8s.io Apr 30 12:58:16.311078 containerd[1515]: time="2025-04-30T12:58:16.310976146Z" level=warning msg="cleaning up after shim disconnected" id=3bb2dd1509bec9b04d596dfebe548ce29a2461b396a9f96fbe2a34e056d892f0 namespace=k8s.io Apr 30 12:58:16.311078 containerd[1515]: time="2025-04-30T12:58:16.310990473Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:58:16.325820 containerd[1515]: time="2025-04-30T12:58:16.325705977Z" level=info msg="StopContainer for \"15ee44f6cc661b68b79e5d9b41adc4c7a19aad0bf202cc526f0b93057499534a\" returns successfully" Apr 30 12:58:16.326623 containerd[1515]: time="2025-04-30T12:58:16.326527850Z" level=info msg="StopPodSandbox for \"e4fed704bc49f9697679da1ced788717de296c8aa453f85fafbd507eed4e2dc1\"" Apr 30 12:58:16.329758 containerd[1515]: time="2025-04-30T12:58:16.329438770Z" level=info msg="StopContainer for \"3bb2dd1509bec9b04d596dfebe548ce29a2461b396a9f96fbe2a34e056d892f0\" returns successfully" Apr 30 12:58:16.330889 containerd[1515]: time="2025-04-30T12:58:16.330858814Z" level=info msg="StopPodSandbox for \"fb7603010103d777d7f83ee89eda4c793cfbfad8907ed81fbaba67cf9ee3ccb5\"" Apr 30 12:58:16.337437 containerd[1515]: time="2025-04-30T12:58:16.326579637Z" level=info msg="Container to stop \"15ee44f6cc661b68b79e5d9b41adc4c7a19aad0bf202cc526f0b93057499534a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 12:58:16.337437 containerd[1515]: time="2025-04-30T12:58:16.330888298Z" level=info msg="Container to stop \"8555e1b299d74d47a02508fb10520084c403ede2d5608557d409738eacb0cd8c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 12:58:16.337437 containerd[1515]: time="2025-04-30T12:58:16.337331065Z" level=info msg="Container to stop \"3bb2dd1509bec9b04d596dfebe548ce29a2461b396a9f96fbe2a34e056d892f0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 12:58:16.337437 containerd[1515]: time="2025-04-30T12:58:16.337340583Z" level=info msg="Container to stop \"a628c0ac7d637b10c826b0e762ed73f7cdbf6e2ee760a3143dfc88dd4a0f6202\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 12:58:16.337437 containerd[1515]: time="2025-04-30T12:58:16.337348147Z" level=info msg="Container to stop \"426664a920afaaba8fec77c476a552bdf55eedb5d9b0d013238a6177a5f7aa77\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 12:58:16.337437 containerd[1515]: time="2025-04-30T12:58:16.337355050Z" level=info msg="Container to stop \"4220d49d88d7691dbca1c90513e756251c2d0e96a4e94c631832b24ed8fdac1e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 12:58:16.340248 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-e4fed704bc49f9697679da1ced788717de296c8aa453f85fafbd507eed4e2dc1-shm.mount: Deactivated successfully. Apr 30 12:58:16.340367 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fb7603010103d777d7f83ee89eda4c793cfbfad8907ed81fbaba67cf9ee3ccb5-shm.mount: Deactivated successfully. Apr 30 12:58:16.347269 systemd[1]: cri-containerd-fb7603010103d777d7f83ee89eda4c793cfbfad8907ed81fbaba67cf9ee3ccb5.scope: Deactivated successfully. Apr 30 12:58:16.353929 systemd[1]: cri-containerd-e4fed704bc49f9697679da1ced788717de296c8aa453f85fafbd507eed4e2dc1.scope: Deactivated successfully. Apr 30 12:58:16.373322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb7603010103d777d7f83ee89eda4c793cfbfad8907ed81fbaba67cf9ee3ccb5-rootfs.mount: Deactivated successfully. Apr 30 12:58:16.382116 containerd[1515]: time="2025-04-30T12:58:16.381969953Z" level=info msg="shim disconnected" id=fb7603010103d777d7f83ee89eda4c793cfbfad8907ed81fbaba67cf9ee3ccb5 namespace=k8s.io Apr 30 12:58:16.382116 containerd[1515]: time="2025-04-30T12:58:16.382014757Z" level=warning msg="cleaning up after shim disconnected" id=fb7603010103d777d7f83ee89eda4c793cfbfad8907ed81fbaba67cf9ee3ccb5 namespace=k8s.io Apr 30 12:58:16.382116 containerd[1515]: time="2025-04-30T12:58:16.382022682Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:58:16.382267 containerd[1515]: time="2025-04-30T12:58:16.382120737Z" level=info msg="shim disconnected" id=e4fed704bc49f9697679da1ced788717de296c8aa453f85fafbd507eed4e2dc1 namespace=k8s.io Apr 30 12:58:16.382267 containerd[1515]: time="2025-04-30T12:58:16.382146014Z" level=warning msg="cleaning up after shim disconnected" id=e4fed704bc49f9697679da1ced788717de296c8aa453f85fafbd507eed4e2dc1 namespace=k8s.io Apr 30 12:58:16.382267 containerd[1515]: time="2025-04-30T12:58:16.382152486Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:58:16.395953 containerd[1515]: time="2025-04-30T12:58:16.395793235Z" level=warning msg="cleanup warnings time=\"2025-04-30T12:58:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 12:58:16.396446 containerd[1515]: time="2025-04-30T12:58:16.396310404Z" level=info msg="TearDown network for sandbox \"fb7603010103d777d7f83ee89eda4c793cfbfad8907ed81fbaba67cf9ee3ccb5\" successfully" Apr 30 12:58:16.396446 containerd[1515]: time="2025-04-30T12:58:16.396336784Z" level=info msg="StopPodSandbox for \"fb7603010103d777d7f83ee89eda4c793cfbfad8907ed81fbaba67cf9ee3ccb5\" returns successfully" Apr 30 12:58:16.396826 containerd[1515]: time="2025-04-30T12:58:16.396574239Z" level=info msg="TearDown network for sandbox \"e4fed704bc49f9697679da1ced788717de296c8aa453f85fafbd507eed4e2dc1\" successfully" Apr 30 12:58:16.396826 containerd[1515]: time="2025-04-30T12:58:16.396590500Z" level=info msg="StopPodSandbox for \"e4fed704bc49f9697679da1ced788717de296c8aa453f85fafbd507eed4e2dc1\" returns successfully" Apr 30 12:58:16.528630 kubelet[2794]: I0430 12:58:16.528448 2794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-hostproc\") pod \"7f6287a5-e686-43a9-9b3e-d09836a18e00\" (UID: \"7f6287a5-e686-43a9-9b3e-d09836a18e00\") " Apr 30 12:58:16.528630 kubelet[2794]: I0430 12:58:16.528551 2794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-gn4gb\" (UniqueName: \"kubernetes.io/projected/d7fd7bd8-6735-4ab8-8c66-fc22c66e77e1-kube-api-access-gn4gb\") pod \"d7fd7bd8-6735-4ab8-8c66-fc22c66e77e1\" (UID: \"d7fd7bd8-6735-4ab8-8c66-fc22c66e77e1\") " Apr 30 12:58:16.529192 kubelet[2794]: I0430 12:58:16.527593 2794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-hostproc" (OuterVolumeSpecName: "hostproc") pod "7f6287a5-e686-43a9-9b3e-d09836a18e00" (UID: "7f6287a5-e686-43a9-9b3e-d09836a18e00"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:58:16.529861 kubelet[2794]: I0430 12:58:16.528591 2794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-bpf-maps\") pod \"7f6287a5-e686-43a9-9b3e-d09836a18e00\" (UID: \"7f6287a5-e686-43a9-9b3e-d09836a18e00\") " Apr 30 12:58:16.529861 kubelet[2794]: I0430 12:58:16.529292 2794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-etc-cni-netd\") pod \"7f6287a5-e686-43a9-9b3e-d09836a18e00\" (UID: \"7f6287a5-e686-43a9-9b3e-d09836a18e00\") " Apr 30 12:58:16.529861 kubelet[2794]: I0430 12:58:16.529318 2794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-host-proc-sys-net\") pod \"7f6287a5-e686-43a9-9b3e-d09836a18e00\" (UID: \"7f6287a5-e686-43a9-9b3e-d09836a18e00\") " Apr 30 12:58:16.529861 kubelet[2794]: I0430 12:58:16.529338 2794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-cilium-run\") pod \"7f6287a5-e686-43a9-9b3e-d09836a18e00\" (UID: \"7f6287a5-e686-43a9-9b3e-d09836a18e00\") " Apr 30 12:58:16.529861 kubelet[2794]: I0430 12:58:16.529362 2794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d7fd7bd8-6735-4ab8-8c66-fc22c66e77e1-cilium-config-path\") pod \"d7fd7bd8-6735-4ab8-8c66-fc22c66e77e1\" (UID: \"d7fd7bd8-6735-4ab8-8c66-fc22c66e77e1\") " Apr 30 12:58:16.529861 kubelet[2794]: I0430 12:58:16.529384 2794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-xtables-lock\") pod \"7f6287a5-e686-43a9-9b3e-d09836a18e00\" (UID: \"7f6287a5-e686-43a9-9b3e-d09836a18e00\") " Apr 30 12:58:16.530121 kubelet[2794]: I0430 12:58:16.529404 2794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-cilium-cgroup\") pod \"7f6287a5-e686-43a9-9b3e-d09836a18e00\" (UID: \"7f6287a5-e686-43a9-9b3e-d09836a18e00\") " Apr 30 12:58:16.530121 kubelet[2794]: I0430 12:58:16.529430 2794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7f6287a5-e686-43a9-9b3e-d09836a18e00-clustermesh-secrets\") pod \"7f6287a5-e686-43a9-9b3e-d09836a18e00\" (UID: \"7f6287a5-e686-43a9-9b3e-d09836a18e00\") " Apr 30 12:58:16.530121 kubelet[2794]: I0430 12:58:16.529456 2794 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f6287a5-e686-43a9-9b3e-d09836a18e00-cilium-config-path\") pod \"7f6287a5-e686-43a9-9b3e-d09836a18e00\" (UID: \"7f6287a5-e686-43a9-9b3e-d09836a18e00\") " Apr 30 12:58:16.530121 kubelet[2794]: I0430 12:58:16.529481 2794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn8xj\" (UniqueName: \"kubernetes.io/projected/7f6287a5-e686-43a9-9b3e-d09836a18e00-kube-api-access-qn8xj\") pod \"7f6287a5-e686-43a9-9b3e-d09836a18e00\" (UID: \"7f6287a5-e686-43a9-9b3e-d09836a18e00\") " Apr 30 12:58:16.530121 kubelet[2794]: I0430 12:58:16.529506 2794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7f6287a5-e686-43a9-9b3e-d09836a18e00-hubble-tls\") pod \"7f6287a5-e686-43a9-9b3e-d09836a18e00\" (UID: \"7f6287a5-e686-43a9-9b3e-d09836a18e00\") " Apr 30 12:58:16.530121 kubelet[2794]: I0430 12:58:16.529529 2794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-lib-modules\") pod \"7f6287a5-e686-43a9-9b3e-d09836a18e00\" (UID: \"7f6287a5-e686-43a9-9b3e-d09836a18e00\") " Apr 30 12:58:16.530333 kubelet[2794]: I0430 12:58:16.529551 2794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-cni-path\") pod \"7f6287a5-e686-43a9-9b3e-d09836a18e00\" (UID: \"7f6287a5-e686-43a9-9b3e-d09836a18e00\") " Apr 30 12:58:16.530333 kubelet[2794]: I0430 12:58:16.529571 2794 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-host-proc-sys-kernel\") pod \"7f6287a5-e686-43a9-9b3e-d09836a18e00\" (UID: \"7f6287a5-e686-43a9-9b3e-d09836a18e00\") " Apr 30 12:58:16.531585 kubelet[2794]: I0430 12:58:16.531348 2794 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-hostproc\") on node \"ci-4230-1-1-d-a2f51ba0c1\" DevicePath \"\"" Apr 30 12:58:16.531585 kubelet[2794]: I0430 12:58:16.531414 2794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7f6287a5-e686-43a9-9b3e-d09836a18e00" (UID: "7f6287a5-e686-43a9-9b3e-d09836a18e00"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:58:16.531585 kubelet[2794]: I0430 12:58:16.531454 2794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7f6287a5-e686-43a9-9b3e-d09836a18e00" (UID: "7f6287a5-e686-43a9-9b3e-d09836a18e00"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:58:16.531585 kubelet[2794]: I0430 12:58:16.531480 2794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7f6287a5-e686-43a9-9b3e-d09836a18e00" (UID: "7f6287a5-e686-43a9-9b3e-d09836a18e00"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:58:16.531585 kubelet[2794]: I0430 12:58:16.531500 2794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7f6287a5-e686-43a9-9b3e-d09836a18e00" (UID: "7f6287a5-e686-43a9-9b3e-d09836a18e00"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:58:16.532627 kubelet[2794]: I0430 12:58:16.531972 2794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7f6287a5-e686-43a9-9b3e-d09836a18e00" (UID: "7f6287a5-e686-43a9-9b3e-d09836a18e00"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:58:16.532760 kubelet[2794]: I0430 12:58:16.532727 2794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7fd7bd8-6735-4ab8-8c66-fc22c66e77e1-kube-api-access-gn4gb" (OuterVolumeSpecName: "kube-api-access-gn4gb") pod "d7fd7bd8-6735-4ab8-8c66-fc22c66e77e1" (UID: "d7fd7bd8-6735-4ab8-8c66-fc22c66e77e1"). InnerVolumeSpecName "kube-api-access-gn4gb". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 12:58:16.535087 kubelet[2794]: I0430 12:58:16.535063 2794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7fd7bd8-6735-4ab8-8c66-fc22c66e77e1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d7fd7bd8-6735-4ab8-8c66-fc22c66e77e1" (UID: "d7fd7bd8-6735-4ab8-8c66-fc22c66e77e1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 12:58:16.535706 kubelet[2794]: I0430 12:58:16.535673 2794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f6287a5-e686-43a9-9b3e-d09836a18e00-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7f6287a5-e686-43a9-9b3e-d09836a18e00" (UID: "7f6287a5-e686-43a9-9b3e-d09836a18e00"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 12:58:16.535770 kubelet[2794]: I0430 12:58:16.535718 2794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7f6287a5-e686-43a9-9b3e-d09836a18e00" (UID: "7f6287a5-e686-43a9-9b3e-d09836a18e00"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:58:16.535770 kubelet[2794]: I0430 12:58:16.535740 2794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7f6287a5-e686-43a9-9b3e-d09836a18e00" (UID: "7f6287a5-e686-43a9-9b3e-d09836a18e00"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:58:16.538073 kubelet[2794]: I0430 12:58:16.538045 2794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f6287a5-e686-43a9-9b3e-d09836a18e00-kube-api-access-qn8xj" (OuterVolumeSpecName: "kube-api-access-qn8xj") pod "7f6287a5-e686-43a9-9b3e-d09836a18e00" (UID: "7f6287a5-e686-43a9-9b3e-d09836a18e00"). 
InnerVolumeSpecName "kube-api-access-qn8xj". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 12:58:16.538164 kubelet[2794]: I0430 12:58:16.538067 2794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f6287a5-e686-43a9-9b3e-d09836a18e00-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7f6287a5-e686-43a9-9b3e-d09836a18e00" (UID: "7f6287a5-e686-43a9-9b3e-d09836a18e00"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 30 12:58:16.538281 kubelet[2794]: I0430 12:58:16.538095 2794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7f6287a5-e686-43a9-9b3e-d09836a18e00" (UID: "7f6287a5-e686-43a9-9b3e-d09836a18e00"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:58:16.538385 kubelet[2794]: I0430 12:58:16.538364 2794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-cni-path" (OuterVolumeSpecName: "cni-path") pod "7f6287a5-e686-43a9-9b3e-d09836a18e00" (UID: "7f6287a5-e686-43a9-9b3e-d09836a18e00"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:58:16.540368 kubelet[2794]: I0430 12:58:16.540273 2794 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f6287a5-e686-43a9-9b3e-d09836a18e00-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7f6287a5-e686-43a9-9b3e-d09836a18e00" (UID: "7f6287a5-e686-43a9-9b3e-d09836a18e00"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 12:58:16.632170 kubelet[2794]: I0430 12:58:16.632115 2794 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f6287a5-e686-43a9-9b3e-d09836a18e00-cilium-config-path\") on node \"ci-4230-1-1-d-a2f51ba0c1\" DevicePath \"\"" Apr 30 12:58:16.632170 kubelet[2794]: I0430 12:58:16.632150 2794 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7f6287a5-e686-43a9-9b3e-d09836a18e00-clustermesh-secrets\") on node \"ci-4230-1-1-d-a2f51ba0c1\" DevicePath \"\"" Apr 30 12:58:16.632170 kubelet[2794]: I0430 12:58:16.632160 2794 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qn8xj\" (UniqueName: \"kubernetes.io/projected/7f6287a5-e686-43a9-9b3e-d09836a18e00-kube-api-access-qn8xj\") on node \"ci-4230-1-1-d-a2f51ba0c1\" DevicePath \"\"" Apr 30 12:58:16.632170 kubelet[2794]: I0430 12:58:16.632168 2794 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-lib-modules\") on node \"ci-4230-1-1-d-a2f51ba0c1\" DevicePath \"\"" Apr 30 12:58:16.632170 kubelet[2794]: I0430 12:58:16.632177 2794 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-cni-path\") on node \"ci-4230-1-1-d-a2f51ba0c1\" DevicePath \"\"" Apr 30 12:58:16.632170 kubelet[2794]: I0430 12:58:16.632183 2794 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-host-proc-sys-kernel\") on node \"ci-4230-1-1-d-a2f51ba0c1\" 
DevicePath \"\"" Apr 30 12:58:16.632469 kubelet[2794]: I0430 12:58:16.632190 2794 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7f6287a5-e686-43a9-9b3e-d09836a18e00-hubble-tls\") on node \"ci-4230-1-1-d-a2f51ba0c1\" DevicePath \"\"" Apr 30 12:58:16.632469 kubelet[2794]: I0430 12:58:16.632197 2794 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gn4gb\" (UniqueName: \"kubernetes.io/projected/d7fd7bd8-6735-4ab8-8c66-fc22c66e77e1-kube-api-access-gn4gb\") on node \"ci-4230-1-1-d-a2f51ba0c1\" DevicePath \"\"" Apr 30 12:58:16.632469 kubelet[2794]: I0430 12:58:16.632204 2794 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-bpf-maps\") on node \"ci-4230-1-1-d-a2f51ba0c1\" DevicePath \"\"" Apr 30 12:58:16.632469 kubelet[2794]: I0430 12:58:16.632210 2794 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-etc-cni-netd\") on node \"ci-4230-1-1-d-a2f51ba0c1\" DevicePath \"\"" Apr 30 12:58:16.632469 kubelet[2794]: I0430 12:58:16.632217 2794 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-cilium-run\") on node \"ci-4230-1-1-d-a2f51ba0c1\" DevicePath \"\"" Apr 30 12:58:16.632469 kubelet[2794]: I0430 12:58:16.632223 2794 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d7fd7bd8-6735-4ab8-8c66-fc22c66e77e1-cilium-config-path\") on node \"ci-4230-1-1-d-a2f51ba0c1\" DevicePath \"\"" Apr 30 12:58:16.632469 kubelet[2794]: I0430 12:58:16.632229 2794 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-xtables-lock\") on node \"ci-4230-1-1-d-a2f51ba0c1\" DevicePath \"\"" Apr 30 12:58:16.632469 kubelet[2794]: I0430 12:58:16.632235 2794 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-host-proc-sys-net\") on node \"ci-4230-1-1-d-a2f51ba0c1\" DevicePath \"\"" Apr 30 12:58:16.632712 kubelet[2794]: I0430 12:58:16.632242 2794 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7f6287a5-e686-43a9-9b3e-d09836a18e00-cilium-cgroup\") on node \"ci-4230-1-1-d-a2f51ba0c1\" DevicePath \"\"" Apr 30 12:58:16.658218 systemd[1]: Removed slice kubepods-besteffort-podd7fd7bd8_6735_4ab8_8c66_fc22c66e77e1.slice - libcontainer container kubepods-besteffort-podd7fd7bd8_6735_4ab8_8c66_fc22c66e77e1.slice. Apr 30 12:58:16.667287 kubelet[2794]: I0430 12:58:16.667221 2794 scope.go:117] "RemoveContainer" containerID="15ee44f6cc661b68b79e5d9b41adc4c7a19aad0bf202cc526f0b93057499534a" Apr 30 12:58:16.679580 containerd[1515]: time="2025-04-30T12:58:16.678534550Z" level=info msg="RemoveContainer for \"15ee44f6cc661b68b79e5d9b41adc4c7a19aad0bf202cc526f0b93057499534a\"" Apr 30 12:58:16.685377 systemd[1]: Removed slice kubepods-burstable-pod7f6287a5_e686_43a9_9b3e_d09836a18e00.slice - libcontainer container kubepods-burstable-pod7f6287a5_e686_43a9_9b3e_d09836a18e00.slice. Apr 30 12:58:16.685503 systemd[1]: kubepods-burstable-pod7f6287a5_e686_43a9_9b3e_d09836a18e00.slice: Consumed 6.944s CPU time, 191.8M memory peak, 72.2M read from disk, 13.3M written to disk. 
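The RemoveContainer entries that follow, and the later "ContainerStatus ... NotFound" errors for the same IDs, reflect the expected behaviour once a container record has been deleted: a subsequent lookup by the old ID fails with a not-found error instead of returning a status. A minimal sketch of that round trip with the containerd Go client is shown below; as before, the socket path, namespace, and container ID are placeholders, not values taken from this log.

```go
// Illustrative only: once a container has been removed, looking it up again by
// its old ID yields a NotFound error, which is what the repeated
// "ContainerStatus ... NotFound" entries below reflect. Assumes the containerd
// Go client and the "k8s.io" namespace, as in the stop sketch above.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	const id = "<already-removed-container-id>" // hypothetical placeholder

	if c, err := client.LoadContainer(ctx, id); err == nil {
		// Remove the container record together with its snapshot. This assumes
		// the container's task was already stopped and deleted, as in the
		// teardown sequence logged above.
		if err := c.Delete(ctx, containerd.WithSnapshotCleanup); err != nil {
			log.Fatal(err)
		}
	}

	// A second lookup now fails with NotFound rather than returning a status.
	if _, err := client.LoadContainer(ctx, id); errdefs.IsNotFound(err) {
		fmt.Println("container", id, "no longer exists (NotFound)")
	}
}
```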
Apr 30 12:58:16.691311 containerd[1515]: time="2025-04-30T12:58:16.691194799Z" level=info msg="RemoveContainer for \"15ee44f6cc661b68b79e5d9b41adc4c7a19aad0bf202cc526f0b93057499534a\" returns successfully" Apr 30 12:58:16.691497 kubelet[2794]: I0430 12:58:16.691476 2794 scope.go:117] "RemoveContainer" containerID="15ee44f6cc661b68b79e5d9b41adc4c7a19aad0bf202cc526f0b93057499534a" Apr 30 12:58:16.691756 containerd[1515]: time="2025-04-30T12:58:16.691719723Z" level=error msg="ContainerStatus for \"15ee44f6cc661b68b79e5d9b41adc4c7a19aad0bf202cc526f0b93057499534a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"15ee44f6cc661b68b79e5d9b41adc4c7a19aad0bf202cc526f0b93057499534a\": not found" Apr 30 12:58:16.695975 kubelet[2794]: E0430 12:58:16.695890 2794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"15ee44f6cc661b68b79e5d9b41adc4c7a19aad0bf202cc526f0b93057499534a\": not found" containerID="15ee44f6cc661b68b79e5d9b41adc4c7a19aad0bf202cc526f0b93057499534a" Apr 30 12:58:16.696196 kubelet[2794]: I0430 12:58:16.695939 2794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"15ee44f6cc661b68b79e5d9b41adc4c7a19aad0bf202cc526f0b93057499534a"} err="failed to get container status \"15ee44f6cc661b68b79e5d9b41adc4c7a19aad0bf202cc526f0b93057499534a\": rpc error: code = NotFound desc = an error occurred when try to find container \"15ee44f6cc661b68b79e5d9b41adc4c7a19aad0bf202cc526f0b93057499534a\": not found" Apr 30 12:58:16.696373 kubelet[2794]: I0430 12:58:16.696256 2794 scope.go:117] "RemoveContainer" containerID="3bb2dd1509bec9b04d596dfebe548ce29a2461b396a9f96fbe2a34e056d892f0" Apr 30 12:58:16.699022 containerd[1515]: time="2025-04-30T12:58:16.698993979Z" level=info msg="RemoveContainer for \"3bb2dd1509bec9b04d596dfebe548ce29a2461b396a9f96fbe2a34e056d892f0\"" Apr 30 12:58:16.703567 containerd[1515]: time="2025-04-30T12:58:16.702876012Z" level=info msg="RemoveContainer for \"3bb2dd1509bec9b04d596dfebe548ce29a2461b396a9f96fbe2a34e056d892f0\" returns successfully" Apr 30 12:58:16.704993 kubelet[2794]: I0430 12:58:16.704957 2794 scope.go:117] "RemoveContainer" containerID="4220d49d88d7691dbca1c90513e756251c2d0e96a4e94c631832b24ed8fdac1e" Apr 30 12:58:16.710959 containerd[1515]: time="2025-04-30T12:58:16.710938006Z" level=info msg="RemoveContainer for \"4220d49d88d7691dbca1c90513e756251c2d0e96a4e94c631832b24ed8fdac1e\"" Apr 30 12:58:16.714005 containerd[1515]: time="2025-04-30T12:58:16.713985853Z" level=info msg="RemoveContainer for \"4220d49d88d7691dbca1c90513e756251c2d0e96a4e94c631832b24ed8fdac1e\" returns successfully" Apr 30 12:58:16.714253 kubelet[2794]: I0430 12:58:16.714211 2794 scope.go:117] "RemoveContainer" containerID="426664a920afaaba8fec77c476a552bdf55eedb5d9b0d013238a6177a5f7aa77" Apr 30 12:58:16.715438 containerd[1515]: time="2025-04-30T12:58:16.715225869Z" level=info msg="RemoveContainer for \"426664a920afaaba8fec77c476a552bdf55eedb5d9b0d013238a6177a5f7aa77\"" Apr 30 12:58:16.717521 containerd[1515]: time="2025-04-30T12:58:16.717503642Z" level=info msg="RemoveContainer for \"426664a920afaaba8fec77c476a552bdf55eedb5d9b0d013238a6177a5f7aa77\" returns successfully" Apr 30 12:58:16.717794 kubelet[2794]: I0430 12:58:16.717760 2794 scope.go:117] "RemoveContainer" containerID="8555e1b299d74d47a02508fb10520084c403ede2d5608557d409738eacb0cd8c" Apr 30 12:58:16.718654 containerd[1515]: 
time="2025-04-30T12:58:16.718615036Z" level=info msg="RemoveContainer for \"8555e1b299d74d47a02508fb10520084c403ede2d5608557d409738eacb0cd8c\"" Apr 30 12:58:16.722026 containerd[1515]: time="2025-04-30T12:58:16.721985870Z" level=info msg="RemoveContainer for \"8555e1b299d74d47a02508fb10520084c403ede2d5608557d409738eacb0cd8c\" returns successfully" Apr 30 12:58:16.722154 kubelet[2794]: I0430 12:58:16.722113 2794 scope.go:117] "RemoveContainer" containerID="a628c0ac7d637b10c826b0e762ed73f7cdbf6e2ee760a3143dfc88dd4a0f6202" Apr 30 12:58:16.722870 containerd[1515]: time="2025-04-30T12:58:16.722826598Z" level=info msg="RemoveContainer for \"a628c0ac7d637b10c826b0e762ed73f7cdbf6e2ee760a3143dfc88dd4a0f6202\"" Apr 30 12:58:16.724981 containerd[1515]: time="2025-04-30T12:58:16.724956663Z" level=info msg="RemoveContainer for \"a628c0ac7d637b10c826b0e762ed73f7cdbf6e2ee760a3143dfc88dd4a0f6202\" returns successfully" Apr 30 12:58:16.725372 kubelet[2794]: I0430 12:58:16.725079 2794 scope.go:117] "RemoveContainer" containerID="3bb2dd1509bec9b04d596dfebe548ce29a2461b396a9f96fbe2a34e056d892f0" Apr 30 12:58:16.725372 kubelet[2794]: E0430 12:58:16.725291 2794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3bb2dd1509bec9b04d596dfebe548ce29a2461b396a9f96fbe2a34e056d892f0\": not found" containerID="3bb2dd1509bec9b04d596dfebe548ce29a2461b396a9f96fbe2a34e056d892f0" Apr 30 12:58:16.725372 kubelet[2794]: I0430 12:58:16.725309 2794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3bb2dd1509bec9b04d596dfebe548ce29a2461b396a9f96fbe2a34e056d892f0"} err="failed to get container status \"3bb2dd1509bec9b04d596dfebe548ce29a2461b396a9f96fbe2a34e056d892f0\": rpc error: code = NotFound desc = an error occurred when try to find container \"3bb2dd1509bec9b04d596dfebe548ce29a2461b396a9f96fbe2a34e056d892f0\": not found" Apr 30 12:58:16.725372 kubelet[2794]: I0430 12:58:16.725325 2794 scope.go:117] "RemoveContainer" containerID="4220d49d88d7691dbca1c90513e756251c2d0e96a4e94c631832b24ed8fdac1e" Apr 30 12:58:16.725477 containerd[1515]: time="2025-04-30T12:58:16.725193677Z" level=error msg="ContainerStatus for \"3bb2dd1509bec9b04d596dfebe548ce29a2461b396a9f96fbe2a34e056d892f0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3bb2dd1509bec9b04d596dfebe548ce29a2461b396a9f96fbe2a34e056d892f0\": not found" Apr 30 12:58:16.725569 containerd[1515]: time="2025-04-30T12:58:16.725527314Z" level=error msg="ContainerStatus for \"4220d49d88d7691dbca1c90513e756251c2d0e96a4e94c631832b24ed8fdac1e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4220d49d88d7691dbca1c90513e756251c2d0e96a4e94c631832b24ed8fdac1e\": not found" Apr 30 12:58:16.725706 kubelet[2794]: E0430 12:58:16.725684 2794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4220d49d88d7691dbca1c90513e756251c2d0e96a4e94c631832b24ed8fdac1e\": not found" containerID="4220d49d88d7691dbca1c90513e756251c2d0e96a4e94c631832b24ed8fdac1e" Apr 30 12:58:16.725766 kubelet[2794]: I0430 12:58:16.725705 2794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4220d49d88d7691dbca1c90513e756251c2d0e96a4e94c631832b24ed8fdac1e"} err="failed to get container status 
\"4220d49d88d7691dbca1c90513e756251c2d0e96a4e94c631832b24ed8fdac1e\": rpc error: code = NotFound desc = an error occurred when try to find container \"4220d49d88d7691dbca1c90513e756251c2d0e96a4e94c631832b24ed8fdac1e\": not found" Apr 30 12:58:16.725766 kubelet[2794]: I0430 12:58:16.725719 2794 scope.go:117] "RemoveContainer" containerID="426664a920afaaba8fec77c476a552bdf55eedb5d9b0d013238a6177a5f7aa77" Apr 30 12:58:16.725882 containerd[1515]: time="2025-04-30T12:58:16.725852083Z" level=error msg="ContainerStatus for \"426664a920afaaba8fec77c476a552bdf55eedb5d9b0d013238a6177a5f7aa77\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"426664a920afaaba8fec77c476a552bdf55eedb5d9b0d013238a6177a5f7aa77\": not found" Apr 30 12:58:16.725953 kubelet[2794]: E0430 12:58:16.725932 2794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"426664a920afaaba8fec77c476a552bdf55eedb5d9b0d013238a6177a5f7aa77\": not found" containerID="426664a920afaaba8fec77c476a552bdf55eedb5d9b0d013238a6177a5f7aa77" Apr 30 12:58:16.725988 kubelet[2794]: I0430 12:58:16.725950 2794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"426664a920afaaba8fec77c476a552bdf55eedb5d9b0d013238a6177a5f7aa77"} err="failed to get container status \"426664a920afaaba8fec77c476a552bdf55eedb5d9b0d013238a6177a5f7aa77\": rpc error: code = NotFound desc = an error occurred when try to find container \"426664a920afaaba8fec77c476a552bdf55eedb5d9b0d013238a6177a5f7aa77\": not found" Apr 30 12:58:16.725988 kubelet[2794]: I0430 12:58:16.725961 2794 scope.go:117] "RemoveContainer" containerID="8555e1b299d74d47a02508fb10520084c403ede2d5608557d409738eacb0cd8c" Apr 30 12:58:16.726096 containerd[1515]: time="2025-04-30T12:58:16.726068128Z" level=error msg="ContainerStatus for \"8555e1b299d74d47a02508fb10520084c403ede2d5608557d409738eacb0cd8c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8555e1b299d74d47a02508fb10520084c403ede2d5608557d409738eacb0cd8c\": not found" Apr 30 12:58:16.726203 kubelet[2794]: E0430 12:58:16.726179 2794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8555e1b299d74d47a02508fb10520084c403ede2d5608557d409738eacb0cd8c\": not found" containerID="8555e1b299d74d47a02508fb10520084c403ede2d5608557d409738eacb0cd8c" Apr 30 12:58:16.726256 kubelet[2794]: I0430 12:58:16.726197 2794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8555e1b299d74d47a02508fb10520084c403ede2d5608557d409738eacb0cd8c"} err="failed to get container status \"8555e1b299d74d47a02508fb10520084c403ede2d5608557d409738eacb0cd8c\": rpc error: code = NotFound desc = an error occurred when try to find container \"8555e1b299d74d47a02508fb10520084c403ede2d5608557d409738eacb0cd8c\": not found" Apr 30 12:58:16.726256 kubelet[2794]: I0430 12:58:16.726247 2794 scope.go:117] "RemoveContainer" containerID="a628c0ac7d637b10c826b0e762ed73f7cdbf6e2ee760a3143dfc88dd4a0f6202" Apr 30 12:58:16.726485 containerd[1515]: time="2025-04-30T12:58:16.726354385Z" level=error msg="ContainerStatus for \"a628c0ac7d637b10c826b0e762ed73f7cdbf6e2ee760a3143dfc88dd4a0f6202\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"a628c0ac7d637b10c826b0e762ed73f7cdbf6e2ee760a3143dfc88dd4a0f6202\": not found" Apr 30 12:58:16.726530 kubelet[2794]: E0430 12:58:16.726441 2794 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a628c0ac7d637b10c826b0e762ed73f7cdbf6e2ee760a3143dfc88dd4a0f6202\": not found" containerID="a628c0ac7d637b10c826b0e762ed73f7cdbf6e2ee760a3143dfc88dd4a0f6202" Apr 30 12:58:16.726530 kubelet[2794]: I0430 12:58:16.726470 2794 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a628c0ac7d637b10c826b0e762ed73f7cdbf6e2ee760a3143dfc88dd4a0f6202"} err="failed to get container status \"a628c0ac7d637b10c826b0e762ed73f7cdbf6e2ee760a3143dfc88dd4a0f6202\": rpc error: code = NotFound desc = an error occurred when try to find container \"a628c0ac7d637b10c826b0e762ed73f7cdbf6e2ee760a3143dfc88dd4a0f6202\": not found" Apr 30 12:58:17.141355 kubelet[2794]: I0430 12:58:17.141313 2794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f6287a5-e686-43a9-9b3e-d09836a18e00" path="/var/lib/kubelet/pods/7f6287a5-e686-43a9-9b3e-d09836a18e00/volumes" Apr 30 12:58:17.141996 kubelet[2794]: I0430 12:58:17.141958 2794 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7fd7bd8-6735-4ab8-8c66-fc22c66e77e1" path="/var/lib/kubelet/pods/d7fd7bd8-6735-4ab8-8c66-fc22c66e77e1/volumes" Apr 30 12:58:17.179494 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4fed704bc49f9697679da1ced788717de296c8aa453f85fafbd507eed4e2dc1-rootfs.mount: Deactivated successfully. Apr 30 12:58:17.179853 systemd[1]: var-lib-kubelet-pods-d7fd7bd8\x2d6735\x2d4ab8\x2d8c66\x2dfc22c66e77e1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgn4gb.mount: Deactivated successfully. Apr 30 12:58:17.180030 systemd[1]: var-lib-kubelet-pods-7f6287a5\x2de686\x2d43a9\x2d9b3e\x2dd09836a18e00-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqn8xj.mount: Deactivated successfully. Apr 30 12:58:17.180192 systemd[1]: var-lib-kubelet-pods-7f6287a5\x2de686\x2d43a9\x2d9b3e\x2dd09836a18e00-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 30 12:58:17.180342 systemd[1]: var-lib-kubelet-pods-7f6287a5\x2de686\x2d43a9\x2d9b3e\x2dd09836a18e00-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 30 12:58:17.247163 kubelet[2794]: E0430 12:58:17.241509 2794 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 12:58:18.245835 sshd[4387]: Connection closed by 139.178.68.195 port 35512 Apr 30 12:58:18.246753 sshd-session[4385]: pam_unix(sshd:session): session closed for user core Apr 30 12:58:18.250916 systemd[1]: sshd@20-37.27.3.216:22-139.178.68.195:35512.service: Deactivated successfully. Apr 30 12:58:18.253054 systemd[1]: session-21.scope: Deactivated successfully. Apr 30 12:58:18.254040 systemd-logind[1497]: Session 21 logged out. Waiting for processes to exit. Apr 30 12:58:18.255283 systemd-logind[1497]: Removed session 21. Apr 30 12:58:18.418965 systemd[1]: Started sshd@21-37.27.3.216:22-139.178.68.195:34856.service - OpenSSH per-connection server daemon (139.178.68.195:34856). 
Apr 30 12:58:19.399011 sshd[4551]: Accepted publickey for core from 139.178.68.195 port 34856 ssh2: RSA SHA256:dV5pBDhQJF3aurfsxX04IrzkXSu11tyU76+45DL2eXQ Apr 30 12:58:19.400284 sshd-session[4551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:58:19.405489 systemd-logind[1497]: New session 22 of user core. Apr 30 12:58:19.413744 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 30 12:58:20.358534 kubelet[2794]: E0430 12:58:20.358406 2794 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7f6287a5-e686-43a9-9b3e-d09836a18e00" containerName="mount-cgroup" Apr 30 12:58:20.358534 kubelet[2794]: E0430 12:58:20.358440 2794 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d7fd7bd8-6735-4ab8-8c66-fc22c66e77e1" containerName="cilium-operator" Apr 30 12:58:20.358534 kubelet[2794]: E0430 12:58:20.358447 2794 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7f6287a5-e686-43a9-9b3e-d09836a18e00" containerName="mount-bpf-fs" Apr 30 12:58:20.358534 kubelet[2794]: E0430 12:58:20.358452 2794 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7f6287a5-e686-43a9-9b3e-d09836a18e00" containerName="clean-cilium-state" Apr 30 12:58:20.358534 kubelet[2794]: E0430 12:58:20.358456 2794 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7f6287a5-e686-43a9-9b3e-d09836a18e00" containerName="cilium-agent" Apr 30 12:58:20.358534 kubelet[2794]: E0430 12:58:20.358474 2794 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7f6287a5-e686-43a9-9b3e-d09836a18e00" containerName="apply-sysctl-overwrites" Apr 30 12:58:20.363502 kubelet[2794]: I0430 12:58:20.358508 2794 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7fd7bd8-6735-4ab8-8c66-fc22c66e77e1" containerName="cilium-operator" Apr 30 12:58:20.363502 kubelet[2794]: I0430 12:58:20.363300 2794 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f6287a5-e686-43a9-9b3e-d09836a18e00" containerName="cilium-agent" Apr 30 12:58:20.372909 systemd[1]: Created slice kubepods-burstable-podc43d6e1b_8d9d_453f_9744_e28eee56c1ce.slice - libcontainer container kubepods-burstable-podc43d6e1b_8d9d_453f_9744_e28eee56c1ce.slice. 
Apr 30 12:58:20.456551 kubelet[2794]: I0430 12:58:20.455937 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c43d6e1b-8d9d-453f-9744-e28eee56c1ce-etc-cni-netd\") pod \"cilium-kdkns\" (UID: \"c43d6e1b-8d9d-453f-9744-e28eee56c1ce\") " pod="kube-system/cilium-kdkns" Apr 30 12:58:20.456551 kubelet[2794]: I0430 12:58:20.455990 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c43d6e1b-8d9d-453f-9744-e28eee56c1ce-lib-modules\") pod \"cilium-kdkns\" (UID: \"c43d6e1b-8d9d-453f-9744-e28eee56c1ce\") " pod="kube-system/cilium-kdkns" Apr 30 12:58:20.456551 kubelet[2794]: I0430 12:58:20.456008 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c43d6e1b-8d9d-453f-9744-e28eee56c1ce-bpf-maps\") pod \"cilium-kdkns\" (UID: \"c43d6e1b-8d9d-453f-9744-e28eee56c1ce\") " pod="kube-system/cilium-kdkns" Apr 30 12:58:20.456551 kubelet[2794]: I0430 12:58:20.456024 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c43d6e1b-8d9d-453f-9744-e28eee56c1ce-hostproc\") pod \"cilium-kdkns\" (UID: \"c43d6e1b-8d9d-453f-9744-e28eee56c1ce\") " pod="kube-system/cilium-kdkns" Apr 30 12:58:20.456551 kubelet[2794]: I0430 12:58:20.456038 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c43d6e1b-8d9d-453f-9744-e28eee56c1ce-cilium-config-path\") pod \"cilium-kdkns\" (UID: \"c43d6e1b-8d9d-453f-9744-e28eee56c1ce\") " pod="kube-system/cilium-kdkns" Apr 30 12:58:20.456551 kubelet[2794]: I0430 12:58:20.456153 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c43d6e1b-8d9d-453f-9744-e28eee56c1ce-host-proc-sys-kernel\") pod \"cilium-kdkns\" (UID: \"c43d6e1b-8d9d-453f-9744-e28eee56c1ce\") " pod="kube-system/cilium-kdkns" Apr 30 12:58:20.456929 kubelet[2794]: I0430 12:58:20.456216 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c43d6e1b-8d9d-453f-9744-e28eee56c1ce-hubble-tls\") pod \"cilium-kdkns\" (UID: \"c43d6e1b-8d9d-453f-9744-e28eee56c1ce\") " pod="kube-system/cilium-kdkns" Apr 30 12:58:20.456929 kubelet[2794]: I0430 12:58:20.456277 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sxgh\" (UniqueName: \"kubernetes.io/projected/c43d6e1b-8d9d-453f-9744-e28eee56c1ce-kube-api-access-5sxgh\") pod \"cilium-kdkns\" (UID: \"c43d6e1b-8d9d-453f-9744-e28eee56c1ce\") " pod="kube-system/cilium-kdkns" Apr 30 12:58:20.456929 kubelet[2794]: I0430 12:58:20.456313 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c43d6e1b-8d9d-453f-9744-e28eee56c1ce-cilium-ipsec-secrets\") pod \"cilium-kdkns\" (UID: \"c43d6e1b-8d9d-453f-9744-e28eee56c1ce\") " pod="kube-system/cilium-kdkns" Apr 30 12:58:20.456929 kubelet[2794]: I0430 12:58:20.456351 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" 
(UniqueName: \"kubernetes.io/host-path/c43d6e1b-8d9d-453f-9744-e28eee56c1ce-cilium-cgroup\") pod \"cilium-kdkns\" (UID: \"c43d6e1b-8d9d-453f-9744-e28eee56c1ce\") " pod="kube-system/cilium-kdkns" Apr 30 12:58:20.456929 kubelet[2794]: I0430 12:58:20.456389 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c43d6e1b-8d9d-453f-9744-e28eee56c1ce-cni-path\") pod \"cilium-kdkns\" (UID: \"c43d6e1b-8d9d-453f-9744-e28eee56c1ce\") " pod="kube-system/cilium-kdkns" Apr 30 12:58:20.456929 kubelet[2794]: I0430 12:58:20.456420 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c43d6e1b-8d9d-453f-9744-e28eee56c1ce-clustermesh-secrets\") pod \"cilium-kdkns\" (UID: \"c43d6e1b-8d9d-453f-9744-e28eee56c1ce\") " pod="kube-system/cilium-kdkns" Apr 30 12:58:20.457059 kubelet[2794]: I0430 12:58:20.456460 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c43d6e1b-8d9d-453f-9744-e28eee56c1ce-cilium-run\") pod \"cilium-kdkns\" (UID: \"c43d6e1b-8d9d-453f-9744-e28eee56c1ce\") " pod="kube-system/cilium-kdkns" Apr 30 12:58:20.457059 kubelet[2794]: I0430 12:58:20.456483 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c43d6e1b-8d9d-453f-9744-e28eee56c1ce-xtables-lock\") pod \"cilium-kdkns\" (UID: \"c43d6e1b-8d9d-453f-9744-e28eee56c1ce\") " pod="kube-system/cilium-kdkns" Apr 30 12:58:20.457059 kubelet[2794]: I0430 12:58:20.456497 2794 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c43d6e1b-8d9d-453f-9744-e28eee56c1ce-host-proc-sys-net\") pod \"cilium-kdkns\" (UID: \"c43d6e1b-8d9d-453f-9744-e28eee56c1ce\") " pod="kube-system/cilium-kdkns" Apr 30 12:58:20.563634 sshd[4553]: Connection closed by 139.178.68.195 port 34856 Apr 30 12:58:20.565239 sshd-session[4551]: pam_unix(sshd:session): session closed for user core Apr 30 12:58:20.573517 systemd[1]: sshd@21-37.27.3.216:22-139.178.68.195:34856.service: Deactivated successfully. Apr 30 12:58:20.577217 systemd[1]: session-22.scope: Deactivated successfully. Apr 30 12:58:20.581525 systemd-logind[1497]: Session 22 logged out. Waiting for processes to exit. Apr 30 12:58:20.593274 systemd-logind[1497]: Removed session 22. Apr 30 12:58:20.686012 containerd[1515]: time="2025-04-30T12:58:20.684551597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kdkns,Uid:c43d6e1b-8d9d-453f-9744-e28eee56c1ce,Namespace:kube-system,Attempt:0,}" Apr 30 12:58:20.714000 containerd[1515]: time="2025-04-30T12:58:20.713268176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:58:20.714000 containerd[1515]: time="2025-04-30T12:58:20.713819169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:58:20.714000 containerd[1515]: time="2025-04-30T12:58:20.713844958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:58:20.714000 containerd[1515]: time="2025-04-30T12:58:20.713931871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:58:20.735025 systemd[1]: Started sshd@22-37.27.3.216:22-139.178.68.195:34862.service - OpenSSH per-connection server daemon (139.178.68.195:34862). Apr 30 12:58:20.746944 systemd[1]: Started cri-containerd-3976b8750cc09e7db6668643fe5f187f26613960f57cc43990452a2179c59e35.scope - libcontainer container 3976b8750cc09e7db6668643fe5f187f26613960f57cc43990452a2179c59e35. Apr 30 12:58:20.773436 containerd[1515]: time="2025-04-30T12:58:20.773268903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kdkns,Uid:c43d6e1b-8d9d-453f-9744-e28eee56c1ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"3976b8750cc09e7db6668643fe5f187f26613960f57cc43990452a2179c59e35\"" Apr 30 12:58:20.780814 containerd[1515]: time="2025-04-30T12:58:20.780539911Z" level=info msg="CreateContainer within sandbox \"3976b8750cc09e7db6668643fe5f187f26613960f57cc43990452a2179c59e35\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 12:58:20.792379 containerd[1515]: time="2025-04-30T12:58:20.792322905Z" level=info msg="CreateContainer within sandbox \"3976b8750cc09e7db6668643fe5f187f26613960f57cc43990452a2179c59e35\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7b66ab3da3aa01b359504cfd4ca1f72323764259b78a3d9d39adaf4b520eb7d3\"" Apr 30 12:58:20.794488 containerd[1515]: time="2025-04-30T12:58:20.793667408Z" level=info msg="StartContainer for \"7b66ab3da3aa01b359504cfd4ca1f72323764259b78a3d9d39adaf4b520eb7d3\"" Apr 30 12:58:20.819809 systemd[1]: Started cri-containerd-7b66ab3da3aa01b359504cfd4ca1f72323764259b78a3d9d39adaf4b520eb7d3.scope - libcontainer container 7b66ab3da3aa01b359504cfd4ca1f72323764259b78a3d9d39adaf4b520eb7d3. Apr 30 12:58:20.835143 kubelet[2794]: I0430 12:58:20.835088 2794 setters.go:600] "Node became not ready" node="ci-4230-1-1-d-a2f51ba0c1" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-04-30T12:58:20Z","lastTransitionTime":"2025-04-30T12:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 30 12:58:20.850101 containerd[1515]: time="2025-04-30T12:58:20.849951271Z" level=info msg="StartContainer for \"7b66ab3da3aa01b359504cfd4ca1f72323764259b78a3d9d39adaf4b520eb7d3\" returns successfully" Apr 30 12:58:20.864337 systemd[1]: cri-containerd-7b66ab3da3aa01b359504cfd4ca1f72323764259b78a3d9d39adaf4b520eb7d3.scope: Deactivated successfully. Apr 30 12:58:20.864649 systemd[1]: cri-containerd-7b66ab3da3aa01b359504cfd4ca1f72323764259b78a3d9d39adaf4b520eb7d3.scope: Consumed 20ms CPU time, 8.6M memory peak, 2.2M read from disk. 
Apr 30 12:58:20.896471 containerd[1515]: time="2025-04-30T12:58:20.896406917Z" level=info msg="shim disconnected" id=7b66ab3da3aa01b359504cfd4ca1f72323764259b78a3d9d39adaf4b520eb7d3 namespace=k8s.io Apr 30 12:58:20.896471 containerd[1515]: time="2025-04-30T12:58:20.896460918Z" level=warning msg="cleaning up after shim disconnected" id=7b66ab3da3aa01b359504cfd4ca1f72323764259b78a3d9d39adaf4b520eb7d3 namespace=k8s.io Apr 30 12:58:20.896471 containerd[1515]: time="2025-04-30T12:58:20.896468372Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:58:20.906469 containerd[1515]: time="2025-04-30T12:58:20.906412937Z" level=warning msg="cleanup warnings time=\"2025-04-30T12:58:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 12:58:21.704449 sshd[4597]: Accepted publickey for core from 139.178.68.195 port 34862 ssh2: RSA SHA256:dV5pBDhQJF3aurfsxX04IrzkXSu11tyU76+45DL2eXQ Apr 30 12:58:21.705943 sshd-session[4597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:58:21.710372 systemd-logind[1497]: New session 23 of user core. Apr 30 12:58:21.713593 containerd[1515]: time="2025-04-30T12:58:21.713289138Z" level=info msg="CreateContainer within sandbox \"3976b8750cc09e7db6668643fe5f187f26613960f57cc43990452a2179c59e35\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 12:58:21.716745 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 30 12:58:21.729234 containerd[1515]: time="2025-04-30T12:58:21.729169458Z" level=info msg="CreateContainer within sandbox \"3976b8750cc09e7db6668643fe5f187f26613960f57cc43990452a2179c59e35\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"010a207966999064dd5a51c615960bf4962ddd75af7d56f25bb6ce86e346cf3e\"" Apr 30 12:58:21.730071 containerd[1515]: time="2025-04-30T12:58:21.730052655Z" level=info msg="StartContainer for \"010a207966999064dd5a51c615960bf4962ddd75af7d56f25bb6ce86e346cf3e\"" Apr 30 12:58:21.765767 systemd[1]: Started cri-containerd-010a207966999064dd5a51c615960bf4962ddd75af7d56f25bb6ce86e346cf3e.scope - libcontainer container 010a207966999064dd5a51c615960bf4962ddd75af7d56f25bb6ce86e346cf3e. Apr 30 12:58:21.789629 containerd[1515]: time="2025-04-30T12:58:21.789564303Z" level=info msg="StartContainer for \"010a207966999064dd5a51c615960bf4962ddd75af7d56f25bb6ce86e346cf3e\" returns successfully" Apr 30 12:58:21.798488 systemd[1]: cri-containerd-010a207966999064dd5a51c615960bf4962ddd75af7d56f25bb6ce86e346cf3e.scope: Deactivated successfully. Apr 30 12:58:21.798941 systemd[1]: cri-containerd-010a207966999064dd5a51c615960bf4962ddd75af7d56f25bb6ce86e346cf3e.scope: Consumed 16ms CPU time, 6.6M memory peak, 1.2M read from disk. 
Apr 30 12:58:21.823821 containerd[1515]: time="2025-04-30T12:58:21.823730715Z" level=info msg="shim disconnected" id=010a207966999064dd5a51c615960bf4962ddd75af7d56f25bb6ce86e346cf3e namespace=k8s.io Apr 30 12:58:21.823821 containerd[1515]: time="2025-04-30T12:58:21.823790548Z" level=warning msg="cleaning up after shim disconnected" id=010a207966999064dd5a51c615960bf4962ddd75af7d56f25bb6ce86e346cf3e namespace=k8s.io Apr 30 12:58:21.823821 containerd[1515]: time="2025-04-30T12:58:21.823802730Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:58:22.248817 kubelet[2794]: E0430 12:58:22.248739 2794 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 12:58:22.377576 sshd[4677]: Connection closed by 139.178.68.195 port 34862 Apr 30 12:58:22.378266 sshd-session[4597]: pam_unix(sshd:session): session closed for user core Apr 30 12:58:22.382387 systemd-logind[1497]: Session 23 logged out. Waiting for processes to exit. Apr 30 12:58:22.382746 systemd[1]: sshd@22-37.27.3.216:22-139.178.68.195:34862.service: Deactivated successfully. Apr 30 12:58:22.384756 systemd[1]: session-23.scope: Deactivated successfully. Apr 30 12:58:22.386015 systemd-logind[1497]: Removed session 23. Apr 30 12:58:22.549911 systemd[1]: Started sshd@23-37.27.3.216:22-139.178.68.195:34864.service - OpenSSH per-connection server daemon (139.178.68.195:34864). Apr 30 12:58:22.564497 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-010a207966999064dd5a51c615960bf4962ddd75af7d56f25bb6ce86e346cf3e-rootfs.mount: Deactivated successfully. Apr 30 12:58:22.719654 containerd[1515]: time="2025-04-30T12:58:22.717419508Z" level=info msg="CreateContainer within sandbox \"3976b8750cc09e7db6668643fe5f187f26613960f57cc43990452a2179c59e35\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 12:58:22.740088 containerd[1515]: time="2025-04-30T12:58:22.740029392Z" level=info msg="CreateContainer within sandbox \"3976b8750cc09e7db6668643fe5f187f26613960f57cc43990452a2179c59e35\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1bbcfbfd997981b42f905c77ab91dd67ef7be2e3ce6a014eaf6bd2c892c54e59\"" Apr 30 12:58:22.740644 containerd[1515]: time="2025-04-30T12:58:22.740622125Z" level=info msg="StartContainer for \"1bbcfbfd997981b42f905c77ab91dd67ef7be2e3ce6a014eaf6bd2c892c54e59\"" Apr 30 12:58:22.769738 systemd[1]: Started cri-containerd-1bbcfbfd997981b42f905c77ab91dd67ef7be2e3ce6a014eaf6bd2c892c54e59.scope - libcontainer container 1bbcfbfd997981b42f905c77ab91dd67ef7be2e3ce6a014eaf6bd2c892c54e59. Apr 30 12:58:22.792938 containerd[1515]: time="2025-04-30T12:58:22.792765790Z" level=info msg="StartContainer for \"1bbcfbfd997981b42f905c77ab91dd67ef7be2e3ce6a014eaf6bd2c892c54e59\" returns successfully" Apr 30 12:58:22.798075 systemd[1]: cri-containerd-1bbcfbfd997981b42f905c77ab91dd67ef7be2e3ce6a014eaf6bd2c892c54e59.scope: Deactivated successfully. 
Apr 30 12:58:22.821151 containerd[1515]: time="2025-04-30T12:58:22.821022196Z" level=info msg="shim disconnected" id=1bbcfbfd997981b42f905c77ab91dd67ef7be2e3ce6a014eaf6bd2c892c54e59 namespace=k8s.io Apr 30 12:58:22.821517 containerd[1515]: time="2025-04-30T12:58:22.821328130Z" level=warning msg="cleaning up after shim disconnected" id=1bbcfbfd997981b42f905c77ab91dd67ef7be2e3ce6a014eaf6bd2c892c54e59 namespace=k8s.io Apr 30 12:58:22.821517 containerd[1515]: time="2025-04-30T12:58:22.821348228Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:58:23.523711 sshd[4749]: Accepted publickey for core from 139.178.68.195 port 34864 ssh2: RSA SHA256:dV5pBDhQJF3aurfsxX04IrzkXSu11tyU76+45DL2eXQ Apr 30 12:58:23.525101 sshd-session[4749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:58:23.530432 systemd-logind[1497]: New session 24 of user core. Apr 30 12:58:23.535777 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 30 12:58:23.564534 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1bbcfbfd997981b42f905c77ab91dd67ef7be2e3ce6a014eaf6bd2c892c54e59-rootfs.mount: Deactivated successfully. Apr 30 12:58:23.719313 containerd[1515]: time="2025-04-30T12:58:23.719260445Z" level=info msg="CreateContainer within sandbox \"3976b8750cc09e7db6668643fe5f187f26613960f57cc43990452a2179c59e35\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 12:58:23.738542 containerd[1515]: time="2025-04-30T12:58:23.738234588Z" level=info msg="CreateContainer within sandbox \"3976b8750cc09e7db6668643fe5f187f26613960f57cc43990452a2179c59e35\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b370da35a26bb8efb37444e06ea02e4321e3cab1a4ebf9231e7ffa17ba16583a\"" Apr 30 12:58:23.740977 containerd[1515]: time="2025-04-30T12:58:23.740252234Z" level=info msg="StartContainer for \"b370da35a26bb8efb37444e06ea02e4321e3cab1a4ebf9231e7ffa17ba16583a\"" Apr 30 12:58:23.770742 systemd[1]: Started cri-containerd-b370da35a26bb8efb37444e06ea02e4321e3cab1a4ebf9231e7ffa17ba16583a.scope - libcontainer container b370da35a26bb8efb37444e06ea02e4321e3cab1a4ebf9231e7ffa17ba16583a. Apr 30 12:58:23.788330 systemd[1]: cri-containerd-b370da35a26bb8efb37444e06ea02e4321e3cab1a4ebf9231e7ffa17ba16583a.scope: Deactivated successfully. Apr 30 12:58:23.791952 containerd[1515]: time="2025-04-30T12:58:23.791913274Z" level=info msg="StartContainer for \"b370da35a26bb8efb37444e06ea02e4321e3cab1a4ebf9231e7ffa17ba16583a\" returns successfully" Apr 30 12:58:23.815384 containerd[1515]: time="2025-04-30T12:58:23.815315344Z" level=info msg="shim disconnected" id=b370da35a26bb8efb37444e06ea02e4321e3cab1a4ebf9231e7ffa17ba16583a namespace=k8s.io Apr 30 12:58:23.815384 containerd[1515]: time="2025-04-30T12:58:23.815376599Z" level=warning msg="cleaning up after shim disconnected" id=b370da35a26bb8efb37444e06ea02e4321e3cab1a4ebf9231e7ffa17ba16583a namespace=k8s.io Apr 30 12:58:23.815384 containerd[1515]: time="2025-04-30T12:58:23.815384273Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:58:24.564313 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b370da35a26bb8efb37444e06ea02e4321e3cab1a4ebf9231e7ffa17ba16583a-rootfs.mount: Deactivated successfully. 
Apr 30 12:58:24.725182 containerd[1515]: time="2025-04-30T12:58:24.725134136Z" level=info msg="CreateContainer within sandbox \"3976b8750cc09e7db6668643fe5f187f26613960f57cc43990452a2179c59e35\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 12:58:24.744409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3217378943.mount: Deactivated successfully. Apr 30 12:58:24.745408 containerd[1515]: time="2025-04-30T12:58:24.744808051Z" level=info msg="CreateContainer within sandbox \"3976b8750cc09e7db6668643fe5f187f26613960f57cc43990452a2179c59e35\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e367dafe0bc39c184279c52517047621ce5064c4c02cfdaef48c12076633cc13\"" Apr 30 12:58:24.750652 containerd[1515]: time="2025-04-30T12:58:24.750327134Z" level=info msg="StartContainer for \"e367dafe0bc39c184279c52517047621ce5064c4c02cfdaef48c12076633cc13\"" Apr 30 12:58:24.779832 systemd[1]: Started cri-containerd-e367dafe0bc39c184279c52517047621ce5064c4c02cfdaef48c12076633cc13.scope - libcontainer container e367dafe0bc39c184279c52517047621ce5064c4c02cfdaef48c12076633cc13. Apr 30 12:58:24.807414 containerd[1515]: time="2025-04-30T12:58:24.807379159Z" level=info msg="StartContainer for \"e367dafe0bc39c184279c52517047621ce5064c4c02cfdaef48c12076633cc13\" returns successfully" Apr 30 12:58:25.315649 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Apr 30 12:58:25.743321 kubelet[2794]: I0430 12:58:25.740095 2794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kdkns" podStartSLOduration=5.740075365 podStartE2EDuration="5.740075365s" podCreationTimestamp="2025-04-30 12:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:58:25.7397576 +0000 UTC m=+218.699541330" watchObservedRunningTime="2025-04-30 12:58:25.740075365 +0000 UTC m=+218.699859096" Apr 30 12:58:26.411070 systemd[1]: run-containerd-runc-k8s.io-e367dafe0bc39c184279c52517047621ce5064c4c02cfdaef48c12076633cc13-runc.R0lfpa.mount: Deactivated successfully. Apr 30 12:58:28.369720 systemd-networkd[1428]: lxc_health: Link UP Apr 30 12:58:28.370401 systemd-networkd[1428]: lxc_health: Gained carrier Apr 30 12:58:28.583915 systemd[1]: run-containerd-runc-k8s.io-e367dafe0bc39c184279c52517047621ce5064c4c02cfdaef48c12076633cc13-runc.htQd18.mount: Deactivated successfully. Apr 30 12:58:30.166145 systemd-networkd[1428]: lxc_health: Gained IPv6LL Apr 30 12:58:35.194730 sshd[4808]: Connection closed by 139.178.68.195 port 34864 Apr 30 12:58:35.195793 sshd-session[4749]: pam_unix(sshd:session): session closed for user core Apr 30 12:58:35.198911 systemd[1]: sshd@23-37.27.3.216:22-139.178.68.195:34864.service: Deactivated successfully. Apr 30 12:58:35.201425 systemd[1]: session-24.scope: Deactivated successfully. Apr 30 12:58:35.203115 systemd-logind[1497]: Session 24 logged out. Waiting for processes to exit. Apr 30 12:58:35.204362 systemd-logind[1497]: Removed session 24.