Apr 30 00:36:11.019772 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 22:31:30 -00 2025 Apr 30 00:36:11.019806 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=079594ab73b0b9c3f57b251ae4a9c4ba48b1d8cf52fcc550cc89261eb22129fc Apr 30 00:36:11.019820 kernel: BIOS-provided physical RAM map: Apr 30 00:36:11.019830 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Apr 30 00:36:11.019839 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Apr 30 00:36:11.019848 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Apr 30 00:36:11.019859 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable Apr 30 00:36:11.019869 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved Apr 30 00:36:11.019881 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 30 00:36:11.019902 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Apr 30 00:36:11.019911 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Apr 30 00:36:11.019921 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Apr 30 00:36:11.019930 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Apr 30 00:36:11.019939 kernel: NX (Execute Disable) protection: active Apr 30 00:36:11.019953 kernel: APIC: Static calls initialized Apr 30 00:36:11.019964 kernel: SMBIOS 3.0.0 present. 
Apr 30 00:36:11.019975 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 Apr 30 00:36:11.019985 kernel: Hypervisor detected: KVM Apr 30 00:36:11.019995 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 30 00:36:11.020005 kernel: kvm-clock: using sched offset of 3100503534 cycles Apr 30 00:36:11.020016 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 30 00:36:11.020027 kernel: tsc: Detected 2495.310 MHz processor Apr 30 00:36:11.020038 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 30 00:36:11.020051 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 30 00:36:11.020062 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000 Apr 30 00:36:11.020073 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Apr 30 00:36:11.020084 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 30 00:36:11.020094 kernel: Using GB pages for direct mapping Apr 30 00:36:11.020105 kernel: ACPI: Early table checksum verification disabled Apr 30 00:36:11.020115 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS ) Apr 30 00:36:11.020126 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:36:11.020136 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:36:11.020172 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:36:11.020182 kernel: ACPI: FACS 0x000000007CFE0000 000040 Apr 30 00:36:11.020193 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:36:11.020204 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:36:11.020214 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 00:36:11.020224 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 
00000001 BXPC 00000001) Apr 30 00:36:11.020235 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540] Apr 30 00:36:11.020246 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c] Apr 30 00:36:11.020263 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f] Apr 30 00:36:11.020274 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0] Apr 30 00:36:11.020284 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8] Apr 30 00:36:11.020295 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634] Apr 30 00:36:11.020306 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c] Apr 30 00:36:11.020317 kernel: No NUMA configuration found Apr 30 00:36:11.020330 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff] Apr 30 00:36:11.020341 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff] Apr 30 00:36:11.020352 kernel: Zone ranges: Apr 30 00:36:11.020363 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 30 00:36:11.020374 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff] Apr 30 00:36:11.020385 kernel: Normal empty Apr 30 00:36:11.020396 kernel: Movable zone start for each node Apr 30 00:36:11.020406 kernel: Early memory node ranges Apr 30 00:36:11.020417 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Apr 30 00:36:11.020428 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff] Apr 30 00:36:11.020441 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff] Apr 30 00:36:11.020452 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 30 00:36:11.020463 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Apr 30 00:36:11.020474 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Apr 30 00:36:11.020484 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 30 00:36:11.020495 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 30 00:36:11.020506 kernel: IOAPIC[0]: 
apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 30 00:36:11.020517 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 30 00:36:11.020528 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 30 00:36:11.020541 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 30 00:36:11.020552 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 30 00:36:11.020563 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 30 00:36:11.020574 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 30 00:36:11.020585 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 30 00:36:11.020595 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 30 00:36:11.020606 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 30 00:36:11.020617 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Apr 30 00:36:11.020628 kernel: Booting paravirtualized kernel on KVM Apr 30 00:36:11.020641 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 30 00:36:11.020653 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 30 00:36:11.020664 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 Apr 30 00:36:11.020674 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Apr 30 00:36:11.020685 kernel: pcpu-alloc: [0] 0 1 Apr 30 00:36:11.020710 kernel: kvm-guest: PV spinlocks disabled, no host support Apr 30 00:36:11.020725 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=079594ab73b0b9c3f57b251ae4a9c4ba48b1d8cf52fcc550cc89261eb22129fc Apr 30 00:36:11.020737 kernel: 
Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 30 00:36:11.020750 kernel: random: crng init done Apr 30 00:36:11.020761 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 00:36:11.020772 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Apr 30 00:36:11.020783 kernel: Fallback order for Node 0: 0 Apr 30 00:36:11.020794 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708 Apr 30 00:36:11.020804 kernel: Policy zone: DMA32 Apr 30 00:36:11.020815 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 30 00:36:11.020827 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42992K init, 2200K bss, 125152K reserved, 0K cma-reserved) Apr 30 00:36:11.020838 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 30 00:36:11.020852 kernel: ftrace: allocating 37946 entries in 149 pages Apr 30 00:36:11.020862 kernel: ftrace: allocated 149 pages with 4 groups Apr 30 00:36:11.020873 kernel: Dynamic Preempt: voluntary Apr 30 00:36:11.020884 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 30 00:36:11.020896 kernel: rcu: RCU event tracing is enabled. Apr 30 00:36:11.020907 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 30 00:36:11.020918 kernel: Trampoline variant of Tasks RCU enabled. Apr 30 00:36:11.020929 kernel: Rude variant of Tasks RCU enabled. Apr 30 00:36:11.020940 kernel: Tracing variant of Tasks RCU enabled. Apr 30 00:36:11.020951 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 30 00:36:11.020964 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 30 00:36:11.020975 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Apr 30 00:36:11.020986 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 30 00:36:11.020997 kernel: Console: colour VGA+ 80x25 Apr 30 00:36:11.021008 kernel: printk: console [tty0] enabled Apr 30 00:36:11.021019 kernel: printk: console [ttyS0] enabled Apr 30 00:36:11.021030 kernel: ACPI: Core revision 20230628 Apr 30 00:36:11.021041 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 30 00:36:11.021052 kernel: APIC: Switch to symmetric I/O mode setup Apr 30 00:36:11.021065 kernel: x2apic enabled Apr 30 00:36:11.021076 kernel: APIC: Switched APIC routing to: physical x2apic Apr 30 00:36:11.021087 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 30 00:36:11.021098 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Apr 30 00:36:11.021109 kernel: Calibrating delay loop (skipped) preset value.. 4990.62 BogoMIPS (lpj=2495310) Apr 30 00:36:11.021120 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 30 00:36:11.021131 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Apr 30 00:36:11.024770 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Apr 30 00:36:11.024801 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 30 00:36:11.024813 kernel: Spectre V2 : Mitigation: Retpolines Apr 30 00:36:11.024825 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Apr 30 00:36:11.024838 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Apr 30 00:36:11.024850 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Apr 30 00:36:11.024861 kernel: RETBleed: Mitigation: untrained return thunk Apr 30 00:36:11.024873 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Apr 30 00:36:11.024885 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Apr 30 00:36:11.024897 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 30 00:36:11.024910 
kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 30 00:36:11.024922 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 30 00:36:11.024933 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 30 00:36:11.024945 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Apr 30 00:36:11.024957 kernel: Freeing SMP alternatives memory: 32K Apr 30 00:36:11.024968 kernel: pid_max: default: 32768 minimum: 301 Apr 30 00:36:11.024980 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 30 00:36:11.024991 kernel: landlock: Up and running. Apr 30 00:36:11.025004 kernel: SELinux: Initializing. Apr 30 00:36:11.025016 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Apr 30 00:36:11.025028 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Apr 30 00:36:11.025040 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0) Apr 30 00:36:11.025051 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 00:36:11.025063 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 00:36:11.025074 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 00:36:11.025087 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Apr 30 00:36:11.025098 kernel: ... version: 0 Apr 30 00:36:11.025111 kernel: ... bit width: 48 Apr 30 00:36:11.025123 kernel: ... generic registers: 6 Apr 30 00:36:11.025134 kernel: ... value mask: 0000ffffffffffff Apr 30 00:36:11.025163 kernel: ... max period: 00007fffffffffff Apr 30 00:36:11.025174 kernel: ... fixed-purpose events: 0 Apr 30 00:36:11.025186 kernel: ... 
event mask: 000000000000003f Apr 30 00:36:11.025197 kernel: signal: max sigframe size: 1776 Apr 30 00:36:11.025209 kernel: rcu: Hierarchical SRCU implementation. Apr 30 00:36:11.025221 kernel: rcu: Max phase no-delay instances is 400. Apr 30 00:36:11.025235 kernel: smp: Bringing up secondary CPUs ... Apr 30 00:36:11.025246 kernel: smpboot: x86: Booting SMP configuration: Apr 30 00:36:11.025258 kernel: .... node #0, CPUs: #1 Apr 30 00:36:11.025269 kernel: smp: Brought up 1 node, 2 CPUs Apr 30 00:36:11.025281 kernel: smpboot: Max logical packages: 1 Apr 30 00:36:11.025292 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS) Apr 30 00:36:11.025304 kernel: devtmpfs: initialized Apr 30 00:36:11.025315 kernel: x86/mm: Memory block size: 128MB Apr 30 00:36:11.025327 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 30 00:36:11.025341 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 30 00:36:11.025352 kernel: pinctrl core: initialized pinctrl subsystem Apr 30 00:36:11.025364 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 30 00:36:11.025376 kernel: audit: initializing netlink subsys (disabled) Apr 30 00:36:11.025387 kernel: audit: type=2000 audit(1745973369.567:1): state=initialized audit_enabled=0 res=1 Apr 30 00:36:11.025399 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 30 00:36:11.025410 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 30 00:36:11.025421 kernel: cpuidle: using governor menu Apr 30 00:36:11.025433 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 30 00:36:11.025446 kernel: dca service started, version 1.12.1 Apr 30 00:36:11.025458 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 30 00:36:11.025470 kernel: PCI: Using configuration type 1 for base access Apr 30 00:36:11.025481 kernel: kprobes: kprobe jump-optimization is enabled. 
All kprobes are optimized if possible. Apr 30 00:36:11.025493 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 30 00:36:11.025505 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 30 00:36:11.025516 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 30 00:36:11.025528 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 30 00:36:11.025539 kernel: ACPI: Added _OSI(Module Device) Apr 30 00:36:11.025553 kernel: ACPI: Added _OSI(Processor Device) Apr 30 00:36:11.025564 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 30 00:36:11.025576 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 30 00:36:11.025587 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 30 00:36:11.025599 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 30 00:36:11.025610 kernel: ACPI: Interpreter enabled Apr 30 00:36:11.025622 kernel: ACPI: PM: (supports S0 S5) Apr 30 00:36:11.025634 kernel: ACPI: Using IOAPIC for interrupt routing Apr 30 00:36:11.025645 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 30 00:36:11.025659 kernel: PCI: Using E820 reservations for host bridge windows Apr 30 00:36:11.025671 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 30 00:36:11.025682 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 30 00:36:11.025958 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 30 00:36:11.026090 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 30 00:36:11.026258 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 30 00:36:11.026275 kernel: PCI host bridge to bus 0000:00 Apr 30 00:36:11.026399 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 30 00:36:11.026515 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 30 00:36:11.026623 kernel: pci_bus 
0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 30 00:36:11.026744 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window] Apr 30 00:36:11.026852 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 30 00:36:11.026958 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Apr 30 00:36:11.027063 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 30 00:36:11.028340 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 30 00:36:11.028491 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 Apr 30 00:36:11.028625 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref] Apr 30 00:36:11.028764 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref] Apr 30 00:36:11.028890 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff] Apr 30 00:36:11.029014 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref] Apr 30 00:36:11.029138 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 30 00:36:11.032293 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Apr 30 00:36:11.032419 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff] Apr 30 00:36:11.032552 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Apr 30 00:36:11.032674 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff] Apr 30 00:36:11.032818 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Apr 30 00:36:11.032942 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff] Apr 30 00:36:11.033078 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Apr 30 00:36:11.033266 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff] Apr 30 00:36:11.033397 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Apr 30 00:36:11.033520 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff] Apr 30 00:36:11.033649 kernel: pci 0000:00:02.5: 
[1b36:000c] type 01 class 0x060400 Apr 30 00:36:11.033791 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff] Apr 30 00:36:11.033928 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Apr 30 00:36:11.034051 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff] Apr 30 00:36:11.034301 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Apr 30 00:36:11.034426 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff] Apr 30 00:36:11.034553 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Apr 30 00:36:11.034673 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff] Apr 30 00:36:11.034779 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 30 00:36:11.034853 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 30 00:36:11.034946 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 30 00:36:11.035017 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f] Apr 30 00:36:11.035085 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff] Apr 30 00:36:11.035203 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 30 00:36:11.035281 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Apr 30 00:36:11.035364 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Apr 30 00:36:11.035437 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff] Apr 30 00:36:11.035510 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Apr 30 00:36:11.035583 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref] Apr 30 00:36:11.035654 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Apr 30 00:36:11.035739 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Apr 30 00:36:11.035811 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Apr 30 00:36:11.035895 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Apr 30 00:36:11.035970 kernel: pci 0000:02:00.0: reg 
0x10: [mem 0xfe600000-0xfe603fff 64bit] Apr 30 00:36:11.036042 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Apr 30 00:36:11.036113 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Apr 30 00:36:11.036215 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Apr 30 00:36:11.036296 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Apr 30 00:36:11.036373 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff] Apr 30 00:36:11.036447 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref] Apr 30 00:36:11.036516 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Apr 30 00:36:11.036586 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Apr 30 00:36:11.036654 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Apr 30 00:36:11.036752 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Apr 30 00:36:11.036827 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Apr 30 00:36:11.036903 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Apr 30 00:36:11.036974 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Apr 30 00:36:11.037044 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Apr 30 00:36:11.037124 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Apr 30 00:36:11.037229 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff] Apr 30 00:36:11.037304 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref] Apr 30 00:36:11.037376 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Apr 30 00:36:11.037449 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Apr 30 00:36:11.037519 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Apr 30 00:36:11.037600 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Apr 30 00:36:11.037674 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff] Apr 30 
00:36:11.037760 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref] Apr 30 00:36:11.037833 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Apr 30 00:36:11.037903 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Apr 30 00:36:11.037973 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Apr 30 00:36:11.037985 kernel: acpiphp: Slot [0] registered Apr 30 00:36:11.038066 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Apr 30 00:36:11.038158 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff] Apr 30 00:36:11.038235 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref] Apr 30 00:36:11.038309 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref] Apr 30 00:36:11.038379 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Apr 30 00:36:11.038450 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Apr 30 00:36:11.038524 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Apr 30 00:36:11.038533 kernel: acpiphp: Slot [0-2] registered Apr 30 00:36:11.038602 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Apr 30 00:36:11.038673 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Apr 30 00:36:11.038755 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Apr 30 00:36:11.038765 kernel: acpiphp: Slot [0-3] registered Apr 30 00:36:11.038834 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Apr 30 00:36:11.038904 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Apr 30 00:36:11.038975 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Apr 30 00:36:11.038987 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 30 00:36:11.038994 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 30 00:36:11.039001 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 30 00:36:11.039008 kernel: ACPI: PCI: Interrupt 
link LNKD configured for IRQ 11 Apr 30 00:36:11.039015 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 30 00:36:11.039022 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 30 00:36:11.039029 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 30 00:36:11.039035 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 30 00:36:11.039043 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 30 00:36:11.039050 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 30 00:36:11.039057 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 30 00:36:11.039064 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Apr 30 00:36:11.039071 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 30 00:36:11.039077 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 30 00:36:11.039084 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 30 00:36:11.039091 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 30 00:36:11.039098 kernel: iommu: Default domain type: Translated Apr 30 00:36:11.039106 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 30 00:36:11.039113 kernel: PCI: Using ACPI for IRQ routing Apr 30 00:36:11.039120 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 30 00:36:11.039126 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Apr 30 00:36:11.039133 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff] Apr 30 00:36:11.039276 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 30 00:36:11.039369 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 30 00:36:11.039441 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 30 00:36:11.039450 kernel: vgaarb: loaded Apr 30 00:36:11.039461 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 30 00:36:11.039468 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 30 00:36:11.039475 
kernel: clocksource: Switched to clocksource kvm-clock Apr 30 00:36:11.039482 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 00:36:11.039489 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 00:36:11.039496 kernel: pnp: PnP ACPI init Apr 30 00:36:11.039572 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 30 00:36:11.039583 kernel: pnp: PnP ACPI: found 5 devices Apr 30 00:36:11.039592 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 30 00:36:11.039599 kernel: NET: Registered PF_INET protocol family Apr 30 00:36:11.039606 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 30 00:36:11.039613 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Apr 30 00:36:11.039623 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 00:36:11.039632 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 30 00:36:11.039647 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Apr 30 00:36:11.039658 kernel: TCP: Hash tables configured (established 16384 bind 16384) Apr 30 00:36:11.039668 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Apr 30 00:36:11.039681 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Apr 30 00:36:11.039690 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 00:36:11.039708 kernel: NET: Registered PF_XDP protocol family Apr 30 00:36:11.039798 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Apr 30 00:36:11.039871 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Apr 30 00:36:11.039952 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Apr 30 00:36:11.040023 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff] Apr 30 00:36:11.040098 kernel: 
pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Apr 30 00:36:11.040186 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Apr 30 00:36:11.040257 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 30 00:36:11.040327 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Apr 30 00:36:11.040396 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Apr 30 00:36:11.040468 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 30 00:36:11.040539 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Apr 30 00:36:11.040610 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Apr 30 00:36:11.040684 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 30 00:36:11.040773 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Apr 30 00:36:11.040844 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Apr 30 00:36:11.040915 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 30 00:36:11.040985 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Apr 30 00:36:11.041055 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Apr 30 00:36:11.041126 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 30 00:36:11.041337 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Apr 30 00:36:11.041428 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Apr 30 00:36:11.041500 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 30 00:36:11.041570 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Apr 30 00:36:11.041640 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Apr 30 00:36:11.041719 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 30 00:36:11.041788 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Apr 30 00:36:11.041858 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Apr 30 00:36:11.041926 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Apr 30 00:36:11.041994 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 30 00:36:11.042064 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Apr 30 00:36:11.042136 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Apr 30 00:36:11.042317 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Apr 30 00:36:11.042390 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 30 00:36:11.042560 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Apr 30 00:36:11.042637 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Apr 30 00:36:11.042726 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Apr 30 00:36:11.042800 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 30 00:36:11.042864 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 30 00:36:11.042926 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 30 00:36:11.042990 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Apr 30 00:36:11.043056 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 30 00:36:11.043118 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 30 00:36:11.043277 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Apr 30 00:36:11.043345 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Apr 30 00:36:11.043416 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Apr 30 00:36:11.043481 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Apr 30 00:36:11.043561 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Apr 30 00:36:11.043632 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Apr 30 00:36:11.043730 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Apr 30 00:36:11.043798 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Apr 30 00:36:11.043870 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Apr 30 00:36:11.043935 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Apr 30 00:36:11.044006 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Apr 30 00:36:11.044075 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Apr 30 00:36:11.044160 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Apr 30 00:36:11.044229 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Apr 30 00:36:11.044294 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Apr 30 00:36:11.044364 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Apr 30 00:36:11.044430 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Apr 30 00:36:11.044498 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Apr 30 00:36:11.044613 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Apr 30 00:36:11.044723 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Apr 30 00:36:11.044793 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Apr 30 00:36:11.044803 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 30 00:36:11.044811 kernel: PCI: CLS 0 bytes, default 64
Apr 30 00:36:11.044818 kernel: Initialise system trusted keyrings
Apr 30 00:36:11.044826 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 30 00:36:11.044836 kernel: Key type asymmetric registered
Apr 30 00:36:11.044843 kernel: Asymmetric key parser 'x509' registered
Apr 30 00:36:11.044851 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 30 00:36:11.044859 kernel: io scheduler mq-deadline registered
Apr 30 00:36:11.044866 kernel: io scheduler kyber registered
Apr 30 00:36:11.044873 kernel: io scheduler bfq registered
Apr 30 00:36:11.044947 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Apr 30 00:36:11.045019 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Apr 30 00:36:11.045090 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Apr 30 00:36:11.045239 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Apr 30 00:36:11.045313 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Apr 30 00:36:11.045384 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Apr 30 00:36:11.045454 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Apr 30 00:36:11.045524 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Apr 30 00:36:11.045593 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Apr 30 00:36:11.045661 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Apr 30 00:36:11.045749 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Apr 30 00:36:11.045829 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Apr 30 00:36:11.045901 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Apr 30 00:36:11.045972 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Apr 30 00:36:11.046042 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Apr 30 00:36:11.046111 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Apr 30 00:36:11.046122 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 30 00:36:11.046203 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Apr 30 00:36:11.046274 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Apr 30 00:36:11.046288 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 30 00:36:11.046296 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Apr 30 00:36:11.046303 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 00:36:11.046311 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 30 00:36:11.046318 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 30 00:36:11.046326 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 30 00:36:11.046333 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 30 00:36:11.046443 kernel: rtc_cmos 00:03: RTC can wake from S4
Apr 30 00:36:11.046456 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 30 00:36:11.046524 kernel: rtc_cmos 00:03: registered as rtc0
Apr 30 00:36:11.046588 kernel: rtc_cmos 00:03: setting system clock to 2025-04-30T00:36:10 UTC (1745973370)
Apr 30 00:36:11.046651 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 30 00:36:11.046661 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Apr 30 00:36:11.046669 kernel: NET: Registered PF_INET6 protocol family
Apr 30 00:36:11.046676 kernel: Segment Routing with IPv6
Apr 30 00:36:11.046686 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 00:36:11.046707 kernel: NET: Registered PF_PACKET protocol family
Apr 30 00:36:11.046716 kernel: Key type dns_resolver registered
Apr 30 00:36:11.046723 kernel: IPI shorthand broadcast: enabled
Apr 30 00:36:11.046730 kernel: sched_clock: Marking stable (1289009811, 142283795)->(1442741055, -11447449)
Apr 30 00:36:11.046738 kernel: registered taskstats version 1
Apr 30 00:36:11.046745 kernel: Loading compiled-in X.509 certificates
Apr 30 00:36:11.046753 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: eb8928891d93dabd1aa89590482110d196038597'
Apr 30 00:36:11.046760 kernel: Key type .fscrypt registered
Apr 30 00:36:11.046767 kernel: Key type fscrypt-provisioning registered
Apr 30 00:36:11.046775 kernel: ima: No TPM chip found, activating TPM-bypass!
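The rtc_cmos entry above pairs a wall-clock time with its Unix epoch value ("2025-04-30T00:36:10 UTC (1745973370)"). When auditing a log like this one, it can be handy to cross-check that such pairs are self-consistent; a minimal sketch using only the Python standard library (the function name `epoch_matches` is our own, not anything from the log):

```python
from datetime import datetime, timezone

def epoch_matches(iso_str: str, epoch: int) -> bool:
    """Check that a 'YYYY-MM-DDTHH:MM:SS' UTC string and an epoch value agree."""
    parsed = datetime.strptime(iso_str, "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)
    return int(parsed.timestamp()) == epoch

# Values taken from the rtc_cmos entry above.
print(epoch_matches("2025-04-30T00:36:10", 1745973370))  # True
```

Here the pair checks out, which is expected: the kernel prints both fields from the same RTC read.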
Apr 30 00:36:11.046784 kernel: ima: Allocated hash algorithm: sha1
Apr 30 00:36:11.046791 kernel: ima: No architecture policies found
Apr 30 00:36:11.046799 kernel: clk: Disabling unused clocks
Apr 30 00:36:11.046806 kernel: Freeing unused kernel image (initmem) memory: 42992K
Apr 30 00:36:11.046813 kernel: Write protecting the kernel read-only data: 36864k
Apr 30 00:36:11.046822 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Apr 30 00:36:11.046829 kernel: Run /init as init process
Apr 30 00:36:11.046837 kernel: with arguments:
Apr 30 00:36:11.046844 kernel: /init
Apr 30 00:36:11.046852 kernel: with environment:
Apr 30 00:36:11.046859 kernel: HOME=/
Apr 30 00:36:11.046866 kernel: TERM=linux
Apr 30 00:36:11.046874 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 00:36:11.046883 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 00:36:11.046893 systemd[1]: Detected virtualization kvm.
Apr 30 00:36:11.046901 systemd[1]: Detected architecture x86-64.
Apr 30 00:36:11.046910 systemd[1]: Running in initrd.
Apr 30 00:36:11.046918 systemd[1]: No hostname configured, using default hostname.
Apr 30 00:36:11.046925 systemd[1]: Hostname set to .
Apr 30 00:36:11.046933 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 00:36:11.046941 systemd[1]: Queued start job for default target initrd.target.
Apr 30 00:36:11.046949 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:36:11.046956 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:36:11.046964 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 00:36:11.046974 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 00:36:11.046981 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 00:36:11.046989 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 00:36:11.046998 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 00:36:11.047006 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 00:36:11.047048 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:36:11.047056 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:36:11.047065 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:36:11.047073 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:36:11.047080 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:36:11.047088 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:36:11.047095 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:36:11.047103 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:36:11.047111 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 00:36:11.047119 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 00:36:11.047126 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:36:11.047135 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:36:11.047197 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:36:11.047205 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:36:11.047213 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 00:36:11.047220 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:36:11.047228 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 00:36:11.047235 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 00:36:11.047243 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:36:11.047253 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:36:11.047283 systemd-journald[189]: Collecting audit messages is disabled.
Apr 30 00:36:11.047304 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:36:11.047312 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 00:36:11.047321 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:36:11.047330 systemd-journald[189]: Journal started
Apr 30 00:36:11.047347 systemd-journald[189]: Runtime Journal (/run/log/journal/962868080cae44adbe39a143febfd642) is 4.8M, max 38.4M, 33.6M free.
Apr 30 00:36:11.042590 systemd-modules-load[190]: Inserted module 'overlay'
Apr 30 00:36:11.081857 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 00:36:11.081881 kernel: Bridge firewalling registered
Apr 30 00:36:11.081891 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:36:11.069269 systemd-modules-load[190]: Inserted module 'br_netfilter'
Apr 30 00:36:11.082437 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 00:36:11.083233 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:36:11.084168 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:36:11.090261 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:36:11.094454 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:36:11.103446 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 00:36:11.105199 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:36:11.108662 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:36:11.120327 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 00:36:11.121445 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:36:11.129362 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:36:11.133547 dracut-cmdline[214]: dracut-dracut-053
Apr 30 00:36:11.136997 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=079594ab73b0b9c3f57b251ae4a9c4ba48b1d8cf52fcc550cc89261eb22129fc
Apr 30 00:36:11.135300 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:36:11.136003 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:36:11.143622 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:36:11.153883 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
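The dracut-cmdline entry above echoes the full kernel command line, which is a whitespace-separated list of `key=value` tokens where some keys (here `console`, `rootflags`, `mount.usrflags`) repeat. A minimal sketch of how such a line can be split into a dict of lists; `parse_cmdline` is an illustrative helper of ours, not a dracut or kernel API:

```python
def parse_cmdline(cmdline: str) -> dict:
    """Split a kernel command line into {key: [values]}; bare flags get ''."""
    params: dict = {}
    for token in cmdline.split():
        key, _, value = token.partition("=")  # only the first '=' separates key/value
        params.setdefault(key, []).append(value)
    return params

# A subset of the parameters shown in the log entry above.
cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a root=LABEL=ROOT "
           "console=ttyS0,115200n8 console=tty0 flatcar.oem.id=hetzner")
params = parse_cmdline(cmdline)
print(params["root"])     # ['LABEL=ROOT']
print(params["console"])  # ['ttyS0,115200n8', 'tty0']
```

Note the `partition("=")` call: values like `root=LABEL=ROOT` contain a second `=` that belongs to the value, so splitting on every `=` would be wrong.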
Apr 30 00:36:11.171991 systemd-resolved[235]: Positive Trust Anchors:
Apr 30 00:36:11.172228 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:36:11.172259 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:36:11.175568 systemd-resolved[235]: Defaulting to hostname 'linux'.
Apr 30 00:36:11.176432 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:36:11.180944 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:36:11.197184 kernel: SCSI subsystem initialized
Apr 30 00:36:11.206173 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 00:36:11.216180 kernel: iscsi: registered transport (tcp)
Apr 30 00:36:11.249427 kernel: iscsi: registered transport (qla4xxx)
Apr 30 00:36:11.249498 kernel: QLogic iSCSI HBA Driver
Apr 30 00:36:11.291493 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:36:11.298402 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 00:36:11.331506 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 00:36:11.331593 kernel: device-mapper: uevent: version 1.0.3
Apr 30 00:36:11.333785 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 00:36:11.384175 kernel: raid6: avx2x4 gen() 22621 MB/s
Apr 30 00:36:11.402168 kernel: raid6: avx2x2 gen() 23145 MB/s
Apr 30 00:36:11.419355 kernel: raid6: avx2x1 gen() 21468 MB/s
Apr 30 00:36:11.419426 kernel: raid6: using algorithm avx2x2 gen() 23145 MB/s
Apr 30 00:36:11.437428 kernel: raid6: .... xor() 20583 MB/s, rmw enabled
Apr 30 00:36:11.437499 kernel: raid6: using avx2x2 recovery algorithm
Apr 30 00:36:11.457200 kernel: xor: automatically using best checksumming function avx
Apr 30 00:36:11.627200 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 00:36:11.640435 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:36:11.647364 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:36:11.659054 systemd-udevd[407]: Using default interface naming scheme 'v255'.
Apr 30 00:36:11.662885 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:36:11.672343 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 00:36:11.692454 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation
Apr 30 00:36:11.725631 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:36:11.732352 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:36:11.778185 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:36:11.789371 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 00:36:11.815311 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:36:11.818748 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
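The raid6 lines above show the kernel benchmarking its AVX2 generator variants and then committing to the fastest one ("using algorithm avx2x2"). The selection logic amounts to an argmax over the measured throughputs; a minimal sketch using the numbers from this log (`pick_raid6_algorithm` is our own illustrative name):

```python
def pick_raid6_algorithm(results: dict) -> str:
    """Mimic the kernel's choice: keep the generator with the highest MB/s."""
    return max(results, key=results.get)

# Throughputs reported by the raid6 benchmark entries above.
results = {"avx2x4": 22621, "avx2x2": 23145, "avx2x1": 21468}
print(pick_raid6_algorithm(results))  # avx2x2, matching "using algorithm avx2x2" in the log
```

The winner here is `avx2x2` rather than the wider `avx2x4`, which illustrates why the kernel measures instead of assuming that more parallel lanes are always faster.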
Apr 30 00:36:11.821218 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:36:11.823245 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:36:11.831546 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 00:36:11.846499 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:36:11.870180 kernel: cryptd: max_cpu_qlen set to 1000
Apr 30 00:36:11.870239 kernel: scsi host0: Virtio SCSI HBA
Apr 30 00:36:11.883994 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 30 00:36:11.890835 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:36:11.892268 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:36:11.911124 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:36:11.912303 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:36:11.912456 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:36:11.912982 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:36:11.925287 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:36:11.927336 kernel: ACPI: bus type USB registered
Apr 30 00:36:11.927356 kernel: libata version 3.00 loaded.
Apr 30 00:36:11.930268 kernel: usbcore: registered new interface driver usbfs
Apr 30 00:36:11.930323 kernel: usbcore: registered new interface driver hub
Apr 30 00:36:11.932810 kernel: usbcore: registered new device driver usb
Apr 30 00:36:11.955203 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 30 00:36:11.955255 kernel: AES CTR mode by8 optimization enabled
Apr 30 00:36:11.977030 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 30 00:36:11.979529 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Apr 30 00:36:11.979644 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Apr 30 00:36:11.979873 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 30 00:36:11.979991 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Apr 30 00:36:11.980081 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Apr 30 00:36:11.980187 kernel: ahci 0000:00:1f.2: version 3.0
Apr 30 00:36:12.066217 kernel: hub 1-0:1.0: USB hub found
Apr 30 00:36:12.066381 kernel: hub 1-0:1.0: 4 ports detected
Apr 30 00:36:12.066479 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Apr 30 00:36:12.066629 kernel: hub 2-0:1.0: USB hub found
Apr 30 00:36:12.066781 kernel: hub 2-0:1.0: 4 ports detected
Apr 30 00:36:12.066896 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 30 00:36:12.066907 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 30 00:36:12.067006 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 30 00:36:12.067106 kernel: sd 0:0:0:0: Power-on or device reset occurred
Apr 30 00:36:12.067244 kernel: scsi host1: ahci
Apr 30 00:36:12.067332 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Apr 30 00:36:12.067428 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 30 00:36:12.067519 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Apr 30 00:36:12.067611 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 30 00:36:12.067711 kernel: scsi host2: ahci
Apr 30 00:36:12.067799 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 00:36:12.067809 kernel: GPT:17805311 != 80003071
Apr 30 00:36:12.067820 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 00:36:12.067830 kernel: GPT:17805311 != 80003071
Apr 30 00:36:12.067839 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 00:36:12.067847 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 00:36:12.067857 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 30 00:36:12.067950 kernel: scsi host3: ahci
Apr 30 00:36:12.068040 kernel: scsi host4: ahci
Apr 30 00:36:12.068123 kernel: scsi host5: ahci
Apr 30 00:36:12.069380 kernel: scsi host6: ahci
Apr 30 00:36:12.069463 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 51
Apr 30 00:36:12.069473 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 51
Apr 30 00:36:12.069481 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 51
Apr 30 00:36:12.069490 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 51
Apr 30 00:36:12.069499 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 51
Apr 30 00:36:12.069507 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 51
Apr 30 00:36:12.069516 kernel: BTRFS: device fsid 4a916ed5-00fd-4e52-b8e2-9fed6d007e9f devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (455)
Apr 30 00:36:12.069528 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (458)
Apr 30 00:36:12.064363 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:36:12.088402 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 30 00:36:12.093544 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 30 00:36:12.102393 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
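The "GPT:17805311 != 80003071" warnings above are the usual sign of a small disk image written to a larger disk: the backup GPT header still sits where the end of the image was (LBA 17805311) instead of on the disk's last LBA (80003071 on this 80003072-sector disk). The check is simple arithmetic, sketched below with the log's values; `gpt_backup_misplaced` is an illustrative helper of ours, not a kernel function (the log's later disk-uuid.service entries show the headers actually being rewritten):

```python
def gpt_backup_misplaced(alternate_lba: int, total_sectors: int) -> bool:
    """A backup GPT header belongs on the disk's last LBA, i.e. total_sectors - 1."""
    return alternate_lba != total_sectors - 1

# Values from the log: 80003072 512-byte sectors, backup header at LBA 17805311.
print(gpt_backup_misplaced(17805311, 80003072))  # True: 17805311 != 80003071
```

After the headers are relocated to the true end of the disk, the same check returns False and the warning disappears on the next partition rescan.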
Apr 30 00:36:12.102978 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 30 00:36:12.108598 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 30 00:36:12.120333 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 00:36:12.123987 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:36:12.128383 disk-uuid[566]: Primary Header is updated.
Apr 30 00:36:12.128383 disk-uuid[566]: Secondary Entries is updated.
Apr 30 00:36:12.128383 disk-uuid[566]: Secondary Header is updated.
Apr 30 00:36:12.139680 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:36:12.144168 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 00:36:12.222175 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Apr 30 00:36:12.371186 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 30 00:36:12.381667 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 30 00:36:12.381747 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 30 00:36:12.381768 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Apr 30 00:36:12.382187 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 30 00:36:12.385188 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 30 00:36:12.391156 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 30 00:36:12.391186 kernel: ata1.00: applying bridge limits
Apr 30 00:36:12.394499 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 30 00:36:12.395186 kernel: ata1.00: configured for UDMA/100
Apr 30 00:36:12.397420 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 30 00:36:12.431206 kernel: usbcore: registered new interface driver usbhid
Apr 30 00:36:12.431584 kernel: usbhid: USB HID core driver
Apr 30 00:36:12.442271 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Apr 30 00:36:12.442342 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Apr 30 00:36:12.459571 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 30 00:36:12.474582 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 30 00:36:12.474618 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Apr 30 00:36:13.164195 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 30 00:36:13.164843 disk-uuid[567]: The operation has completed successfully.
Apr 30 00:36:13.231287 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 00:36:13.231448 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 00:36:13.265289 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 00:36:13.268015 sh[594]: Success
Apr 30 00:36:13.279177 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 30 00:36:13.346904 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 00:36:13.355287 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 00:36:13.360016 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 00:36:13.392836 kernel: BTRFS info (device dm-0): first mount of filesystem 4a916ed5-00fd-4e52-b8e2-9fed6d007e9f
Apr 30 00:36:13.392910 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 30 00:36:13.396317 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 00:36:13.399874 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 00:36:13.402577 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 00:36:13.415218 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 30 00:36:13.418066 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 00:36:13.419797 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 00:36:13.428374 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 00:36:13.432374 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 00:36:13.447282 kernel: BTRFS info (device sda6): first mount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c
Apr 30 00:36:13.447350 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 00:36:13.449638 kernel: BTRFS info (device sda6): using free space tree
Apr 30 00:36:13.461272 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 30 00:36:13.461336 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 00:36:13.481532 kernel: BTRFS info (device sda6): last unmount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c
Apr 30 00:36:13.480989 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 00:36:13.491130 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 00:36:13.502397 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 00:36:13.549192 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 00:36:13.556980 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 00:36:13.580003 systemd-networkd[775]: lo: Link UP
Apr 30 00:36:13.580994 systemd-networkd[775]: lo: Gained carrier
Apr 30 00:36:13.583423 systemd-networkd[775]: Enumeration completed
Apr 30 00:36:13.584130 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 00:36:13.585246 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:36:13.585249 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:36:13.586387 systemd[1]: Reached target network.target - Network.
Apr 30 00:36:13.586538 systemd-networkd[775]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:36:13.586540 systemd-networkd[775]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:36:13.591301 systemd-networkd[775]: eth0: Link UP
Apr 30 00:36:13.591304 systemd-networkd[775]: eth0: Gained carrier
Apr 30 00:36:13.591313 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:36:13.599902 systemd-networkd[775]: eth1: Link UP
Apr 30 00:36:13.599905 systemd-networkd[775]: eth1: Gained carrier
Apr 30 00:36:13.599913 systemd-networkd[775]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:36:13.615472 ignition[718]: Ignition 2.20.0
Apr 30 00:36:13.615482 ignition[718]: Stage: fetch-offline
Apr 30 00:36:13.616790 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 00:36:13.615512 ignition[718]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:36:13.615518 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 00:36:13.615587 ignition[718]: parsed url from cmdline: ""
Apr 30 00:36:13.615589 ignition[718]: no config URL provided
Apr 30 00:36:13.615594 ignition[718]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 00:36:13.615599 ignition[718]: no config at "/usr/lib/ignition/user.ign"
Apr 30 00:36:13.615603 ignition[718]: failed to fetch config: resource requires networking
Apr 30 00:36:13.615925 ignition[718]: Ignition finished successfully
Apr 30 00:36:13.624406 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 00:36:13.632179 systemd-networkd[775]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 30 00:36:13.635861 ignition[783]: Ignition 2.20.0
Apr 30 00:36:13.635870 ignition[783]: Stage: fetch
Apr 30 00:36:13.636042 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:36:13.636049 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 00:36:13.636134 ignition[783]: parsed url from cmdline: ""
Apr 30 00:36:13.636136 ignition[783]: no config URL provided
Apr 30 00:36:13.636155 ignition[783]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 00:36:13.636161 ignition[783]: no config at "/usr/lib/ignition/user.ign"
Apr 30 00:36:13.636180 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 30 00:36:13.636336 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 30 00:36:13.665273 systemd-networkd[775]: eth0: DHCPv4 address 135.181.102.231/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 30 00:36:13.837260 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 30 00:36:13.842337 ignition[783]: GET result: OK
Apr 30 00:36:13.842434 ignition[783]: parsing config with SHA512: 860a81f9a0c84b02d5b31369e73769e5482cdea36b5e3f73adefb074cdf68e584e75394333812733584b052da2efea364ea8bb34f70a36419a7020d57169e9a5
Apr 30 00:36:13.850476 unknown[783]: fetched base config from "system"
Apr 30 00:36:13.850492 unknown[783]: fetched base config from "system"
Apr 30 00:36:13.851280 ignition[783]: fetch: fetch complete
Apr 30 00:36:13.850502 unknown[783]: fetched user config from "hetzner"
Apr 30 00:36:13.851289 ignition[783]: fetch: fetch passed
Apr 30 00:36:13.853910 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 00:36:13.851348 ignition[783]: Ignition finished successfully
Apr 30 00:36:13.861394 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 00:36:13.884579 ignition[791]: Ignition 2.20.0
Apr 30 00:36:13.884596 ignition[791]: Stage: kargs
Apr 30 00:36:13.884877 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:36:13.888180 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 00:36:13.884893 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 00:36:13.886620 ignition[791]: kargs: kargs passed
Apr 30 00:36:13.886681 ignition[791]: Ignition finished successfully
Apr 30 00:36:13.894302 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 00:36:13.915482 ignition[797]: Ignition 2.20.0
Apr 30 00:36:13.915501 ignition[797]: Stage: disks
Apr 30 00:36:13.919879 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 00:36:13.915819 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:36:13.921763 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 00:36:13.915839 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 00:36:13.922306 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
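Before parsing the fetched userdata, Ignition logs its SHA-512 digest ("parsing config with SHA512: 860a81f9..."), which makes it possible to confirm later which exact config bytes a boot consumed. A minimal sketch of producing the same kind of digest with Python's standard library; the helper name `config_digest` and the sample config body are ours, and the digest of the real userdata is of course the one shown in the log, not anything computed here:

```python
import hashlib

def config_digest(body: bytes) -> str:
    """Return the SHA-512 hex digest of a fetched config body."""
    return hashlib.sha512(body).hexdigest()

# A hypothetical minimal Ignition config body, for illustration only.
sample = b'{"ignition": {"version": "3.3.0"}}'
digest = config_digest(sample)
print(len(digest))  # 128 hex characters, matching the digest length in the log
```

Comparing such a digest against the logged value is a cheap way to verify that a saved copy of the userdata matches what the machine actually provisioned from.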
Apr 30 00:36:13.918053 ignition[797]: disks: disks passed
Apr 30 00:36:13.923798 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 00:36:13.918124 ignition[797]: Ignition finished successfully
Apr 30 00:36:13.925564 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 00:36:13.927365 systemd[1]: Reached target basic.target - Basic System.
Apr 30 00:36:13.936379 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 00:36:13.950068 systemd-fsck[805]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 30 00:36:13.952087 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 00:36:13.960238 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 00:36:14.041175 kernel: EXT4-fs (sda9): mounted filesystem 21480c83-ef05-4682-ad3b-f751980943a0 r/w with ordered data mode. Quota mode: none.
Apr 30 00:36:14.042014 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 00:36:14.042902 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 00:36:14.048254 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:36:14.051010 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 00:36:14.055993 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 30 00:36:14.057575 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 00:36:14.058598 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:36:14.062199 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 00:36:14.066180 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (813)
Apr 30 00:36:14.067274 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 00:36:14.079558 kernel: BTRFS info (device sda6): first mount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c
Apr 30 00:36:14.079599 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 00:36:14.079608 kernel: BTRFS info (device sda6): using free space tree
Apr 30 00:36:14.089075 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 30 00:36:14.089112 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 00:36:14.092993 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:36:14.122999 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 00:36:14.125776 coreos-metadata[815]: Apr 30 00:36:14.125 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 30 00:36:14.127555 coreos-metadata[815]: Apr 30 00:36:14.127 INFO Fetch successful
Apr 30 00:36:14.127555 coreos-metadata[815]: Apr 30 00:36:14.127 INFO wrote hostname ci-4152-2-3-3-307bd18bd0 to /sysroot/etc/hostname
Apr 30 00:36:14.129624 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Apr 30 00:36:14.129602 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 00:36:14.133841 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 00:36:14.136634 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 00:36:14.206099 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 00:36:14.212242 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 00:36:14.215911 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 00:36:14.221162 kernel: BTRFS info (device sda6): last unmount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c
Apr 30 00:36:14.239968 ignition[934]: INFO : Ignition 2.20.0
Apr 30 00:36:14.239968 ignition[934]: INFO : Stage: mount
Apr 30 00:36:14.242107 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:36:14.242107 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 00:36:14.243609 ignition[934]: INFO : mount: mount passed
Apr 30 00:36:14.243609 ignition[934]: INFO : Ignition finished successfully
Apr 30 00:36:14.244166 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 00:36:14.250342 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 00:36:14.250987 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 00:36:14.388504 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 00:36:14.394404 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:36:14.412205 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (946)
Apr 30 00:36:14.417208 kernel: BTRFS info (device sda6): first mount of filesystem e6cdb381-7cd1-4e2a-87c4-f7bcb12f058c
Apr 30 00:36:14.417292 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 30 00:36:14.419655 kernel: BTRFS info (device sda6): using free space tree
Apr 30 00:36:14.431663 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 30 00:36:14.431782 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 30 00:36:14.439082 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:36:14.480970 ignition[962]: INFO : Ignition 2.20.0
Apr 30 00:36:14.480970 ignition[962]: INFO : Stage: files
Apr 30 00:36:14.482413 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:36:14.482413 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 00:36:14.483768 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 00:36:14.485132 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 00:36:14.485132 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 00:36:14.489900 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 00:36:14.491962 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 00:36:14.491962 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 00:36:14.490598 unknown[962]: wrote ssh authorized keys file for user: core
Apr 30 00:36:14.496362 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 30 00:36:14.496362 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 30 00:36:14.496362 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 00:36:14.496362 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Apr 30 00:36:14.664290 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 30 00:36:15.007259 systemd-networkd[775]: eth1: Gained IPv6LL
Apr 30 00:36:15.455376 systemd-networkd[775]: eth0: Gained IPv6LL
Apr 30 00:36:16.078490 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 30 00:36:16.080102 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 00:36:16.080102 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 30 00:36:16.728425 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Apr 30 00:36:17.078359 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 00:36:17.078359 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 00:36:17.080808 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 00:36:17.080808 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:36:17.080808 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:36:17.080808 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:36:17.080808 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:36:17.080808 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:36:17.080808 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:36:17.080808 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:36:17.080808 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:36:17.080808 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 00:36:17.080808 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 00:36:17.080808 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 00:36:17.080808 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Apr 30 00:36:17.647909 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Apr 30 00:36:17.842239 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Apr 30 00:36:17.842239 ignition[962]: INFO : files: op(d): [started] processing unit "containerd.service"
Apr 30 00:36:17.845891 ignition[962]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 30 00:36:17.845891 ignition[962]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 30 00:36:17.845891 ignition[962]: INFO : files: op(d): [finished] processing unit "containerd.service"
Apr 30 00:36:17.845891 ignition[962]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Apr 30 00:36:17.845891 ignition[962]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:36:17.845891 ignition[962]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:36:17.845891 ignition[962]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Apr 30 00:36:17.845891 ignition[962]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Apr 30 00:36:17.845891 ignition[962]: INFO : files: op(11): op(12): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 30 00:36:17.845891 ignition[962]: INFO : files: op(11): op(12): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 30 00:36:17.845891 ignition[962]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Apr 30 00:36:17.845891 ignition[962]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 00:36:17.845891 ignition[962]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 00:36:17.845891 ignition[962]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:36:17.845891 ignition[962]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:36:17.845891 ignition[962]: INFO : files: files passed
Apr 30 00:36:17.845891 ignition[962]: INFO : Ignition finished successfully
Apr 30 00:36:17.849082 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 00:36:17.860417 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 00:36:17.873309 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 00:36:17.877263 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 00:36:17.877415 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 00:36:17.889431 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:36:17.889431 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:36:17.893087 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:36:17.893961 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:36:17.896335 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 00:36:17.903324 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 00:36:17.932588 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 00:36:17.932808 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 00:36:17.935199 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 00:36:17.936642 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 00:36:17.938575 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 00:36:17.943351 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 00:36:17.963314 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:36:17.970340 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 00:36:17.997390 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:36:17.998505 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:36:18.001304 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 00:36:18.003447 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 00:36:18.003628 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:36:18.005994 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 00:36:18.007319 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 00:36:18.009516 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 00:36:18.011454 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:36:18.013259 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 00:36:18.015506 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 00:36:18.017653 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:36:18.019915 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 00:36:18.021954 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 00:36:18.024185 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 00:36:18.026166 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 00:36:18.026341 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:36:18.028711 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:36:18.030016 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:36:18.031935 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 00:36:18.034467 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:36:18.035705 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 00:36:18.035928 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:36:18.038608 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 00:36:18.038819 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:36:18.040011 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 00:36:18.040247 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 00:36:18.042321 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 30 00:36:18.042467 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 00:36:18.050596 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 00:36:18.054879 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 00:36:18.064236 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 00:36:18.064443 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:36:18.075031 ignition[1015]: INFO : Ignition 2.20.0
Apr 30 00:36:18.075031 ignition[1015]: INFO : Stage: umount
Apr 30 00:36:18.075031 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:36:18.075031 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 30 00:36:18.075031 ignition[1015]: INFO : umount: umount passed
Apr 30 00:36:18.075031 ignition[1015]: INFO : Ignition finished successfully
Apr 30 00:36:18.072519 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 00:36:18.072696 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:36:18.077482 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 00:36:18.077593 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 00:36:18.082205 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 00:36:18.082316 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 00:36:18.089490 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 00:36:18.089568 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 00:36:18.094337 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 00:36:18.094398 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 00:36:18.095506 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 30 00:36:18.095556 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 30 00:36:18.098288 systemd[1]: Stopped target network.target - Network.
Apr 30 00:36:18.099093 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 00:36:18.099182 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 00:36:18.099990 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 00:36:18.103278 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 00:36:18.109051 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:36:18.109744 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 00:36:18.111194 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 00:36:18.111665 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 00:36:18.111736 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:36:18.112193 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 00:36:18.112220 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:36:18.119975 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 00:36:18.120053 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 00:36:18.120838 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 00:36:18.120887 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 00:36:18.123645 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 00:36:18.132208 systemd-networkd[775]: eth1: DHCPv6 lease lost
Apr 30 00:36:18.132923 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 00:36:18.135255 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 00:36:18.141795 systemd-networkd[775]: eth0: DHCPv6 lease lost
Apr 30 00:36:18.143478 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 00:36:18.143660 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 00:36:18.145607 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 00:36:18.145758 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 00:36:18.157517 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 00:36:18.157596 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:36:18.170340 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 00:36:18.171418 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 00:36:18.171498 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 00:36:18.172491 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 00:36:18.172545 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:36:18.174156 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 00:36:18.174229 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:36:18.176057 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 00:36:18.176109 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:36:18.177855 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:36:18.181040 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 00:36:18.181182 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 00:36:18.188945 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 00:36:18.189042 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 00:36:18.194588 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 00:36:18.194756 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 00:36:18.198999 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 00:36:18.199223 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:36:18.200799 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 00:36:18.200877 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:36:18.202078 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 00:36:18.202121 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:36:18.203823 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 00:36:18.203887 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:36:18.206096 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 00:36:18.206171 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:36:18.207970 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:36:18.208026 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:36:18.215424 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 00:36:18.217117 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 00:36:18.217218 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:36:18.220190 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 30 00:36:18.220256 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:36:18.221087 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 00:36:18.221135 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:36:18.223104 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:36:18.223190 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:36:18.225415 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 00:36:18.225534 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 00:36:18.227621 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 00:36:18.236334 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 00:36:18.246500 systemd[1]: Switching root.
Apr 30 00:36:18.299899 systemd-journald[189]: Journal stopped
Apr 30 00:36:19.393218 systemd-journald[189]: Received SIGTERM from PID 1 (systemd).
Apr 30 00:36:19.393280 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 00:36:19.393300 kernel: SELinux: policy capability open_perms=1
Apr 30 00:36:19.393309 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 00:36:19.393321 kernel: SELinux: policy capability always_check_network=0
Apr 30 00:36:19.393330 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 00:36:19.393339 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 00:36:19.393348 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 00:36:19.393359 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 00:36:19.393371 kernel: audit: type=1403 audit(1745973378.530:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 00:36:19.393381 systemd[1]: Successfully loaded SELinux policy in 51.724ms.
Apr 30 00:36:19.393398 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.911ms.
Apr 30 00:36:19.393410 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 00:36:19.393420 systemd[1]: Detected virtualization kvm.
Apr 30 00:36:19.393430 systemd[1]: Detected architecture x86-64.
Apr 30 00:36:19.393440 systemd[1]: Detected first boot.
Apr 30 00:36:19.393449 systemd[1]: Hostname set to .
Apr 30 00:36:19.393459 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 00:36:19.393471 zram_generator::config[1075]: No configuration found.
Apr 30 00:36:19.393483 systemd[1]: Populated /etc with preset unit settings.
Apr 30 00:36:19.393493 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 00:36:19.393503 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 30 00:36:19.393513 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 00:36:19.393523 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 00:36:19.393532 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 00:36:19.393542 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 00:36:19.393552 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 00:36:19.393563 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 00:36:19.393573 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 00:36:19.393582 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 00:36:19.393591 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:36:19.393601 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:36:19.393611 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 00:36:19.393621 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 00:36:19.393631 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 00:36:19.393640 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 00:36:19.393651 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 30 00:36:19.393661 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:36:19.393670 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 00:36:19.393688 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:36:19.393698 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:36:19.393708 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:36:19.393719 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:36:19.393728 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 00:36:19.393739 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 00:36:19.393748 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 00:36:19.393758 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 00:36:19.393769 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:36:19.393781 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:36:19.393791 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:36:19.393800 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 00:36:19.393810 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 00:36:19.393820 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 00:36:19.393829 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 00:36:19.393839 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:36:19.393848 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 00:36:19.393858 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 00:36:19.393869 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 00:36:19.393878 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 00:36:19.393888 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:36:19.393898 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:36:19.393908 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 00:36:19.393918 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:36:19.393928 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 00:36:19.393938 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:36:19.393948 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 00:36:19.393959 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:36:19.393969 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 00:36:19.393979 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 30 00:36:19.393989 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 30 00:36:19.393999 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:36:19.394008 kernel: fuse: init (API version 7.39)
Apr 30 00:36:19.394017 kernel: loop: module loaded
Apr 30 00:36:19.394027 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:36:19.394038 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 00:36:19.394047 kernel: ACPI: bus type drm_connector registered
Apr 30 00:36:19.394057 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 00:36:19.394090 systemd-journald[1173]: Collecting audit messages is disabled.
Apr 30 00:36:19.394110 systemd-journald[1173]: Journal started
Apr 30 00:36:19.394130 systemd-journald[1173]: Runtime Journal (/run/log/journal/962868080cae44adbe39a143febfd642) is 4.8M, max 38.4M, 33.6M free.
Apr 30 00:36:19.398180 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:36:19.401440 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:36:19.409346 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:36:19.405046 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 00:36:19.405574 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 00:36:19.406072 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 00:36:19.406598 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 00:36:19.407098 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 00:36:19.407602 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 00:36:19.408334 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 00:36:19.409053 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:36:19.410218 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 00:36:19.410350 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 00:36:19.410998 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:36:19.411117 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:36:19.412018 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 00:36:19.412152 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 00:36:19.412942 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:36:19.413064 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:36:19.413921 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 00:36:19.414041 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 00:36:19.414747 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:36:19.414924 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:36:19.416543 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:36:19.418419 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 00:36:19.419435 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 00:36:19.428905 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 00:36:19.437226 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 00:36:19.439220 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 00:36:19.440198 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 00:36:19.450612 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 00:36:19.453429 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 00:36:19.456277 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:36:19.468438 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 00:36:19.469528 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:36:19.471715 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:36:19.480018 systemd-journald[1173]: Time spent on flushing to /var/log/journal/962868080cae44adbe39a143febfd642 is 23.400ms for 1122 entries.
Apr 30 00:36:19.480018 systemd-journald[1173]: System Journal (/var/log/journal/962868080cae44adbe39a143febfd642) is 8.0M, max 584.8M, 576.8M free.
Apr 30 00:36:19.530219 systemd-journald[1173]: Received client request to flush runtime journal.
Apr 30 00:36:19.484247 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 00:36:19.488577 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:36:19.489266 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 00:36:19.489778 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 00:36:19.490521 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 00:36:19.495351 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 00:36:19.501876 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 00:36:19.521044 udevadm[1227]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 30 00:36:19.531443 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:36:19.533589 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Apr 30 00:36:19.533601 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Apr 30 00:36:19.536667 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 00:36:19.540756 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:36:19.548267 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 00:36:19.570197 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 00:36:19.575394 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:36:19.587368 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
Apr 30 00:36:19.587640 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
Apr 30 00:36:19.591020 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:36:20.050397 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 00:36:20.065409 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:36:20.083958 systemd-udevd[1247]: Using default interface naming scheme 'v255'.
Apr 30 00:36:20.121733 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:36:20.136387 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 00:36:20.158321 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 00:36:20.194698 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 30 00:36:20.219938 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 30 00:36:20.242175 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 30 00:36:20.264200 kernel: ACPI: button: Power Button [PWRF]
Apr 30 00:36:20.272188 kernel: mousedev: PS/2 mouse device common for all mice
Apr 30 00:36:20.287791 systemd[1]: Condition check resulted in dev-vport2p1.device - /dev/vport2p1 being skipped.
Apr 30 00:36:20.287805 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Apr 30 00:36:20.287847 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:36:20.287954 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:36:20.295275 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:36:20.298520 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:36:20.302306 systemd-networkd[1254]: lo: Link UP
Apr 30 00:36:20.302758 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:36:20.303456 systemd-networkd[1254]: lo: Gained carrier
Apr 30 00:36:20.305602 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 00:36:20.305768 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 00:36:20.305813 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:36:20.306064 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:36:20.306202 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:36:20.311721 systemd-networkd[1254]: Enumeration completed
Apr 30 00:36:20.312550 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 00:36:20.313127 systemd-networkd[1254]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:36:20.313131 systemd-networkd[1254]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:36:20.316801 systemd-networkd[1254]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:36:20.316806 systemd-networkd[1254]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:36:20.317572 systemd-networkd[1254]: eth0: Link UP
Apr 30 00:36:20.317577 systemd-networkd[1254]: eth0: Gained carrier
Apr 30 00:36:20.317587 systemd-networkd[1254]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:36:20.321982 systemd-networkd[1254]: eth1: Link UP
Apr 30 00:36:20.322618 systemd-networkd[1254]: eth1: Gained carrier
Apr 30 00:36:20.322647 systemd-networkd[1254]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:36:20.327597 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 30 00:36:20.329712 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:36:20.329854 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:36:20.334188 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1249)
Apr 30 00:36:20.330576 systemd-networkd[1254]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:36:20.346460 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:36:20.346621 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:36:20.354772 systemd-networkd[1254]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 30 00:36:20.366165 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Apr 30 00:36:20.386229 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 30 00:36:20.389797 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 30 00:36:20.389926 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 30 00:36:20.388494 systemd-networkd[1254]: eth0: DHCPv4 address 135.181.102.231/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 30 00:36:20.411460 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Apr 30 00:36:20.411510 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Apr 30 00:36:20.413977 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:36:20.414267 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:36:20.418156 kernel: Console: switching to colour dummy device 80x25
Apr 30 00:36:20.422275 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 30 00:36:20.422308 kernel: [drm] features: -context_init
Apr 30 00:36:20.422319 kernel: [drm] number of scanouts: 1
Apr 30 00:36:20.422334 kernel: [drm] number of cap sets: 0
Apr 30 00:36:20.427259 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:36:20.430175 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Apr 30 00:36:20.430264 kernel: EDAC MC: Ver: 3.0.0
Apr 30 00:36:20.430945 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:36:20.431106 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:36:20.435448 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:36:20.437165 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Apr 30 00:36:20.443443 kernel: Console: switching to colour frame buffer device 160x50
Apr 30 00:36:20.451359 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 30 00:36:20.452792 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:36:20.453872 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:36:20.461232 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 30 00:36:20.468295 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:36:20.527670 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:36:20.563398 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 30 00:36:20.575524 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 30 00:36:20.588168 lvm[1315]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 00:36:20.627507 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 30 00:36:20.628922 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:36:20.634341 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 30 00:36:20.649753 lvm[1318]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 00:36:20.686843 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 30 00:36:20.687939 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 00:36:20.688088 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 00:36:20.688129 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 00:36:20.688268 systemd[1]: Reached target machines.target - Containers.
Apr 30 00:36:20.690520 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 30 00:36:20.698345 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 30 00:36:20.701021 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 30 00:36:20.703433 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:36:20.709354 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 30 00:36:20.714549 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 30 00:36:20.719548 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 00:36:20.720230 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 00:36:20.736460 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 30 00:36:20.760180 kernel: loop0: detected capacity change from 0 to 140992
Apr 30 00:36:20.775654 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 00:36:20.779353 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 30 00:36:20.809217 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 30 00:36:20.827898 kernel: loop1: detected capacity change from 0 to 8
Apr 30 00:36:20.852134 kernel: loop2: detected capacity change from 0 to 138184
Apr 30 00:36:20.892193 kernel: loop3: detected capacity change from 0 to 210664
Apr 30 00:36:20.959193 kernel: loop4: detected capacity change from 0 to 140992
Apr 30 00:36:20.997185 kernel: loop5: detected capacity change from 0 to 8
Apr 30 00:36:21.004285 kernel: loop6: detected capacity change from 0 to 138184
Apr 30 00:36:21.031206 kernel: loop7: detected capacity change from 0 to 210664
Apr 30 00:36:21.063109 (sd-merge)[1339]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Apr 30 00:36:21.064085 (sd-merge)[1339]: Merged extensions into '/usr'.
Apr 30 00:36:21.070160 systemd[1]: Reloading requested from client PID 1326 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 30 00:36:21.070244 systemd[1]: Reloading...
Apr 30 00:36:21.153427 zram_generator::config[1379]: No configuration found.
Apr 30 00:36:21.219718 ldconfig[1322]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 30 00:36:21.259136 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:36:21.323744 systemd[1]: Reloading finished in 253 ms.
Apr 30 00:36:21.338454 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 30 00:36:21.341190 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 30 00:36:21.351252 systemd[1]: Starting ensure-sysext.service...
Apr 30 00:36:21.354475 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:36:21.368714 systemd[1]: Reloading requested from client PID 1417 ('systemctl') (unit ensure-sysext.service)...
Apr 30 00:36:21.368877 systemd[1]: Reloading...
Apr 30 00:36:21.373697 systemd-tmpfiles[1418]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 30 00:36:21.373994 systemd-tmpfiles[1418]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 30 00:36:21.374747 systemd-tmpfiles[1418]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 30 00:36:21.374991 systemd-tmpfiles[1418]: ACLs are not supported, ignoring.
Apr 30 00:36:21.375039 systemd-tmpfiles[1418]: ACLs are not supported, ignoring.
Apr 30 00:36:21.377612 systemd-tmpfiles[1418]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 00:36:21.377625 systemd-tmpfiles[1418]: Skipping /boot
Apr 30 00:36:21.383459 systemd-tmpfiles[1418]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 00:36:21.383471 systemd-tmpfiles[1418]: Skipping /boot
Apr 30 00:36:21.430173 zram_generator::config[1448]: No configuration found.
Apr 30 00:36:21.536109 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:36:21.602648 systemd[1]: Reloading finished in 233 ms.
Apr 30 00:36:21.618166 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:36:21.637306 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 30 00:36:21.647356 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 30 00:36:21.660271 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 30 00:36:21.670238 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:36:21.679332 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 30 00:36:21.688401 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:36:21.688787 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:36:21.690118 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:36:21.703462 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:36:21.711898 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:36:21.712558 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:36:21.712661 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:36:21.713447 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:36:21.713587 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:36:21.719106 augenrules[1528]: No rules
Apr 30 00:36:21.720507 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 00:36:21.720799 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 30 00:36:21.730772 systemd-networkd[1254]: eth0: Gained IPv6LL
Apr 30 00:36:21.731935 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:36:21.732097 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:36:21.738471 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:36:21.740308 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:36:21.749466 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 30 00:36:21.761002 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 30 00:36:21.771483 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 30 00:36:21.780798 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:36:21.788464 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 30 00:36:21.789229 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:36:21.795544 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:36:21.810462 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 00:36:21.817723 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:36:21.830452 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:36:21.832839 systemd-resolved[1507]: Positive Trust Anchors:
Apr 30 00:36:21.832851 systemd-resolved[1507]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:36:21.832881 systemd-resolved[1507]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:36:21.834710 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:36:21.839505 systemd-resolved[1507]: Using system hostname 'ci-4152-2-3-3-307bd18bd0'.
Apr 30 00:36:21.846328 augenrules[1546]: /sbin/augenrules: No change
Apr 30 00:36:21.845424 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 30 00:36:21.846987 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 00:36:21.850416 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:36:21.854307 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 30 00:36:21.855081 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:36:21.856704 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:36:21.860955 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 00:36:21.861115 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 00:36:21.861932 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:36:21.862055 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:36:21.864135 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:36:21.864289 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:36:21.865009 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 30 00:36:21.870449 augenrules[1574]: No rules
Apr 30 00:36:21.871833 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 00:36:21.872053 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 30 00:36:21.876568 systemd[1]: Finished ensure-sysext.service.
Apr 30 00:36:21.882033 systemd[1]: Reached target network.target - Network.
Apr 30 00:36:21.885868 systemd[1]: Reached target network-online.target - Network is Online.
Apr 30 00:36:21.886271 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:36:21.886640 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:36:21.886705 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:36:21.897370 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 30 00:36:21.898242 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 00:36:21.942471 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 30 00:36:21.943949 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 00:36:21.946430 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 30 00:36:21.947383 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 30 00:36:21.948726 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 30 00:36:21.949594 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 30 00:36:21.949666 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:36:21.951198 systemd[1]: Reached target time-set.target - System Time Set.
Apr 30 00:36:21.952873 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 30 00:36:21.954543 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 30 00:36:21.955634 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:36:21.958641 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 30 00:36:21.962473 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 30 00:36:21.967908 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 30 00:36:21.968877 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 30 00:36:21.969328 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:36:21.969714 systemd[1]: Reached target basic.target - Basic System.
Apr 30 00:36:21.971371 systemd[1]: System is tainted: cgroupsv1
Apr 30 00:36:21.971420 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 30 00:36:21.971444 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 30 00:36:21.973974 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 30 00:36:21.981284 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 30 00:36:21.988302 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 30 00:36:21.999231 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 30 00:36:22.004261 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 30 00:36:22.005691 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 30 00:36:22.010803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:36:22.017186 jq[1600]: false
Apr 30 00:36:22.021955 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 30 00:36:22.030274 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 30 00:36:22.041714 coreos-metadata[1596]: Apr 30 00:36:22.041 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Apr 30 00:36:22.046659 coreos-metadata[1596]: Apr 30 00:36:22.042 INFO Fetch successful
Apr 30 00:36:22.046659 coreos-metadata[1596]: Apr 30 00:36:22.042 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Apr 30 00:36:22.046659 coreos-metadata[1596]: Apr 30 00:36:22.043 INFO Fetch successful
Apr 30 00:36:22.043248 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 30 00:36:22.050333 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Apr 30 00:36:22.058642 systemd-networkd[1254]: eth1: Gained IPv6LL
Apr 30 00:36:22.062329 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 30 00:36:22.063416 dbus-daemon[1599]: [system] SELinux support is enabled
Apr 30 00:36:22.068564 systemd-timesyncd[1590]: Contacted time server 78.46.102.180:123 (0.flatcar.pool.ntp.org).
Apr 30 00:36:22.068607 systemd-timesyncd[1590]: Initial clock synchronization to Wed 2025-04-30 00:36:22.324443 UTC.
Apr 30 00:36:22.076583 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 30 00:36:22.081364 extend-filesystems[1601]: Found loop4
Apr 30 00:36:22.081364 extend-filesystems[1601]: Found loop5
Apr 30 00:36:22.081364 extend-filesystems[1601]: Found loop6
Apr 30 00:36:22.082718 extend-filesystems[1601]: Found loop7
Apr 30 00:36:22.082718 extend-filesystems[1601]: Found sda
Apr 30 00:36:22.082718 extend-filesystems[1601]: Found sda1
Apr 30 00:36:22.082718 extend-filesystems[1601]: Found sda2
Apr 30 00:36:22.082718 extend-filesystems[1601]: Found sda3
Apr 30 00:36:22.082718 extend-filesystems[1601]: Found usr
Apr 30 00:36:22.082718 extend-filesystems[1601]: Found sda4
Apr 30 00:36:22.082718 extend-filesystems[1601]: Found sda6
Apr 30 00:36:22.082718 extend-filesystems[1601]: Found sda7
Apr 30 00:36:22.082718 extend-filesystems[1601]: Found sda9
Apr 30 00:36:22.082718 extend-filesystems[1601]: Checking size of /dev/sda9
Apr 30 00:36:22.118617 extend-filesystems[1601]: Resized partition /dev/sda9
Apr 30 00:36:22.090352 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 30 00:36:22.119062 extend-filesystems[1637]: resize2fs 1.47.1 (20-May-2024)
Apr 30 00:36:22.132077 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Apr 30 00:36:22.098980 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 30 00:36:22.109341 systemd[1]: Starting update-engine.service - Update Engine...
Apr 30 00:36:22.124628 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 30 00:36:22.126829 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 30 00:36:22.141724 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 30 00:36:22.141925 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 30 00:36:22.146453 systemd[1]: motdgen.service: Deactivated successfully.
Apr 30 00:36:22.146651 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 30 00:36:22.150495 jq[1638]: true
Apr 30 00:36:22.156424 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 30 00:36:22.165856 update_engine[1632]: I20250430 00:36:22.165796 1632 main.cc:92] Flatcar Update Engine starting
Apr 30 00:36:22.174450 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 30 00:36:22.174686 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 30 00:36:22.175087 update_engine[1632]: I20250430 00:36:22.174985 1632 update_check_scheduler.cc:74] Next update check in 7m25s
Apr 30 00:36:22.185180 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1259)
Apr 30 00:36:22.218470 (ntainerd)[1656]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 30 00:36:22.227391 jq[1647]: true
Apr 30 00:36:22.239740 systemd[1]: Started update-engine.service - Update Engine.
Apr 30 00:36:22.247850 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 30 00:36:22.254388 tar[1646]: linux-amd64/helm
Apr 30 00:36:22.247891 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 30 00:36:22.249366 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 30 00:36:22.249380 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 30 00:36:22.256245 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 30 00:36:22.257413 systemd-logind[1629]: New seat seat0.
Apr 30 00:36:22.262265 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 30 00:36:22.263570 systemd-logind[1629]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 30 00:36:22.263586 systemd-logind[1629]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 30 00:36:22.267414 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 30 00:36:22.319172 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 30 00:36:22.324355 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 30 00:36:22.395260 bash[1688]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 00:36:22.398185 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 30 00:36:22.400443 locksmithd[1677]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 30 00:36:22.403533 sshd_keygen[1640]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 30 00:36:22.418535 systemd[1]: Starting sshkeys.service...
Apr 30 00:36:22.431860 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 30 00:36:22.442854 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 30 00:36:22.447871 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 30 00:36:22.464877 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 30 00:36:22.468090 coreos-metadata[1709]: Apr 30 00:36:22.468 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Apr 30 00:36:22.471234 coreos-metadata[1709]: Apr 30 00:36:22.469 INFO Fetch successful
Apr 30 00:36:22.476330 systemd[1]: issuegen.service: Deactivated successfully.
Apr 30 00:36:22.476565 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 30 00:36:22.490156 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 30 00:36:22.496769 unknown[1709]: wrote ssh authorized keys file for user: core
Apr 30 00:36:22.501716 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 30 00:36:22.512746 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 30 00:36:22.520078 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 30 00:36:22.531901 systemd[1]: Reached target getty.target - Login Prompts.
Apr 30 00:36:22.546213 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Apr 30 00:36:22.574422 extend-filesystems[1637]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 30 00:36:22.574422 extend-filesystems[1637]: old_desc_blocks = 1, new_desc_blocks = 5
Apr 30 00:36:22.574422 extend-filesystems[1637]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Apr 30 00:36:22.577588 extend-filesystems[1601]: Resized filesystem in /dev/sda9
Apr 30 00:36:22.577588 extend-filesystems[1601]: Found sr0
Apr 30 00:36:22.578433 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 30 00:36:22.578649 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 30 00:36:22.591995 update-ssh-keys[1726]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 00:36:22.594331 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 30 00:36:22.602451 systemd[1]: Finished sshkeys.service.
Apr 30 00:36:22.618663 containerd[1656]: time="2025-04-30T00:36:22.618600675Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Apr 30 00:36:22.660924 containerd[1656]: time="2025-04-30T00:36:22.660867852Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:36:22.664156 containerd[1656]: time="2025-04-30T00:36:22.663421722Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:36:22.664156 containerd[1656]: time="2025-04-30T00:36:22.663446759Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 30 00:36:22.664156 containerd[1656]: time="2025-04-30T00:36:22.663461847Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 30 00:36:22.664156 containerd[1656]: time="2025-04-30T00:36:22.663635814Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 30 00:36:22.664156 containerd[1656]: time="2025-04-30T00:36:22.663650371Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 30 00:36:22.664156 containerd[1656]: time="2025-04-30T00:36:22.663710854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:36:22.664156 containerd[1656]: time="2025-04-30T00:36:22.663721424Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:36:22.664156 containerd[1656]: time="2025-04-30T00:36:22.663897825Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:36:22.664156 containerd[1656]: time="2025-04-30T00:36:22.663908795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 30 00:36:22.664156 containerd[1656]: time="2025-04-30T00:36:22.663920297Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:36:22.664156 containerd[1656]: time="2025-04-30T00:36:22.663927761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 30 00:36:22.664365 containerd[1656]: time="2025-04-30T00:36:22.663985078Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:36:22.665508 containerd[1656]: time="2025-04-30T00:36:22.664133978Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:36:22.665794 containerd[1656]: time="2025-04-30T00:36:22.665649571Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:36:22.665794 containerd[1656]: time="2025-04-30T00:36:22.665664258Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 30 00:36:22.665794 containerd[1656]: time="2025-04-30T00:36:22.665739299Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 30 00:36:22.665794 containerd[1656]: time="2025-04-30T00:36:22.665773944Z" level=info msg="metadata content store policy set" policy=shared
Apr 30 00:36:22.677946 containerd[1656]: time="2025-04-30T00:36:22.677906981Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 30 00:36:22.678136 containerd[1656]: time="2025-04-30T00:36:22.678124268Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 30 00:36:22.678240 containerd[1656]: time="2025-04-30T00:36:22.678229145Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 30 00:36:22.678294 containerd[1656]: time="2025-04-30T00:36:22.678277355Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 30 00:36:22.678415 containerd[1656]: time="2025-04-30T00:36:22.678366272Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 30 00:36:22.678623 containerd[1656]: time="2025-04-30T00:36:22.678609588Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 30 00:36:22.679517 containerd[1656]: time="2025-04-30T00:36:22.679485441Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 30 00:36:22.679645 containerd[1656]: time="2025-04-30T00:36:22.679632286Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 30 00:36:22.679707 containerd[1656]: time="2025-04-30T00:36:22.679698341Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 30 00:36:22.679748 containerd[1656]: time="2025-04-30T00:36:22.679740629Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 30 00:36:22.681076 containerd[1656]: time="2025-04-30T00:36:22.679796485Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 30 00:36:22.681076 containerd[1656]: time="2025-04-30T00:36:22.679811082Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 30 00:36:22.681076 containerd[1656]: time="2025-04-30T00:36:22.679823124Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 30 00:36:22.681076 containerd[1656]: time="2025-04-30T00:36:22.679837041Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 30 00:36:22.681076 containerd[1656]: time="2025-04-30T00:36:22.679851067Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 30 00:36:22.681076 containerd[1656]: time="2025-04-30T00:36:22.679862769Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 30 00:36:22.681076 containerd[1656]: time="2025-04-30T00:36:22.679873890Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 30 00:36:22.681076 containerd[1656]: time="2025-04-30T00:36:22.679885011Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 30 00:36:22.681076 containerd[1656]: time="2025-04-30T00:36:22.679903455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 30 00:36:22.681076 containerd[1656]: time="2025-04-30T00:36:22.679916740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 30 00:36:22.681076 containerd[1656]: time="2025-04-30T00:36:22.679928111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 30 00:36:22.681076 containerd[1656]: time="2025-04-30T00:36:22.679940284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 30 00:36:22.681076 containerd[1656]: time="2025-04-30T00:36:22.679952016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 30 00:36:22.681076 containerd[1656]: time="2025-04-30T00:36:22.679963899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 30 00:36:22.681348 containerd[1656]: time="2025-04-30T00:36:22.679974599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 30 00:36:22.681348 containerd[1656]: time="2025-04-30T00:36:22.679985880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 30 00:36:22.681348 containerd[1656]: time="2025-04-30T00:36:22.679997752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 30 00:36:22.681348 containerd[1656]: time="2025-04-30T00:36:22.680010816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 30 00:36:22.681348 containerd[1656]: time="2025-04-30T00:36:22.680021927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 30 00:36:22.681348 containerd[1656]: time="2025-04-30T00:36:22.680032557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 30 00:36:22.681348 containerd[1656]: time="2025-04-30T00:36:22.680046473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 30 00:36:22.681348 containerd[1656]: time="2025-04-30T00:36:22.680059528Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 30 00:36:22.681348 containerd[1656]: time="2025-04-30T00:36:22.680078914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 30 00:36:22.681348 containerd[1656]: time="2025-04-30T00:36:22.680089935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 30 00:36:22.681348 containerd[1656]: time="2025-04-30T00:36:22.680099983Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 30 00:36:22.681348 containerd[1656]: time="2025-04-30T00:36:22.680135129Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 30 00:36:22.681348 containerd[1656]: time="2025-04-30T00:36:22.680162901Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 30 00:36:22.681348 containerd[1656]: time="2025-04-30T00:36:22.680171878Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 30 00:36:22.681581 containerd[1656]: time="2025-04-30T00:36:22.680185975Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 30 00:36:22.681581 containerd[1656]: time="2025-04-30T00:36:22.680194140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 30 00:36:22.681581 containerd[1656]: time="2025-04-30T00:36:22.680204911Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 30 00:36:22.681581 containerd[1656]: time="2025-04-30T00:36:22.680218135Z" level=info msg="NRI interface is disabled by configuration."
Apr 30 00:36:22.681581 containerd[1656]: time="2025-04-30T00:36:22.680230198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 30 00:36:22.681662 containerd[1656]: time="2025-04-30T00:36:22.680497039Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 30 00:36:22.681662 containerd[1656]: time="2025-04-30T00:36:22.680540741Z" level=info msg="Connect containerd service"
Apr 30 00:36:22.681662 containerd[1656]: time="2025-04-30T00:36:22.680575034Z" level=info msg="using legacy CRI server"
Apr 30 00:36:22.681662 containerd[1656]: time="2025-04-30T00:36:22.680580745Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 30 00:36:22.681662 containerd[1656]: time="2025-04-30T00:36:22.680683378Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 30 00:36:22.682020 containerd[1656]: time="2025-04-30T00:36:22.682002712Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 00:36:22.682342 containerd[1656]: time="2025-04-30T00:36:22.682329135Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 30 00:36:22.682409 containerd[1656]: time="2025-04-30T00:36:22.682400449Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 30 00:36:22.682492 containerd[1656]: time="2025-04-30T00:36:22.682475198Z" level=info msg="Start subscribing containerd event"
Apr 30 00:36:22.682542 containerd[1656]: time="2025-04-30T00:36:22.682535161Z" level=info msg="Start recovering state"
Apr 30 00:36:22.682625 containerd[1656]: time="2025-04-30T00:36:22.682616544Z" level=info msg="Start event monitor"
Apr 30 00:36:22.682663 containerd[1656]: time="2025-04-30T00:36:22.682656649Z" level=info msg="Start snapshots syncer"
Apr 30 00:36:22.682717 containerd[1656]: time="2025-04-30T00:36:22.682709478Z" level=info msg="Start cni network conf syncer for default"
Apr 30 00:36:22.682755 containerd[1656]: time="2025-04-30T00:36:22.682747890Z" level=info msg="Start streaming server"
Apr 30 00:36:22.682911 systemd[1]: Started containerd.service - containerd container runtime.
Apr 30 00:36:22.689764 containerd[1656]: time="2025-04-30T00:36:22.689736217Z" level=info msg="containerd successfully booted in 0.072365s"
Apr 30 00:36:22.906895 tar[1646]: linux-amd64/LICENSE
Apr 30 00:36:22.907237 tar[1646]: linux-amd64/README.md
Apr 30 00:36:22.917123 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 30 00:36:23.452438 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:36:23.452578 (kubelet)[1753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:36:23.457316 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 30 00:36:23.462844 systemd[1]: Startup finished in 9.224s (kernel) + 4.982s (userspace) = 14.206s.
Apr 30 00:36:24.381511 kubelet[1753]: E0430 00:36:24.381449 1753 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:36:24.384425 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:36:24.384747 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:36:34.561487 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 30 00:36:34.568871 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:36:34.725308 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:36:34.728473 (kubelet)[1779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:36:34.799049 kubelet[1779]: E0430 00:36:34.798966 1779 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:36:34.805338 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:36:34.805578 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:36:44.811368 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 30 00:36:44.822454 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:36:44.936280 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:36:44.939510 (kubelet)[1799]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:36:44.983471 kubelet[1799]: E0430 00:36:44.983433 1799 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:36:44.986889 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:36:44.987121 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:36:55.061450 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 30 00:36:55.069500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:36:55.220757 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:36:55.235444 (kubelet)[1819]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:36:55.274048 kubelet[1819]: E0430 00:36:55.273979 1819 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:36:55.277648 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:36:55.277864 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:37:05.311092 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 30 00:37:05.318384 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:37:05.444361 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:37:05.448227 (kubelet)[1841]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:37:05.485745 kubelet[1841]: E0430 00:37:05.485657 1841 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:37:05.487882 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:37:05.488121 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:37:07.301519 update_engine[1632]: I20250430 00:37:07.301347 1632 update_attempter.cc:509] Updating boot flags...
Apr 30 00:37:07.380185 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1860)
Apr 30 00:37:07.449547 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1861)
Apr 30 00:37:07.499271 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1861)
Apr 30 00:37:15.561339 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Apr 30 00:37:15.568865 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:37:15.709900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:37:15.717454 (kubelet)[1884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:37:15.784680 kubelet[1884]: E0430 00:37:15.784608 1884 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:37:15.788785 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:37:15.789010 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:37:25.811348 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Apr 30 00:37:25.818364 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:37:25.953747 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:37:25.968559 (kubelet)[1905]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:37:26.040166 kubelet[1905]: E0430 00:37:26.040081 1905 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:37:26.043556 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:37:26.043887 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:37:36.061357 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Apr 30 00:37:36.072407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:37:36.215265 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:37:36.216858 (kubelet)[1925]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:37:36.293056 kubelet[1925]: E0430 00:37:36.292877 1925 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:37:36.296592 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:37:36.296907 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:37:46.311268 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Apr 30 00:37:46.324485 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:37:46.440239 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:37:46.443078 (kubelet)[1946]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:37:46.483206 kubelet[1946]: E0430 00:37:46.483067 1946 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:37:46.486656 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:37:46.486828 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:37:56.561722 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Apr 30 00:37:56.569552 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:37:56.714348 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:37:56.714984 (kubelet)[1968]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:37:56.787035 kubelet[1968]: E0430 00:37:56.786953 1968 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:37:56.791132 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:37:56.791382 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:38:06.811065 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Apr 30 00:38:06.816407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:38:06.966304 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:38:06.981789 (kubelet)[1989]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:38:07.061033 kubelet[1989]: E0430 00:38:07.060965 1989 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:38:07.064379 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:38:07.064697 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:38:09.207088 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 30 00:38:09.214627 systemd[1]: Started sshd@0-135.181.102.231:22-139.178.89.65:60004.service - OpenSSH per-connection server daemon (139.178.89.65:60004).
Apr 30 00:38:10.225476 sshd[1998]: Accepted publickey for core from 139.178.89.65 port 60004 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:38:10.229268 sshd-session[1998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:38:10.243236 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 30 00:38:10.248511 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 30 00:38:10.252880 systemd-logind[1629]: New session 1 of user core.
Apr 30 00:38:10.276385 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 30 00:38:10.289557 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 30 00:38:10.295327 (systemd)[2004]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 30 00:38:10.442877 systemd[2004]: Queued start job for default target default.target.
Apr 30 00:38:10.443458 systemd[2004]: Created slice app.slice - User Application Slice.
Apr 30 00:38:10.443480 systemd[2004]: Reached target paths.target - Paths.
Apr 30 00:38:10.443491 systemd[2004]: Reached target timers.target - Timers.
Apr 30 00:38:10.448212 systemd[2004]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 30 00:38:10.466400 systemd[2004]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 30 00:38:10.466481 systemd[2004]: Reached target sockets.target - Sockets.
Apr 30 00:38:10.466507 systemd[2004]: Reached target basic.target - Basic System.
Apr 30 00:38:10.466572 systemd[2004]: Reached target default.target - Main User Target.
Apr 30 00:38:10.466612 systemd[2004]: Startup finished in 162ms.
Apr 30 00:38:10.467304 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 30 00:38:10.479606 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 30 00:38:11.162584 systemd[1]: Started sshd@1-135.181.102.231:22-139.178.89.65:60012.service - OpenSSH per-connection server daemon (139.178.89.65:60012).
Apr 30 00:38:12.161604 sshd[2016]: Accepted publickey for core from 139.178.89.65 port 60012 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:38:12.163751 sshd-session[2016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:38:12.172970 systemd-logind[1629]: New session 2 of user core.
Apr 30 00:38:12.179878 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 30 00:38:12.838293 sshd[2019]: Connection closed by 139.178.89.65 port 60012
Apr 30 00:38:12.839012 sshd-session[2016]: pam_unix(sshd:session): session closed for user core
Apr 30 00:38:12.842952 systemd[1]: sshd@1-135.181.102.231:22-139.178.89.65:60012.service: Deactivated successfully.
Apr 30 00:38:12.847039 systemd[1]: session-2.scope: Deactivated successfully.
Apr 30 00:38:12.848026 systemd-logind[1629]: Session 2 logged out. Waiting for processes to exit.
Apr 30 00:38:12.849439 systemd-logind[1629]: Removed session 2.
Apr 30 00:38:13.002503 systemd[1]: Started sshd@2-135.181.102.231:22-139.178.89.65:60020.service - OpenSSH per-connection server daemon (139.178.89.65:60020).
Apr 30 00:38:13.980725 sshd[2024]: Accepted publickey for core from 139.178.89.65 port 60020 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:38:13.982681 sshd-session[2024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:38:13.990057 systemd-logind[1629]: New session 3 of user core.
Apr 30 00:38:13.995591 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 30 00:38:14.653287 sshd[2027]: Connection closed by 139.178.89.65 port 60020
Apr 30 00:38:14.654130 sshd-session[2024]: pam_unix(sshd:session): session closed for user core
Apr 30 00:38:14.657586 systemd[1]: sshd@2-135.181.102.231:22-139.178.89.65:60020.service: Deactivated successfully.
Apr 30 00:38:14.660936 systemd[1]: session-3.scope: Deactivated successfully.
Apr 30 00:38:14.661778 systemd-logind[1629]: Session 3 logged out. Waiting for processes to exit.
Apr 30 00:38:14.663676 systemd-logind[1629]: Removed session 3.
Apr 30 00:38:14.820452 systemd[1]: Started sshd@3-135.181.102.231:22-139.178.89.65:60036.service - OpenSSH per-connection server daemon (139.178.89.65:60036).
Apr 30 00:38:15.808661 sshd[2032]: Accepted publickey for core from 139.178.89.65 port 60036 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:38:15.810514 sshd-session[2032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:38:15.816992 systemd-logind[1629]: New session 4 of user core.
Apr 30 00:38:15.827671 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 30 00:38:16.487613 sshd[2035]: Connection closed by 139.178.89.65 port 60036
Apr 30 00:38:16.488513 sshd-session[2032]: pam_unix(sshd:session): session closed for user core
Apr 30 00:38:16.494405 systemd-logind[1629]: Session 4 logged out. Waiting for processes to exit.
Apr 30 00:38:16.495555 systemd[1]: sshd@3-135.181.102.231:22-139.178.89.65:60036.service: Deactivated successfully.
Apr 30 00:38:16.499848 systemd[1]: session-4.scope: Deactivated successfully.
Apr 30 00:38:16.501548 systemd-logind[1629]: Removed session 4.
Apr 30 00:38:16.653509 systemd[1]: Started sshd@4-135.181.102.231:22-139.178.89.65:60044.service - OpenSSH per-connection server daemon (139.178.89.65:60044).
Apr 30 00:38:17.311328 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Apr 30 00:38:17.319457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:38:17.473450 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:38:17.473494 (kubelet)[2054]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:38:17.523126 kubelet[2054]: E0430 00:38:17.523051 2054 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:38:17.526543 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:38:17.526847 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:38:17.651087 sshd[2040]: Accepted publickey for core from 139.178.89.65 port 60044 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:38:17.652983 sshd-session[2040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:38:17.659842 systemd-logind[1629]: New session 5 of user core.
Apr 30 00:38:17.669666 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 30 00:38:18.182311 sudo[2065]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 30 00:38:18.182783 sudo[2065]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:38:18.199061 sudo[2065]: pam_unix(sudo:session): session closed for user root
Apr 30 00:38:18.357417 sshd[2064]: Connection closed by 139.178.89.65 port 60044
Apr 30 00:38:18.358692 sshd-session[2040]: pam_unix(sshd:session): session closed for user core
Apr 30 00:38:18.363356 systemd[1]: sshd@4-135.181.102.231:22-139.178.89.65:60044.service: Deactivated successfully.
Apr 30 00:38:18.368648 systemd-logind[1629]: Session 5 logged out. Waiting for processes to exit.
Apr 30 00:38:18.369729 systemd[1]: session-5.scope: Deactivated successfully.
Apr 30 00:38:18.373047 systemd-logind[1629]: Removed session 5.
Apr 30 00:38:18.522822 systemd[1]: Started sshd@5-135.181.102.231:22-139.178.89.65:33714.service - OpenSSH per-connection server daemon (139.178.89.65:33714).
Apr 30 00:38:19.525867 sshd[2070]: Accepted publickey for core from 139.178.89.65 port 33714 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:38:19.527869 sshd-session[2070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:38:19.534226 systemd-logind[1629]: New session 6 of user core.
Apr 30 00:38:19.541562 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 30 00:38:20.051923 sudo[2075]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 30 00:38:20.052710 sudo[2075]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:38:20.059073 sudo[2075]: pam_unix(sudo:session): session closed for user root
Apr 30 00:38:20.068400 sudo[2074]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Apr 30 00:38:20.068855 sudo[2074]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:38:20.086946 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 30 00:38:20.149896 augenrules[2097]: No rules
Apr 30 00:38:20.150768 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 00:38:20.151176 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 30 00:38:20.154872 sudo[2074]: pam_unix(sudo:session): session closed for user root
Apr 30 00:38:20.313880 sshd[2073]: Connection closed by 139.178.89.65 port 33714
Apr 30 00:38:20.314688 sshd-session[2070]: pam_unix(sshd:session): session closed for user core
Apr 30 00:38:20.319090 systemd[1]: sshd@5-135.181.102.231:22-139.178.89.65:33714.service: Deactivated successfully.
Apr 30 00:38:20.323919 systemd[1]: session-6.scope: Deactivated successfully.
Apr 30 00:38:20.325261 systemd-logind[1629]: Session 6 logged out. Waiting for processes to exit.
Apr 30 00:38:20.326731 systemd-logind[1629]: Removed session 6.
Apr 30 00:38:20.480618 systemd[1]: Started sshd@6-135.181.102.231:22-139.178.89.65:33726.service - OpenSSH per-connection server daemon (139.178.89.65:33726).
Apr 30 00:38:21.455587 sshd[2106]: Accepted publickey for core from 139.178.89.65 port 33726 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:38:21.457282 sshd-session[2106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:38:21.462231 systemd-logind[1629]: New session 7 of user core.
Apr 30 00:38:21.468653 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 30 00:38:21.976066 sudo[2110]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 30 00:38:21.976724 sudo[2110]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:38:22.426478 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 30 00:38:22.429006 (dockerd)[2128]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 30 00:38:22.765450 dockerd[2128]: time="2025-04-30T00:38:22.765271702Z" level=info msg="Starting up"
Apr 30 00:38:22.870975 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3007278762-merged.mount: Deactivated successfully.
Apr 30 00:38:22.932316 dockerd[2128]: time="2025-04-30T00:38:22.932236883Z" level=info msg="Loading containers: start."
Apr 30 00:38:23.113208 kernel: Initializing XFRM netlink socket
Apr 30 00:38:23.243383 systemd-networkd[1254]: docker0: Link UP
Apr 30 00:38:23.278983 dockerd[2128]: time="2025-04-30T00:38:23.278908836Z" level=info msg="Loading containers: done."
Apr 30 00:38:23.303016 dockerd[2128]: time="2025-04-30T00:38:23.302944794Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 30 00:38:23.303255 dockerd[2128]: time="2025-04-30T00:38:23.303084523Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Apr 30 00:38:23.303327 dockerd[2128]: time="2025-04-30T00:38:23.303278763Z" level=info msg="Daemon has completed initialization"
Apr 30 00:38:23.352322 dockerd[2128]: time="2025-04-30T00:38:23.352208990Z" level=info msg="API listen on /run/docker.sock"
Apr 30 00:38:23.353060 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 30 00:38:24.793373 containerd[1656]: time="2025-04-30T00:38:24.793308238Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
Apr 30 00:38:25.421027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount286800654.mount: Deactivated successfully.
Apr 30 00:38:27.189159 containerd[1656]: time="2025-04-30T00:38:27.189078407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:27.190190 containerd[1656]: time="2025-04-30T00:38:27.190160281Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674967"
Apr 30 00:38:27.191445 containerd[1656]: time="2025-04-30T00:38:27.191413332Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:27.196480 containerd[1656]: time="2025-04-30T00:38:27.196425387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:27.197367 containerd[1656]: time="2025-04-30T00:38:27.196939103Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.403593153s"
Apr 30 00:38:27.197367 containerd[1656]: time="2025-04-30T00:38:27.196965261Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\""
Apr 30 00:38:27.215275 containerd[1656]: time="2025-04-30T00:38:27.215150815Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
Apr 30 00:38:27.561066 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Apr 30 00:38:27.568502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:38:27.720383 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:38:27.733762 (kubelet)[2392]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:38:27.799472 kubelet[2392]: E0430 00:38:27.799396 2392 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:38:27.802352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:38:27.802621 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:38:29.092430 containerd[1656]: time="2025-04-30T00:38:29.092380966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:29.093505 containerd[1656]: time="2025-04-30T00:38:29.093474895Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617556"
Apr 30 00:38:29.094998 containerd[1656]: time="2025-04-30T00:38:29.094954162Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:29.099993 containerd[1656]: time="2025-04-30T00:38:29.099970252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:29.100925 containerd[1656]: time="2025-04-30T00:38:29.100833541Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 1.885616183s"
Apr 30 00:38:29.100925 containerd[1656]: time="2025-04-30T00:38:29.100854840Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\""
Apr 30 00:38:29.118661 containerd[1656]: time="2025-04-30T00:38:29.118619895Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
Apr 30 00:38:30.395697 containerd[1656]: time="2025-04-30T00:38:30.395631248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:30.397125 containerd[1656]: time="2025-04-30T00:38:30.397079079Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903704"
Apr 30 00:38:30.397709 containerd[1656]: time="2025-04-30T00:38:30.397625969Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:30.400426 containerd[1656]: time="2025-04-30T00:38:30.400379056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:30.401807 containerd[1656]: time="2025-04-30T00:38:30.401686215Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.283033248s"
Apr 30 00:38:30.401807 containerd[1656]: time="2025-04-30T00:38:30.401719848Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\""
Apr 30 00:38:30.427424 containerd[1656]: time="2025-04-30T00:38:30.427362510Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
Apr 30 00:38:31.576764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1164797674.mount: Deactivated successfully.
Apr 30 00:38:31.855333 containerd[1656]: time="2025-04-30T00:38:31.855183363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:31.856164 containerd[1656]: time="2025-04-30T00:38:31.855965022Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185845"
Apr 30 00:38:31.856966 containerd[1656]: time="2025-04-30T00:38:31.856899378Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:31.858689 containerd[1656]: time="2025-04-30T00:38:31.858664323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:31.859624 containerd[1656]: time="2025-04-30T00:38:31.859034915Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.431617322s"
Apr 30 00:38:31.859624 containerd[1656]: time="2025-04-30T00:38:31.859074700Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\""
Apr 30 00:38:31.877367 containerd[1656]: time="2025-04-30T00:38:31.877335960Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Apr 30 00:38:32.436263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount759683695.mount: Deactivated successfully.
Apr 30 00:38:33.193354 containerd[1656]: time="2025-04-30T00:38:33.193251831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:33.194785 containerd[1656]: time="2025-04-30T00:38:33.194725527Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185843"
Apr 30 00:38:33.196077 containerd[1656]: time="2025-04-30T00:38:33.196004450Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:33.199664 containerd[1656]: time="2025-04-30T00:38:33.199618058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:33.201491 containerd[1656]: time="2025-04-30T00:38:33.201454393Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.324088478s"
Apr 30 00:38:33.201491 containerd[1656]: time="2025-04-30T00:38:33.201484280Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Apr 30 00:38:33.222987 containerd[1656]: time="2025-04-30T00:38:33.222877866Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Apr 30 00:38:33.712407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1432308503.mount: Deactivated successfully.
Apr 30 00:38:33.722245 containerd[1656]: time="2025-04-30T00:38:33.722167682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:33.723209 containerd[1656]: time="2025-04-30T00:38:33.723137407Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322312"
Apr 30 00:38:33.724684 containerd[1656]: time="2025-04-30T00:38:33.724632423Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:33.727667 containerd[1656]: time="2025-04-30T00:38:33.727600395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:33.728821 containerd[1656]: time="2025-04-30T00:38:33.728655439Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 505.735203ms"
Apr 30 00:38:33.728821 containerd[1656]: time="2025-04-30T00:38:33.728703508Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Apr 30 00:38:33.751707 containerd[1656]: time="2025-04-30T00:38:33.751659559Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Apr 30 00:38:34.313489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount638053423.mount: Deactivated successfully.
Apr 30 00:38:35.814380 containerd[1656]: time="2025-04-30T00:38:35.814292308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:35.815684 containerd[1656]: time="2025-04-30T00:38:35.815627379Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238653"
Apr 30 00:38:35.817327 containerd[1656]: time="2025-04-30T00:38:35.817276218Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:35.820506 containerd[1656]: time="2025-04-30T00:38:35.820463759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:38:35.821672 containerd[1656]: time="2025-04-30T00:38:35.821506032Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.069803553s"
Apr 30 00:38:35.821672 containerd[1656]: time="2025-04-30T00:38:35.821543221Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Apr 30 00:38:37.810993 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Apr 30 00:38:37.820465 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:38:37.933766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:38:37.941612 (kubelet)[2608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:38:37.992406 kubelet[2608]: E0430 00:38:37.992349 2608 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:38:37.995399 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:38:37.995584 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:38:39.743174 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:38:39.751519 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:38:39.775658 systemd[1]: Reloading requested from client PID 2624 ('systemctl') (unit session-7.scope)...
Apr 30 00:38:39.775832 systemd[1]: Reloading...
Apr 30 00:38:39.883131 zram_generator::config[2664]: No configuration found.
Apr 30 00:38:39.991032 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:38:40.065803 systemd[1]: Reloading finished in 289 ms.
Apr 30 00:38:40.111027 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:38:40.114677 systemd[1]: kubelet.service: Deactivated successfully.
Apr 30 00:38:40.115228 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:38:40.120630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:38:40.216250 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:38:40.225773 (kubelet)[2733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 00:38:40.298329 kubelet[2733]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 00:38:40.298329 kubelet[2733]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Apr 30 00:38:40.298329 kubelet[2733]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 00:38:40.300927 kubelet[2733]: I0430 00:38:40.300739 2733 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 00:38:40.872901 kubelet[2733]: I0430 00:38:40.872847 2733 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Apr 30 00:38:40.872901 kubelet[2733]: I0430 00:38:40.872891 2733 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 30 00:38:40.873213 kubelet[2733]: I0430 00:38:40.873188 2733 server.go:927] "Client rotation is on, will bootstrap in background"
Apr 30 00:38:40.896603 kubelet[2733]: I0430 00:38:40.896241 2733 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 00:38:40.899493 kubelet[2733]: E0430 00:38:40.899455 2733 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://135.181.102.231:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 135.181.102.231:6443: connect: connection refused
Apr 30 00:38:40.923568 kubelet[2733]: I0430 00:38:40.922810 2733 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 00:38:40.927042 kubelet[2733]: I0430 00:38:40.926952 2733 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 30 00:38:40.927601 kubelet[2733]: I0430 00:38:40.927256 2733 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-3-3-307bd18bd0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Apr 30 00:38:40.928519 kubelet[2733]: I0430 00:38:40.928463 2733 topology_manager.go:138] "Creating topology manager with none policy"
Apr 30 00:38:40.928519 kubelet[2733]: I0430 00:38:40.928501 2733 container_manager_linux.go:301] "Creating device plugin manager"
Apr 30 00:38:40.929950 kubelet[2733]: I0430 00:38:40.929895 2733 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 00:38:40.932052 kubelet[2733]: W0430 00:38:40.931915 2733 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://135.181.102.231:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-3-3-307bd18bd0&limit=500&resourceVersion=0": dial tcp 135.181.102.231:6443: connect: connection refused
Apr 30 00:38:40.932052 kubelet[2733]: E0430 00:38:40.932002 2733 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://135.181.102.231:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-3-3-307bd18bd0&limit=500&resourceVersion=0": dial tcp 135.181.102.231:6443: connect: connection refused
Apr 30 00:38:40.933675 kubelet[2733]: I0430 00:38:40.933634 2733 kubelet.go:400] "Attempting to sync node with API server"
Apr 30 00:38:40.933735 kubelet[2733]: I0430 00:38:40.933676 2733 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 30 00:38:40.933735 kubelet[2733]: I0430 00:38:40.933726 2733 kubelet.go:312] "Adding apiserver pod source"
Apr 30 00:38:40.933809 kubelet[2733]: I0430 00:38:40.933754 2733 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 30 00:38:40.938252 kubelet[2733]: W0430 00:38:40.937553 2733 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://135.181.102.231:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 135.181.102.231:6443: connect: connection refused
Apr 30 00:38:40.938252 kubelet[2733]: E0430 00:38:40.937612 2733 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://135.181.102.231:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 135.181.102.231:6443: connect: connection refused
Apr 30 00:38:40.938252 kubelet[2733]: I0430 00:38:40.938203 2733 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Apr 30 00:38:40.942906 kubelet[2733]: I0430 00:38:40.941650 2733 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 30 00:38:40.942906 kubelet[2733]: W0430 00:38:40.941719 2733 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 30 00:38:40.942906 kubelet[2733]: I0430 00:38:40.942443 2733 server.go:1264] "Started kubelet"
Apr 30 00:38:40.949323 kubelet[2733]: I0430 00:38:40.948493 2733 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 30 00:38:40.960170 kubelet[2733]: I0430 00:38:40.959120 2733 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Apr 30 00:38:40.960170 kubelet[2733]: I0430 00:38:40.959944 2733 server.go:455] "Adding debug handlers to kubelet server"
Apr 30 00:38:40.960309 kubelet[2733]: I0430 00:38:40.960266 2733 volume_manager.go:291] "Starting Kubelet Volume Manager"
Apr 30 00:38:40.960915 kubelet[2733]: I0430 00:38:40.960879 2733 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 30 00:38:40.960997 kubelet[2733]: I0430 00:38:40.960949 2733 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Apr 30 00:38:40.961100 kubelet[2733]: I0430 00:38:40.961077 2733 reconciler.go:26] "Reconciler: start to sync state"
Apr 30 00:38:40.961249 kubelet[2733]: I0430 00:38:40.961239 2733 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 30 00:38:40.966847 kubelet[2733]: I0430 00:38:40.966831 2733 factory.go:221] Registration of the systemd container factory successfully
Apr 30 00:38:40.967020 kubelet[2733]: I0430 00:38:40.966992 2733 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 30 00:38:40.967445 kubelet[2733]: E0430 00:38:40.967365 2733 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://135.181.102.231:6443/api/v1/namespaces/default/events\": dial tcp 135.181.102.231:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-3-3-307bd18bd0.183af1aaaf02bffa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-3-3-307bd18bd0,UID:ci-4152-2-3-3-307bd18bd0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-3-3-307bd18bd0,},FirstTimestamp:2025-04-30 00:38:40.942415866 +0000 UTC m=+0.711378003,LastTimestamp:2025-04-30 00:38:40.942415866 +0000 UTC m=+0.711378003,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-3-3-307bd18bd0,}"
Apr 30 00:38:40.967620 kubelet[2733]: W0430 00:38:40.967594 2733 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://135.181.102.231:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 135.181.102.231:6443: connect: connection refused
Apr 30 00:38:40.967687 kubelet[2733]: E0430 00:38:40.967678 2733 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://135.181.102.231:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 135.181.102.231:6443: connect: connection refused
Apr 30 00:38:40.967801 kubelet[2733]: E0430 00:38:40.967785 2733 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://135.181.102.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-3-3-307bd18bd0?timeout=10s\": dial tcp 135.181.102.231:6443: connect: connection refused" interval="200ms"
Apr 30 00:38:40.967925 kubelet[2733]: E0430 00:38:40.967914 2733 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 30 00:38:40.968909 kubelet[2733]: I0430 00:38:40.968899 2733 factory.go:221] Registration of the containerd container factory successfully
Apr 30 00:38:40.980841 kubelet[2733]: I0430 00:38:40.980775 2733 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Apr 30 00:38:40.981891 kubelet[2733]: I0430 00:38:40.981866 2733 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Apr 30 00:38:40.981891 kubelet[2733]: I0430 00:38:40.981893 2733 status_manager.go:217] "Starting to sync pod status with apiserver"
Apr 30 00:38:40.981959 kubelet[2733]: I0430 00:38:40.981913 2733 kubelet.go:2337] "Starting kubelet main sync loop"
Apr 30 00:38:40.981986 kubelet[2733]: E0430 00:38:40.981954 2733 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 30 00:38:40.989265 kubelet[2733]: W0430 00:38:40.989223 2733 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://135.181.102.231:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 135.181.102.231:6443: connect: connection refused
Apr 30 00:38:40.989408 kubelet[2733]: E0430 00:38:40.989399 2733 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://135.181.102.231:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 135.181.102.231:6443: connect: connection refused
Apr 30 00:38:41.012161 kubelet[2733]: I0430 00:38:41.012125 2733 cpu_manager.go:214] "Starting CPU manager" policy="none"
Apr 30 00:38:41.012316 kubelet[2733]: I0430 00:38:41.012308 2733 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Apr 30 00:38:41.012365 kubelet[2733]: I0430 00:38:41.012359 2733 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 00:38:41.017426 kubelet[2733]: I0430 00:38:41.017410 2733 policy_none.go:49] "None policy: Start"
Apr 30 00:38:41.018335 kubelet[2733]: I0430 00:38:41.018321 2733 memory_manager.go:170] "Starting memorymanager" policy="None"
Apr 30 00:38:41.018435 kubelet[2733]: I0430 00:38:41.018429 2733 state_mem.go:35] "Initializing new in-memory state store"
Apr 30 00:38:41.022888 kubelet[2733]: I0430 00:38:41.022873 2733 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Apr 30 00:38:41.023123 kubelet[2733]: I0430 00:38:41.023093 2733 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 30 00:38:41.023289 kubelet[2733]: I0430 00:38:41.023279 2733 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 30 00:38:41.027111 kubelet[2733]: E0430 00:38:41.027096 2733 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-3-3-307bd18bd0\" not found"
Apr 30 00:38:41.062752 kubelet[2733]: I0430 00:38:41.062634 2733 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-3-3-307bd18bd0"
Apr 30 00:38:41.063108 kubelet[2733]: E0430 00:38:41.063060 2733 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://135.181.102.231:6443/api/v1/nodes\": dial tcp 135.181.102.231:6443: connect: connection refused" node="ci-4152-2-3-3-307bd18bd0"
Apr 30 00:38:41.082657 kubelet[2733]: I0430 00:38:41.082555 2733 topology_manager.go:215] "Topology Admit Handler" podUID="c0aa777c4b546fc2d3c9bba100e9ba3c" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-3-3-307bd18bd0"
Apr 30 00:38:41.085126 kubelet[2733]: I0430 00:38:41.085003 2733 topology_manager.go:215] "Topology Admit Handler" podUID="aad703b95c3cfd2d6e8ca47adb851d31" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-3-3-307bd18bd0"
Apr 30 00:38:41.087735 kubelet[2733]: I0430 00:38:41.087476 2733 topology_manager.go:215] "Topology Admit Handler" podUID="359abcbdb3ccd0954b62510f4a4463e1" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-3-3-307bd18bd0"
Apr 30 00:38:41.168740 kubelet[2733]: E0430 00:38:41.168572 2733 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://135.181.102.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-3-3-307bd18bd0?timeout=10s\": dial tcp 135.181.102.231:6443: connect: connection refused" interval="400ms"
Apr 30 00:38:41.262383 kubelet[2733]: I0430 00:38:41.262296 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c0aa777c4b546fc2d3c9bba100e9ba3c-k8s-certs\") pod \"kube-apiserver-ci-4152-2-3-3-307bd18bd0\" (UID: \"c0aa777c4b546fc2d3c9bba100e9ba3c\") " pod="kube-system/kube-apiserver-ci-4152-2-3-3-307bd18bd0"
Apr 30 00:38:41.262383 kubelet[2733]: I0430 00:38:41.262377 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aad703b95c3cfd2d6e8ca47adb851d31-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-3-3-307bd18bd0\" (UID: \"aad703b95c3cfd2d6e8ca47adb851d31\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-3-307bd18bd0"
Apr 30 00:38:41.262580 kubelet[2733]: I0430 00:38:41.262425 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aad703b95c3cfd2d6e8ca47adb851d31-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-3-3-307bd18bd0\" (UID: \"aad703b95c3cfd2d6e8ca47adb851d31\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-3-307bd18bd0"
Apr 30 00:38:41.262580 kubelet[2733]: I0430 00:38:41.262469 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/359abcbdb3ccd0954b62510f4a4463e1-kubeconfig\") pod \"kube-scheduler-ci-4152-2-3-3-307bd18bd0\" (UID: \"359abcbdb3ccd0954b62510f4a4463e1\") " pod="kube-system/kube-scheduler-ci-4152-2-3-3-307bd18bd0"
Apr 30 00:38:41.262580 kubelet[2733]: I0430 00:38:41.262517 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c0aa777c4b546fc2d3c9bba100e9ba3c-ca-certs\") pod \"kube-apiserver-ci-4152-2-3-3-307bd18bd0\" (UID: \"c0aa777c4b546fc2d3c9bba100e9ba3c\") " pod="kube-system/kube-apiserver-ci-4152-2-3-3-307bd18bd0"
Apr 30 00:38:41.262580 kubelet[2733]: I0430 00:38:41.262557 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c0aa777c4b546fc2d3c9bba100e9ba3c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-3-3-307bd18bd0\" (UID: \"c0aa777c4b546fc2d3c9bba100e9ba3c\") " pod="kube-system/kube-apiserver-ci-4152-2-3-3-307bd18bd0"
Apr 30 00:38:41.262710 kubelet[2733]: I0430 00:38:41.262600 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aad703b95c3cfd2d6e8ca47adb851d31-ca-certs\") pod \"kube-controller-manager-ci-4152-2-3-3-307bd18bd0\" (UID: \"aad703b95c3cfd2d6e8ca47adb851d31\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-3-307bd18bd0"
Apr 30 00:38:41.262710 kubelet[2733]: I0430 00:38:41.262643 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aad703b95c3cfd2d6e8ca47adb851d31-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-3-3-307bd18bd0\" (UID: \"aad703b95c3cfd2d6e8ca47adb851d31\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-3-307bd18bd0"
Apr 30 00:38:41.262710 kubelet[2733]: I0430 00:38:41.262675 2733 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aad703b95c3cfd2d6e8ca47adb851d31-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-3-3-307bd18bd0\" (UID: \"aad703b95c3cfd2d6e8ca47adb851d31\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-3-307bd18bd0"
Apr 30 00:38:41.265718 kubelet[2733]: I0430 00:38:41.265607 2733 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-3-3-307bd18bd0"
Apr 30 00:38:41.266112 kubelet[2733]: E0430 00:38:41.266026 2733 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://135.181.102.231:6443/api/v1/nodes\": dial tcp 135.181.102.231:6443: connect: connection refused" node="ci-4152-2-3-3-307bd18bd0"
Apr 30 00:38:41.361924 kubelet[2733]: E0430 00:38:41.361768 2733 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://135.181.102.231:6443/api/v1/namespaces/default/events\": dial tcp 135.181.102.231:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-3-3-307bd18bd0.183af1aaaf02bffa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-3-3-307bd18bd0,UID:ci-4152-2-3-3-307bd18bd0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-3-3-307bd18bd0,},FirstTimestamp:2025-04-30 00:38:40.942415866 +0000 UTC m=+0.711378003,LastTimestamp:2025-04-30 00:38:40.942415866 +0000 UTC m=+0.711378003,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-3-3-307bd18bd0,}"
Apr 30 00:38:41.395429 containerd[1656]: time="2025-04-30T00:38:41.395303553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-3-3-307bd18bd0,Uid:c0aa777c4b546fc2d3c9bba100e9ba3c,Namespace:kube-system,Attempt:0,}"
Apr 30 00:38:41.396220 containerd[1656]: time="2025-04-30T00:38:41.395303573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-3-3-307bd18bd0,Uid:aad703b95c3cfd2d6e8ca47adb851d31,Namespace:kube-system,Attempt:0,}"
Apr 30 00:38:41.400915 containerd[1656]: time="2025-04-30T00:38:41.400865917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-3-3-307bd18bd0,Uid:359abcbdb3ccd0954b62510f4a4463e1,Namespace:kube-system,Attempt:0,}"
Apr 30 00:38:41.570248 kubelet[2733]: E0430 00:38:41.570183 2733 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://135.181.102.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-3-3-307bd18bd0?timeout=10s\": dial tcp 135.181.102.231:6443: connect: connection refused" interval="800ms"
Apr 30 00:38:41.669625 kubelet[2733]: I0430 00:38:41.669593 2733 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-3-3-307bd18bd0"
Apr 30 00:38:41.670390 kubelet[2733]: E0430 00:38:41.670326 2733 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://135.181.102.231:6443/api/v1/nodes\": dial tcp 135.181.102.231:6443: connect: connection refused" node="ci-4152-2-3-3-307bd18bd0"
Apr 30 00:38:41.917994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3306086435.mount: Deactivated successfully.
Apr 30 00:38:41.925294 containerd[1656]: time="2025-04-30T00:38:41.925222511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 00:38:41.929337 containerd[1656]: time="2025-04-30T00:38:41.929269013Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078"
Apr 30 00:38:41.930464 containerd[1656]: time="2025-04-30T00:38:41.930417955Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 00:38:41.931843 containerd[1656]: time="2025-04-30T00:38:41.931740212Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 00:38:41.934512 containerd[1656]: time="2025-04-30T00:38:41.934426757Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 00:38:41.935865 containerd[1656]: time="2025-04-30T00:38:41.935798777Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 30 00:38:41.937636 containerd[1656]: time="2025-04-30T00:38:41.937450455Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 30 00:38:41.938813 containerd[1656]: time="2025-04-30T00:38:41.938676351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 30 00:38:41.942967 containerd[1656]: time="2025-04-30T00:38:41.942750386Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 547.309806ms"
Apr 30 00:38:41.946308 containerd[1656]: time="2025-04-30T00:38:41.946263735Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 545.295585ms"
Apr 30 00:38:41.951822 containerd[1656]: time="2025-04-30T00:38:41.951763322Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 555.692404ms"
Apr 30 00:38:42.066073 kubelet[2733]: W0430 00:38:42.065933 2733 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://135.181.102.231:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 135.181.102.231:6443: connect: connection refused
Apr 30 00:38:42.066073 kubelet[2733]: E0430 00:38:42.066008 2733 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://135.181.102.231:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 135.181.102.231:6443: connect: connection refused
Apr 30 00:38:42.101264 kubelet[2733]: W0430 00:38:42.101091 2733 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://135.181.102.231:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 135.181.102.231:6443: connect: connection refused
Apr 30 00:38:42.101728 kubelet[2733]: E0430 00:38:42.101600 2733 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://135.181.102.231:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 135.181.102.231:6443: connect: connection refused
Apr 30 00:38:42.106306 containerd[1656]: time="2025-04-30T00:38:42.105072212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:38:42.108841 containerd[1656]: time="2025-04-30T00:38:42.108553777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:38:42.108841 containerd[1656]: time="2025-04-30T00:38:42.108583634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:38:42.108841 containerd[1656]: time="2025-04-30T00:38:42.108702356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:38:42.109585 containerd[1656]: time="2025-04-30T00:38:42.109121445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:38:42.109756 containerd[1656]: time="2025-04-30T00:38:42.109698592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:38:42.111107 containerd[1656]: time="2025-04-30T00:38:42.105774114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:38:42.111107 containerd[1656]: time="2025-04-30T00:38:42.110847144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:38:42.111107 containerd[1656]: time="2025-04-30T00:38:42.110962211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:38:42.111107 containerd[1656]: time="2025-04-30T00:38:42.111064303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:38:42.111107 containerd[1656]: time="2025-04-30T00:38:42.111085723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:38:42.112232 containerd[1656]: time="2025-04-30T00:38:42.111315565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:38:42.173261 containerd[1656]: time="2025-04-30T00:38:42.172374918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-3-3-307bd18bd0,Uid:c0aa777c4b546fc2d3c9bba100e9ba3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"afdbb1b6c455a29e9d3d0d42b4941fc84de8c86b44bb5742cab4216afb903beb\""
Apr 30 00:38:42.175423 containerd[1656]: time="2025-04-30T00:38:42.175374004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-3-3-307bd18bd0,Uid:aad703b95c3cfd2d6e8ca47adb851d31,Namespace:kube-system,Attempt:0,} returns sandbox id \"6edf17042ebbdbc73984f4ad3be54063bc8fa9fa43aef4493ca64b9870130ad6\""
Apr 30 00:38:42.178044 containerd[1656]: time="2025-04-30T00:38:42.177917802Z" level=info msg="CreateContainer within sandbox \"afdbb1b6c455a29e9d3d0d42b4941fc84de8c86b44bb5742cab4216afb903beb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 30 00:38:42.178969 containerd[1656]: time="2025-04-30T00:38:42.178952501Z" level=info msg="CreateContainer within sandbox \"6edf17042ebbdbc73984f4ad3be54063bc8fa9fa43aef4493ca64b9870130ad6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 30 00:38:42.188215 containerd[1656]: time="2025-04-30T00:38:42.188163399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-3-3-307bd18bd0,Uid:359abcbdb3ccd0954b62510f4a4463e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"a987109d3d94f1746c874340dbd4b45a3a344aa0dc3f07b7195e95dbe8c26e08\""
Apr 30 00:38:42.190328 containerd[1656]: time="2025-04-30T00:38:42.190294271Z" level=info msg="CreateContainer within sandbox \"a987109d3d94f1746c874340dbd4b45a3a344aa0dc3f07b7195e95dbe8c26e08\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 30 00:38:42.202333 containerd[1656]: time="2025-04-30T00:38:42.202288249Z" level=info msg="CreateContainer within sandbox \"afdbb1b6c455a29e9d3d0d42b4941fc84de8c86b44bb5742cab4216afb903beb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"30bb9ec07364ee43e9133e777e2c0dcf51c903c8ed2a263da97b1c2ddac1339f\""
Apr 30 00:38:42.203025 containerd[1656]: time="2025-04-30T00:38:42.202814339Z" level=info msg="StartContainer for \"30bb9ec07364ee43e9133e777e2c0dcf51c903c8ed2a263da97b1c2ddac1339f\""
Apr 30 00:38:42.206745 containerd[1656]: time="2025-04-30T00:38:42.206646603Z" level=info msg="CreateContainer within sandbox \"6edf17042ebbdbc73984f4ad3be54063bc8fa9fa43aef4493ca64b9870130ad6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"473eb6bb3758e84fc690f446fdd86736ce36b451980810c5fa8d958d14577b8b\""
Apr 30 00:38:42.207473 containerd[1656]: time="2025-04-30T00:38:42.207412755Z" level=info msg="StartContainer for \"473eb6bb3758e84fc690f446fdd86736ce36b451980810c5fa8d958d14577b8b\""
Apr 30 00:38:42.209844 containerd[1656]: time="2025-04-30T00:38:42.209821199Z" level=info msg="CreateContainer within sandbox \"a987109d3d94f1746c874340dbd4b45a3a344aa0dc3f07b7195e95dbe8c26e08\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ca83946fc3891901874f770e01df15303b443b8352bc6af2d72243c16ab82479\""
Apr 30 00:38:42.210494 containerd[1656]: time="2025-04-30T00:38:42.210474790Z" level=info msg="StartContainer for \"ca83946fc3891901874f770e01df15303b443b8352bc6af2d72243c16ab82479\""
Apr 30 00:38:42.287248 containerd[1656]: time="2025-04-30T00:38:42.286777987Z" level=info msg="StartContainer for \"473eb6bb3758e84fc690f446fdd86736ce36b451980810c5fa8d958d14577b8b\" returns successfully"
Apr 30 00:38:42.308110 containerd[1656]: time="2025-04-30T00:38:42.307785123Z" level=info msg="StartContainer for \"ca83946fc3891901874f770e01df15303b443b8352bc6af2d72243c16ab82479\" returns successfully"
Apr 30 00:38:42.341161 containerd[1656]: time="2025-04-30T00:38:42.339520303Z" level=info msg="StartContainer for \"30bb9ec07364ee43e9133e777e2c0dcf51c903c8ed2a263da97b1c2ddac1339f\" returns successfully"
Apr 30 00:38:42.369034 kubelet[2733]: W0430 00:38:42.367865 2733 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://135.181.102.231:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-3-3-307bd18bd0&limit=500&resourceVersion=0": dial tcp 135.181.102.231:6443: connect: connection refused
Apr 30 00:38:42.369034 kubelet[2733]: E0430 00:38:42.367924 2733 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://135.181.102.231:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-3-3-307bd18bd0&limit=500&resourceVersion=0": dial tcp 135.181.102.231:6443: connect: connection refused
Apr 30 00:38:42.372256 kubelet[2733]: E0430 00:38:42.372217 2733 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://135.181.102.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-3-3-307bd18bd0?timeout=10s\": dial tcp 135.181.102.231:6443: connect: connection refused" interval="1.6s"
Apr 30 00:38:42.477214 kubelet[2733]: I0430 00:38:42.475074 2733 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-3-3-307bd18bd0"
Apr 30 00:38:42.477698 kubelet[2733]: E0430 00:38:42.477663 2733 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://135.181.102.231:6443/api/v1/nodes\": dial tcp 135.181.102.231:6443: connect: connection refused" node="ci-4152-2-3-3-307bd18bd0"
Apr 30 00:38:42.508165 kubelet[2733]: W0430 00:38:42.507087 2733 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://135.181.102.231:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 135.181.102.231:6443: connect: connection refused
Apr 30 00:38:42.508165 kubelet[2733]: E0430 00:38:42.507178 2733 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://135.181.102.231:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 135.181.102.231:6443: connect: connection refused
Apr 30 00:38:43.990292 kubelet[2733]: E0430 00:38:43.990234 2733 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152-2-3-3-307bd18bd0\" not found" node="ci-4152-2-3-3-307bd18bd0"
Apr 30 00:38:44.081566 kubelet[2733]: I0430 00:38:44.081497 2733 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-3-3-307bd18bd0"
Apr 30 00:38:44.095212 kubelet[2733]: I0430 00:38:44.095057 2733 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-3-3-307bd18bd0"
Apr 30 00:38:44.104370 kubelet[2733]: E0430 00:38:44.104344 2733 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-3-3-307bd18bd0\" not found"
Apr 30 00:38:44.204957 kubelet[2733]: E0430 00:38:44.204881 2733 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-3-3-307bd18bd0\" not found"
Apr 30 00:38:44.305863 kubelet[2733]: E0430 00:38:44.305710 2733 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-3-3-307bd18bd0\" not found"
Apr 30 00:38:44.940833 kubelet[2733]: I0430 00:38:44.940054 2733 apiserver.go:52] "Watching apiserver"
Apr 30 00:38:44.961712 kubelet[2733]: I0430 00:38:44.961660 2733 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Apr 30 00:38:46.172661 systemd[1]: Reloading requested from client PID 3002 ('systemctl') (unit session-7.scope)...
Apr 30 00:38:46.172690 systemd[1]: Reloading...
Apr 30 00:38:46.278210 zram_generator::config[3043]: No configuration found.
Apr 30 00:38:46.385209 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:38:46.469621 systemd[1]: Reloading finished in 296 ms. Apr 30 00:38:46.495162 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:38:46.495565 kubelet[2733]: E0430 00:38:46.495095 2733 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4152-2-3-3-307bd18bd0.183af1aaaf02bffa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-3-3-307bd18bd0,UID:ci-4152-2-3-3-307bd18bd0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-3-3-307bd18bd0,},FirstTimestamp:2025-04-30 00:38:40.942415866 +0000 UTC m=+0.711378003,LastTimestamp:2025-04-30 00:38:40.942415866 +0000 UTC m=+0.711378003,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-3-3-307bd18bd0,}" Apr 30 00:38:46.504019 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 00:38:46.504346 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:38:46.509421 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:38:46.598723 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:38:46.606455 (kubelet)[3102]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:38:46.648600 kubelet[3102]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:38:46.648600 kubelet[3102]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 00:38:46.648600 kubelet[3102]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:38:46.648600 kubelet[3102]: I0430 00:38:46.648352 3102 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:38:46.652676 kubelet[3102]: I0430 00:38:46.652647 3102 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 00:38:46.652676 kubelet[3102]: I0430 00:38:46.652663 3102 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:38:46.652816 kubelet[3102]: I0430 00:38:46.652796 3102 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 00:38:46.654362 kubelet[3102]: I0430 00:38:46.653787 3102 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 00:38:46.655499 kubelet[3102]: I0430 00:38:46.654953 3102 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:38:46.661350 kubelet[3102]: I0430 00:38:46.661078 3102 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 00:38:46.663228 kubelet[3102]: I0430 00:38:46.663126 3102 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:38:46.664386 kubelet[3102]: I0430 00:38:46.663163 3102 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-3-3-307bd18bd0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 00:38:46.664386 kubelet[3102]: I0430 00:38:46.664308 3102 topology_manager.go:138] "Creating topology manager with none policy" 
Apr 30 00:38:46.664386 kubelet[3102]: I0430 00:38:46.664317 3102 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 00:38:46.664987 kubelet[3102]: I0430 00:38:46.664914 3102 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:38:46.665061 kubelet[3102]: I0430 00:38:46.665054 3102 kubelet.go:400] "Attempting to sync node with API server" Apr 30 00:38:46.665679 kubelet[3102]: I0430 00:38:46.665671 3102 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 00:38:46.665802 kubelet[3102]: I0430 00:38:46.665752 3102 kubelet.go:312] "Adding apiserver pod source" Apr 30 00:38:46.665802 kubelet[3102]: I0430 00:38:46.665767 3102 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 00:38:46.675179 kubelet[3102]: I0430 00:38:46.673863 3102 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 00:38:46.675179 kubelet[3102]: I0430 00:38:46.674072 3102 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 00:38:46.677630 kubelet[3102]: I0430 00:38:46.677599 3102 server.go:1264] "Started kubelet" Apr 30 00:38:46.683848 kubelet[3102]: I0430 00:38:46.683789 3102 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 00:38:46.684158 kubelet[3102]: I0430 00:38:46.684128 3102 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 00:38:46.685740 kubelet[3102]: I0430 00:38:46.685700 3102 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 00:38:46.686225 kubelet[3102]: I0430 00:38:46.686216 3102 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:38:46.686836 kubelet[3102]: I0430 00:38:46.686811 3102 server.go:455] "Adding debug handlers to kubelet server" Apr 30 00:38:46.690916 kubelet[3102]: I0430 00:38:46.690817 3102 
volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 00:38:46.690916 kubelet[3102]: I0430 00:38:46.690906 3102 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 00:38:46.691159 kubelet[3102]: I0430 00:38:46.691029 3102 reconciler.go:26] "Reconciler: start to sync state" Apr 30 00:38:46.693362 kubelet[3102]: E0430 00:38:46.693232 3102 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:38:46.694126 kubelet[3102]: I0430 00:38:46.694101 3102 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:38:46.694302 kubelet[3102]: I0430 00:38:46.694289 3102 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:38:46.695467 kubelet[3102]: I0430 00:38:46.695459 3102 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:38:46.701556 kubelet[3102]: I0430 00:38:46.701371 3102 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:38:46.702480 kubelet[3102]: I0430 00:38:46.702470 3102 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 00:38:46.702540 kubelet[3102]: I0430 00:38:46.702535 3102 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 00:38:46.702592 kubelet[3102]: I0430 00:38:46.702586 3102 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 00:38:46.702659 kubelet[3102]: E0430 00:38:46.702647 3102 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:38:46.754794 kubelet[3102]: I0430 00:38:46.754722 3102 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 00:38:46.755129 kubelet[3102]: I0430 00:38:46.754935 3102 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 00:38:46.755129 kubelet[3102]: I0430 00:38:46.754952 3102 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:38:46.755129 kubelet[3102]: I0430 00:38:46.755082 3102 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 00:38:46.755129 kubelet[3102]: I0430 00:38:46.755091 3102 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 00:38:46.755129 kubelet[3102]: I0430 00:38:46.755106 3102 policy_none.go:49] "None policy: Start" Apr 30 00:38:46.755828 kubelet[3102]: I0430 00:38:46.755577 3102 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 00:38:46.755828 kubelet[3102]: I0430 00:38:46.755590 3102 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:38:46.755828 kubelet[3102]: I0430 00:38:46.755784 3102 state_mem.go:75] "Updated machine memory state" Apr 30 00:38:46.756985 kubelet[3102]: I0430 00:38:46.756975 3102 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:38:46.758106 kubelet[3102]: I0430 00:38:46.758083 3102 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:38:46.758342 kubelet[3102]: I0430 00:38:46.758216 3102 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:38:46.795426 kubelet[3102]: I0430 00:38:46.795381 3102 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-3-3-307bd18bd0" Apr 30 00:38:46.803300 kubelet[3102]: I0430 00:38:46.803003 3102 topology_manager.go:215] "Topology Admit Handler" podUID="c0aa777c4b546fc2d3c9bba100e9ba3c" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-3-3-307bd18bd0" Apr 30 00:38:46.803300 kubelet[3102]: I0430 00:38:46.803118 3102 topology_manager.go:215] "Topology Admit Handler" podUID="aad703b95c3cfd2d6e8ca47adb851d31" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-3-3-307bd18bd0" Apr 30 00:38:46.803300 kubelet[3102]: I0430 00:38:46.803176 3102 topology_manager.go:215] "Topology Admit Handler" podUID="359abcbdb3ccd0954b62510f4a4463e1" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-3-3-307bd18bd0" Apr 30 00:38:46.807342 kubelet[3102]: I0430 00:38:46.807309 3102 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152-2-3-3-307bd18bd0" Apr 30 00:38:46.807411 kubelet[3102]: I0430 00:38:46.807394 3102 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-3-3-307bd18bd0" Apr 30 00:38:46.992711 kubelet[3102]: I0430 00:38:46.992628 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aad703b95c3cfd2d6e8ca47adb851d31-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-3-3-307bd18bd0\" (UID: \"aad703b95c3cfd2d6e8ca47adb851d31\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-3-307bd18bd0" Apr 30 00:38:46.992711 kubelet[3102]: I0430 00:38:46.992673 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aad703b95c3cfd2d6e8ca47adb851d31-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4152-2-3-3-307bd18bd0\" (UID: \"aad703b95c3cfd2d6e8ca47adb851d31\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-3-307bd18bd0" Apr 30 00:38:46.992711 kubelet[3102]: I0430 00:38:46.992692 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c0aa777c4b546fc2d3c9bba100e9ba3c-k8s-certs\") pod \"kube-apiserver-ci-4152-2-3-3-307bd18bd0\" (UID: \"c0aa777c4b546fc2d3c9bba100e9ba3c\") " pod="kube-system/kube-apiserver-ci-4152-2-3-3-307bd18bd0" Apr 30 00:38:46.992711 kubelet[3102]: I0430 00:38:46.992706 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c0aa777c4b546fc2d3c9bba100e9ba3c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-3-3-307bd18bd0\" (UID: \"c0aa777c4b546fc2d3c9bba100e9ba3c\") " pod="kube-system/kube-apiserver-ci-4152-2-3-3-307bd18bd0" Apr 30 00:38:46.992711 kubelet[3102]: I0430 00:38:46.992723 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aad703b95c3cfd2d6e8ca47adb851d31-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-3-3-307bd18bd0\" (UID: \"aad703b95c3cfd2d6e8ca47adb851d31\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-3-307bd18bd0" Apr 30 00:38:46.993062 kubelet[3102]: I0430 00:38:46.992736 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/359abcbdb3ccd0954b62510f4a4463e1-kubeconfig\") pod \"kube-scheduler-ci-4152-2-3-3-307bd18bd0\" (UID: \"359abcbdb3ccd0954b62510f4a4463e1\") " pod="kube-system/kube-scheduler-ci-4152-2-3-3-307bd18bd0" Apr 30 00:38:46.993062 kubelet[3102]: I0430 00:38:46.992748 3102 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c0aa777c4b546fc2d3c9bba100e9ba3c-ca-certs\") pod \"kube-apiserver-ci-4152-2-3-3-307bd18bd0\" (UID: \"c0aa777c4b546fc2d3c9bba100e9ba3c\") " pod="kube-system/kube-apiserver-ci-4152-2-3-3-307bd18bd0" Apr 30 00:38:46.993062 kubelet[3102]: I0430 00:38:46.992762 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aad703b95c3cfd2d6e8ca47adb851d31-ca-certs\") pod \"kube-controller-manager-ci-4152-2-3-3-307bd18bd0\" (UID: \"aad703b95c3cfd2d6e8ca47adb851d31\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-3-307bd18bd0" Apr 30 00:38:46.993062 kubelet[3102]: I0430 00:38:46.992793 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aad703b95c3cfd2d6e8ca47adb851d31-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-3-3-307bd18bd0\" (UID: \"aad703b95c3cfd2d6e8ca47adb851d31\") " pod="kube-system/kube-controller-manager-ci-4152-2-3-3-307bd18bd0" Apr 30 00:38:47.179358 sudo[3132]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 30 00:38:47.180875 sudo[3132]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 30 00:38:47.670388 kubelet[3102]: I0430 00:38:47.668868 3102 apiserver.go:52] "Watching apiserver" Apr 30 00:38:47.691843 kubelet[3102]: I0430 00:38:47.691776 3102 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 00:38:47.765630 kubelet[3102]: I0430 00:38:47.763496 3102 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152-2-3-3-307bd18bd0" podStartSLOduration=1.7634728800000001 podStartE2EDuration="1.76347288s" podCreationTimestamp="2025-04-30 00:38:46 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:38:47.748684614 +0000 UTC m=+1.137066112" watchObservedRunningTime="2025-04-30 00:38:47.76347288 +0000 UTC m=+1.151854399" Apr 30 00:38:47.780841 kubelet[3102]: I0430 00:38:47.780781 3102 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-3-3-307bd18bd0" podStartSLOduration=1.780756868 podStartE2EDuration="1.780756868s" podCreationTimestamp="2025-04-30 00:38:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:38:47.765921393 +0000 UTC m=+1.154302891" watchObservedRunningTime="2025-04-30 00:38:47.780756868 +0000 UTC m=+1.169138388" Apr 30 00:38:47.796471 kubelet[3102]: I0430 00:38:47.796191 3102 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152-2-3-3-307bd18bd0" podStartSLOduration=1.79616729 podStartE2EDuration="1.79616729s" podCreationTimestamp="2025-04-30 00:38:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:38:47.781926556 +0000 UTC m=+1.170308056" watchObservedRunningTime="2025-04-30 00:38:47.79616729 +0000 UTC m=+1.184548839" Apr 30 00:38:47.808702 sudo[3132]: pam_unix(sudo:session): session closed for user root Apr 30 00:38:49.455745 sudo[2110]: pam_unix(sudo:session): session closed for user root Apr 30 00:38:49.613565 sshd[2109]: Connection closed by 139.178.89.65 port 33726 Apr 30 00:38:49.614948 sshd-session[2106]: pam_unix(sshd:session): session closed for user core Apr 30 00:38:49.618851 systemd[1]: sshd@6-135.181.102.231:22-139.178.89.65:33726.service: Deactivated successfully. Apr 30 00:38:49.623510 systemd[1]: session-7.scope: Deactivated successfully. 
Apr 30 00:38:49.624372 systemd-logind[1629]: Session 7 logged out. Waiting for processes to exit. Apr 30 00:38:49.626669 systemd-logind[1629]: Removed session 7. Apr 30 00:39:00.885488 kubelet[3102]: I0430 00:39:00.885418 3102 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 00:39:00.886058 containerd[1656]: time="2025-04-30T00:39:00.885888136Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 00:39:00.886469 kubelet[3102]: I0430 00:39:00.886089 3102 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 00:39:01.549976 kubelet[3102]: I0430 00:39:01.549708 3102 topology_manager.go:215] "Topology Admit Handler" podUID="e1246b03-12d0-428f-87c9-b3792d264bdc" podNamespace="kube-system" podName="kube-proxy-kmqrv" Apr 30 00:39:01.557536 kubelet[3102]: I0430 00:39:01.556421 3102 topology_manager.go:215] "Topology Admit Handler" podUID="f72ffd5a-0812-48a4-bd3b-38d5e5976890" podNamespace="kube-system" podName="cilium-dhjb9" Apr 30 00:39:01.689084 kubelet[3102]: I0430 00:39:01.688973 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f72ffd5a-0812-48a4-bd3b-38d5e5976890-hubble-tls\") pod \"cilium-dhjb9\" (UID: \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\") " pod="kube-system/cilium-dhjb9" Apr 30 00:39:01.689084 kubelet[3102]: I0430 00:39:01.689053 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e1246b03-12d0-428f-87c9-b3792d264bdc-kube-proxy\") pod \"kube-proxy-kmqrv\" (UID: \"e1246b03-12d0-428f-87c9-b3792d264bdc\") " pod="kube-system/kube-proxy-kmqrv" Apr 30 00:39:01.689084 kubelet[3102]: I0430 00:39:01.689087 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-cilium-cgroup\") pod \"cilium-dhjb9\" (UID: \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\") " pod="kube-system/cilium-dhjb9" Apr 30 00:39:01.689441 kubelet[3102]: I0430 00:39:01.689114 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-etc-cni-netd\") pod \"cilium-dhjb9\" (UID: \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\") " pod="kube-system/cilium-dhjb9" Apr 30 00:39:01.689441 kubelet[3102]: I0430 00:39:01.689171 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-xtables-lock\") pod \"cilium-dhjb9\" (UID: \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\") " pod="kube-system/cilium-dhjb9" Apr 30 00:39:01.689441 kubelet[3102]: I0430 00:39:01.689199 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f72ffd5a-0812-48a4-bd3b-38d5e5976890-clustermesh-secrets\") pod \"cilium-dhjb9\" (UID: \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\") " pod="kube-system/cilium-dhjb9" Apr 30 00:39:01.689441 kubelet[3102]: I0430 00:39:01.689226 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1246b03-12d0-428f-87c9-b3792d264bdc-lib-modules\") pod \"kube-proxy-kmqrv\" (UID: \"e1246b03-12d0-428f-87c9-b3792d264bdc\") " pod="kube-system/kube-proxy-kmqrv" Apr 30 00:39:01.689441 kubelet[3102]: I0430 00:39:01.689255 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-hostproc\") pod \"cilium-dhjb9\" 
(UID: \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\") " pod="kube-system/cilium-dhjb9" Apr 30 00:39:01.689441 kubelet[3102]: I0430 00:39:01.689306 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h642\" (UniqueName: \"kubernetes.io/projected/e1246b03-12d0-428f-87c9-b3792d264bdc-kube-api-access-7h642\") pod \"kube-proxy-kmqrv\" (UID: \"e1246b03-12d0-428f-87c9-b3792d264bdc\") " pod="kube-system/kube-proxy-kmqrv" Apr 30 00:39:01.689692 kubelet[3102]: I0430 00:39:01.689338 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f72ffd5a-0812-48a4-bd3b-38d5e5976890-cilium-config-path\") pod \"cilium-dhjb9\" (UID: \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\") " pod="kube-system/cilium-dhjb9" Apr 30 00:39:01.689692 kubelet[3102]: I0430 00:39:01.689363 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-cilium-run\") pod \"cilium-dhjb9\" (UID: \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\") " pod="kube-system/cilium-dhjb9" Apr 30 00:39:01.689692 kubelet[3102]: I0430 00:39:01.689389 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxj9c\" (UniqueName: \"kubernetes.io/projected/f72ffd5a-0812-48a4-bd3b-38d5e5976890-kube-api-access-rxj9c\") pod \"cilium-dhjb9\" (UID: \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\") " pod="kube-system/cilium-dhjb9" Apr 30 00:39:01.689692 kubelet[3102]: I0430 00:39:01.689413 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-host-proc-sys-net\") pod \"cilium-dhjb9\" (UID: \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\") " pod="kube-system/cilium-dhjb9" 
Apr 30 00:39:01.689692 kubelet[3102]: I0430 00:39:01.689439 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-host-proc-sys-kernel\") pod \"cilium-dhjb9\" (UID: \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\") " pod="kube-system/cilium-dhjb9" Apr 30 00:39:01.689901 kubelet[3102]: I0430 00:39:01.689465 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1246b03-12d0-428f-87c9-b3792d264bdc-xtables-lock\") pod \"kube-proxy-kmqrv\" (UID: \"e1246b03-12d0-428f-87c9-b3792d264bdc\") " pod="kube-system/kube-proxy-kmqrv" Apr 30 00:39:01.689901 kubelet[3102]: I0430 00:39:01.689489 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-bpf-maps\") pod \"cilium-dhjb9\" (UID: \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\") " pod="kube-system/cilium-dhjb9" Apr 30 00:39:01.689901 kubelet[3102]: I0430 00:39:01.689518 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-cni-path\") pod \"cilium-dhjb9\" (UID: \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\") " pod="kube-system/cilium-dhjb9" Apr 30 00:39:01.689901 kubelet[3102]: I0430 00:39:01.689560 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-lib-modules\") pod \"cilium-dhjb9\" (UID: \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\") " pod="kube-system/cilium-dhjb9" Apr 30 00:39:01.868804 containerd[1656]: time="2025-04-30T00:39:01.868676322Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-kmqrv,Uid:e1246b03-12d0-428f-87c9-b3792d264bdc,Namespace:kube-system,Attempt:0,}" Apr 30 00:39:01.874102 containerd[1656]: time="2025-04-30T00:39:01.872524418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dhjb9,Uid:f72ffd5a-0812-48a4-bd3b-38d5e5976890,Namespace:kube-system,Attempt:0,}" Apr 30 00:39:01.879674 kubelet[3102]: I0430 00:39:01.879036 3102 topology_manager.go:215] "Topology Admit Handler" podUID="72af9aab-9037-4611-bfe7-3763254c85f5" podNamespace="kube-system" podName="cilium-operator-599987898-hrc5c" Apr 30 00:39:01.900876 kubelet[3102]: I0430 00:39:01.898533 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsjtp\" (UniqueName: \"kubernetes.io/projected/72af9aab-9037-4611-bfe7-3763254c85f5-kube-api-access-jsjtp\") pod \"cilium-operator-599987898-hrc5c\" (UID: \"72af9aab-9037-4611-bfe7-3763254c85f5\") " pod="kube-system/cilium-operator-599987898-hrc5c" Apr 30 00:39:01.907988 kubelet[3102]: I0430 00:39:01.901193 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72af9aab-9037-4611-bfe7-3763254c85f5-cilium-config-path\") pod \"cilium-operator-599987898-hrc5c\" (UID: \"72af9aab-9037-4611-bfe7-3763254c85f5\") " pod="kube-system/cilium-operator-599987898-hrc5c" Apr 30 00:39:01.944685 containerd[1656]: time="2025-04-30T00:39:01.944455161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:39:01.947552 containerd[1656]: time="2025-04-30T00:39:01.946122798Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:39:01.947552 containerd[1656]: time="2025-04-30T00:39:01.946225002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:39:01.947552 containerd[1656]: time="2025-04-30T00:39:01.946241303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:01.947552 containerd[1656]: time="2025-04-30T00:39:01.946350700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:01.947552 containerd[1656]: time="2025-04-30T00:39:01.945880728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:39:01.947552 containerd[1656]: time="2025-04-30T00:39:01.945913330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:01.947552 containerd[1656]: time="2025-04-30T00:39:01.946015203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:01.996014 containerd[1656]: time="2025-04-30T00:39:01.995982301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kmqrv,Uid:e1246b03-12d0-428f-87c9-b3792d264bdc,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2123dd9c41b43613c905fb381402697a8d52410ec20e11971f531ed0c24095d\"" Apr 30 00:39:01.997426 containerd[1656]: time="2025-04-30T00:39:01.997379185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dhjb9,Uid:f72ffd5a-0812-48a4-bd3b-38d5e5976890,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f7c2fcec1269719fdfc8196125993665c825db2430628d599133b2d88c764f2\"" Apr 30 00:39:01.999852 containerd[1656]: time="2025-04-30T00:39:01.999649385Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 30 00:39:01.999852 containerd[1656]: time="2025-04-30T00:39:01.999777689Z" level=info msg="CreateContainer within sandbox \"b2123dd9c41b43613c905fb381402697a8d52410ec20e11971f531ed0c24095d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 00:39:02.019974 containerd[1656]: time="2025-04-30T00:39:02.019922291Z" level=info msg="CreateContainer within sandbox \"b2123dd9c41b43613c905fb381402697a8d52410ec20e11971f531ed0c24095d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2076614ea24ea6e371e7752413cb8b6542d83b813f2929826db7d138423f1cce\"" Apr 30 00:39:02.020814 containerd[1656]: time="2025-04-30T00:39:02.020784929Z" level=info msg="StartContainer for \"2076614ea24ea6e371e7752413cb8b6542d83b813f2929826db7d138423f1cce\"" Apr 30 00:39:02.074048 containerd[1656]: time="2025-04-30T00:39:02.073973608Z" level=info msg="StartContainer for \"2076614ea24ea6e371e7752413cb8b6542d83b813f2929826db7d138423f1cce\" returns successfully" Apr 30 00:39:02.204761 containerd[1656]: time="2025-04-30T00:39:02.204287319Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-599987898-hrc5c,Uid:72af9aab-9037-4611-bfe7-3763254c85f5,Namespace:kube-system,Attempt:0,}" Apr 30 00:39:02.262658 containerd[1656]: time="2025-04-30T00:39:02.261977938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:39:02.262658 containerd[1656]: time="2025-04-30T00:39:02.262051729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:39:02.262658 containerd[1656]: time="2025-04-30T00:39:02.262090692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:02.262658 containerd[1656]: time="2025-04-30T00:39:02.262244034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:02.330863 containerd[1656]: time="2025-04-30T00:39:02.330818226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-hrc5c,Uid:72af9aab-9037-4611-bfe7-3763254c85f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d8c10fa472bda15d89d6b8bdd4acbf6a744c4863936bae7659505e652604684\"" Apr 30 00:39:02.786810 kubelet[3102]: I0430 00:39:02.785177 3102 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kmqrv" podStartSLOduration=1.785075945 podStartE2EDuration="1.785075945s" podCreationTimestamp="2025-04-30 00:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:39:02.783051961 +0000 UTC m=+16.171433480" watchObservedRunningTime="2025-04-30 00:39:02.785075945 +0000 UTC m=+16.173457525" Apr 30 00:39:07.916114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2596563140.mount: Deactivated successfully. 
Apr 30 00:39:09.455666 containerd[1656]: time="2025-04-30T00:39:09.455598342Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:39:09.457746 containerd[1656]: time="2025-04-30T00:39:09.457697377Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 30 00:39:09.458604 containerd[1656]: time="2025-04-30T00:39:09.458528028Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:39:09.461681 containerd[1656]: time="2025-04-30T00:39:09.461617638Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.461941202s" Apr 30 00:39:09.461681 containerd[1656]: time="2025-04-30T00:39:09.461651182Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 30 00:39:09.468135 containerd[1656]: time="2025-04-30T00:39:09.468072663Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 30 00:39:09.471853 containerd[1656]: time="2025-04-30T00:39:09.471796821Z" level=info msg="CreateContainer within sandbox \"1f7c2fcec1269719fdfc8196125993665c825db2430628d599133b2d88c764f2\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 00:39:09.546560 containerd[1656]: time="2025-04-30T00:39:09.546504578Z" level=info msg="CreateContainer within sandbox \"1f7c2fcec1269719fdfc8196125993665c825db2430628d599133b2d88c764f2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"068360fb1d76f86ce146feebd8917f8b8eac73e31ef8243d7add628a2879dc76\"" Apr 30 00:39:09.547329 containerd[1656]: time="2025-04-30T00:39:09.547051258Z" level=info msg="StartContainer for \"068360fb1d76f86ce146feebd8917f8b8eac73e31ef8243d7add628a2879dc76\"" Apr 30 00:39:09.705422 systemd[1]: run-containerd-runc-k8s.io-068360fb1d76f86ce146feebd8917f8b8eac73e31ef8243d7add628a2879dc76-runc.QAEMTm.mount: Deactivated successfully. Apr 30 00:39:09.733779 containerd[1656]: time="2025-04-30T00:39:09.733351012Z" level=info msg="StartContainer for \"068360fb1d76f86ce146feebd8917f8b8eac73e31ef8243d7add628a2879dc76\" returns successfully" Apr 30 00:39:09.837495 containerd[1656]: time="2025-04-30T00:39:09.806928778Z" level=info msg="shim disconnected" id=068360fb1d76f86ce146feebd8917f8b8eac73e31ef8243d7add628a2879dc76 namespace=k8s.io Apr 30 00:39:09.837957 containerd[1656]: time="2025-04-30T00:39:09.837746361Z" level=warning msg="cleaning up after shim disconnected" id=068360fb1d76f86ce146feebd8917f8b8eac73e31ef8243d7add628a2879dc76 namespace=k8s.io Apr 30 00:39:09.837957 containerd[1656]: time="2025-04-30T00:39:09.837772431Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:39:10.537945 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-068360fb1d76f86ce146feebd8917f8b8eac73e31ef8243d7add628a2879dc76-rootfs.mount: Deactivated successfully. 
Apr 30 00:39:10.856842 containerd[1656]: time="2025-04-30T00:39:10.856756499Z" level=info msg="CreateContainer within sandbox \"1f7c2fcec1269719fdfc8196125993665c825db2430628d599133b2d88c764f2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 00:39:10.885240 containerd[1656]: time="2025-04-30T00:39:10.885170802Z" level=info msg="CreateContainer within sandbox \"1f7c2fcec1269719fdfc8196125993665c825db2430628d599133b2d88c764f2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3ab6a25177de6e3050c4ec4a88ae2a1cbfa6d9efa876812c9643776fc9dba595\"" Apr 30 00:39:10.885941 containerd[1656]: time="2025-04-30T00:39:10.885851688Z" level=info msg="StartContainer for \"3ab6a25177de6e3050c4ec4a88ae2a1cbfa6d9efa876812c9643776fc9dba595\"" Apr 30 00:39:10.949855 containerd[1656]: time="2025-04-30T00:39:10.949717540Z" level=info msg="StartContainer for \"3ab6a25177de6e3050c4ec4a88ae2a1cbfa6d9efa876812c9643776fc9dba595\" returns successfully" Apr 30 00:39:10.959730 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 00:39:10.960007 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:39:10.960067 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:39:10.967513 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:39:10.984524 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Apr 30 00:39:11.004755 containerd[1656]: time="2025-04-30T00:39:11.004684213Z" level=info msg="shim disconnected" id=3ab6a25177de6e3050c4ec4a88ae2a1cbfa6d9efa876812c9643776fc9dba595 namespace=k8s.io Apr 30 00:39:11.004755 containerd[1656]: time="2025-04-30T00:39:11.004745409Z" level=warning msg="cleaning up after shim disconnected" id=3ab6a25177de6e3050c4ec4a88ae2a1cbfa6d9efa876812c9643776fc9dba595 namespace=k8s.io Apr 30 00:39:11.004755 containerd[1656]: time="2025-04-30T00:39:11.004752794Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:39:11.537942 systemd[1]: run-containerd-runc-k8s.io-3ab6a25177de6e3050c4ec4a88ae2a1cbfa6d9efa876812c9643776fc9dba595-runc.O8qn2n.mount: Deactivated successfully. Apr 30 00:39:11.538213 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ab6a25177de6e3050c4ec4a88ae2a1cbfa6d9efa876812c9643776fc9dba595-rootfs.mount: Deactivated successfully. Apr 30 00:39:11.582986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3804753094.mount: Deactivated successfully. Apr 30 00:39:11.864451 containerd[1656]: time="2025-04-30T00:39:11.864361605Z" level=info msg="CreateContainer within sandbox \"1f7c2fcec1269719fdfc8196125993665c825db2430628d599133b2d88c764f2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 00:39:11.886446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2744506494.mount: Deactivated successfully. 
Apr 30 00:39:11.891361 containerd[1656]: time="2025-04-30T00:39:11.889730322Z" level=info msg="CreateContainer within sandbox \"1f7c2fcec1269719fdfc8196125993665c825db2430628d599133b2d88c764f2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"51212761a920a372ee3e19fd6c985ce2a6c597c74615f9dd5ea6960e1a6e9432\"" Apr 30 00:39:11.901408 containerd[1656]: time="2025-04-30T00:39:11.899670379Z" level=info msg="StartContainer for \"51212761a920a372ee3e19fd6c985ce2a6c597c74615f9dd5ea6960e1a6e9432\"" Apr 30 00:39:12.002989 containerd[1656]: time="2025-04-30T00:39:12.002952869Z" level=info msg="StartContainer for \"51212761a920a372ee3e19fd6c985ce2a6c597c74615f9dd5ea6960e1a6e9432\" returns successfully" Apr 30 00:39:12.056963 containerd[1656]: time="2025-04-30T00:39:12.056831127Z" level=info msg="shim disconnected" id=51212761a920a372ee3e19fd6c985ce2a6c597c74615f9dd5ea6960e1a6e9432 namespace=k8s.io Apr 30 00:39:12.056963 containerd[1656]: time="2025-04-30T00:39:12.056881202Z" level=warning msg="cleaning up after shim disconnected" id=51212761a920a372ee3e19fd6c985ce2a6c597c74615f9dd5ea6960e1a6e9432 namespace=k8s.io Apr 30 00:39:12.056963 containerd[1656]: time="2025-04-30T00:39:12.056887804Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:39:12.171209 containerd[1656]: time="2025-04-30T00:39:12.171095768Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:39:12.172052 containerd[1656]: time="2025-04-30T00:39:12.172015220Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 30 00:39:12.173135 containerd[1656]: time="2025-04-30T00:39:12.173103162Z" level=info msg="ImageCreate event 
name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:39:12.174190 containerd[1656]: time="2025-04-30T00:39:12.174165104Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.706054589s" Apr 30 00:39:12.174239 containerd[1656]: time="2025-04-30T00:39:12.174192847Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 30 00:39:12.176729 containerd[1656]: time="2025-04-30T00:39:12.176699091Z" level=info msg="CreateContainer within sandbox \"5d8c10fa472bda15d89d6b8bdd4acbf6a744c4863936bae7659505e652604684\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 30 00:39:12.197971 containerd[1656]: time="2025-04-30T00:39:12.197909378Z" level=info msg="CreateContainer within sandbox \"5d8c10fa472bda15d89d6b8bdd4acbf6a744c4863936bae7659505e652604684\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2c913710e96d9bc7c612fcb72e15984f2ba2618e3445fe1866775a6bc715e7c7\"" Apr 30 00:39:12.198490 containerd[1656]: time="2025-04-30T00:39:12.198461811Z" level=info msg="StartContainer for \"2c913710e96d9bc7c612fcb72e15984f2ba2618e3445fe1866775a6bc715e7c7\"" Apr 30 00:39:12.257526 containerd[1656]: time="2025-04-30T00:39:12.257462796Z" level=info msg="StartContainer for \"2c913710e96d9bc7c612fcb72e15984f2ba2618e3445fe1866775a6bc715e7c7\" returns successfully" Apr 30 00:39:12.870811 containerd[1656]: 
time="2025-04-30T00:39:12.870776792Z" level=info msg="CreateContainer within sandbox \"1f7c2fcec1269719fdfc8196125993665c825db2430628d599133b2d88c764f2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 00:39:12.901244 containerd[1656]: time="2025-04-30T00:39:12.900581700Z" level=info msg="CreateContainer within sandbox \"1f7c2fcec1269719fdfc8196125993665c825db2430628d599133b2d88c764f2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1f9e8b0ea4911078e4e49f7b697fb9b24b5d9b774b7161bbb6654893ce9dddda\"" Apr 30 00:39:12.910421 containerd[1656]: time="2025-04-30T00:39:12.909417589Z" level=info msg="StartContainer for \"1f9e8b0ea4911078e4e49f7b697fb9b24b5d9b774b7161bbb6654893ce9dddda\"" Apr 30 00:39:12.962857 kubelet[3102]: I0430 00:39:12.962802 3102 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-hrc5c" podStartSLOduration=2.120036486 podStartE2EDuration="11.962785415s" podCreationTimestamp="2025-04-30 00:39:01 +0000 UTC" firstStartedPulling="2025-04-30 00:39:02.332498307 +0000 UTC m=+15.720879806" lastFinishedPulling="2025-04-30 00:39:12.175247236 +0000 UTC m=+25.563628735" observedRunningTime="2025-04-30 00:39:12.908829679 +0000 UTC m=+26.297211198" watchObservedRunningTime="2025-04-30 00:39:12.962785415 +0000 UTC m=+26.351166913" Apr 30 00:39:13.027910 containerd[1656]: time="2025-04-30T00:39:13.027873344Z" level=info msg="StartContainer for \"1f9e8b0ea4911078e4e49f7b697fb9b24b5d9b774b7161bbb6654893ce9dddda\" returns successfully" Apr 30 00:39:13.048496 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f9e8b0ea4911078e4e49f7b697fb9b24b5d9b774b7161bbb6654893ce9dddda-rootfs.mount: Deactivated successfully. 
Apr 30 00:39:13.074234 containerd[1656]: time="2025-04-30T00:39:13.074126219Z" level=info msg="shim disconnected" id=1f9e8b0ea4911078e4e49f7b697fb9b24b5d9b774b7161bbb6654893ce9dddda namespace=k8s.io Apr 30 00:39:13.074234 containerd[1656]: time="2025-04-30T00:39:13.074194469Z" level=warning msg="cleaning up after shim disconnected" id=1f9e8b0ea4911078e4e49f7b697fb9b24b5d9b774b7161bbb6654893ce9dddda namespace=k8s.io Apr 30 00:39:13.074234 containerd[1656]: time="2025-04-30T00:39:13.074201513Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:39:13.882718 containerd[1656]: time="2025-04-30T00:39:13.882628558Z" level=info msg="CreateContainer within sandbox \"1f7c2fcec1269719fdfc8196125993665c825db2430628d599133b2d88c764f2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 00:39:13.923348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1036481158.mount: Deactivated successfully. Apr 30 00:39:13.925225 containerd[1656]: time="2025-04-30T00:39:13.925060274Z" level=info msg="CreateContainer within sandbox \"1f7c2fcec1269719fdfc8196125993665c825db2430628d599133b2d88c764f2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2c36cc9c2ae831f695ca78f3004318b4e221fa46e9bab83b1fce7fdda05d3cfa\"" Apr 30 00:39:13.926379 containerd[1656]: time="2025-04-30T00:39:13.926346424Z" level=info msg="StartContainer for \"2c36cc9c2ae831f695ca78f3004318b4e221fa46e9bab83b1fce7fdda05d3cfa\"" Apr 30 00:39:13.990438 containerd[1656]: time="2025-04-30T00:39:13.990342691Z" level=info msg="StartContainer for \"2c36cc9c2ae831f695ca78f3004318b4e221fa46e9bab83b1fce7fdda05d3cfa\" returns successfully" Apr 30 00:39:14.112344 kubelet[3102]: I0430 00:39:14.112189 3102 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 00:39:14.138877 kubelet[3102]: I0430 00:39:14.138494 3102 topology_manager.go:215] "Topology Admit Handler" podUID="de53babf-d858-448e-8be3-d59d2f3b2470" 
podNamespace="kube-system" podName="coredns-7db6d8ff4d-sxns2" Apr 30 00:39:14.142877 kubelet[3102]: I0430 00:39:14.142767 3102 topology_manager.go:215] "Topology Admit Handler" podUID="20b0a9ff-745c-45f7-821c-0cd70920ecea" podNamespace="kube-system" podName="coredns-7db6d8ff4d-mqtf5" Apr 30 00:39:14.209479 kubelet[3102]: I0430 00:39:14.209406 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de53babf-d858-448e-8be3-d59d2f3b2470-config-volume\") pod \"coredns-7db6d8ff4d-sxns2\" (UID: \"de53babf-d858-448e-8be3-d59d2f3b2470\") " pod="kube-system/coredns-7db6d8ff4d-sxns2" Apr 30 00:39:14.209709 kubelet[3102]: I0430 00:39:14.209496 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20b0a9ff-745c-45f7-821c-0cd70920ecea-config-volume\") pod \"coredns-7db6d8ff4d-mqtf5\" (UID: \"20b0a9ff-745c-45f7-821c-0cd70920ecea\") " pod="kube-system/coredns-7db6d8ff4d-mqtf5" Apr 30 00:39:14.209709 kubelet[3102]: I0430 00:39:14.209545 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ldf6\" (UniqueName: \"kubernetes.io/projected/de53babf-d858-448e-8be3-d59d2f3b2470-kube-api-access-8ldf6\") pod \"coredns-7db6d8ff4d-sxns2\" (UID: \"de53babf-d858-448e-8be3-d59d2f3b2470\") " pod="kube-system/coredns-7db6d8ff4d-sxns2" Apr 30 00:39:14.209709 kubelet[3102]: I0430 00:39:14.209612 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsnq9\" (UniqueName: \"kubernetes.io/projected/20b0a9ff-745c-45f7-821c-0cd70920ecea-kube-api-access-nsnq9\") pod \"coredns-7db6d8ff4d-mqtf5\" (UID: \"20b0a9ff-745c-45f7-821c-0cd70920ecea\") " pod="kube-system/coredns-7db6d8ff4d-mqtf5" Apr 30 00:39:14.450437 containerd[1656]: time="2025-04-30T00:39:14.449806600Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mqtf5,Uid:20b0a9ff-745c-45f7-821c-0cd70920ecea,Namespace:kube-system,Attempt:0,}" Apr 30 00:39:14.450897 containerd[1656]: time="2025-04-30T00:39:14.450871779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sxns2,Uid:de53babf-d858-448e-8be3-d59d2f3b2470,Namespace:kube-system,Attempt:0,}" Apr 30 00:39:14.905429 kubelet[3102]: I0430 00:39:14.905057 3102 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dhjb9" podStartSLOduration=6.436921433 podStartE2EDuration="13.90502956s" podCreationTimestamp="2025-04-30 00:39:01 +0000 UTC" firstStartedPulling="2025-04-30 00:39:01.999206714 +0000 UTC m=+15.387588213" lastFinishedPulling="2025-04-30 00:39:09.467314841 +0000 UTC m=+22.855696340" observedRunningTime="2025-04-30 00:39:14.904729378 +0000 UTC m=+28.293110957" watchObservedRunningTime="2025-04-30 00:39:14.90502956 +0000 UTC m=+28.293411099" Apr 30 00:39:16.215433 systemd-networkd[1254]: cilium_host: Link UP Apr 30 00:39:16.217527 systemd-networkd[1254]: cilium_net: Link UP Apr 30 00:39:16.217533 systemd-networkd[1254]: cilium_net: Gained carrier Apr 30 00:39:16.217883 systemd-networkd[1254]: cilium_host: Gained carrier Apr 30 00:39:16.370839 systemd-networkd[1254]: cilium_vxlan: Link UP Apr 30 00:39:16.370847 systemd-networkd[1254]: cilium_vxlan: Gained carrier Apr 30 00:39:16.844239 kernel: NET: Registered PF_ALG protocol family Apr 30 00:39:17.025976 systemd-networkd[1254]: cilium_host: Gained IPv6LL Apr 30 00:39:17.087984 systemd-networkd[1254]: cilium_net: Gained IPv6LL Apr 30 00:39:17.601213 systemd-networkd[1254]: cilium_vxlan: Gained IPv6LL Apr 30 00:39:17.620158 systemd-networkd[1254]: lxc_health: Link UP Apr 30 00:39:17.629207 systemd-networkd[1254]: lxc_health: Gained carrier Apr 30 00:39:18.109265 kernel: eth0: renamed from tmp8c0e4 Apr 30 00:39:18.106980 systemd-networkd[1254]: lxcf0ad741a6ce2: Link UP Apr 30 
00:39:18.107793 systemd-networkd[1254]: lxc5f64d7f2d79b: Link UP Apr 30 00:39:18.121417 systemd-networkd[1254]: lxcf0ad741a6ce2: Gained carrier Apr 30 00:39:18.128317 kernel: eth0: renamed from tmp627ab Apr 30 00:39:18.134187 systemd-networkd[1254]: lxc5f64d7f2d79b: Gained carrier Apr 30 00:39:19.264317 systemd-networkd[1254]: lxc_health: Gained IPv6LL Apr 30 00:39:19.648243 systemd-networkd[1254]: lxcf0ad741a6ce2: Gained IPv6LL Apr 30 00:39:19.967520 systemd-networkd[1254]: lxc5f64d7f2d79b: Gained IPv6LL Apr 30 00:39:21.528128 containerd[1656]: time="2025-04-30T00:39:21.527797438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:39:21.528128 containerd[1656]: time="2025-04-30T00:39:21.527869856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:39:21.528128 containerd[1656]: time="2025-04-30T00:39:21.527884153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:21.528128 containerd[1656]: time="2025-04-30T00:39:21.527957483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:21.608222 containerd[1656]: time="2025-04-30T00:39:21.606377505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:39:21.608222 containerd[1656]: time="2025-04-30T00:39:21.606454533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:39:21.608222 containerd[1656]: time="2025-04-30T00:39:21.606465643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:21.608222 containerd[1656]: time="2025-04-30T00:39:21.606563610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:39:21.626585 containerd[1656]: time="2025-04-30T00:39:21.626503137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mqtf5,Uid:20b0a9ff-745c-45f7-821c-0cd70920ecea,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c0e466a0095551726712d19176fd97fa741ae8764a192d83bd326ca92c7fe67\"" Apr 30 00:39:21.631420 containerd[1656]: time="2025-04-30T00:39:21.631374411Z" level=info msg="CreateContainer within sandbox \"8c0e466a0095551726712d19176fd97fa741ae8764a192d83bd326ca92c7fe67\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:39:21.686539 containerd[1656]: time="2025-04-30T00:39:21.686419583Z" level=info msg="CreateContainer within sandbox \"8c0e466a0095551726712d19176fd97fa741ae8764a192d83bd326ca92c7fe67\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1672146e5e00f6a95deb53fcd9f7a3918c263c453d8badd676c507b967239afb\"" Apr 30 00:39:21.687909 containerd[1656]: time="2025-04-30T00:39:21.687171327Z" level=info msg="StartContainer for \"1672146e5e00f6a95deb53fcd9f7a3918c263c453d8badd676c507b967239afb\"" Apr 30 00:39:21.691832 containerd[1656]: time="2025-04-30T00:39:21.691635285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sxns2,Uid:de53babf-d858-448e-8be3-d59d2f3b2470,Namespace:kube-system,Attempt:0,} returns sandbox id \"627abfbddbf57af829d789fdf8c3bdb8f236f7d55e5e9dc804ff6448a44e3561\"" Apr 30 00:39:21.697321 containerd[1656]: time="2025-04-30T00:39:21.697290296Z" level=info msg="CreateContainer within sandbox \"627abfbddbf57af829d789fdf8c3bdb8f236f7d55e5e9dc804ff6448a44e3561\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:39:21.715031 containerd[1656]: 
time="2025-04-30T00:39:21.714853156Z" level=info msg="CreateContainer within sandbox \"627abfbddbf57af829d789fdf8c3bdb8f236f7d55e5e9dc804ff6448a44e3561\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"03231eae82655d45e03d590ddb3944f5f901e4af6492db059e3ff6c89302e2f9\"" Apr 30 00:39:21.715946 containerd[1656]: time="2025-04-30T00:39:21.715605672Z" level=info msg="StartContainer for \"03231eae82655d45e03d590ddb3944f5f901e4af6492db059e3ff6c89302e2f9\"" Apr 30 00:39:21.745997 containerd[1656]: time="2025-04-30T00:39:21.745940393Z" level=info msg="StartContainer for \"1672146e5e00f6a95deb53fcd9f7a3918c263c453d8badd676c507b967239afb\" returns successfully" Apr 30 00:39:21.763597 containerd[1656]: time="2025-04-30T00:39:21.763549153Z" level=info msg="StartContainer for \"03231eae82655d45e03d590ddb3944f5f901e4af6492db059e3ff6c89302e2f9\" returns successfully" Apr 30 00:39:21.928194 kubelet[3102]: I0430 00:39:21.928052 3102 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-sxns2" podStartSLOduration=20.928034055 podStartE2EDuration="20.928034055s" podCreationTimestamp="2025-04-30 00:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:39:21.927702863 +0000 UTC m=+35.316084362" watchObservedRunningTime="2025-04-30 00:39:21.928034055 +0000 UTC m=+35.316415554" Apr 30 00:39:21.946257 kubelet[3102]: I0430 00:39:21.946204 3102 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-mqtf5" podStartSLOduration=20.94618588 podStartE2EDuration="20.94618588s" podCreationTimestamp="2025-04-30 00:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:39:21.944738068 +0000 UTC m=+35.333119557" watchObservedRunningTime="2025-04-30 00:39:21.94618588 +0000 UTC 
m=+35.334567380" Apr 30 00:39:22.542993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2036785910.mount: Deactivated successfully. Apr 30 00:39:31.125771 kubelet[3102]: I0430 00:39:31.125543 3102 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:43:25.528735 systemd[1]: Started sshd@7-135.181.102.231:22-139.178.89.65:52580.service - OpenSSH per-connection server daemon (139.178.89.65:52580). Apr 30 00:43:26.536641 sshd[4501]: Accepted publickey for core from 139.178.89.65 port 52580 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E Apr 30 00:43:26.538376 sshd-session[4501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:43:26.545808 systemd-logind[1629]: New session 8 of user core. Apr 30 00:43:26.549379 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 00:43:27.956495 sshd[4504]: Connection closed by 139.178.89.65 port 52580 Apr 30 00:43:27.958250 sshd-session[4501]: pam_unix(sshd:session): session closed for user core Apr 30 00:43:27.964703 systemd[1]: sshd@7-135.181.102.231:22-139.178.89.65:52580.service: Deactivated successfully. Apr 30 00:43:27.969983 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 00:43:27.971126 systemd-logind[1629]: Session 8 logged out. Waiting for processes to exit. Apr 30 00:43:27.973936 systemd-logind[1629]: Removed session 8. Apr 30 00:43:33.123593 systemd[1]: Started sshd@8-135.181.102.231:22-139.178.89.65:47060.service - OpenSSH per-connection server daemon (139.178.89.65:47060). Apr 30 00:43:34.118916 sshd[4518]: Accepted publickey for core from 139.178.89.65 port 47060 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E Apr 30 00:43:34.121579 sshd-session[4518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:43:34.129297 systemd-logind[1629]: New session 9 of user core. Apr 30 00:43:34.135653 systemd[1]: Started session-9.scope - Session 9 of User core. 
Apr 30 00:43:34.928909 sshd[4521]: Connection closed by 139.178.89.65 port 47060 Apr 30 00:43:34.929926 sshd-session[4518]: pam_unix(sshd:session): session closed for user core Apr 30 00:43:34.934381 systemd[1]: sshd@8-135.181.102.231:22-139.178.89.65:47060.service: Deactivated successfully. Apr 30 00:43:34.940863 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 00:43:34.942334 systemd-logind[1629]: Session 9 logged out. Waiting for processes to exit. Apr 30 00:43:34.943815 systemd-logind[1629]: Removed session 9. Apr 30 00:43:40.097596 systemd[1]: Started sshd@9-135.181.102.231:22-139.178.89.65:59858.service - OpenSSH per-connection server daemon (139.178.89.65:59858). Apr 30 00:43:41.097844 sshd[4534]: Accepted publickey for core from 139.178.89.65 port 59858 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E Apr 30 00:43:41.100260 sshd-session[4534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:43:41.108989 systemd-logind[1629]: New session 10 of user core. Apr 30 00:43:41.113783 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 00:43:41.864882 sshd[4537]: Connection closed by 139.178.89.65 port 59858 Apr 30 00:43:41.865476 sshd-session[4534]: pam_unix(sshd:session): session closed for user core Apr 30 00:43:41.868672 systemd[1]: sshd@9-135.181.102.231:22-139.178.89.65:59858.service: Deactivated successfully. Apr 30 00:43:41.870561 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 00:43:41.870759 systemd-logind[1629]: Session 10 logged out. Waiting for processes to exit. Apr 30 00:43:41.872648 systemd-logind[1629]: Removed session 10. Apr 30 00:43:42.030520 systemd[1]: Started sshd@10-135.181.102.231:22-139.178.89.65:59872.service - OpenSSH per-connection server daemon (139.178.89.65:59872). 
Apr 30 00:43:43.030648 sshd[4550]: Accepted publickey for core from 139.178.89.65 port 59872 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:43:43.032962 sshd-session[4550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:43:43.040537 systemd-logind[1629]: New session 11 of user core.
Apr 30 00:43:43.046526 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 30 00:43:43.885949 sshd[4553]: Connection closed by 139.178.89.65 port 59872
Apr 30 00:43:43.886664 sshd-session[4550]: pam_unix(sshd:session): session closed for user core
Apr 30 00:43:43.888954 systemd[1]: sshd@10-135.181.102.231:22-139.178.89.65:59872.service: Deactivated successfully.
Apr 30 00:43:43.892865 systemd[1]: session-11.scope: Deactivated successfully.
Apr 30 00:43:43.895792 systemd-logind[1629]: Session 11 logged out. Waiting for processes to exit.
Apr 30 00:43:43.896807 systemd-logind[1629]: Removed session 11.
Apr 30 00:43:44.051910 systemd[1]: Started sshd@11-135.181.102.231:22-139.178.89.65:59886.service - OpenSSH per-connection server daemon (139.178.89.65:59886).
Apr 30 00:43:45.045341 sshd[4562]: Accepted publickey for core from 139.178.89.65 port 59886 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:43:45.047108 sshd-session[4562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:43:45.052431 systemd-logind[1629]: New session 12 of user core.
Apr 30 00:43:45.058342 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 30 00:43:45.833313 sshd[4565]: Connection closed by 139.178.89.65 port 59886
Apr 30 00:43:45.834937 sshd-session[4562]: pam_unix(sshd:session): session closed for user core
Apr 30 00:43:45.840417 systemd[1]: sshd@11-135.181.102.231:22-139.178.89.65:59886.service: Deactivated successfully.
Apr 30 00:43:45.840867 systemd-logind[1629]: Session 12 logged out. Waiting for processes to exit.
Apr 30 00:43:45.846090 systemd[1]: session-12.scope: Deactivated successfully.
Apr 30 00:43:45.848025 systemd-logind[1629]: Removed session 12.
Apr 30 00:43:47.300545 update_engine[1632]: I20250430 00:43:47.300431 1632 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Apr 30 00:43:47.300545 update_engine[1632]: I20250430 00:43:47.300520 1632 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Apr 30 00:43:47.303510 update_engine[1632]: I20250430 00:43:47.303462 1632 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Apr 30 00:43:47.304549 update_engine[1632]: I20250430 00:43:47.304235 1632 omaha_request_params.cc:62] Current group set to stable
Apr 30 00:43:47.304549 update_engine[1632]: I20250430 00:43:47.304401 1632 update_attempter.cc:499] Already updated boot flags. Skipping.
Apr 30 00:43:47.304549 update_engine[1632]: I20250430 00:43:47.304414 1632 update_attempter.cc:643] Scheduling an action processor start.
Apr 30 00:43:47.304549 update_engine[1632]: I20250430 00:43:47.304440 1632 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 30 00:43:47.304549 update_engine[1632]: I20250430 00:43:47.304493 1632 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Apr 30 00:43:47.304782 update_engine[1632]: I20250430 00:43:47.304603 1632 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 30 00:43:47.304782 update_engine[1632]: I20250430 00:43:47.304618 1632 omaha_request_action.cc:272] Request:
Apr 30 00:43:47.304782 update_engine[1632]:
Apr 30 00:43:47.304782 update_engine[1632]:
Apr 30 00:43:47.304782 update_engine[1632]:
Apr 30 00:43:47.304782 update_engine[1632]:
Apr 30 00:43:47.304782 update_engine[1632]:
Apr 30 00:43:47.304782 update_engine[1632]:
Apr 30 00:43:47.304782 update_engine[1632]:
Apr 30 00:43:47.304782 update_engine[1632]:
Apr 30 00:43:47.304782 update_engine[1632]: I20250430 00:43:47.304628 1632 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 30 00:43:47.320733 update_engine[1632]: I20250430 00:43:47.320053 1632 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 30 00:43:47.320733 update_engine[1632]: I20250430 00:43:47.320628 1632 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 30 00:43:47.322942 update_engine[1632]: E20250430 00:43:47.322798 1632 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 30 00:43:47.322942 update_engine[1632]: I20250430 00:43:47.322909 1632 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Apr 30 00:43:47.325573 locksmithd[1677]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Apr 30 00:43:51.000324 systemd[1]: Started sshd@12-135.181.102.231:22-139.178.89.65:50250.service - OpenSSH per-connection server daemon (139.178.89.65:50250).
Apr 30 00:43:52.024872 sshd[4578]: Accepted publickey for core from 139.178.89.65 port 50250 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:43:52.027304 sshd-session[4578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:43:52.035243 systemd-logind[1629]: New session 13 of user core.
Apr 30 00:43:52.039646 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 30 00:43:52.872499 sshd[4581]: Connection closed by 139.178.89.65 port 50250
Apr 30 00:43:52.873423 sshd-session[4578]: pam_unix(sshd:session): session closed for user core
Apr 30 00:43:52.878068 systemd[1]: sshd@12-135.181.102.231:22-139.178.89.65:50250.service: Deactivated successfully.
Apr 30 00:43:52.882060 systemd[1]: session-13.scope: Deactivated successfully.
Apr 30 00:43:52.883259 systemd-logind[1629]: Session 13 logged out. Waiting for processes to exit.
Apr 30 00:43:52.884637 systemd-logind[1629]: Removed session 13.
Apr 30 00:43:53.036788 systemd[1]: Started sshd@13-135.181.102.231:22-139.178.89.65:50252.service - OpenSSH per-connection server daemon (139.178.89.65:50252).
Apr 30 00:43:54.038121 sshd[4592]: Accepted publickey for core from 139.178.89.65 port 50252 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:43:54.040184 sshd-session[4592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:43:54.047194 systemd-logind[1629]: New session 14 of user core.
Apr 30 00:43:54.053568 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 30 00:43:55.033892 sshd[4595]: Connection closed by 139.178.89.65 port 50252
Apr 30 00:43:55.035348 sshd-session[4592]: pam_unix(sshd:session): session closed for user core
Apr 30 00:43:55.040898 systemd[1]: sshd@13-135.181.102.231:22-139.178.89.65:50252.service: Deactivated successfully.
Apr 30 00:43:55.043042 systemd-logind[1629]: Session 14 logged out. Waiting for processes to exit.
Apr 30 00:43:55.043483 systemd[1]: session-14.scope: Deactivated successfully.
Apr 30 00:43:55.045301 systemd-logind[1629]: Removed session 14.
Apr 30 00:43:55.199510 systemd[1]: Started sshd@14-135.181.102.231:22-139.178.89.65:50266.service - OpenSSH per-connection server daemon (139.178.89.65:50266).
Apr 30 00:43:56.212453 sshd[4604]: Accepted publickey for core from 139.178.89.65 port 50266 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:43:56.214832 sshd-session[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:43:56.222554 systemd-logind[1629]: New session 15 of user core.
Apr 30 00:43:56.227567 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 30 00:43:57.257974 update_engine[1632]: I20250430 00:43:57.257858 1632 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 30 00:43:57.258622 update_engine[1632]: I20250430 00:43:57.258257 1632 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 30 00:43:57.258679 update_engine[1632]: I20250430 00:43:57.258599 1632 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 30 00:43:57.259029 update_engine[1632]: E20250430 00:43:57.258984 1632 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 30 00:43:57.259081 update_engine[1632]: I20250430 00:43:57.259047 1632 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Apr 30 00:43:58.814482 sshd[4607]: Connection closed by 139.178.89.65 port 50266
Apr 30 00:43:58.815535 sshd-session[4604]: pam_unix(sshd:session): session closed for user core
Apr 30 00:43:58.825964 systemd[1]: sshd@14-135.181.102.231:22-139.178.89.65:50266.service: Deactivated successfully.
Apr 30 00:43:58.832486 systemd[1]: session-15.scope: Deactivated successfully.
Apr 30 00:43:58.832822 systemd-logind[1629]: Session 15 logged out. Waiting for processes to exit.
Apr 30 00:43:58.836423 systemd-logind[1629]: Removed session 15.
Apr 30 00:43:58.980548 systemd[1]: Started sshd@15-135.181.102.231:22-139.178.89.65:45352.service - OpenSSH per-connection server daemon (139.178.89.65:45352).
Apr 30 00:43:59.989925 sshd[4623]: Accepted publickey for core from 139.178.89.65 port 45352 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:43:59.992025 sshd-session[4623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:44:00.000270 systemd-logind[1629]: New session 16 of user core.
Apr 30 00:44:00.007645 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 30 00:44:01.226117 sshd[4626]: Connection closed by 139.178.89.65 port 45352
Apr 30 00:44:01.227001 sshd-session[4623]: pam_unix(sshd:session): session closed for user core
Apr 30 00:44:01.235198 systemd[1]: sshd@15-135.181.102.231:22-139.178.89.65:45352.service: Deactivated successfully.
Apr 30 00:44:01.240075 systemd[1]: session-16.scope: Deactivated successfully.
Apr 30 00:44:01.240965 systemd-logind[1629]: Session 16 logged out. Waiting for processes to exit.
Apr 30 00:44:01.243989 systemd-logind[1629]: Removed session 16.
Apr 30 00:44:01.392209 systemd[1]: Started sshd@16-135.181.102.231:22-139.178.89.65:45358.service - OpenSSH per-connection server daemon (139.178.89.65:45358).
Apr 30 00:44:02.401286 sshd[4635]: Accepted publickey for core from 139.178.89.65 port 45358 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:44:02.403560 sshd-session[4635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:44:02.412265 systemd-logind[1629]: New session 17 of user core.
Apr 30 00:44:02.418583 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 30 00:44:03.186477 sshd[4640]: Connection closed by 139.178.89.65 port 45358
Apr 30 00:44:03.187225 sshd-session[4635]: pam_unix(sshd:session): session closed for user core
Apr 30 00:44:03.191120 systemd[1]: sshd@16-135.181.102.231:22-139.178.89.65:45358.service: Deactivated successfully.
Apr 30 00:44:03.196751 systemd-logind[1629]: Session 17 logged out. Waiting for processes to exit.
Apr 30 00:44:03.197616 systemd[1]: session-17.scope: Deactivated successfully.
Apr 30 00:44:03.199752 systemd-logind[1629]: Removed session 17.
Apr 30 00:44:07.260703 update_engine[1632]: I20250430 00:44:07.260594 1632 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 30 00:44:07.261327 update_engine[1632]: I20250430 00:44:07.260944 1632 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 30 00:44:07.261327 update_engine[1632]: I20250430 00:44:07.261266 1632 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 30 00:44:07.261860 update_engine[1632]: E20250430 00:44:07.261783 1632 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 30 00:44:07.261952 update_engine[1632]: I20250430 00:44:07.261883 1632 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Apr 30 00:44:08.352725 systemd[1]: Started sshd@17-135.181.102.231:22-139.178.89.65:40498.service - OpenSSH per-connection server daemon (139.178.89.65:40498).
Apr 30 00:44:09.347716 sshd[4654]: Accepted publickey for core from 139.178.89.65 port 40498 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:44:09.350109 sshd-session[4654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:44:09.359616 systemd-logind[1629]: New session 18 of user core.
Apr 30 00:44:09.365023 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 30 00:44:10.122833 sshd[4657]: Connection closed by 139.178.89.65 port 40498
Apr 30 00:44:10.123341 sshd-session[4654]: pam_unix(sshd:session): session closed for user core
Apr 30 00:44:10.128715 systemd[1]: sshd@17-135.181.102.231:22-139.178.89.65:40498.service: Deactivated successfully.
Apr 30 00:44:10.133759 systemd[1]: session-18.scope: Deactivated successfully.
Apr 30 00:44:10.134164 systemd-logind[1629]: Session 18 logged out. Waiting for processes to exit.
Apr 30 00:44:10.137481 systemd-logind[1629]: Removed session 18.
Apr 30 00:44:15.293617 systemd[1]: Started sshd@18-135.181.102.231:22-139.178.89.65:40510.service - OpenSSH per-connection server daemon (139.178.89.65:40510).
Apr 30 00:44:16.288451 sshd[4668]: Accepted publickey for core from 139.178.89.65 port 40510 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:44:16.290878 sshd-session[4668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:44:16.298182 systemd-logind[1629]: New session 19 of user core.
Apr 30 00:44:16.305576 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 30 00:44:17.086659 sshd[4671]: Connection closed by 139.178.89.65 port 40510
Apr 30 00:44:17.087059 sshd-session[4668]: pam_unix(sshd:session): session closed for user core
Apr 30 00:44:17.092255 systemd[1]: sshd@18-135.181.102.231:22-139.178.89.65:40510.service: Deactivated successfully.
Apr 30 00:44:17.099457 systemd-logind[1629]: Session 19 logged out. Waiting for processes to exit.
Apr 30 00:44:17.099908 systemd[1]: session-19.scope: Deactivated successfully.
Apr 30 00:44:17.102361 systemd-logind[1629]: Removed session 19.
Apr 30 00:44:17.255173 systemd[1]: Started sshd@19-135.181.102.231:22-139.178.89.65:55324.service - OpenSSH per-connection server daemon (139.178.89.65:55324).
Apr 30 00:44:17.257186 update_engine[1632]: I20250430 00:44:17.256995 1632 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 30 00:44:17.263462 update_engine[1632]: I20250430 00:44:17.257633 1632 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 30 00:44:17.263462 update_engine[1632]: I20250430 00:44:17.257997 1632 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 30 00:44:17.263462 update_engine[1632]: E20250430 00:44:17.258396 1632 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 30 00:44:17.263462 update_engine[1632]: I20250430 00:44:17.258452 1632 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 30 00:44:17.263462 update_engine[1632]: I20250430 00:44:17.258462 1632 omaha_request_action.cc:617] Omaha request response:
Apr 30 00:44:17.263462 update_engine[1632]: E20250430 00:44:17.258560 1632 omaha_request_action.cc:636] Omaha request network transfer failed.
Apr 30 00:44:17.263462 update_engine[1632]: I20250430 00:44:17.258590 1632 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Apr 30 00:44:17.263462 update_engine[1632]: I20250430 00:44:17.258599 1632 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 30 00:44:17.263462 update_engine[1632]: I20250430 00:44:17.258608 1632 update_attempter.cc:306] Processing Done.
Apr 30 00:44:17.263462 update_engine[1632]: E20250430 00:44:17.258630 1632 update_attempter.cc:619] Update failed.
Apr 30 00:44:17.263462 update_engine[1632]: I20250430 00:44:17.261492 1632 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Apr 30 00:44:17.263462 update_engine[1632]: I20250430 00:44:17.261543 1632 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Apr 30 00:44:17.263462 update_engine[1632]: I20250430 00:44:17.261554 1632 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Apr 30 00:44:17.263462 update_engine[1632]: I20250430 00:44:17.261655 1632 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 30 00:44:17.263462 update_engine[1632]: I20250430 00:44:17.261686 1632 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 30 00:44:17.263462 update_engine[1632]: I20250430 00:44:17.261695 1632 omaha_request_action.cc:272] Request:
Apr 30 00:44:17.263462 update_engine[1632]:
Apr 30 00:44:17.263462 update_engine[1632]:
Apr 30 00:44:17.264338 locksmithd[1677]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Apr 30 00:44:17.264338 locksmithd[1677]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Apr 30 00:44:17.265460 update_engine[1632]:
Apr 30 00:44:17.265460 update_engine[1632]:
Apr 30 00:44:17.265460 update_engine[1632]:
Apr 30 00:44:17.265460 update_engine[1632]:
Apr 30 00:44:17.265460 update_engine[1632]: I20250430 00:44:17.261704 1632 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 30 00:44:17.265460 update_engine[1632]: I20250430 00:44:17.261929 1632 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 30 00:44:17.265460 update_engine[1632]: I20250430 00:44:17.262202 1632 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 30 00:44:17.265460 update_engine[1632]: E20250430 00:44:17.262562 1632 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 30 00:44:17.265460 update_engine[1632]: I20250430 00:44:17.262614 1632 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 30 00:44:17.265460 update_engine[1632]: I20250430 00:44:17.262624 1632 omaha_request_action.cc:617] Omaha request response:
Apr 30 00:44:17.265460 update_engine[1632]: I20250430 00:44:17.262634 1632 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 30 00:44:17.265460 update_engine[1632]: I20250430 00:44:17.262643 1632 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 30 00:44:17.265460 update_engine[1632]: I20250430 00:44:17.262655 1632 update_attempter.cc:306] Processing Done.
Apr 30 00:44:17.265460 update_engine[1632]: I20250430 00:44:17.262665 1632 update_attempter.cc:310] Error event sent.
Apr 30 00:44:17.265460 update_engine[1632]: I20250430 00:44:17.262678 1632 update_check_scheduler.cc:74] Next update check in 40m19s
Apr 30 00:44:18.263455 sshd[4682]: Accepted publickey for core from 139.178.89.65 port 55324 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:44:18.264174 sshd-session[4682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:44:18.271228 systemd-logind[1629]: New session 20 of user core.
Apr 30 00:44:18.279480 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 30 00:44:20.449182 containerd[1656]: time="2025-04-30T00:44:20.448504432Z" level=info msg="StopContainer for \"2c913710e96d9bc7c612fcb72e15984f2ba2618e3445fe1866775a6bc715e7c7\" with timeout 30 (s)"
Apr 30 00:44:20.453855 containerd[1656]: time="2025-04-30T00:44:20.453768628Z" level=info msg="Stop container \"2c913710e96d9bc7c612fcb72e15984f2ba2618e3445fe1866775a6bc715e7c7\" with signal terminated"
Apr 30 00:44:20.492070 systemd[1]: run-containerd-runc-k8s.io-2c36cc9c2ae831f695ca78f3004318b4e221fa46e9bab83b1fce7fdda05d3cfa-runc.3sLWx2.mount: Deactivated successfully.
Apr 30 00:44:20.513921 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c913710e96d9bc7c612fcb72e15984f2ba2618e3445fe1866775a6bc715e7c7-rootfs.mount: Deactivated successfully.
Apr 30 00:44:20.514503 containerd[1656]: time="2025-04-30T00:44:20.514120125Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 00:44:20.522995 containerd[1656]: time="2025-04-30T00:44:20.522959928Z" level=info msg="StopContainer for \"2c36cc9c2ae831f695ca78f3004318b4e221fa46e9bab83b1fce7fdda05d3cfa\" with timeout 2 (s)"
Apr 30 00:44:20.523271 containerd[1656]: time="2025-04-30T00:44:20.523243248Z" level=info msg="Stop container \"2c36cc9c2ae831f695ca78f3004318b4e221fa46e9bab83b1fce7fdda05d3cfa\" with signal terminated"
Apr 30 00:44:20.531226 systemd-networkd[1254]: lxc_health: Link DOWN
Apr 30 00:44:20.531241 systemd-networkd[1254]: lxc_health: Lost carrier
Apr 30 00:44:20.550396 containerd[1656]: time="2025-04-30T00:44:20.550285923Z" level=info msg="shim disconnected" id=2c913710e96d9bc7c612fcb72e15984f2ba2618e3445fe1866775a6bc715e7c7 namespace=k8s.io
Apr 30 00:44:20.550396 containerd[1656]: time="2025-04-30T00:44:20.550366187Z" level=warning msg="cleaning up after shim disconnected" id=2c913710e96d9bc7c612fcb72e15984f2ba2618e3445fe1866775a6bc715e7c7 namespace=k8s.io
Apr 30 00:44:20.550396 containerd[1656]: time="2025-04-30T00:44:20.550374072Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:44:20.571538 containerd[1656]: time="2025-04-30T00:44:20.571357446Z" level=info msg="StopContainer for \"2c913710e96d9bc7c612fcb72e15984f2ba2618e3445fe1866775a6bc715e7c7\" returns successfully"
Apr 30 00:44:20.575613 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c36cc9c2ae831f695ca78f3004318b4e221fa46e9bab83b1fce7fdda05d3cfa-rootfs.mount: Deactivated successfully.
Apr 30 00:44:20.583710 containerd[1656]: time="2025-04-30T00:44:20.582925861Z" level=info msg="shim disconnected" id=2c36cc9c2ae831f695ca78f3004318b4e221fa46e9bab83b1fce7fdda05d3cfa namespace=k8s.io
Apr 30 00:44:20.583710 containerd[1656]: time="2025-04-30T00:44:20.583533138Z" level=warning msg="cleaning up after shim disconnected" id=2c36cc9c2ae831f695ca78f3004318b4e221fa46e9bab83b1fce7fdda05d3cfa namespace=k8s.io
Apr 30 00:44:20.583710 containerd[1656]: time="2025-04-30T00:44:20.583558607Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:44:20.584648 containerd[1656]: time="2025-04-30T00:44:20.584634538Z" level=info msg="StopPodSandbox for \"5d8c10fa472bda15d89d6b8bdd4acbf6a744c4863936bae7659505e652604684\""
Apr 30 00:44:20.586112 containerd[1656]: time="2025-04-30T00:44:20.586076247Z" level=info msg="Container to stop \"2c913710e96d9bc7c612fcb72e15984f2ba2618e3445fe1866775a6bc715e7c7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:44:20.588085 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5d8c10fa472bda15d89d6b8bdd4acbf6a744c4863936bae7659505e652604684-shm.mount: Deactivated successfully.
Apr 30 00:44:20.602012 containerd[1656]: time="2025-04-30T00:44:20.600919645Z" level=info msg="StopContainer for \"2c36cc9c2ae831f695ca78f3004318b4e221fa46e9bab83b1fce7fdda05d3cfa\" returns successfully"
Apr 30 00:44:20.603314 containerd[1656]: time="2025-04-30T00:44:20.602669141Z" level=info msg="StopPodSandbox for \"1f7c2fcec1269719fdfc8196125993665c825db2430628d599133b2d88c764f2\""
Apr 30 00:44:20.603314 containerd[1656]: time="2025-04-30T00:44:20.602694790Z" level=info msg="Container to stop \"068360fb1d76f86ce146feebd8917f8b8eac73e31ef8243d7add628a2879dc76\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:44:20.603314 containerd[1656]: time="2025-04-30T00:44:20.602722372Z" level=info msg="Container to stop \"3ab6a25177de6e3050c4ec4a88ae2a1cbfa6d9efa876812c9643776fc9dba595\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:44:20.603314 containerd[1656]: time="2025-04-30T00:44:20.602729015Z" level=info msg="Container to stop \"51212761a920a372ee3e19fd6c985ce2a6c597c74615f9dd5ea6960e1a6e9432\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:44:20.603314 containerd[1656]: time="2025-04-30T00:44:20.602736409Z" level=info msg="Container to stop \"1f9e8b0ea4911078e4e49f7b697fb9b24b5d9b774b7161bbb6654893ce9dddda\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:44:20.603314 containerd[1656]: time="2025-04-30T00:44:20.602746397Z" level=info msg="Container to stop \"2c36cc9c2ae831f695ca78f3004318b4e221fa46e9bab83b1fce7fdda05d3cfa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:44:20.629922 containerd[1656]: time="2025-04-30T00:44:20.629359314Z" level=info msg="shim disconnected" id=5d8c10fa472bda15d89d6b8bdd4acbf6a744c4863936bae7659505e652604684 namespace=k8s.io
Apr 30 00:44:20.631014 containerd[1656]: time="2025-04-30T00:44:20.630084716Z" level=warning msg="cleaning up after shim disconnected" id=5d8c10fa472bda15d89d6b8bdd4acbf6a744c4863936bae7659505e652604684 namespace=k8s.io
Apr 30 00:44:20.631014 containerd[1656]: time="2025-04-30T00:44:20.630095878Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:44:20.643632 containerd[1656]: time="2025-04-30T00:44:20.643597398Z" level=info msg="TearDown network for sandbox \"5d8c10fa472bda15d89d6b8bdd4acbf6a744c4863936bae7659505e652604684\" successfully"
Apr 30 00:44:20.644895 containerd[1656]: time="2025-04-30T00:44:20.644850788Z" level=info msg="StopPodSandbox for \"5d8c10fa472bda15d89d6b8bdd4acbf6a744c4863936bae7659505e652604684\" returns successfully"
Apr 30 00:44:20.652640 containerd[1656]: time="2025-04-30T00:44:20.652582147Z" level=info msg="shim disconnected" id=1f7c2fcec1269719fdfc8196125993665c825db2430628d599133b2d88c764f2 namespace=k8s.io
Apr 30 00:44:20.652936 containerd[1656]: time="2025-04-30T00:44:20.652919921Z" level=warning msg="cleaning up after shim disconnected" id=1f7c2fcec1269719fdfc8196125993665c825db2430628d599133b2d88c764f2 namespace=k8s.io
Apr 30 00:44:20.652994 containerd[1656]: time="2025-04-30T00:44:20.652984765Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:44:20.666278 containerd[1656]: time="2025-04-30T00:44:20.666228233Z" level=info msg="TearDown network for sandbox \"1f7c2fcec1269719fdfc8196125993665c825db2430628d599133b2d88c764f2\" successfully"
Apr 30 00:44:20.666278 containerd[1656]: time="2025-04-30T00:44:20.666266346Z" level=info msg="StopPodSandbox for \"1f7c2fcec1269719fdfc8196125993665c825db2430628d599133b2d88c764f2\" returns successfully"
Apr 30 00:44:20.763378 kubelet[3102]: I0430 00:44:20.763108 3102 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-etc-cni-netd\") pod \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\" (UID: \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\") "
Apr 30 00:44:20.763378 kubelet[3102]: I0430 00:44:20.763198 3102 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsjtp\" (UniqueName: \"kubernetes.io/projected/72af9aab-9037-4611-bfe7-3763254c85f5-kube-api-access-jsjtp\") pod \"72af9aab-9037-4611-bfe7-3763254c85f5\" (UID: \"72af9aab-9037-4611-bfe7-3763254c85f5\") "
Apr 30 00:44:20.763378 kubelet[3102]: I0430 00:44:20.763217 3102 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-xtables-lock\") pod \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\" (UID: \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\") "
Apr 30 00:44:20.763378 kubelet[3102]: I0430 00:44:20.763238 3102 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f72ffd5a-0812-48a4-bd3b-38d5e5976890-clustermesh-secrets\") pod \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\" (UID: \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\") "
Apr 30 00:44:20.763378 kubelet[3102]: I0430 00:44:20.763252 3102 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-host-proc-sys-kernel\") pod \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\" (UID: \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\") "
Apr 30 00:44:20.763378 kubelet[3102]: I0430 00:44:20.763272 3102 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-bpf-maps\") pod \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\" (UID: \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\") "
Apr 30 00:44:20.764383 kubelet[3102]: I0430 00:44:20.763288 3102 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-host-proc-sys-net\") pod \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\" (UID: \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\") "
Apr 30 00:44:20.764383 kubelet[3102]: I0430 00:44:20.763303 3102 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-hostproc\") pod \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\" (UID: \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\") "
Apr 30 00:44:20.764383 kubelet[3102]: I0430 00:44:20.763321 3102 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f72ffd5a-0812-48a4-bd3b-38d5e5976890-cilium-config-path\") pod \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\" (UID: \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\") "
Apr 30 00:44:20.764383 kubelet[3102]: I0430 00:44:20.763337 3102 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-cni-path\") pod \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\" (UID: \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\") "
Apr 30 00:44:20.764383 kubelet[3102]: I0430 00:44:20.763353 3102 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f72ffd5a-0812-48a4-bd3b-38d5e5976890-hubble-tls\") pod \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\" (UID: \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\") "
Apr 30 00:44:20.764383 kubelet[3102]: I0430 00:44:20.763369 3102 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-cilium-run\") pod \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\" (UID: \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\") "
Apr 30 00:44:20.764654 kubelet[3102]: I0430 00:44:20.763387 3102 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72af9aab-9037-4611-bfe7-3763254c85f5-cilium-config-path\") pod \"72af9aab-9037-4611-bfe7-3763254c85f5\" (UID: \"72af9aab-9037-4611-bfe7-3763254c85f5\") "
Apr 30 00:44:20.764654 kubelet[3102]: I0430 00:44:20.763403 3102 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-cilium-cgroup\") pod \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\" (UID: \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\") "
Apr 30 00:44:20.764654 kubelet[3102]: I0430 00:44:20.763422 3102 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxj9c\" (UniqueName: \"kubernetes.io/projected/f72ffd5a-0812-48a4-bd3b-38d5e5976890-kube-api-access-rxj9c\") pod \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\" (UID: \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\") "
Apr 30 00:44:20.764654 kubelet[3102]: I0430 00:44:20.763439 3102 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-lib-modules\") pod \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\" (UID: \"f72ffd5a-0812-48a4-bd3b-38d5e5976890\") "
Apr 30 00:44:20.766265 kubelet[3102]: I0430 00:44:20.763532 3102 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f72ffd5a-0812-48a4-bd3b-38d5e5976890" (UID: "f72ffd5a-0812-48a4-bd3b-38d5e5976890"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:44:20.766265 kubelet[3102]: I0430 00:44:20.766087 3102 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f72ffd5a-0812-48a4-bd3b-38d5e5976890" (UID: "f72ffd5a-0812-48a4-bd3b-38d5e5976890"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:44:20.766265 kubelet[3102]: I0430 00:44:20.764939 3102 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-hostproc" (OuterVolumeSpecName: "hostproc") pod "f72ffd5a-0812-48a4-bd3b-38d5e5976890" (UID: "f72ffd5a-0812-48a4-bd3b-38d5e5976890"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:44:20.780792 kubelet[3102]: I0430 00:44:20.780439 3102 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f72ffd5a-0812-48a4-bd3b-38d5e5976890" (UID: "f72ffd5a-0812-48a4-bd3b-38d5e5976890"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:44:20.780792 kubelet[3102]: I0430 00:44:20.780708 3102 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-cni-path" (OuterVolumeSpecName: "cni-path") pod "f72ffd5a-0812-48a4-bd3b-38d5e5976890" (UID: "f72ffd5a-0812-48a4-bd3b-38d5e5976890"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:44:20.783045 kubelet[3102]: I0430 00:44:20.783017 3102 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f72ffd5a-0812-48a4-bd3b-38d5e5976890-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f72ffd5a-0812-48a4-bd3b-38d5e5976890" (UID: "f72ffd5a-0812-48a4-bd3b-38d5e5976890"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 30 00:44:20.783100 kubelet[3102]: I0430 00:44:20.783059 3102 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f72ffd5a-0812-48a4-bd3b-38d5e5976890" (UID: "f72ffd5a-0812-48a4-bd3b-38d5e5976890"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:44:20.784503 kubelet[3102]: I0430 00:44:20.784410 3102 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f72ffd5a-0812-48a4-bd3b-38d5e5976890" (UID: "f72ffd5a-0812-48a4-bd3b-38d5e5976890"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:44:20.784503 kubelet[3102]: I0430 00:44:20.784454 3102 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f72ffd5a-0812-48a4-bd3b-38d5e5976890" (UID: "f72ffd5a-0812-48a4-bd3b-38d5e5976890"). InnerVolumeSpecName "bpf-maps".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:44:20.784503 kubelet[3102]: I0430 00:44:20.784468 3102 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f72ffd5a-0812-48a4-bd3b-38d5e5976890" (UID: "f72ffd5a-0812-48a4-bd3b-38d5e5976890"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:44:20.784938 kubelet[3102]: I0430 00:44:20.784824 3102 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72af9aab-9037-4611-bfe7-3763254c85f5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "72af9aab-9037-4611-bfe7-3763254c85f5" (UID: "72af9aab-9037-4611-bfe7-3763254c85f5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 00:44:20.784938 kubelet[3102]: I0430 00:44:20.784869 3102 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f72ffd5a-0812-48a4-bd3b-38d5e5976890" (UID: "f72ffd5a-0812-48a4-bd3b-38d5e5976890"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:44:20.785729 kubelet[3102]: I0430 00:44:20.785467 3102 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f72ffd5a-0812-48a4-bd3b-38d5e5976890-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f72ffd5a-0812-48a4-bd3b-38d5e5976890" (UID: "f72ffd5a-0812-48a4-bd3b-38d5e5976890"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 30 00:44:20.785729 kubelet[3102]: I0430 00:44:20.785702 3102 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72af9aab-9037-4611-bfe7-3763254c85f5-kube-api-access-jsjtp" (OuterVolumeSpecName: "kube-api-access-jsjtp") pod "72af9aab-9037-4611-bfe7-3763254c85f5" (UID: "72af9aab-9037-4611-bfe7-3763254c85f5"). InnerVolumeSpecName "kube-api-access-jsjtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 00:44:20.786052 kubelet[3102]: I0430 00:44:20.786038 3102 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f72ffd5a-0812-48a4-bd3b-38d5e5976890-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f72ffd5a-0812-48a4-bd3b-38d5e5976890" (UID: "f72ffd5a-0812-48a4-bd3b-38d5e5976890"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 00:44:20.787430 kubelet[3102]: I0430 00:44:20.787402 3102 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f72ffd5a-0812-48a4-bd3b-38d5e5976890-kube-api-access-rxj9c" (OuterVolumeSpecName: "kube-api-access-rxj9c") pod "f72ffd5a-0812-48a4-bd3b-38d5e5976890" (UID: "f72ffd5a-0812-48a4-bd3b-38d5e5976890"). InnerVolumeSpecName "kube-api-access-rxj9c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 00:44:20.863829 kubelet[3102]: I0430 00:44:20.863738 3102 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f72ffd5a-0812-48a4-bd3b-38d5e5976890-hubble-tls\") on node \"ci-4152-2-3-3-307bd18bd0\" DevicePath \"\"" Apr 30 00:44:20.863829 kubelet[3102]: I0430 00:44:20.863795 3102 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-cilium-run\") on node \"ci-4152-2-3-3-307bd18bd0\" DevicePath \"\"" Apr 30 00:44:20.863829 kubelet[3102]: I0430 00:44:20.863812 3102 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72af9aab-9037-4611-bfe7-3763254c85f5-cilium-config-path\") on node \"ci-4152-2-3-3-307bd18bd0\" DevicePath \"\"" Apr 30 00:44:20.863829 kubelet[3102]: I0430 00:44:20.863847 3102 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-cilium-cgroup\") on node \"ci-4152-2-3-3-307bd18bd0\" DevicePath \"\"" Apr 30 00:44:20.863829 kubelet[3102]: I0430 00:44:20.863867 3102 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rxj9c\" (UniqueName: \"kubernetes.io/projected/f72ffd5a-0812-48a4-bd3b-38d5e5976890-kube-api-access-rxj9c\") on node \"ci-4152-2-3-3-307bd18bd0\" DevicePath \"\"" Apr 30 00:44:20.864278 kubelet[3102]: I0430 00:44:20.863892 3102 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-lib-modules\") on node \"ci-4152-2-3-3-307bd18bd0\" DevicePath \"\"" Apr 30 00:44:20.864278 kubelet[3102]: I0430 00:44:20.863915 3102 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-etc-cni-netd\") 
on node \"ci-4152-2-3-3-307bd18bd0\" DevicePath \"\"" Apr 30 00:44:20.864278 kubelet[3102]: I0430 00:44:20.863938 3102 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-jsjtp\" (UniqueName: \"kubernetes.io/projected/72af9aab-9037-4611-bfe7-3763254c85f5-kube-api-access-jsjtp\") on node \"ci-4152-2-3-3-307bd18bd0\" DevicePath \"\"" Apr 30 00:44:20.864278 kubelet[3102]: I0430 00:44:20.863963 3102 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-xtables-lock\") on node \"ci-4152-2-3-3-307bd18bd0\" DevicePath \"\"" Apr 30 00:44:20.864278 kubelet[3102]: I0430 00:44:20.863979 3102 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f72ffd5a-0812-48a4-bd3b-38d5e5976890-clustermesh-secrets\") on node \"ci-4152-2-3-3-307bd18bd0\" DevicePath \"\"" Apr 30 00:44:20.864278 kubelet[3102]: I0430 00:44:20.863997 3102 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-host-proc-sys-kernel\") on node \"ci-4152-2-3-3-307bd18bd0\" DevicePath \"\"" Apr 30 00:44:20.864278 kubelet[3102]: I0430 00:44:20.864015 3102 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-bpf-maps\") on node \"ci-4152-2-3-3-307bd18bd0\" DevicePath \"\"" Apr 30 00:44:20.864278 kubelet[3102]: I0430 00:44:20.864035 3102 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-host-proc-sys-net\") on node \"ci-4152-2-3-3-307bd18bd0\" DevicePath \"\"" Apr 30 00:44:20.864525 kubelet[3102]: I0430 00:44:20.864052 3102 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-hostproc\") on node \"ci-4152-2-3-3-307bd18bd0\" DevicePath \"\"" Apr 30 00:44:20.864525 kubelet[3102]: I0430 00:44:20.864069 3102 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f72ffd5a-0812-48a4-bd3b-38d5e5976890-cilium-config-path\") on node \"ci-4152-2-3-3-307bd18bd0\" DevicePath \"\"" Apr 30 00:44:20.864525 kubelet[3102]: I0430 00:44:20.864085 3102 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f72ffd5a-0812-48a4-bd3b-38d5e5976890-cni-path\") on node \"ci-4152-2-3-3-307bd18bd0\" DevicePath \"\"" Apr 30 00:44:21.483856 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d8c10fa472bda15d89d6b8bdd4acbf6a744c4863936bae7659505e652604684-rootfs.mount: Deactivated successfully. Apr 30 00:44:21.484129 systemd[1]: var-lib-kubelet-pods-72af9aab\x2d9037\x2d4611\x2dbfe7\x2d3763254c85f5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djsjtp.mount: Deactivated successfully. Apr 30 00:44:21.484405 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f7c2fcec1269719fdfc8196125993665c825db2430628d599133b2d88c764f2-rootfs.mount: Deactivated successfully. Apr 30 00:44:21.484573 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f7c2fcec1269719fdfc8196125993665c825db2430628d599133b2d88c764f2-shm.mount: Deactivated successfully. Apr 30 00:44:21.484748 systemd[1]: var-lib-kubelet-pods-f72ffd5a\x2d0812\x2d48a4\x2dbd3b\x2d38d5e5976890-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 30 00:44:21.484947 systemd[1]: var-lib-kubelet-pods-f72ffd5a\x2d0812\x2d48a4\x2dbd3b\x2d38d5e5976890-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drxj9c.mount: Deactivated successfully. 
Apr 30 00:44:21.485126 systemd[1]: var-lib-kubelet-pods-f72ffd5a\x2d0812\x2d48a4\x2dbd3b\x2d38d5e5976890-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 30 00:44:21.662034 kubelet[3102]: I0430 00:44:21.660761 3102 scope.go:117] "RemoveContainer" containerID="2c913710e96d9bc7c612fcb72e15984f2ba2618e3445fe1866775a6bc715e7c7" Apr 30 00:44:21.696385 containerd[1656]: time="2025-04-30T00:44:21.694954274Z" level=info msg="RemoveContainer for \"2c913710e96d9bc7c612fcb72e15984f2ba2618e3445fe1866775a6bc715e7c7\"" Apr 30 00:44:21.707189 containerd[1656]: time="2025-04-30T00:44:21.707054153Z" level=info msg="RemoveContainer for \"2c913710e96d9bc7c612fcb72e15984f2ba2618e3445fe1866775a6bc715e7c7\" returns successfully" Apr 30 00:44:21.711730 kubelet[3102]: I0430 00:44:21.710524 3102 scope.go:117] "RemoveContainer" containerID="2c36cc9c2ae831f695ca78f3004318b4e221fa46e9bab83b1fce7fdda05d3cfa" Apr 30 00:44:21.713700 containerd[1656]: time="2025-04-30T00:44:21.712345861Z" level=info msg="RemoveContainer for \"2c36cc9c2ae831f695ca78f3004318b4e221fa46e9bab83b1fce7fdda05d3cfa\"" Apr 30 00:44:21.721834 containerd[1656]: time="2025-04-30T00:44:21.719441178Z" level=info msg="RemoveContainer for \"2c36cc9c2ae831f695ca78f3004318b4e221fa46e9bab83b1fce7fdda05d3cfa\" returns successfully" Apr 30 00:44:21.722568 kubelet[3102]: I0430 00:44:21.722279 3102 scope.go:117] "RemoveContainer" containerID="1f9e8b0ea4911078e4e49f7b697fb9b24b5d9b774b7161bbb6654893ce9dddda" Apr 30 00:44:21.725829 containerd[1656]: time="2025-04-30T00:44:21.725687396Z" level=info msg="RemoveContainer for \"1f9e8b0ea4911078e4e49f7b697fb9b24b5d9b774b7161bbb6654893ce9dddda\"" Apr 30 00:44:21.730678 containerd[1656]: time="2025-04-30T00:44:21.730624248Z" level=info msg="RemoveContainer for \"1f9e8b0ea4911078e4e49f7b697fb9b24b5d9b774b7161bbb6654893ce9dddda\" returns successfully" Apr 30 00:44:21.730984 kubelet[3102]: I0430 00:44:21.730954 3102 scope.go:117] "RemoveContainer" 
containerID="51212761a920a372ee3e19fd6c985ce2a6c597c74615f9dd5ea6960e1a6e9432" Apr 30 00:44:21.732760 containerd[1656]: time="2025-04-30T00:44:21.732714985Z" level=info msg="RemoveContainer for \"51212761a920a372ee3e19fd6c985ce2a6c597c74615f9dd5ea6960e1a6e9432\"" Apr 30 00:44:21.738369 containerd[1656]: time="2025-04-30T00:44:21.737874231Z" level=info msg="RemoveContainer for \"51212761a920a372ee3e19fd6c985ce2a6c597c74615f9dd5ea6960e1a6e9432\" returns successfully" Apr 30 00:44:21.738432 kubelet[3102]: I0430 00:44:21.738114 3102 scope.go:117] "RemoveContainer" containerID="3ab6a25177de6e3050c4ec4a88ae2a1cbfa6d9efa876812c9643776fc9dba595" Apr 30 00:44:21.740465 containerd[1656]: time="2025-04-30T00:44:21.740416637Z" level=info msg="RemoveContainer for \"3ab6a25177de6e3050c4ec4a88ae2a1cbfa6d9efa876812c9643776fc9dba595\"" Apr 30 00:44:21.744724 containerd[1656]: time="2025-04-30T00:44:21.744675176Z" level=info msg="RemoveContainer for \"3ab6a25177de6e3050c4ec4a88ae2a1cbfa6d9efa876812c9643776fc9dba595\" returns successfully" Apr 30 00:44:21.744970 kubelet[3102]: I0430 00:44:21.744913 3102 scope.go:117] "RemoveContainer" containerID="068360fb1d76f86ce146feebd8917f8b8eac73e31ef8243d7add628a2879dc76" Apr 30 00:44:21.746645 containerd[1656]: time="2025-04-30T00:44:21.746603493Z" level=info msg="RemoveContainer for \"068360fb1d76f86ce146feebd8917f8b8eac73e31ef8243d7add628a2879dc76\"" Apr 30 00:44:21.750553 containerd[1656]: time="2025-04-30T00:44:21.750491756Z" level=info msg="RemoveContainer for \"068360fb1d76f86ce146feebd8917f8b8eac73e31ef8243d7add628a2879dc76\" returns successfully" Apr 30 00:44:21.863755 kubelet[3102]: E0430 00:44:21.863670 3102 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 00:44:22.502478 sshd[4685]: Connection closed by 139.178.89.65 port 55324 Apr 30 00:44:22.502967 sshd-session[4682]: pam_unix(sshd:session): 
session closed for user core Apr 30 00:44:22.510495 systemd-logind[1629]: Session 20 logged out. Waiting for processes to exit. Apr 30 00:44:22.511711 systemd[1]: sshd@19-135.181.102.231:22-139.178.89.65:55324.service: Deactivated successfully. Apr 30 00:44:22.517133 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 00:44:22.519256 systemd-logind[1629]: Removed session 20. Apr 30 00:44:22.668313 systemd[1]: Started sshd@20-135.181.102.231:22-139.178.89.65:55334.service - OpenSSH per-connection server daemon (139.178.89.65:55334). Apr 30 00:44:22.708194 kubelet[3102]: I0430 00:44:22.707613 3102 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72af9aab-9037-4611-bfe7-3763254c85f5" path="/var/lib/kubelet/pods/72af9aab-9037-4611-bfe7-3763254c85f5/volumes" Apr 30 00:44:22.708670 kubelet[3102]: I0430 00:44:22.708621 3102 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f72ffd5a-0812-48a4-bd3b-38d5e5976890" path="/var/lib/kubelet/pods/f72ffd5a-0812-48a4-bd3b-38d5e5976890/volumes" Apr 30 00:44:23.625571 kubelet[3102]: I0430 00:44:23.625475 3102 setters.go:580] "Node became not ready" node="ci-4152-2-3-3-307bd18bd0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-04-30T00:44:23Z","lastTransitionTime":"2025-04-30T00:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 30 00:44:23.671496 sshd[4847]: Accepted publickey for core from 139.178.89.65 port 55334 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E Apr 30 00:44:23.673503 sshd-session[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:44:23.681558 systemd-logind[1629]: New session 21 of user core. Apr 30 00:44:23.686656 systemd[1]: Started session-21.scope - Session 21 of User core. 
Apr 30 00:44:24.806461 kubelet[3102]: I0430 00:44:24.806401 3102 topology_manager.go:215] "Topology Admit Handler" podUID="3e84029c-0640-4098-a922-096bca48a72f" podNamespace="kube-system" podName="cilium-lndh8" Apr 30 00:44:24.806996 kubelet[3102]: E0430 00:44:24.806493 3102 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f72ffd5a-0812-48a4-bd3b-38d5e5976890" containerName="mount-bpf-fs" Apr 30 00:44:24.806996 kubelet[3102]: E0430 00:44:24.806513 3102 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f72ffd5a-0812-48a4-bd3b-38d5e5976890" containerName="mount-cgroup" Apr 30 00:44:24.806996 kubelet[3102]: E0430 00:44:24.806524 3102 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f72ffd5a-0812-48a4-bd3b-38d5e5976890" containerName="apply-sysctl-overwrites" Apr 30 00:44:24.806996 kubelet[3102]: E0430 00:44:24.806533 3102 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="72af9aab-9037-4611-bfe7-3763254c85f5" containerName="cilium-operator" Apr 30 00:44:24.806996 kubelet[3102]: E0430 00:44:24.806541 3102 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f72ffd5a-0812-48a4-bd3b-38d5e5976890" containerName="clean-cilium-state" Apr 30 00:44:24.806996 kubelet[3102]: E0430 00:44:24.806551 3102 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f72ffd5a-0812-48a4-bd3b-38d5e5976890" containerName="cilium-agent" Apr 30 00:44:24.806996 kubelet[3102]: I0430 00:44:24.806584 3102 memory_manager.go:354] "RemoveStaleState removing state" podUID="f72ffd5a-0812-48a4-bd3b-38d5e5976890" containerName="cilium-agent" Apr 30 00:44:24.806996 kubelet[3102]: I0430 00:44:24.806592 3102 memory_manager.go:354] "RemoveStaleState removing state" podUID="72af9aab-9037-4611-bfe7-3763254c85f5" containerName="cilium-operator" Apr 30 00:44:24.897442 kubelet[3102]: I0430 00:44:24.897280 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/3e84029c-0640-4098-a922-096bca48a72f-hubble-tls\") pod \"cilium-lndh8\" (UID: \"3e84029c-0640-4098-a922-096bca48a72f\") " pod="kube-system/cilium-lndh8" Apr 30 00:44:24.897442 kubelet[3102]: I0430 00:44:24.897365 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e84029c-0640-4098-a922-096bca48a72f-etc-cni-netd\") pod \"cilium-lndh8\" (UID: \"3e84029c-0640-4098-a922-096bca48a72f\") " pod="kube-system/cilium-lndh8" Apr 30 00:44:24.897442 kubelet[3102]: I0430 00:44:24.897410 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e84029c-0640-4098-a922-096bca48a72f-xtables-lock\") pod \"cilium-lndh8\" (UID: \"3e84029c-0640-4098-a922-096bca48a72f\") " pod="kube-system/cilium-lndh8" Apr 30 00:44:24.897442 kubelet[3102]: I0430 00:44:24.897442 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e84029c-0640-4098-a922-096bca48a72f-cni-path\") pod \"cilium-lndh8\" (UID: \"3e84029c-0640-4098-a922-096bca48a72f\") " pod="kube-system/cilium-lndh8" Apr 30 00:44:24.897789 kubelet[3102]: I0430 00:44:24.897468 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e84029c-0640-4098-a922-096bca48a72f-lib-modules\") pod \"cilium-lndh8\" (UID: \"3e84029c-0640-4098-a922-096bca48a72f\") " pod="kube-system/cilium-lndh8" Apr 30 00:44:24.897789 kubelet[3102]: I0430 00:44:24.897493 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e84029c-0640-4098-a922-096bca48a72f-host-proc-sys-net\") pod \"cilium-lndh8\" (UID: 
\"3e84029c-0640-4098-a922-096bca48a72f\") " pod="kube-system/cilium-lndh8" Apr 30 00:44:24.897789 kubelet[3102]: I0430 00:44:24.897524 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e84029c-0640-4098-a922-096bca48a72f-hostproc\") pod \"cilium-lndh8\" (UID: \"3e84029c-0640-4098-a922-096bca48a72f\") " pod="kube-system/cilium-lndh8" Apr 30 00:44:24.897789 kubelet[3102]: I0430 00:44:24.897561 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzrw4\" (UniqueName: \"kubernetes.io/projected/3e84029c-0640-4098-a922-096bca48a72f-kube-api-access-vzrw4\") pod \"cilium-lndh8\" (UID: \"3e84029c-0640-4098-a922-096bca48a72f\") " pod="kube-system/cilium-lndh8" Apr 30 00:44:24.897789 kubelet[3102]: I0430 00:44:24.897600 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e84029c-0640-4098-a922-096bca48a72f-cilium-cgroup\") pod \"cilium-lndh8\" (UID: \"3e84029c-0640-4098-a922-096bca48a72f\") " pod="kube-system/cilium-lndh8" Apr 30 00:44:24.897789 kubelet[3102]: I0430 00:44:24.897641 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e84029c-0640-4098-a922-096bca48a72f-cilium-config-path\") pod \"cilium-lndh8\" (UID: \"3e84029c-0640-4098-a922-096bca48a72f\") " pod="kube-system/cilium-lndh8" Apr 30 00:44:24.898056 kubelet[3102]: I0430 00:44:24.897677 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e84029c-0640-4098-a922-096bca48a72f-host-proc-sys-kernel\") pod \"cilium-lndh8\" (UID: \"3e84029c-0640-4098-a922-096bca48a72f\") " pod="kube-system/cilium-lndh8" Apr 30 00:44:24.898056 
kubelet[3102]: I0430 00:44:24.897710 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3e84029c-0640-4098-a922-096bca48a72f-cilium-ipsec-secrets\") pod \"cilium-lndh8\" (UID: \"3e84029c-0640-4098-a922-096bca48a72f\") " pod="kube-system/cilium-lndh8" Apr 30 00:44:24.898056 kubelet[3102]: I0430 00:44:24.897746 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e84029c-0640-4098-a922-096bca48a72f-clustermesh-secrets\") pod \"cilium-lndh8\" (UID: \"3e84029c-0640-4098-a922-096bca48a72f\") " pod="kube-system/cilium-lndh8" Apr 30 00:44:24.898056 kubelet[3102]: I0430 00:44:24.897788 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e84029c-0640-4098-a922-096bca48a72f-bpf-maps\") pod \"cilium-lndh8\" (UID: \"3e84029c-0640-4098-a922-096bca48a72f\") " pod="kube-system/cilium-lndh8" Apr 30 00:44:24.898056 kubelet[3102]: I0430 00:44:24.897828 3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e84029c-0640-4098-a922-096bca48a72f-cilium-run\") pod \"cilium-lndh8\" (UID: \"3e84029c-0640-4098-a922-096bca48a72f\") " pod="kube-system/cilium-lndh8" Apr 30 00:44:24.976779 sshd[4850]: Connection closed by 139.178.89.65 port 55334 Apr 30 00:44:24.977855 sshd-session[4847]: pam_unix(sshd:session): session closed for user core Apr 30 00:44:24.984737 systemd[1]: sshd@20-135.181.102.231:22-139.178.89.65:55334.service: Deactivated successfully. Apr 30 00:44:24.985443 systemd-logind[1629]: Session 21 logged out. Waiting for processes to exit. Apr 30 00:44:24.990788 systemd[1]: session-21.scope: Deactivated successfully. 
Apr 30 00:44:24.992904 systemd-logind[1629]: Removed session 21. Apr 30 00:44:25.141899 systemd[1]: Started sshd@21-135.181.102.231:22-139.178.89.65:55338.service - OpenSSH per-connection server daemon (139.178.89.65:55338). Apr 30 00:44:25.144558 containerd[1656]: time="2025-04-30T00:44:25.144327883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lndh8,Uid:3e84029c-0640-4098-a922-096bca48a72f,Namespace:kube-system,Attempt:0,}" Apr 30 00:44:25.202727 containerd[1656]: time="2025-04-30T00:44:25.201839663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:44:25.202727 containerd[1656]: time="2025-04-30T00:44:25.201926783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:44:25.202727 containerd[1656]: time="2025-04-30T00:44:25.201949225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:44:25.202727 containerd[1656]: time="2025-04-30T00:44:25.202056753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:44:25.253296 containerd[1656]: time="2025-04-30T00:44:25.253233541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lndh8,Uid:3e84029c-0640-4098-a922-096bca48a72f,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd0e1e4a322de0009cfa943004dc110db530805c64f7e1e8de973c5e2d578b92\"" Apr 30 00:44:25.257494 containerd[1656]: time="2025-04-30T00:44:25.257432229Z" level=info msg="CreateContainer within sandbox \"dd0e1e4a322de0009cfa943004dc110db530805c64f7e1e8de973c5e2d578b92\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 00:44:25.270579 containerd[1656]: time="2025-04-30T00:44:25.270519913Z" level=info msg="CreateContainer within sandbox \"dd0e1e4a322de0009cfa943004dc110db530805c64f7e1e8de973c5e2d578b92\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d72ada06ee35e98094d39287359ce89e1ae4fb76fddd36e78f67cdb82d6073ae\"" Apr 30 00:44:25.271540 containerd[1656]: time="2025-04-30T00:44:25.271061213Z" level=info msg="StartContainer for \"d72ada06ee35e98094d39287359ce89e1ae4fb76fddd36e78f67cdb82d6073ae\"" Apr 30 00:44:25.341415 containerd[1656]: time="2025-04-30T00:44:25.341367152Z" level=info msg="StartContainer for \"d72ada06ee35e98094d39287359ce89e1ae4fb76fddd36e78f67cdb82d6073ae\" returns successfully" Apr 30 00:44:25.400090 containerd[1656]: time="2025-04-30T00:44:25.399770487Z" level=info msg="shim disconnected" id=d72ada06ee35e98094d39287359ce89e1ae4fb76fddd36e78f67cdb82d6073ae namespace=k8s.io Apr 30 00:44:25.400090 containerd[1656]: time="2025-04-30T00:44:25.399886321Z" level=warning msg="cleaning up after shim disconnected" id=d72ada06ee35e98094d39287359ce89e1ae4fb76fddd36e78f67cdb82d6073ae namespace=k8s.io Apr 30 00:44:25.400090 containerd[1656]: time="2025-04-30T00:44:25.399907540Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:44:25.706830 containerd[1656]: time="2025-04-30T00:44:25.705932884Z" 
level=info msg="CreateContainer within sandbox \"dd0e1e4a322de0009cfa943004dc110db530805c64f7e1e8de973c5e2d578b92\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 00:44:25.721695 containerd[1656]: time="2025-04-30T00:44:25.721525671Z" level=info msg="CreateContainer within sandbox \"dd0e1e4a322de0009cfa943004dc110db530805c64f7e1e8de973c5e2d578b92\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"428202ad1d4bc2a6506f8dd76681879dfcf76b9c861a03c9903fe26c02a38a53\"" Apr 30 00:44:25.722896 containerd[1656]: time="2025-04-30T00:44:25.722827043Z" level=info msg="StartContainer for \"428202ad1d4bc2a6506f8dd76681879dfcf76b9c861a03c9903fe26c02a38a53\"" Apr 30 00:44:25.796190 containerd[1656]: time="2025-04-30T00:44:25.796042443Z" level=info msg="StartContainer for \"428202ad1d4bc2a6506f8dd76681879dfcf76b9c861a03c9903fe26c02a38a53\" returns successfully" Apr 30 00:44:25.835393 containerd[1656]: time="2025-04-30T00:44:25.835323146Z" level=info msg="shim disconnected" id=428202ad1d4bc2a6506f8dd76681879dfcf76b9c861a03c9903fe26c02a38a53 namespace=k8s.io Apr 30 00:44:25.835825 containerd[1656]: time="2025-04-30T00:44:25.835634571Z" level=warning msg="cleaning up after shim disconnected" id=428202ad1d4bc2a6506f8dd76681879dfcf76b9c861a03c9903fe26c02a38a53 namespace=k8s.io Apr 30 00:44:25.835825 containerd[1656]: time="2025-04-30T00:44:25.835655028Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:44:26.156647 sshd[4864]: Accepted publickey for core from 139.178.89.65 port 55338 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E Apr 30 00:44:26.159982 sshd-session[4864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:44:26.167994 systemd-logind[1629]: New session 22 of user core. Apr 30 00:44:26.171848 systemd[1]: Started session-22.scope - Session 22 of User core. 
Apr 30 00:44:26.714809 containerd[1656]: time="2025-04-30T00:44:26.714679070Z" level=info msg="CreateContainer within sandbox \"dd0e1e4a322de0009cfa943004dc110db530805c64f7e1e8de973c5e2d578b92\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 00:44:26.742917 containerd[1656]: time="2025-04-30T00:44:26.742803647Z" level=info msg="CreateContainer within sandbox \"dd0e1e4a322de0009cfa943004dc110db530805c64f7e1e8de973c5e2d578b92\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"280b6e9c0c3502c404cd77e1f814df4bc67aa7adacd26425d8f7ae646d03a7c5\"" Apr 30 00:44:26.749966 containerd[1656]: time="2025-04-30T00:44:26.749919332Z" level=info msg="StartContainer for \"280b6e9c0c3502c404cd77e1f814df4bc67aa7adacd26425d8f7ae646d03a7c5\"" Apr 30 00:44:26.828845 containerd[1656]: time="2025-04-30T00:44:26.828804290Z" level=info msg="StartContainer for \"280b6e9c0c3502c404cd77e1f814df4bc67aa7adacd26425d8f7ae646d03a7c5\" returns successfully" Apr 30 00:44:26.835065 sshd[5030]: Connection closed by 139.178.89.65 port 55338 Apr 30 00:44:26.835755 sshd-session[4864]: pam_unix(sshd:session): session closed for user core Apr 30 00:44:26.843710 systemd[1]: sshd@21-135.181.102.231:22-139.178.89.65:55338.service: Deactivated successfully. Apr 30 00:44:26.849114 systemd[1]: session-22.scope: Deactivated successfully. Apr 30 00:44:26.850513 systemd-logind[1629]: Session 22 logged out. Waiting for processes to exit. Apr 30 00:44:26.852408 systemd-logind[1629]: Removed session 22. 
Apr 30 00:44:26.865132 kubelet[3102]: E0430 00:44:26.865041 3102 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 30 00:44:26.869453 containerd[1656]: time="2025-04-30T00:44:26.869386311Z" level=info msg="shim disconnected" id=280b6e9c0c3502c404cd77e1f814df4bc67aa7adacd26425d8f7ae646d03a7c5 namespace=k8s.io
Apr 30 00:44:26.869453 containerd[1656]: time="2025-04-30T00:44:26.869450520Z" level=warning msg="cleaning up after shim disconnected" id=280b6e9c0c3502c404cd77e1f814df4bc67aa7adacd26425d8f7ae646d03a7c5 namespace=k8s.io
Apr 30 00:44:26.869453 containerd[1656]: time="2025-04-30T00:44:26.869458876Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:44:27.002493 systemd[1]: Started sshd@22-135.181.102.231:22-139.178.89.65:56902.service - OpenSSH per-connection server daemon (139.178.89.65:56902).
Apr 30 00:44:27.012081 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-280b6e9c0c3502c404cd77e1f814df4bc67aa7adacd26425d8f7ae646d03a7c5-rootfs.mount: Deactivated successfully.
Apr 30 00:44:27.719349 containerd[1656]: time="2025-04-30T00:44:27.719270424Z" level=info msg="CreateContainer within sandbox \"dd0e1e4a322de0009cfa943004dc110db530805c64f7e1e8de973c5e2d578b92\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 30 00:44:27.741560 containerd[1656]: time="2025-04-30T00:44:27.741478479Z" level=info msg="CreateContainer within sandbox \"dd0e1e4a322de0009cfa943004dc110db530805c64f7e1e8de973c5e2d578b92\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b302cd1f2ec7b3fb6fe314281e57e60a9f9b4198fdf0b14547daaccf3e9e1f56\""
Apr 30 00:44:27.743526 containerd[1656]: time="2025-04-30T00:44:27.743394877Z" level=info msg="StartContainer for \"b302cd1f2ec7b3fb6fe314281e57e60a9f9b4198fdf0b14547daaccf3e9e1f56\""
Apr 30 00:44:27.824167 containerd[1656]: time="2025-04-30T00:44:27.824056419Z" level=info msg="StartContainer for \"b302cd1f2ec7b3fb6fe314281e57e60a9f9b4198fdf0b14547daaccf3e9e1f56\" returns successfully"
Apr 30 00:44:27.854246 containerd[1656]: time="2025-04-30T00:44:27.854188513Z" level=info msg="shim disconnected" id=b302cd1f2ec7b3fb6fe314281e57e60a9f9b4198fdf0b14547daaccf3e9e1f56 namespace=k8s.io
Apr 30 00:44:27.854246 containerd[1656]: time="2025-04-30T00:44:27.854238526Z" level=warning msg="cleaning up after shim disconnected" id=b302cd1f2ec7b3fb6fe314281e57e60a9f9b4198fdf0b14547daaccf3e9e1f56 namespace=k8s.io
Apr 30 00:44:27.854246 containerd[1656]: time="2025-04-30T00:44:27.854245570Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:44:27.996245 sshd[5095]: Accepted publickey for core from 139.178.89.65 port 56902 ssh2: RSA SHA256:Z5W/GaQaT6a1P913sOSNU1tCxAIyxSb0Z7O2Sul6T8E
Apr 30 00:44:27.997964 sshd-session[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:44:28.008491 systemd-logind[1629]: New session 23 of user core.
Apr 30 00:44:28.011991 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b302cd1f2ec7b3fb6fe314281e57e60a9f9b4198fdf0b14547daaccf3e9e1f56-rootfs.mount: Deactivated successfully.
Apr 30 00:44:28.022682 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 30 00:44:28.723937 containerd[1656]: time="2025-04-30T00:44:28.723815063Z" level=info msg="CreateContainer within sandbox \"dd0e1e4a322de0009cfa943004dc110db530805c64f7e1e8de973c5e2d578b92\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 30 00:44:28.744279 containerd[1656]: time="2025-04-30T00:44:28.744201152Z" level=info msg="CreateContainer within sandbox \"dd0e1e4a322de0009cfa943004dc110db530805c64f7e1e8de973c5e2d578b92\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1e85e0d0a3cc3b1d0130e17849d6d27d544ca7645b8e2750b5fc37cba05a68c9\""
Apr 30 00:44:28.752781 containerd[1656]: time="2025-04-30T00:44:28.745678401Z" level=info msg="StartContainer for \"1e85e0d0a3cc3b1d0130e17849d6d27d544ca7645b8e2750b5fc37cba05a68c9\""
Apr 30 00:44:28.825690 containerd[1656]: time="2025-04-30T00:44:28.825633181Z" level=info msg="StartContainer for \"1e85e0d0a3cc3b1d0130e17849d6d27d544ca7645b8e2750b5fc37cba05a68c9\" returns successfully"
Apr 30 00:44:29.383241 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 30 00:44:29.764387 kubelet[3102]: I0430 00:44:29.763546 3102 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lndh8" podStartSLOduration=5.763519189 podStartE2EDuration="5.763519189s" podCreationTimestamp="2025-04-30 00:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:44:29.761824549 +0000 UTC m=+343.150206078" watchObservedRunningTime="2025-04-30 00:44:29.763519189 +0000 UTC m=+343.151900728"
Apr 30 00:44:32.385181 systemd-networkd[1254]: lxc_health: Link UP
Apr 30 00:44:32.393506 systemd-networkd[1254]: lxc_health: Gained carrier
Apr 30 00:44:33.269308 systemd[1]: run-containerd-runc-k8s.io-1e85e0d0a3cc3b1d0130e17849d6d27d544ca7645b8e2750b5fc37cba05a68c9-runc.2jVbc6.mount: Deactivated successfully.
Apr 30 00:44:33.695524 systemd-networkd[1254]: lxc_health: Gained IPv6LL
Apr 30 00:44:37.604086 systemd[1]: run-containerd-runc-k8s.io-1e85e0d0a3cc3b1d0130e17849d6d27d544ca7645b8e2750b5fc37cba05a68c9-runc.DEMLIl.mount: Deactivated successfully.
Apr 30 00:44:37.819862 sshd[5153]: Connection closed by 139.178.89.65 port 56902
Apr 30 00:44:37.821165 sshd-session[5095]: pam_unix(sshd:session): session closed for user core
Apr 30 00:44:37.827190 systemd[1]: sshd@22-135.181.102.231:22-139.178.89.65:56902.service: Deactivated successfully.
Apr 30 00:44:37.831718 systemd-logind[1629]: Session 23 logged out. Waiting for processes to exit.
Apr 30 00:44:37.831881 systemd[1]: session-23.scope: Deactivated successfully.
Apr 30 00:44:37.835376 systemd-logind[1629]: Removed session 23.
Apr 30 00:44:46.721915 containerd[1656]: time="2025-04-30T00:44:46.721695246Z" level=info msg="StopPodSandbox for \"5d8c10fa472bda15d89d6b8bdd4acbf6a744c4863936bae7659505e652604684\""
Apr 30 00:44:46.721915 containerd[1656]: time="2025-04-30T00:44:46.721822181Z" level=info msg="TearDown network for sandbox \"5d8c10fa472bda15d89d6b8bdd4acbf6a744c4863936bae7659505e652604684\" successfully"
Apr 30 00:44:46.721915 containerd[1656]: time="2025-04-30T00:44:46.721838603Z" level=info msg="StopPodSandbox for \"5d8c10fa472bda15d89d6b8bdd4acbf6a744c4863936bae7659505e652604684\" returns successfully"
Apr 30 00:44:46.723316 containerd[1656]: time="2025-04-30T00:44:46.722905791Z" level=info msg="RemovePodSandbox for \"5d8c10fa472bda15d89d6b8bdd4acbf6a744c4863936bae7659505e652604684\""
Apr 30 00:44:46.723316 containerd[1656]: time="2025-04-30T00:44:46.722933512Z" level=info msg="Forcibly stopping sandbox \"5d8c10fa472bda15d89d6b8bdd4acbf6a744c4863936bae7659505e652604684\""
Apr 30 00:44:46.723316 containerd[1656]: time="2025-04-30T00:44:46.722984356Z" level=info msg="TearDown network for sandbox \"5d8c10fa472bda15d89d6b8bdd4acbf6a744c4863936bae7659505e652604684\" successfully"
Apr 30 00:44:46.730122 containerd[1656]: time="2025-04-30T00:44:46.730056769Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5d8c10fa472bda15d89d6b8bdd4acbf6a744c4863936bae7659505e652604684\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 00:44:46.730257 containerd[1656]: time="2025-04-30T00:44:46.730173856Z" level=info msg="RemovePodSandbox \"5d8c10fa472bda15d89d6b8bdd4acbf6a744c4863936bae7659505e652604684\" returns successfully"
Apr 30 00:44:46.730862 containerd[1656]: time="2025-04-30T00:44:46.730808141Z" level=info msg="StopPodSandbox for \"1f7c2fcec1269719fdfc8196125993665c825db2430628d599133b2d88c764f2\""
Apr 30 00:44:46.730951 containerd[1656]: time="2025-04-30T00:44:46.730905652Z" level=info msg="TearDown network for sandbox \"1f7c2fcec1269719fdfc8196125993665c825db2430628d599133b2d88c764f2\" successfully"
Apr 30 00:44:46.730951 containerd[1656]: time="2025-04-30T00:44:46.730919799Z" level=info msg="StopPodSandbox for \"1f7c2fcec1269719fdfc8196125993665c825db2430628d599133b2d88c764f2\" returns successfully"
Apr 30 00:44:46.731317 containerd[1656]: time="2025-04-30T00:44:46.731239030Z" level=info msg="RemovePodSandbox for \"1f7c2fcec1269719fdfc8196125993665c825db2430628d599133b2d88c764f2\""
Apr 30 00:44:46.731317 containerd[1656]: time="2025-04-30T00:44:46.731282501Z" level=info msg="Forcibly stopping sandbox \"1f7c2fcec1269719fdfc8196125993665c825db2430628d599133b2d88c764f2\""
Apr 30 00:44:46.731502 containerd[1656]: time="2025-04-30T00:44:46.731338876Z" level=info msg="TearDown network for sandbox \"1f7c2fcec1269719fdfc8196125993665c825db2430628d599133b2d88c764f2\" successfully"
Apr 30 00:44:46.735545 containerd[1656]: time="2025-04-30T00:44:46.735493839Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f7c2fcec1269719fdfc8196125993665c825db2430628d599133b2d88c764f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 00:44:46.735545 containerd[1656]: time="2025-04-30T00:44:46.735560852Z" level=info msg="RemovePodSandbox \"1f7c2fcec1269719fdfc8196125993665c825db2430628d599133b2d88c764f2\" returns successfully"