Oct 8 20:06:50.952979 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:24:27 -00 2024 Oct 8 20:06:50.953021 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 8 20:06:50.953033 kernel: BIOS-provided physical RAM map: Oct 8 20:06:50.953042 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 8 20:06:50.953050 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 8 20:06:50.953058 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 8 20:06:50.953080 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable Oct 8 20:06:50.953088 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved Oct 8 20:06:50.953100 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 8 20:06:50.953107 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Oct 8 20:06:50.953115 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 8 20:06:50.953121 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 8 20:06:50.953127 kernel: NX (Execute Disable) protection: active Oct 8 20:06:50.953134 kernel: APIC: Static calls initialized Oct 8 20:06:50.953148 kernel: SMBIOS 2.8 present. Oct 8 20:06:50.953158 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 Oct 8 20:06:50.953166 kernel: Hypervisor detected: KVM Oct 8 20:06:50.953172 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 8 20:06:50.953179 kernel: kvm-clock: using sched offset of 2883223683 cycles Oct 8 20:06:50.953186 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 8 20:06:50.953193 kernel: tsc: Detected 2495.308 MHz processor Oct 8 20:06:50.953200 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 8 20:06:50.953207 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 8 20:06:50.953216 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000 Oct 8 20:06:50.953224 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Oct 8 20:06:50.953234 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 8 20:06:50.953244 kernel: Using GB pages for direct mapping Oct 8 20:06:50.953252 kernel: ACPI: Early table checksum verification disabled Oct 8 20:06:50.953259 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS ) Oct 8 20:06:50.953268 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:06:50.953277 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:06:50.953286 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:06:50.953296 kernel: ACPI: FACS 0x000000007CFE0000 000040 Oct 8 20:06:50.953303 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:06:50.953310 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:06:50.953320 kernel: ACPI: 
MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:06:50.953329 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:06:50.953339 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540] Oct 8 20:06:50.953346 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c] Oct 8 20:06:50.953353 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f] Oct 8 20:06:50.953368 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0] Oct 8 20:06:50.953378 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8] Oct 8 20:06:50.953387 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634] Oct 8 20:06:50.953395 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c] Oct 8 20:06:50.953402 kernel: No NUMA configuration found Oct 8 20:06:50.953412 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff] Oct 8 20:06:50.953424 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff] Oct 8 20:06:50.953434 kernel: Zone ranges: Oct 8 20:06:50.953443 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 8 20:06:50.953450 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff] Oct 8 20:06:50.953457 kernel: Normal empty Oct 8 20:06:50.953464 kernel: Movable zone start for each node Oct 8 20:06:50.953471 kernel: Early memory node ranges Oct 8 20:06:50.953478 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 8 20:06:50.953486 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff] Oct 8 20:06:50.953493 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff] Oct 8 20:06:50.953504 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 8 20:06:50.953514 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 8 20:06:50.953523 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Oct 8 20:06:50.953531 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 8 20:06:50.953538 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 8 20:06:50.953545 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 8 20:06:50.953553 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 8 20:06:50.953560 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 8 20:06:50.953567 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 8 20:06:50.953580 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 8 20:06:50.953590 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 8 20:06:50.953600 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 8 20:06:50.953609 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 8 20:06:50.953616 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Oct 8 20:06:50.953624 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 8 20:06:50.953634 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Oct 8 20:06:50.953643 kernel: Booting paravirtualized kernel on KVM Oct 8 20:06:50.953651 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 8 20:06:50.953661 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Oct 8 20:06:50.953669 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Oct 8 20:06:50.953679 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Oct 8 20:06:50.953688 kernel: 
pcpu-alloc: [0] 0 1 Oct 8 20:06:50.953697 kernel: kvm-guest: PV spinlocks disabled, no host support Oct 8 20:06:50.953708 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 8 20:06:50.953716 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 8 20:06:50.953723 kernel: random: crng init done Oct 8 20:06:50.953733 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 8 20:06:50.953740 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 8 20:06:50.953747 kernel: Fallback order for Node 0: 0 Oct 8 20:06:50.953754 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708 Oct 8 20:06:50.953761 kernel: Policy zone: DMA32 Oct 8 20:06:50.953770 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 8 20:06:50.953781 kernel: Memory: 1922056K/2047464K available (12288K kernel code, 2305K rwdata, 22716K rodata, 42828K init, 2360K bss, 125148K reserved, 0K cma-reserved) Oct 8 20:06:50.953791 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 8 20:06:50.953799 kernel: ftrace: allocating 37784 entries in 148 pages Oct 8 20:06:50.953808 kernel: ftrace: allocated 148 pages with 3 groups Oct 8 20:06:50.953816 kernel: Dynamic Preempt: voluntary Oct 8 20:06:50.953823 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 8 20:06:50.953831 kernel: rcu: RCU event tracing is enabled. Oct 8 20:06:50.953838 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 8 20:06:50.953846 kernel: Trampoline variant of Tasks RCU enabled. Oct 8 20:06:50.953854 kernel: Rude variant of Tasks RCU enabled. Oct 8 20:06:50.953864 kernel: Tracing variant of Tasks RCU enabled. Oct 8 20:06:50.953874 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 8 20:06:50.953883 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 8 20:06:50.953893 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Oct 8 20:06:50.953900 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 8 20:06:50.953909 kernel: Console: colour VGA+ 80x25 Oct 8 20:06:50.953918 kernel: printk: console [tty0] enabled Oct 8 20:06:50.953928 kernel: printk: console [ttyS0] enabled Oct 8 20:06:50.953938 kernel: ACPI: Core revision 20230628 Oct 8 20:06:50.953949 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 8 20:06:50.953959 kernel: APIC: Switch to symmetric I/O mode setup Oct 8 20:06:50.953969 kernel: x2apic enabled Oct 8 20:06:50.953980 kernel: APIC: Switched APIC routing to: physical x2apic Oct 8 20:06:50.953988 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 8 20:06:50.953997 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 8 20:06:50.954402 kernel: Calibrating delay loop (skipped) preset value.. 
4990.61 BogoMIPS (lpj=2495308) Oct 8 20:06:50.954414 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 8 20:06:50.954424 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 8 20:06:50.954435 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 8 20:06:50.954446 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 8 20:06:50.954470 kernel: Spectre V2 : Mitigation: Retpolines Oct 8 20:06:50.954481 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 8 20:06:50.954491 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 8 20:06:50.954504 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 8 20:06:50.954514 kernel: RETBleed: Mitigation: untrained return thunk Oct 8 20:06:50.954524 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 8 20:06:50.954533 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 8 20:06:50.954543 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Oct 8 20:06:50.954554 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Oct 8 20:06:50.954564 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Oct 8 20:06:50.954574 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 8 20:06:50.954588 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 8 20:06:50.954598 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 8 20:06:50.954609 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 8 20:06:50.954619 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 8 20:06:50.954630 kernel: Freeing SMP alternatives memory: 32K Oct 8 20:06:50.954642 kernel: pid_max: default: 32768 minimum: 301 Oct 8 20:06:50.954652 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 8 20:06:50.954662 kernel: landlock: Up and running. Oct 8 20:06:50.954672 kernel: SELinux: Initializing. Oct 8 20:06:50.954685 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 8 20:06:50.954695 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 8 20:06:50.954707 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 8 20:06:50.954717 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 8 20:06:50.954727 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 8 20:06:50.954741 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 8 20:06:50.954752 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 8 20:06:50.954763 kernel: ... version: 0 Oct 8 20:06:50.954774 kernel: ... bit width: 48 Oct 8 20:06:50.954785 kernel: ... generic registers: 6 Oct 8 20:06:50.954796 kernel: ... value mask: 0000ffffffffffff Oct 8 20:06:50.954807 kernel: ... max period: 00007fffffffffff Oct 8 20:06:50.954817 kernel: ... fixed-purpose events: 0 Oct 8 20:06:50.954828 kernel: ... event mask: 000000000000003f Oct 8 20:06:50.954842 kernel: signal: max sigframe size: 1776 Oct 8 20:06:50.954853 kernel: rcu: Hierarchical SRCU implementation. Oct 8 20:06:50.954864 kernel: rcu: Max phase no-delay instances is 400. 
Oct 8 20:06:50.954875 kernel: smp: Bringing up secondary CPUs ... Oct 8 20:06:50.954883 kernel: smpboot: x86: Booting SMP configuration: Oct 8 20:06:50.954890 kernel: .... node #0, CPUs: #1 Oct 8 20:06:50.954898 kernel: smp: Brought up 1 node, 2 CPUs Oct 8 20:06:50.954905 kernel: smpboot: Max logical packages: 1 Oct 8 20:06:50.954913 kernel: smpboot: Total of 2 processors activated (9981.23 BogoMIPS) Oct 8 20:06:50.954924 kernel: devtmpfs: initialized Oct 8 20:06:50.954938 kernel: x86/mm: Memory block size: 128MB Oct 8 20:06:50.954947 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 8 20:06:50.954957 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 8 20:06:50.954967 kernel: pinctrl core: initialized pinctrl subsystem Oct 8 20:06:50.954976 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 8 20:06:50.954985 kernel: audit: initializing netlink subsys (disabled) Oct 8 20:06:50.954995 kernel: audit: type=2000 audit(1728418009.856:1): state=initialized audit_enabled=0 res=1 Oct 8 20:06:50.955133 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 8 20:06:50.955145 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 8 20:06:50.955159 kernel: cpuidle: using governor menu Oct 8 20:06:50.955169 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 8 20:06:50.955179 kernel: dca service started, version 1.12.1 Oct 8 20:06:50.955188 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Oct 8 20:06:50.955198 kernel: PCI: Using configuration type 1 for base access Oct 8 20:06:50.955208 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Oct 8 20:06:50.955217 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 8 20:06:50.955227 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 8 20:06:50.955238 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 8 20:06:50.955252 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 8 20:06:50.955261 kernel: ACPI: Added _OSI(Module Device) Oct 8 20:06:50.955269 kernel: ACPI: Added _OSI(Processor Device) Oct 8 20:06:50.955276 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 8 20:06:50.955284 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 8 20:06:50.955291 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 8 20:06:50.955298 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Oct 8 20:06:50.955306 kernel: ACPI: Interpreter enabled Oct 8 20:06:50.955313 kernel: ACPI: PM: (supports S0 S5) Oct 8 20:06:50.955324 kernel: ACPI: Using IOAPIC for interrupt routing Oct 8 20:06:50.955332 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 8 20:06:50.955339 kernel: PCI: Using E820 reservations for host bridge windows Oct 8 20:06:50.955347 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 8 20:06:50.955354 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 8 20:06:50.955543 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 8 20:06:50.955690 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 8 20:06:50.955818 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 8 20:06:50.955829 kernel: PCI host bridge to bus 0000:00 Oct 8 20:06:50.955969 kernel: pci_bus 0000:00: root bus resource [io 
0x0000-0x0cf7 window] Oct 8 20:06:50.956114 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 8 20:06:50.956226 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 8 20:06:50.956334 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window] Oct 8 20:06:50.956443 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 8 20:06:50.956556 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Oct 8 20:06:50.956667 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 8 20:06:50.956808 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Oct 8 20:06:50.956939 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 Oct 8 20:06:50.957095 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref] Oct 8 20:06:50.957220 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref] Oct 8 20:06:50.957348 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff] Oct 8 20:06:50.957477 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref] Oct 8 20:06:50.957597 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 8 20:06:50.957727 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Oct 8 20:06:50.957847 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff] Oct 8 20:06:50.957975 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Oct 8 20:06:50.958135 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff] Oct 8 20:06:50.958276 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Oct 8 20:06:50.958398 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff] Oct 8 20:06:50.958537 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Oct 8 20:06:50.958660 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff] Oct 8 20:06:50.958788 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Oct 8 20:06:50.958908 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff] Oct 8 20:06:50.959066 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Oct 8 20:06:50.959204 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff] Oct 8 20:06:50.959348 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Oct 8 20:06:50.959470 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff] Oct 8 20:06:50.959602 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Oct 8 20:06:50.959740 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff] Oct 8 20:06:50.959874 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Oct 8 20:06:50.959995 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff] Oct 8 20:06:50.960159 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Oct 8 20:06:50.960285 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 8 20:06:50.960428 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Oct 8 20:06:50.960556 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f] Oct 8 20:06:50.960681 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff] Oct 8 20:06:50.960808 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Oct 8 20:06:50.960934 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Oct 8 20:06:50.961112 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Oct 8 20:06:50.961252 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff] Oct 8 20:06:50.961461 kernel: pci 
0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Oct 8 20:06:50.961627 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref] Oct 8 20:06:50.961779 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Oct 8 20:06:50.961908 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Oct 8 20:06:50.962781 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Oct 8 20:06:50.962926 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Oct 8 20:06:50.963205 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit] Oct 8 20:06:50.963330 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Oct 8 20:06:50.963448 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Oct 8 20:06:50.963571 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Oct 8 20:06:50.963719 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Oct 8 20:06:50.966093 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff] Oct 8 20:06:50.966230 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref] Oct 8 20:06:50.966352 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Oct 8 20:06:50.966472 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Oct 8 20:06:50.966591 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Oct 8 20:06:50.966762 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Oct 8 20:06:50.966895 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Oct 8 20:06:50.968096 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Oct 8 20:06:50.968249 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Oct 8 20:06:50.968471 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Oct 8 20:06:50.968617 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Oct 8 20:06:50.968750 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref] Oct 8 20:06:50.968878 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Oct 8 20:06:50.968997 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Oct 8 20:06:50.970177 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Oct 8 20:06:50.970320 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Oct 8 20:06:50.970450 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff] Oct 8 20:06:50.970577 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref] Oct 8 20:06:50.970697 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Oct 8 20:06:50.970846 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Oct 8 20:06:50.970974 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Oct 8 20:06:50.970985 kernel: acpiphp: Slot [0] registered Oct 8 20:06:50.972208 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Oct 8 20:06:50.972341 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff] Oct 8 20:06:50.972467 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref] Oct 8 20:06:50.972591 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref] Oct 8 20:06:50.972719 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Oct 8 20:06:50.972844 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Oct 8 20:06:50.972965 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Oct 8 20:06:50.972975 kernel: acpiphp: Slot [0-2] registered Oct 8 20:06:50.973494 
kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Oct 8 20:06:50.973621 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Oct 8 20:06:50.973742 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Oct 8 20:06:50.973753 kernel: acpiphp: Slot [0-3] registered Oct 8 20:06:50.973874 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Oct 8 20:06:50.973999 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Oct 8 20:06:50.975186 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Oct 8 20:06:50.975197 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 8 20:06:50.975205 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 8 20:06:50.975213 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 8 20:06:50.975221 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 8 20:06:50.975229 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 8 20:06:50.975237 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 8 20:06:50.975245 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 8 20:06:50.975256 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 8 20:06:50.975264 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 8 20:06:50.975272 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 8 20:06:50.975279 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 8 20:06:50.975287 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 8 20:06:50.975295 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 8 20:06:50.975302 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 8 20:06:50.975310 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 8 20:06:50.975318 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 8 20:06:50.975328 kernel: iommu: Default domain type: Translated Oct 8 20:06:50.975335 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 8 20:06:50.975343 kernel: PCI: Using ACPI for IRQ routing Oct 8 20:06:50.975351 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 8 20:06:50.975359 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 8 20:06:50.975366 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff] Oct 8 20:06:50.975489 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 8 20:06:50.975617 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 8 20:06:50.975745 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 8 20:06:50.975759 kernel: vgaarb: loaded Oct 8 20:06:50.975767 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 8 20:06:50.975775 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 8 20:06:50.975783 kernel: clocksource: Switched to clocksource kvm-clock Oct 8 20:06:50.975790 kernel: VFS: Disk quotas dquot_6.6.0 Oct 8 20:06:50.975800 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 8 20:06:50.975807 kernel: pnp: PnP ACPI init Oct 8 20:06:50.975937 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Oct 8 20:06:50.975952 kernel: pnp: PnP ACPI: found 5 devices Oct 8 20:06:50.975960 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 8 20:06:50.975967 kernel: NET: Registered PF_INET protocol family Oct 8 20:06:50.975975 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 8 
20:06:50.975983 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 8 20:06:50.975991 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 8 20:06:50.975999 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 8 20:06:50.976020 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Oct 8 20:06:50.976028 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 8 20:06:50.976038 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 8 20:06:50.976046 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 8 20:06:50.976054 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 8 20:06:50.976062 kernel: NET: Registered PF_XDP protocol family Oct 8 20:06:50.976184 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Oct 8 20:06:50.978029 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Oct 8 20:06:50.978175 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Oct 8 20:06:50.978301 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff] Oct 8 20:06:50.978421 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff] Oct 8 20:06:50.978541 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff] Oct 8 20:06:50.978661 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Oct 8 20:06:50.978785 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Oct 8 20:06:50.978902 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Oct 8 20:06:50.980073 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Oct 8 20:06:50.980202 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Oct 8 20:06:50.980327 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Oct 8 20:06:50.980449 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Oct 8 20:06:50.980574 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Oct 8 20:06:50.980702 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Oct 8 20:06:50.980824 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Oct 8 20:06:50.980941 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Oct 8 20:06:50.981094 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Oct 8 20:06:50.981222 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Oct 8 20:06:50.981361 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Oct 8 20:06:50.981480 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Oct 8 20:06:50.981599 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Oct 8 20:06:50.981718 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Oct 8 20:06:50.981838 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Oct 8 20:06:50.981956 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Oct 8 20:06:50.983157 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Oct 8 20:06:50.983282 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Oct 8 20:06:50.983399 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Oct 8 20:06:50.983522 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Oct 8 20:06:50.983640 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Oct 8 20:06:50.983759 kernel: pci 0000:00:02.7: bridge window [mem 
0xfda00000-0xfdbfffff] Oct 8 20:06:50.983876 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Oct 8 20:06:50.983993 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Oct 8 20:06:50.984126 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Oct 8 20:06:50.984244 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Oct 8 20:06:50.984366 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Oct 8 20:06:50.984481 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 8 20:06:50.984591 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 8 20:06:50.984703 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 8 20:06:50.984815 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window] Oct 8 20:06:50.984923 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Oct 8 20:06:50.988072 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Oct 8 20:06:50.988222 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff] Oct 8 20:06:50.988358 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref] Oct 8 20:06:50.988484 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff] Oct 8 20:06:50.988606 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Oct 8 20:06:50.988729 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff] Oct 8 20:06:50.988845 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Oct 8 20:06:50.988969 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff] Oct 8 20:06:50.989160 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Oct 8 20:06:50.989296 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff] Oct 8 20:06:50.989462 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Oct 8 20:06:50.989588 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff] Oct 8 20:06:50.989703 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Oct 8 20:06:50.989833 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Oct 8 20:06:50.989947 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff] Oct 8 20:06:50.992111 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Oct 8 20:06:50.992242 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Oct 8 20:06:50.992363 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff] Oct 8 20:06:50.992476 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Oct 8 20:06:50.992647 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Oct 8 20:06:50.992765 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff] Oct 8 20:06:50.992879 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Oct 8 20:06:50.992890 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 8 20:06:50.992899 kernel: PCI: CLS 0 bytes, default 64 Oct 8 20:06:50.992910 kernel: Initialise system trusted keyrings Oct 8 20:06:50.992919 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 8 20:06:50.992927 kernel: Key type asymmetric registered Oct 8 20:06:50.992935 kernel: Asymmetric key parser 'x509' registered Oct 8 20:06:50.992943 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Oct 8 20:06:50.992951 kernel: io scheduler mq-deadline registered Oct 8 20:06:50.992959 kernel: io scheduler kyber registered Oct 8 20:06:50.992967 kernel: io 
scheduler bfq registered Oct 8 20:06:50.993183 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Oct 8 20:06:50.993312 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Oct 8 20:06:50.993458 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Oct 8 20:06:50.993579 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Oct 8 20:06:50.993698 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Oct 8 20:06:50.993818 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Oct 8 20:06:50.993938 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Oct 8 20:06:50.995052 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Oct 8 20:06:50.995177 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Oct 8 20:06:50.995301 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Oct 8 20:06:50.995419 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Oct 8 20:06:50.995546 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Oct 8 20:06:50.995678 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Oct 8 20:06:50.995798 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Oct 8 20:06:50.995918 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Oct 8 20:06:50.997095 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Oct 8 20:06:50.997111 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 8 20:06:50.997235 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Oct 8 20:06:50.997361 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Oct 8 20:06:50.997372 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 8 20:06:50.997380 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Oct 8 20:06:50.997389 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 8 20:06:50.997397 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 8 20:06:50.997405 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 8 20:06:50.997416 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 8 20:06:50.997424 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 8 20:06:50.997433 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 8 20:06:50.997558 kernel: rtc_cmos 00:03: RTC can wake from S4 Oct 8 20:06:50.997671 kernel: rtc_cmos 00:03: registered as rtc0 Oct 8 20:06:50.997789 kernel: rtc_cmos 00:03: setting system clock to 2024-10-08T20:06:50 UTC (1728418010) Oct 8 20:06:50.997901 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 8 20:06:50.997911 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 8 20:06:50.997919 kernel: NET: Registered PF_INET6 protocol family Oct 8 20:06:50.997928 kernel: Segment Routing with IPv6 Oct 8 20:06:50.997939 kernel: In-situ OAM (IOAM) with IPv6 Oct 8 20:06:50.997947 kernel: NET: Registered PF_PACKET protocol family Oct 8 20:06:50.997955 kernel: Key type dns_resolver registered Oct 8 20:06:50.997963 kernel: IPI shorthand broadcast: enabled Oct 8 20:06:50.997971 kernel: sched_clock: Marking stable (1354009960, 144589821)->(1512237958, -13638177) Oct 8 20:06:50.997979 kernel: registered taskstats version 1 Oct 8 20:06:50.997987 kernel: Loading compiled-in X.509 certificates Oct 8 20:06:50.997995 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 14ce23fc5070d0471461f1dd6e298a5588e7ba8f' Oct 8 20:06:51.000029 kernel: Key type .fscrypt registered Oct 8 20:06:51.000045 kernel: Key type fscrypt-provisioning registered Oct 8 
20:06:51.000053 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 8 20:06:51.000062 kernel: ima: Allocated hash algorithm: sha1 Oct 8 20:06:51.000070 kernel: ima: No architecture policies found Oct 8 20:06:51.000079 kernel: clk: Disabling unused clocks Oct 8 20:06:51.000087 kernel: Freeing unused kernel image (initmem) memory: 42828K Oct 8 20:06:51.000095 kernel: Write protecting the kernel read-only data: 36864k Oct 8 20:06:51.000104 kernel: Freeing unused kernel image (rodata/data gap) memory: 1860K Oct 8 20:06:51.000112 kernel: Run /init as init process Oct 8 20:06:51.000123 kernel: with arguments: Oct 8 20:06:51.000131 kernel: /init Oct 8 20:06:51.000140 kernel: with environment: Oct 8 20:06:51.000148 kernel: HOME=/ Oct 8 20:06:51.000156 kernel: TERM=linux Oct 8 20:06:51.000164 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 8 20:06:51.000175 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 8 20:06:51.000186 systemd[1]: Detected virtualization kvm. Oct 8 20:06:51.000197 systemd[1]: Detected architecture x86-64. Oct 8 20:06:51.000205 systemd[1]: Running in initrd. Oct 8 20:06:51.000214 systemd[1]: No hostname configured, using default hostname. Oct 8 20:06:51.000222 systemd[1]: Hostname set to . Oct 8 20:06:51.000231 systemd[1]: Initializing machine ID from VM UUID. Oct 8 20:06:51.000240 systemd[1]: Queued start job for default target initrd.target. Oct 8 20:06:51.000248 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 20:06:51.000257 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 20:06:51.000268 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 8 20:06:51.000277 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 8 20:06:51.000286 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 8 20:06:51.000295 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 8 20:06:51.000305 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 8 20:06:51.000313 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 8 20:06:51.000322 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 20:06:51.000333 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 8 20:06:51.000341 systemd[1]: Reached target paths.target - Path Units. Oct 8 20:06:51.000350 systemd[1]: Reached target slices.target - Slice Units. Oct 8 20:06:51.000358 systemd[1]: Reached target swap.target - Swaps. Oct 8 20:06:51.000366 systemd[1]: Reached target timers.target - Timer Units. Oct 8 20:06:51.000375 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 8 20:06:51.000383 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 8 20:06:51.000392 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Oct 8 20:06:51.000403 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 8 20:06:51.000411 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 8 20:06:51.000420 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 8 20:06:51.000428 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 20:06:51.000436 systemd[1]: Reached target sockets.target - Socket Units. Oct 8 20:06:51.000445 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 8 20:06:51.000453 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 8 20:06:51.000462 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 8 20:06:51.000470 systemd[1]: Starting systemd-fsck-usr.service... Oct 8 20:06:51.000481 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 8 20:06:51.000490 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 8 20:06:51.000498 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:06:51.000507 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 8 20:06:51.000539 systemd-journald[188]: Collecting audit messages is disabled. Oct 8 20:06:51.000562 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 20:06:51.000571 systemd[1]: Finished systemd-fsck-usr.service. Oct 8 20:06:51.000580 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 8 20:06:51.000591 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 8 20:06:51.000600 systemd-journald[188]: Journal started Oct 8 20:06:51.000619 systemd-journald[188]: Runtime Journal (/run/log/journal/c5f7039c6f324855b8f3708c26a6ee7f) is 4.8M, max 38.4M, 33.6M free. Oct 8 20:06:50.967047 systemd-modules-load[189]: Inserted module 'overlay' Oct 8 20:06:51.033503 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 8 20:06:51.033529 kernel: Bridge firewalling registered Oct 8 20:06:51.033549 systemd[1]: Started systemd-journald.service - Journal Service. Oct 8 20:06:51.006493 systemd-modules-load[189]: Inserted module 'br_netfilter' Oct 8 20:06:51.034179 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 8 20:06:51.035135 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:06:51.042187 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 20:06:51.048208 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 20:06:51.052172 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 8 20:06:51.055333 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 8 20:06:51.059349 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:06:51.073415 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:06:51.078167 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 8 20:06:51.084928 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Oct 8 20:06:51.091282 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 8 20:06:51.095156 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 8 20:06:51.102025 dracut-cmdline[221]: dracut-dracut-053 Oct 8 20:06:51.104297 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 8 20:06:51.136654 systemd-resolved[224]: Positive Trust Anchors: Oct 8 20:06:51.137450 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 8 20:06:51.137483 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 8 20:06:51.144097 systemd-resolved[224]: Defaulting to hostname 'linux'. Oct 8 20:06:51.145347 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 8 20:06:51.146185 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 8 20:06:51.182072 kernel: SCSI subsystem initialized Oct 8 20:06:51.191095 kernel: Loading iSCSI transport class v2.0-870. Oct 8 20:06:51.203059 kernel: iscsi: registered transport (tcp) Oct 8 20:06:51.224263 kernel: iscsi: registered transport (qla4xxx) Oct 8 20:06:51.224392 kernel: QLogic iSCSI HBA Driver Oct 8 20:06:51.292233 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 8 20:06:51.301277 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 8 20:06:51.344034 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 8 20:06:51.344113 kernel: device-mapper: uevent: version 1.0.3 Oct 8 20:06:51.346061 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 8 20:06:51.400101 kernel: raid6: avx2x4 gen() 16913 MB/s Oct 8 20:06:51.417135 kernel: raid6: avx2x2 gen() 25882 MB/s Oct 8 20:06:51.434305 kernel: raid6: avx2x1 gen() 25470 MB/s Oct 8 20:06:51.434410 kernel: raid6: using algorithm avx2x2 gen() 25882 MB/s Oct 8 20:06:51.453101 kernel: raid6: .... xor() 17918 MB/s, rmw enabled Oct 8 20:06:51.453181 kernel: raid6: using avx2x2 recovery algorithm Oct 8 20:06:51.475051 kernel: xor: automatically using best checksumming function avx Oct 8 20:06:51.670108 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 8 20:06:51.689512 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 8 20:06:51.699156 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 20:06:51.729127 systemd-udevd[407]: Using default interface naming scheme 'v255'. 
Oct 8 20:06:51.734844 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 20:06:51.747305 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 8 20:06:51.776760 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Oct 8 20:06:51.818220 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 8 20:06:51.822156 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 20:06:51.900133 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 20:06:51.907290 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 8 20:06:51.925331 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 8 20:06:51.928485 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 20:06:51.930729 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 20:06:51.931869 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 8 20:06:51.939460 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 8 20:06:51.955510 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 8 20:06:51.995580 kernel: scsi host0: Virtio SCSI HBA Oct 8 20:06:51.995659 kernel: cryptd: max_cpu_qlen set to 1000 Oct 8 20:06:52.009099 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Oct 8 20:06:52.080508 kernel: AVX2 version of gcm_enc/dec engaged. Oct 8 20:06:52.080666 kernel: AES CTR mode by8 optimization enabled Oct 8 20:06:52.093051 kernel: libata version 3.00 loaded. Oct 8 20:06:52.098040 kernel: ACPI: bus type USB registered Oct 8 20:06:52.101678 kernel: usbcore: registered new interface driver usbfs Oct 8 20:06:52.101721 kernel: usbcore: registered new interface driver hub Oct 8 20:06:52.102894 kernel: usbcore: registered new device driver usb Oct 8 20:06:52.130052 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Oct 8 20:06:52.130315 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Oct 8 20:06:52.132676 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Oct 8 20:06:52.136192 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Oct 8 20:06:52.136376 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Oct 8 20:06:52.137774 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Oct 8 20:06:52.140024 kernel: hub 1-0:1.0: USB hub found Oct 8 20:06:52.142511 kernel: hub 1-0:1.0: 4 ports detected Oct 8 20:06:52.146036 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Oct 8 20:06:52.148164 kernel: hub 2-0:1.0: USB hub found Oct 8 20:06:52.148591 kernel: hub 2-0:1.0: 4 ports detected Oct 8 20:06:52.150218 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 8 20:06:52.150373 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:06:52.154033 kernel: ahci 0000:00:1f.2: version 3.0 Oct 8 20:06:52.154221 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 8 20:06:52.153268 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Oct 8 20:06:52.161972 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Oct 8 20:06:52.162182 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 8 20:06:52.155061 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 20:06:52.155237 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:06:52.164938 kernel: scsi host1: ahci Oct 8 20:06:52.160812 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:06:52.171072 kernel: scsi host2: ahci Oct 8 20:06:52.172298 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:06:52.194046 kernel: scsi host3: ahci Oct 8 20:06:52.194270 kernel: scsi host4: ahci Oct 8 20:06:52.194416 kernel: scsi host5: ahci Oct 8 20:06:52.194598 kernel: scsi host6: ahci Oct 8 20:06:52.194754 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 49 Oct 8 20:06:52.194765 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 49 Oct 8 20:06:52.194776 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 49 Oct 8 20:06:52.194786 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 49 Oct 8 20:06:52.194796 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 49 Oct 8 20:06:52.194807 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 49 Oct 8 20:06:52.194817 kernel: sd 0:0:0:0: Power-on or device reset occurred Oct 8 20:06:52.194998 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Oct 8 20:06:52.195183 kernel: sd 0:0:0:0: [sda] Write Protect is off Oct 8 20:06:52.195337 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Oct 8 20:06:52.195489 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Oct 8 20:06:52.200294 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 8 20:06:52.200363 kernel: GPT:17805311 != 80003071 Oct 8 20:06:52.202223 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 8 20:06:52.202257 kernel: GPT:17805311 != 80003071 Oct 8 20:06:52.202268 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 8 20:06:52.202286 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 20:06:52.202297 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Oct 8 20:06:52.248329 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:06:52.255212 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 20:06:52.276553 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Oct 8 20:06:52.386307 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Oct 8 20:06:52.489900 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 8 20:06:52.489990 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 8 20:06:52.490025 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 8 20:06:52.493621 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 8 20:06:52.493654 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 8 20:06:52.497311 kernel: ata1.00: applying bridge limits Oct 8 20:06:52.504286 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 8 20:06:52.505039 kernel: ata3: SATA link down (SStatus 0 SControl 300) Oct 8 20:06:52.508047 kernel: ata1.00: configured for UDMA/100 Oct 8 20:06:52.514060 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 8 20:06:52.540040 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 8 20:06:52.557204 kernel: usbcore: registered new interface driver usbhid Oct 8 20:06:52.557266 kernel: usbhid: USB HID core driver Oct 8 20:06:52.573454 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 8 20:06:52.573820 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 8 20:06:52.586034 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Oct 8 20:06:52.593028 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Oct 8 20:06:52.599064 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (455) Oct 8 20:06:52.605090 kernel: BTRFS: device fsid a8680da2-059a-4648-a8e8-f62925ab33ec devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (467) Oct 8 20:06:52.605114 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Oct 8 20:06:52.622433 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Oct 8 20:06:52.632837 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Oct 8 20:06:52.650202 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Oct 8 20:06:52.659141 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Oct 8 20:06:52.660057 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Oct 8 20:06:52.666184 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 8 20:06:52.675732 disk-uuid[576]: Primary Header is updated. Oct 8 20:06:52.675732 disk-uuid[576]: Secondary Entries is updated. Oct 8 20:06:52.675732 disk-uuid[576]: Secondary Header is updated. Oct 8 20:06:52.692044 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 20:06:52.707046 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 20:06:53.706355 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 20:06:53.708845 disk-uuid[578]: The operation has completed successfully. Oct 8 20:06:53.796481 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 8 20:06:53.796619 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 8 20:06:53.803185 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 8 20:06:53.807135 sh[594]: Success Oct 8 20:06:53.821049 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Oct 8 20:06:53.875572 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Oct 8 20:06:53.888087 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 8 20:06:53.892071 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 8 20:06:53.907392 kernel: BTRFS info (device dm-0): first mount of filesystem a8680da2-059a-4648-a8e8-f62925ab33ec Oct 8 20:06:53.907447 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:06:53.910201 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 8 20:06:53.910224 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 8 20:06:53.911432 kernel: BTRFS info (device dm-0): using free space tree Oct 8 20:06:53.920040 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 8 20:06:53.921732 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 8 20:06:53.923229 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 8 20:06:53.928173 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 8 20:06:53.931167 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 8 20:06:53.950128 kernel: BTRFS info (device sda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:06:53.950182 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:06:53.950195 kernel: BTRFS info (device sda6): using free space tree Oct 8 20:06:53.955432 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 8 20:06:53.955499 kernel: BTRFS info (device sda6): auto enabling async discard Oct 8 20:06:53.965744 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 8 20:06:53.969049 kernel: BTRFS info (device sda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:06:53.975614 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 8 20:06:53.983153 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 8 20:06:54.053536 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 20:06:54.063993 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 8 20:06:54.081786 ignition[698]: Ignition 2.19.0 Oct 8 20:06:54.082658 ignition[698]: Stage: fetch-offline Oct 8 20:06:54.082698 ignition[698]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:06:54.082708 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 8 20:06:54.082907 ignition[698]: parsed url from cmdline: "" Oct 8 20:06:54.082913 ignition[698]: no config URL provided Oct 8 20:06:54.082920 ignition[698]: reading system config file "/usr/lib/ignition/user.ign" Oct 8 20:06:54.082933 ignition[698]: no config at "/usr/lib/ignition/user.ign" Oct 8 20:06:54.082940 ignition[698]: failed to fetch config: resource requires networking Oct 8 20:06:54.083674 ignition[698]: Ignition finished successfully Oct 8 20:06:54.089570 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 20:06:54.096392 systemd-networkd[775]: lo: Link UP Oct 8 20:06:54.096404 systemd-networkd[775]: lo: Gained carrier Oct 8 20:06:54.099209 systemd-networkd[775]: Enumeration completed Oct 8 20:06:54.099400 systemd[1]: Started systemd-networkd.service - Network Configuration. 
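[Editor's note] The ignition[698] fetch-offline trace above simply reports which config sources exist before networking is up: the base.d snippet directory, the hetzner platform directory, a config URL from the kernel command line, and the embedded /usr/lib/ignition/user.ign. None are usable here, so the stage hands off to the networked fetch stage. A loose sketch that reproduces those checks (paths taken from the log; this is a simplified model, not Ignition's actual merge logic):

    import os

    def report_offline_sources(cmdline_url=""):
        checks = {
            "/usr/lib/ignition/base.d": os.path.isdir,
            "/usr/lib/ignition/base.platform.d/hetzner": os.path.isdir,
            "/usr/lib/ignition/user.ign": os.path.exists,
        }
        for path, present in checks.items():
            print(("found" if present(path) else "missing"), path)
        if not cmdline_url:
            print("no config URL provided on the kernel command line")
            print("failed to fetch config: resource requires networking")

    report_offline_sources()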
Oct 8 20:06:54.099971 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:06:54.099975 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 20:06:54.100095 systemd[1]: Reached target network.target - Network. Oct 8 20:06:54.101308 systemd-networkd[775]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:06:54.101312 systemd-networkd[775]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 20:06:54.102151 systemd-networkd[775]: eth0: Link UP Oct 8 20:06:54.102156 systemd-networkd[775]: eth0: Gained carrier Oct 8 20:06:54.102162 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:06:54.106340 systemd-networkd[775]: eth1: Link UP Oct 8 20:06:54.106346 systemd-networkd[775]: eth1: Gained carrier Oct 8 20:06:54.106355 systemd-networkd[775]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:06:54.107221 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Oct 8 20:06:54.127559 ignition[783]: Ignition 2.19.0 Oct 8 20:06:54.127572 ignition[783]: Stage: fetch Oct 8 20:06:54.127778 ignition[783]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:06:54.127793 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 8 20:06:54.127890 ignition[783]: parsed url from cmdline: "" Oct 8 20:06:54.127894 ignition[783]: no config URL provided Oct 8 20:06:54.127899 ignition[783]: reading system config file "/usr/lib/ignition/user.ign" Oct 8 20:06:54.127908 ignition[783]: no config at "/usr/lib/ignition/user.ign" Oct 8 20:06:54.127928 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Oct 8 20:06:54.128118 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Oct 8 20:06:54.135082 systemd-networkd[775]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 8 20:06:54.235142 systemd-networkd[775]: eth0: DHCPv4 address 157.90.145.6/32, gateway 172.31.1.1 acquired from 172.31.1.1 Oct 8 20:06:54.328360 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Oct 8 20:06:54.335300 ignition[783]: GET result: OK Oct 8 20:06:54.335445 ignition[783]: parsing config with SHA512: b278f7b9322adb52f8072f267512f6035177ae5ee615cc09ca539ffe91a0402e4f8ecbd3ed1876c469016400a490f1910ac65ee16570b321dfcacc3d762e21fd Oct 8 20:06:54.342153 unknown[783]: fetched base config from "system" Oct 8 20:06:54.342174 unknown[783]: fetched base config from "system" Oct 8 20:06:54.342939 ignition[783]: fetch: fetch complete Oct 8 20:06:54.342188 unknown[783]: fetched user config from "hetzner" Oct 8 20:06:54.342950 ignition[783]: fetch: fetch passed Oct 8 20:06:54.343060 ignition[783]: Ignition finished successfully Oct 8 20:06:54.348884 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 8 20:06:54.358244 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
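[Editor's note] The fetch stage above retries GET http://169.254.169.254/hetzner/v1/userdata after the first attempt fails with "network is unreachable" (DHCP had not completed yet), then logs the SHA512 of the received config. A minimal sketch of that fetch-retry-hash loop; the URL is the one logged above, but the retry count, delay, and function name here are illustrative rather than Ignition's actual policy:

    import hashlib
    import time
    import urllib.error
    import urllib.request

    URL = "http://169.254.169.254/hetzner/v1/userdata"   # endpoint as logged by ignition[783]

    def fetch_userdata(attempts=5, delay=2.0):
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(URL, timeout=10) as resp:
                    data = resp.read()
                print(f"GET result: OK (attempt #{attempt})")
                print("SHA512:", hashlib.sha512(data).hexdigest())
                return data
            except (urllib.error.URLError, OSError) as err:
                print(f"GET error on attempt #{attempt}: {err}")
                time.sleep(delay)
        raise RuntimeError("could not fetch userdata from the metadata service")

    # fetch_userdata()   # only meaningful from inside a Hetzner Cloud instance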
Oct 8 20:06:54.404513 ignition[790]: Ignition 2.19.0 Oct 8 20:06:54.404539 ignition[790]: Stage: kargs Oct 8 20:06:54.404950 ignition[790]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:06:54.407990 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 8 20:06:54.404980 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 8 20:06:54.405957 ignition[790]: kargs: kargs passed Oct 8 20:06:54.418365 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 8 20:06:54.406037 ignition[790]: Ignition finished successfully Oct 8 20:06:54.441739 ignition[797]: Ignition 2.19.0 Oct 8 20:06:54.441766 ignition[797]: Stage: disks Oct 8 20:06:54.443397 ignition[797]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:06:54.443425 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 8 20:06:54.446139 ignition[797]: disks: disks passed Oct 8 20:06:54.446284 ignition[797]: Ignition finished successfully Oct 8 20:06:54.449729 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 8 20:06:54.453059 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 8 20:06:54.455779 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 8 20:06:54.456947 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 8 20:06:54.459494 systemd[1]: Reached target sysinit.target - System Initialization. Oct 8 20:06:54.461602 systemd[1]: Reached target basic.target - Basic System. Oct 8 20:06:54.469323 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 8 20:06:54.520522 systemd-fsck[806]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Oct 8 20:06:54.524155 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 8 20:06:54.531211 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 8 20:06:54.634062 kernel: EXT4-fs (sda9): mounted filesystem 1df90f14-3ad0-4280-9b7d-a34f65d70e4d r/w with ordered data mode. Quota mode: none. Oct 8 20:06:54.636080 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 8 20:06:54.637187 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 8 20:06:54.642182 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 8 20:06:54.646627 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 8 20:06:54.660029 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (814) Oct 8 20:06:54.663270 kernel: BTRFS info (device sda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:06:54.662760 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Oct 8 20:06:54.666732 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:06:54.666754 kernel: BTRFS info (device sda6): using free space tree Oct 8 20:06:54.666826 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 8 20:06:54.667975 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 20:06:54.672895 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 8 20:06:54.681233 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 8 20:06:54.681296 kernel: BTRFS info (device sda6): auto enabling async discard Oct 8 20:06:54.682840 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Oct 8 20:06:54.693517 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 8 20:06:54.740317 coreos-metadata[816]: Oct 08 20:06:54.740 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Oct 8 20:06:54.742937 coreos-metadata[816]: Oct 08 20:06:54.742 INFO Fetch successful Oct 8 20:06:54.744610 coreos-metadata[816]: Oct 08 20:06:54.743 INFO wrote hostname ci-4081-1-0-7-2461ba8d61 to /sysroot/etc/hostname Oct 8 20:06:54.747426 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 8 20:06:54.754337 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Oct 8 20:06:54.761635 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Oct 8 20:06:54.767890 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Oct 8 20:06:54.775851 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Oct 8 20:06:54.887144 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 8 20:06:54.897092 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 8 20:06:54.900136 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 8 20:06:54.913722 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 8 20:06:54.915043 kernel: BTRFS info (device sda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:06:54.942863 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 8 20:06:54.948588 ignition[930]: INFO : Ignition 2.19.0 Oct 8 20:06:54.948588 ignition[930]: INFO : Stage: mount Oct 8 20:06:54.951207 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:06:54.951207 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 8 20:06:54.954122 ignition[930]: INFO : mount: mount passed Oct 8 20:06:54.954122 ignition[930]: INFO : Ignition finished successfully Oct 8 20:06:54.953734 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 8 20:06:54.962119 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 8 20:06:54.968408 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 8 20:06:54.984193 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (942) Oct 8 20:06:54.984253 kernel: BTRFS info (device sda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:06:54.984264 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:06:54.985704 kernel: BTRFS info (device sda6): using free space tree Oct 8 20:06:54.992787 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 8 20:06:54.992835 kernel: BTRFS info (device sda6): auto enabling async discard Oct 8 20:06:54.995898 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
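[Editor's note] flatcar-metadata-hostname.service above fetches the hostname from the Hetzner metadata endpoint and writes it into the target root, which is how /sysroot/etc/hostname ends up containing ci-4081-1-0-7-2461ba8d61 before the pivot. A small sketch of that step, assuming the same metadata URL and a mounted /sysroot; the helper name is hypothetical:

    import urllib.request

    def write_hostname(sysroot="/sysroot"):
        url = "http://169.254.169.254/hetzner/v1/metadata/hostname"  # as logged by coreos-metadata
        with urllib.request.urlopen(url, timeout=10) as resp:
            hostname = resp.read().decode().strip()
        with open(f"{sysroot}/etc/hostname", "w") as f:
            f.write(hostname + "\n")
        print(f"wrote hostname {hostname} to {sysroot}/etc/hostname")

    # write_hostname()   # requires the Hetzner metadata service and a mounted /sysroot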
Oct 8 20:06:55.019262 ignition[958]: INFO : Ignition 2.19.0 Oct 8 20:06:55.019262 ignition[958]: INFO : Stage: files Oct 8 20:06:55.020704 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:06:55.020704 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 8 20:06:55.020704 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Oct 8 20:06:55.022711 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 8 20:06:55.022711 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 8 20:06:55.024709 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 8 20:06:55.025411 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 8 20:06:55.025411 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 8 20:06:55.025172 unknown[958]: wrote ssh authorized keys file for user: core Oct 8 20:06:55.027516 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 8 20:06:55.027516 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Oct 8 20:06:55.145614 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 8 20:06:55.528295 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 8 20:06:55.530535 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 8 20:06:55.530535 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Oct 8 20:06:55.901363 systemd-networkd[775]: eth1: Gained IPv6LL Oct 8 20:06:56.157298 systemd-networkd[775]: eth0: Gained IPv6LL Oct 8 20:06:56.194061 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 8 20:06:56.470452 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 8 20:06:56.470452 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Oct 8 20:06:56.475250 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Oct 8 20:06:56.475250 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 8 20:06:56.475250 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 8 20:06:56.475250 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 20:06:56.475250 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 20:06:56.475250 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 20:06:56.475250 ignition[958]: INFO : files: createFilesystemsFiles: 
createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 20:06:56.475250 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 20:06:56.475250 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 20:06:56.475250 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Oct 8 20:06:56.500193 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Oct 8 20:06:56.500193 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Oct 8 20:06:56.500193 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Oct 8 20:06:57.102858 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Oct 8 20:06:58.267089 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Oct 8 20:06:58.267089 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Oct 8 20:06:58.271610 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 20:06:58.271610 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 20:06:58.271610 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Oct 8 20:06:58.271610 ignition[958]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Oct 8 20:06:58.271610 ignition[958]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Oct 8 20:06:58.271610 ignition[958]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Oct 8 20:06:58.271610 ignition[958]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Oct 8 20:06:58.271610 ignition[958]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Oct 8 20:06:58.271610 ignition[958]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Oct 8 20:06:58.271610 ignition[958]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 8 20:06:58.271610 ignition[958]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 8 20:06:58.271610 ignition[958]: INFO : files: files passed Oct 8 20:06:58.271610 ignition[958]: INFO : Ignition finished successfully Oct 8 20:06:58.271409 systemd[1]: Finished ignition-files.service - Ignition (files). 
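[Editor's note] The files stage traced above downloads artifacts (the Helm tarball, cilium-cli, the Kubernetes sysext image) into the target root and creates the /etc/extensions/kubernetes.raw symlink. A stripped-down sketch of the "GET ... writing file" and "writing link" patterns, using URLs and paths from the log; unlike the real stage it does no hashing, retries, or mode handling, and the helper names are illustrative:

    import os
    import urllib.request

    SYSROOT = "/sysroot"

    def write_file(url, dest):
        # mirrors the createFiles "GET ... [finished] writing file" pattern above
        path = SYSROOT + dest
        os.makedirs(os.path.dirname(path), exist_ok=True)
        print(f"GET {url}: attempt #1")
        with urllib.request.urlopen(url, timeout=30) as resp, open(path, "wb") as out:
            out.write(resp.read())
        print(f"[finished] writing file {path}")

    def write_link(target, link):
        # mirrors op(a): /etc/extensions/kubernetes.raw -> the downloaded sysext image
        path = SYSROOT + link
        os.makedirs(os.path.dirname(path), exist_ok=True)
        os.symlink(target, path)
        print(f"[finished] writing link {path} -> {target}")

    # write_file("https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz",
    #            "/opt/helm-v3.13.2-linux-amd64.tar.gz")
    # write_link("/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
    #            "/etc/extensions/kubernetes.raw")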
Oct 8 20:06:58.280256 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 8 20:06:58.287296 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 8 20:06:58.290898 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 8 20:06:58.291062 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 8 20:06:58.311464 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:06:58.313779 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:06:58.314575 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:06:58.315258 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 20:06:58.316538 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 8 20:06:58.322223 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 8 20:06:58.351564 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 8 20:06:58.351686 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 8 20:06:58.353239 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 8 20:06:58.354677 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 8 20:06:58.355224 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 8 20:06:58.360182 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 8 20:06:58.375053 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 20:06:58.381337 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 8 20:06:58.393019 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 8 20:06:58.393748 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 20:06:58.394913 systemd[1]: Stopped target timers.target - Timer Units. Oct 8 20:06:58.395950 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 8 20:06:58.396168 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 20:06:58.397241 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 8 20:06:58.397937 systemd[1]: Stopped target basic.target - Basic System. Oct 8 20:06:58.399074 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 8 20:06:58.400055 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 20:06:58.401057 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 8 20:06:58.402143 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 8 20:06:58.403232 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 20:06:58.404377 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 8 20:06:58.405430 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 8 20:06:58.406572 systemd[1]: Stopped target swap.target - Swaps. Oct 8 20:06:58.407543 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 8 20:06:58.407664 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Oct 8 20:06:58.409058 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 8 20:06:58.410198 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 20:06:58.411245 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 8 20:06:58.411376 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 20:06:58.412363 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 8 20:06:58.412499 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 8 20:06:58.413916 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 8 20:06:58.414085 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 20:06:58.415522 systemd[1]: ignition-files.service: Deactivated successfully. Oct 8 20:06:58.415694 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 8 20:06:58.416467 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Oct 8 20:06:58.416639 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 8 20:06:58.424636 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 8 20:06:58.425187 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 8 20:06:58.425370 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 20:06:58.429287 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 8 20:06:58.429785 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 8 20:06:58.429997 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 20:06:58.430794 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 8 20:06:58.430975 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 8 20:06:58.444216 ignition[1012]: INFO : Ignition 2.19.0 Oct 8 20:06:58.444953 ignition[1012]: INFO : Stage: umount Oct 8 20:06:58.445336 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 8 20:06:58.445451 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 8 20:06:58.449710 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:06:58.449710 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 8 20:06:58.449710 ignition[1012]: INFO : umount: umount passed Oct 8 20:06:58.449710 ignition[1012]: INFO : Ignition finished successfully Oct 8 20:06:58.450308 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 8 20:06:58.451247 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 8 20:06:58.452945 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 8 20:06:58.453377 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 8 20:06:58.454129 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 8 20:06:58.454185 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 8 20:06:58.455037 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 8 20:06:58.455097 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Oct 8 20:06:58.456726 systemd[1]: Stopped target network.target - Network. Oct 8 20:06:58.459323 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 8 20:06:58.459383 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Oct 8 20:06:58.463561 systemd[1]: Stopped target paths.target - Path Units. Oct 8 20:06:58.464258 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 8 20:06:58.469074 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 20:06:58.469622 systemd[1]: Stopped target slices.target - Slice Units. Oct 8 20:06:58.470134 systemd[1]: Stopped target sockets.target - Socket Units. Oct 8 20:06:58.470711 systemd[1]: iscsid.socket: Deactivated successfully. Oct 8 20:06:58.470773 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 8 20:06:58.473166 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 8 20:06:58.473215 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 8 20:06:58.474178 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 8 20:06:58.474247 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 8 20:06:58.476126 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 8 20:06:58.476189 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 8 20:06:58.476953 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 8 20:06:58.478139 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 8 20:06:58.481176 systemd-networkd[775]: eth1: DHCPv6 lease lost Oct 8 20:06:58.482837 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 8 20:06:58.483511 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 8 20:06:58.483645 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 8 20:06:58.486049 systemd-networkd[775]: eth0: DHCPv6 lease lost Oct 8 20:06:58.488527 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 8 20:06:58.488688 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 8 20:06:58.489414 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 8 20:06:58.489527 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 8 20:06:58.491543 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 8 20:06:58.491611 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 8 20:06:58.493104 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 8 20:06:58.493159 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 8 20:06:58.497188 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 8 20:06:58.497641 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 8 20:06:58.497703 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 20:06:58.498231 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 8 20:06:58.498278 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:06:58.498732 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 8 20:06:58.498775 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 8 20:06:58.499464 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 8 20:06:58.499510 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 8 20:06:58.500159 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 20:06:58.513161 systemd[1]: network-cleanup.service: Deactivated successfully. 
Oct 8 20:06:58.513863 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 8 20:06:58.519804 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 8 20:06:58.519992 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 20:06:58.521152 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 8 20:06:58.521203 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 8 20:06:58.522096 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 8 20:06:58.522137 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 20:06:58.523118 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 8 20:06:58.523170 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 8 20:06:58.524696 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 8 20:06:58.524753 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 8 20:06:58.525780 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 8 20:06:58.525832 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:06:58.534451 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 8 20:06:58.534964 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 8 20:06:58.535044 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 20:06:58.535570 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 20:06:58.535616 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:06:58.543065 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 8 20:06:58.543191 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 8 20:06:58.544730 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 8 20:06:58.556149 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 8 20:06:58.563137 systemd[1]: Switching root. Oct 8 20:06:58.598433 systemd-journald[188]: Journal stopped Oct 8 20:06:59.836573 systemd-journald[188]: Received SIGTERM from PID 1 (systemd). Oct 8 20:06:59.836651 kernel: SELinux: policy capability network_peer_controls=1 Oct 8 20:06:59.836664 kernel: SELinux: policy capability open_perms=1 Oct 8 20:06:59.836675 kernel: SELinux: policy capability extended_socket_class=1 Oct 8 20:06:59.836686 kernel: SELinux: policy capability always_check_network=0 Oct 8 20:06:59.836699 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 8 20:06:59.836722 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 8 20:06:59.836733 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 8 20:06:59.836744 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 8 20:06:59.836755 kernel: audit: type=1403 audit(1728418018.778:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 8 20:06:59.836767 systemd[1]: Successfully loaded SELinux policy in 56.242ms. Oct 8 20:06:59.836796 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.810ms. 
Oct 8 20:06:59.836809 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 8 20:06:59.836821 systemd[1]: Detected virtualization kvm. Oct 8 20:06:59.836835 systemd[1]: Detected architecture x86-64. Oct 8 20:06:59.836847 systemd[1]: Detected first boot. Oct 8 20:06:59.836861 systemd[1]: Hostname set to . Oct 8 20:06:59.836872 systemd[1]: Initializing machine ID from VM UUID. Oct 8 20:06:59.836884 zram_generator::config[1055]: No configuration found. Oct 8 20:06:59.836902 systemd[1]: Populated /etc with preset unit settings. Oct 8 20:06:59.836914 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 8 20:06:59.836926 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 8 20:06:59.836939 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 8 20:06:59.836953 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 8 20:06:59.836964 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 8 20:06:59.836976 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 8 20:06:59.836988 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 8 20:06:59.837000 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 8 20:06:59.837038 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 8 20:06:59.837050 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 8 20:06:59.837070 systemd[1]: Created slice user.slice - User and Session Slice. Oct 8 20:06:59.837095 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 20:06:59.837107 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 20:06:59.837118 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 8 20:06:59.837130 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 8 20:06:59.837142 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 8 20:06:59.837156 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 8 20:06:59.837171 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 8 20:06:59.837183 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 20:06:59.837197 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 8 20:06:59.837209 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 8 20:06:59.837221 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 8 20:06:59.837232 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 8 20:06:59.837244 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 20:06:59.837258 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 8 20:06:59.837272 systemd[1]: Reached target slices.target - Slice Units. 
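[Editor's note] "Initializing machine ID from VM UUID" above is the first-boot path on KVM guests: rather than generating a random ID, systemd derives /etc/machine-id from the product UUID the hypervisor exposes via DMI. A rough sketch of that derivation, assuming /sys/class/dmi/id/product_uuid is readable (it usually requires root); the formatting step just strips the dashes, which is the 32-hex-character shape machine-id expects:

    def machine_id_from_vm_uuid(path="/sys/class/dmi/id/product_uuid"):
        with open(path) as f:
            uuid = f.read().strip()
        mid = uuid.replace("-", "").lower()
        if len(mid) != 32 or any(c not in "0123456789abcdef" for c in mid):
            raise ValueError(f"unexpected product UUID: {uuid!r}")
        return mid

    # print(machine_id_from_vm_uuid())   # typically root-only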
Oct 8 20:06:59.837284 systemd[1]: Reached target swap.target - Swaps. Oct 8 20:06:59.837296 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 8 20:06:59.837307 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 8 20:06:59.837320 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 8 20:06:59.837331 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 8 20:06:59.837343 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 20:06:59.837355 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 8 20:06:59.837367 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 8 20:06:59.837387 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 8 20:06:59.837404 systemd[1]: Mounting media.mount - External Media Directory... Oct 8 20:06:59.837417 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:06:59.837429 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 8 20:06:59.837442 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 8 20:06:59.837456 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 8 20:06:59.837473 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 8 20:06:59.837484 systemd[1]: Reached target machines.target - Containers. Oct 8 20:06:59.837496 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 8 20:06:59.837508 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 20:06:59.837520 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 8 20:06:59.837532 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 8 20:06:59.837544 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 20:06:59.837556 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 8 20:06:59.837568 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 20:06:59.837583 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 8 20:06:59.837595 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 20:06:59.837607 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 8 20:06:59.837619 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 8 20:06:59.837631 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 8 20:06:59.837648 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 8 20:06:59.837660 systemd[1]: Stopped systemd-fsck-usr.service. Oct 8 20:06:59.837672 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 8 20:06:59.837686 kernel: ACPI: bus type drm_connector registered Oct 8 20:06:59.837697 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 8 20:06:59.837709 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Oct 8 20:06:59.837721 kernel: loop: module loaded Oct 8 20:06:59.837732 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 8 20:06:59.837744 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 20:06:59.837756 systemd[1]: verity-setup.service: Deactivated successfully. Oct 8 20:06:59.837767 systemd[1]: Stopped verity-setup.service. Oct 8 20:06:59.837779 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:06:59.837793 kernel: fuse: init (API version 7.39) Oct 8 20:06:59.837805 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 8 20:06:59.837818 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 8 20:06:59.837847 systemd-journald[1135]: Collecting audit messages is disabled. Oct 8 20:06:59.837871 systemd[1]: Mounted media.mount - External Media Directory. Oct 8 20:06:59.837883 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 8 20:06:59.837895 systemd-journald[1135]: Journal started Oct 8 20:06:59.837917 systemd-journald[1135]: Runtime Journal (/run/log/journal/c5f7039c6f324855b8f3708c26a6ee7f) is 4.8M, max 38.4M, 33.6M free. Oct 8 20:06:59.502263 systemd[1]: Queued start job for default target multi-user.target. Oct 8 20:06:59.524692 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Oct 8 20:06:59.525380 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 8 20:06:59.841661 systemd[1]: Started systemd-journald.service - Journal Service. Oct 8 20:06:59.850340 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 8 20:06:59.851034 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 8 20:06:59.851797 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 8 20:06:59.852567 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 20:06:59.853411 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 8 20:06:59.853578 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 8 20:06:59.854521 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 20:06:59.854683 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 20:06:59.855451 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 8 20:06:59.855602 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 8 20:06:59.856375 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 20:06:59.856546 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 20:06:59.857598 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 8 20:06:59.857779 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 8 20:06:59.858593 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 20:06:59.858829 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 20:06:59.860154 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 8 20:06:59.860969 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 8 20:06:59.862044 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 8 20:06:59.879652 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Oct 8 20:06:59.887804 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 8 20:06:59.894117 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 8 20:06:59.894767 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 8 20:06:59.894803 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 8 20:06:59.898784 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Oct 8 20:06:59.911608 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 8 20:06:59.918719 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 8 20:06:59.919336 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 20:06:59.924102 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 8 20:06:59.925611 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 8 20:06:59.928743 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 8 20:06:59.931137 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 8 20:06:59.931697 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 8 20:06:59.940181 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 20:06:59.942081 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 8 20:06:59.948393 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 8 20:06:59.952822 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 8 20:06:59.954270 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 8 20:06:59.956536 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 8 20:06:59.967975 systemd-journald[1135]: Time spent on flushing to /var/log/journal/c5f7039c6f324855b8f3708c26a6ee7f is 89.208ms for 1137 entries. Oct 8 20:06:59.967975 systemd-journald[1135]: System Journal (/var/log/journal/c5f7039c6f324855b8f3708c26a6ee7f) is 8.0M, max 584.8M, 576.8M free. Oct 8 20:07:00.094749 systemd-journald[1135]: Received client request to flush runtime journal. Oct 8 20:07:00.094798 kernel: loop0: detected capacity change from 0 to 140768 Oct 8 20:07:00.094817 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 8 20:07:00.011649 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 8 20:07:00.013785 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 8 20:07:00.023223 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Oct 8 20:07:00.026830 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 20:07:00.038677 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 8 20:07:00.056032 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:07:00.082235 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
Oct 8 20:07:00.099152 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 8 20:07:00.111909 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 8 20:07:00.116597 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 8 20:07:00.117508 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Oct 8 20:07:00.122032 kernel: loop1: detected capacity change from 0 to 8 Oct 8 20:07:00.130558 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 8 20:07:00.153453 kernel: loop2: detected capacity change from 0 to 142488 Oct 8 20:07:00.164578 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Oct 8 20:07:00.164601 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Oct 8 20:07:00.172826 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 20:07:00.208206 kernel: loop3: detected capacity change from 0 to 210664 Oct 8 20:07:00.251165 kernel: loop4: detected capacity change from 0 to 140768 Oct 8 20:07:00.273042 kernel: loop5: detected capacity change from 0 to 8 Oct 8 20:07:00.276032 kernel: loop6: detected capacity change from 0 to 142488 Oct 8 20:07:00.305166 kernel: loop7: detected capacity change from 0 to 210664 Oct 8 20:07:00.329676 (sd-merge)[1201]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Oct 8 20:07:00.330364 (sd-merge)[1201]: Merged extensions into '/usr'. Oct 8 20:07:00.339627 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)... Oct 8 20:07:00.339777 systemd[1]: Reloading... Oct 8 20:07:00.452123 zram_generator::config[1227]: No configuration found. Oct 8 20:07:00.550427 ldconfig[1170]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 8 20:07:00.611692 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:07:00.668178 systemd[1]: Reloading finished in 327 ms. Oct 8 20:07:00.706786 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 8 20:07:00.713748 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 8 20:07:00.722093 systemd[1]: Starting ensure-sysext.service... Oct 8 20:07:00.724248 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 8 20:07:00.732614 systemd[1]: Reloading requested from client PID 1270 ('systemctl') (unit ensure-sysext.service)... Oct 8 20:07:00.732630 systemd[1]: Reloading... Oct 8 20:07:00.775560 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 8 20:07:00.780465 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 8 20:07:00.788154 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 8 20:07:00.788491 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Oct 8 20:07:00.788566 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Oct 8 20:07:00.793076 zram_generator::config[1294]: No configuration found. Oct 8 20:07:00.798925 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot. 
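[Editor's note] The (sd-merge) lines above show systemd-sysext discovering the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-hetzner' extension images (the kubernetes one is the /etc/extensions/kubernetes.raw symlink written during the files stage) and merging them into /usr and /opt. A small sketch of the discovery part only, assuming some of systemd-sysext's standard search directories; the actual merge is an overlay mount done by systemd-sysext itself and is not reproduced here:

    import os

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def discover_extensions():
        found = []
        for d in SEARCH_DIRS:
            if not os.path.isdir(d):
                continue
            for entry in sorted(os.listdir(d)):
                full = os.path.join(d, entry)
                # raw disk images and plain directory trees are both valid sysexts
                if entry.endswith(".raw") or os.path.isdir(full):
                    found.append(full)
        return found

    print("Using extensions:", ", ".join(discover_extensions()) or "none")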
Oct 8 20:07:00.798939 systemd-tmpfiles[1271]: Skipping /boot Oct 8 20:07:00.832744 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot. Oct 8 20:07:00.832760 systemd-tmpfiles[1271]: Skipping /boot Oct 8 20:07:00.945047 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:07:01.006998 systemd[1]: Reloading finished in 273 ms. Oct 8 20:07:01.024367 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 8 20:07:01.025428 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 8 20:07:01.049205 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 20:07:01.053234 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 8 20:07:01.063297 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 8 20:07:01.072343 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 8 20:07:01.077247 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 20:07:01.081214 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 8 20:07:01.095381 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 8 20:07:01.099131 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:07:01.099371 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 20:07:01.108117 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 20:07:01.118513 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 20:07:01.124228 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 20:07:01.125159 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 20:07:01.125272 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:07:01.126415 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 20:07:01.126613 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 20:07:01.135521 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 20:07:01.136649 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 20:07:01.138525 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:07:01.138738 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 20:07:01.140602 augenrules[1369]: No rules Oct 8 20:07:01.146323 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 20:07:01.146987 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Oct 8 20:07:01.147116 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 8 20:07:01.147208 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:07:01.147969 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 20:07:01.155450 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:07:01.156073 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 20:07:01.163089 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 8 20:07:01.175760 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 20:07:01.177223 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 20:07:01.178136 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:07:01.179339 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 8 20:07:01.180900 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 20:07:01.182167 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 20:07:01.183375 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 20:07:01.183552 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 20:07:01.185254 systemd-udevd[1355]: Using default interface naming scheme 'v255'. Oct 8 20:07:01.190155 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 20:07:01.190586 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 20:07:01.200842 systemd[1]: Finished ensure-sysext.service. Oct 8 20:07:01.204175 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 8 20:07:01.204369 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 8 20:07:01.207897 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 8 20:07:01.207968 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 8 20:07:01.215201 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 8 20:07:01.219156 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 8 20:07:01.219823 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 8 20:07:01.220657 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 8 20:07:01.240096 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 20:07:01.244600 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 8 20:07:01.262166 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 8 20:07:01.275919 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Oct 8 20:07:01.280141 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 8 20:07:01.343231 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 8 20:07:01.394058 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 8 20:07:01.396434 systemd[1]: Reached target time-set.target - System Time Set. Oct 8 20:07:01.411387 systemd-networkd[1395]: lo: Link UP Oct 8 20:07:01.411744 systemd-networkd[1395]: lo: Gained carrier Oct 8 20:07:01.425924 systemd-resolved[1352]: Positive Trust Anchors: Oct 8 20:07:01.426292 systemd-resolved[1352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 8 20:07:01.426373 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 8 20:07:01.433031 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1394) Oct 8 20:07:01.431914 systemd-resolved[1352]: Using system hostname 'ci-4081-1-0-7-2461ba8d61'. Oct 8 20:07:01.433605 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 8 20:07:01.434212 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 8 20:07:01.438215 systemd-networkd[1395]: Enumeration completed Oct 8 20:07:01.439487 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 8 20:07:01.440143 systemd[1]: Reached target network.target - Network. Oct 8 20:07:01.443166 systemd-networkd[1395]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:07:01.443230 systemd-networkd[1395]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 20:07:01.448229 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 8 20:07:01.448842 systemd-networkd[1395]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:07:01.448873 systemd-networkd[1395]: eth1: Link UP Oct 8 20:07:01.448877 systemd-networkd[1395]: eth1: Gained carrier Oct 8 20:07:01.448887 systemd-networkd[1395]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:07:01.459031 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1405) Oct 8 20:07:01.475064 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1394) Oct 8 20:07:01.479121 systemd-networkd[1395]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 8 20:07:01.482785 systemd-timesyncd[1384]: Network configuration changed, trying to establish connection. Oct 8 20:07:01.490777 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Oct 8 20:07:01.498251 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 8 20:07:01.504756 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:07:01.504765 systemd-networkd[1395]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 20:07:01.507229 systemd-timesyncd[1384]: Network configuration changed, trying to establish connection. Oct 8 20:07:01.508614 systemd-networkd[1395]: eth0: Link UP Oct 8 20:07:01.508969 systemd-networkd[1395]: eth0: Gained carrier Oct 8 20:07:01.509132 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:07:01.519025 systemd-timesyncd[1384]: Network configuration changed, trying to establish connection. Oct 8 20:07:01.534287 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 8 20:07:01.554034 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 8 20:07:01.561851 kernel: ACPI: button: Power Button [PWRF] Oct 8 20:07:01.561895 kernel: mousedev: PS/2 mouse device common for all mice Oct 8 20:07:01.584072 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Oct 8 20:07:01.584560 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:07:01.584671 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 20:07:01.591024 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Oct 8 20:07:01.591173 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 20:07:01.594265 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Oct 8 20:07:01.594099 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 20:07:01.601242 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 8 20:07:01.601483 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Oct 8 20:07:01.601686 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 8 20:07:01.600180 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 20:07:01.616027 kernel: Console: switching to colour dummy device 80x25 Oct 8 20:07:01.616363 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 20:07:01.616401 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 8 20:07:01.616415 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:07:01.616872 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 20:07:01.617759 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Oct 8 20:07:01.620060 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Oct 8 20:07:01.620093 kernel: [drm] features: -context_init Oct 8 20:07:01.624074 kernel: [drm] number of scanouts: 1 Oct 8 20:07:01.629778 kernel: [drm] number of cap sets: 0 Oct 8 20:07:01.627496 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 20:07:01.627688 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 20:07:01.628164 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 20:07:01.628356 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 20:07:01.629695 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 8 20:07:01.629744 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 8 20:07:01.636031 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Oct 8 20:07:01.636080 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Oct 8 20:07:01.641034 kernel: EDAC MC: Ver: 3.0.0 Oct 8 20:07:01.656146 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Oct 8 20:07:01.656229 kernel: Console: switching to colour frame buffer device 160x50 Oct 8 20:07:01.658284 systemd-networkd[1395]: eth0: DHCPv4 address 157.90.145.6/32, gateway 172.31.1.1 acquired from 172.31.1.1 Oct 8 20:07:01.663072 systemd-timesyncd[1384]: Network configuration changed, trying to establish connection. Oct 8 20:07:01.672041 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Oct 8 20:07:01.676836 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:07:01.692995 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 20:07:01.693918 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:07:01.704274 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:07:01.707538 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 20:07:01.707888 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:07:01.714252 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:07:01.778562 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:07:01.829300 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 8 20:07:01.837313 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 8 20:07:01.850630 lvm[1455]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 8 20:07:01.889288 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 8 20:07:01.891666 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 8 20:07:01.891780 systemd[1]: Reached target sysinit.target - System Initialization. Oct 8 20:07:01.891983 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 8 20:07:01.893067 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 8 20:07:01.893465 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
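The eth0 lease logged above (157.90.145.6/32 with gateway 172.31.1.1) is a single-address prefix whose gateway lies outside the prefix, which is why the gateway must be reached via an on-link host route rather than an ordinary subnet route. A minimal sketch checking this with the lease values copied from the log:

```python
import ipaddress

# Addresses copied from the eth0 DHCPv4 lease in the log above.
iface = ipaddress.ip_interface("157.90.145.6/32")
gateway = ipaddress.ip_address("172.31.1.1")

print(iface.network.num_addresses)   # 1 - the lease covers only the host address itself
print(gateway in iface.network)      # False - the gateway sits outside the /32
```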
Oct 8 20:07:01.893750 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 8 20:07:01.893846 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 8 20:07:01.893924 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 8 20:07:01.893950 systemd[1]: Reached target paths.target - Path Units. Oct 8 20:07:01.894029 systemd[1]: Reached target timers.target - Timer Units. Oct 8 20:07:01.896204 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 8 20:07:01.900775 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 8 20:07:01.919311 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 8 20:07:01.921157 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 8 20:07:01.921762 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 8 20:07:01.922687 systemd[1]: Reached target sockets.target - Socket Units. Oct 8 20:07:01.925699 systemd[1]: Reached target basic.target - Basic System. Oct 8 20:07:01.926437 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 8 20:07:01.926485 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 8 20:07:01.937488 systemd[1]: Starting containerd.service - containerd container runtime... Oct 8 20:07:01.943206 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 8 20:07:01.947933 lvm[1459]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 8 20:07:01.948230 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 8 20:07:01.959214 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 8 20:07:01.973501 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 8 20:07:01.974283 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 8 20:07:01.979281 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 8 20:07:01.982414 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 8 20:07:01.992155 jq[1463]: false Oct 8 20:07:01.993207 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Oct 8 20:07:02.003210 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 8 20:07:02.006082 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 8 20:07:02.012635 coreos-metadata[1461]: Oct 08 20:07:02.011 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Oct 8 20:07:02.016334 coreos-metadata[1461]: Oct 08 20:07:02.015 INFO Fetch successful Oct 8 20:07:02.016489 coreos-metadata[1461]: Oct 08 20:07:02.016 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Oct 8 20:07:02.016667 coreos-metadata[1461]: Oct 08 20:07:02.016 INFO Fetch successful Oct 8 20:07:02.027246 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 8 20:07:02.030681 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
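The coreos-metadata fetches above (and the SSH-key fetch later in this log) hit the link-local Hetzner metadata service. A minimal sketch of the same requests, usable only from inside such a VM; the endpoints are taken verbatim from the log:

```python
import urllib.request

# Endpoints as logged by coreos-metadata; 169.254.169.254 is the link-local
# metadata service, so these URLs only answer from inside the instance.
BASE = "http://169.254.169.254/hetzner/v1/metadata"
for suffix in ("", "/private-networks", "/public-keys"):
    with urllib.request.urlopen(BASE + suffix, timeout=5) as resp:
        print(f"--- {BASE + suffix} -> HTTP {resp.status}")
        print(resp.read().decode(errors="replace"))
```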
Oct 8 20:07:02.032265 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 8 20:07:02.039232 systemd[1]: Starting update-engine.service - Update Engine... Oct 8 20:07:02.044427 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 8 20:07:02.048894 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 8 20:07:02.055285 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 8 20:07:02.064134 extend-filesystems[1464]: Found loop4 Oct 8 20:07:02.064134 extend-filesystems[1464]: Found loop5 Oct 8 20:07:02.064134 extend-filesystems[1464]: Found loop6 Oct 8 20:07:02.064134 extend-filesystems[1464]: Found loop7 Oct 8 20:07:02.064134 extend-filesystems[1464]: Found sda Oct 8 20:07:02.064134 extend-filesystems[1464]: Found sda1 Oct 8 20:07:02.064134 extend-filesystems[1464]: Found sda2 Oct 8 20:07:02.064134 extend-filesystems[1464]: Found sda3 Oct 8 20:07:02.064134 extend-filesystems[1464]: Found usr Oct 8 20:07:02.064134 extend-filesystems[1464]: Found sda4 Oct 8 20:07:02.064134 extend-filesystems[1464]: Found sda6 Oct 8 20:07:02.064134 extend-filesystems[1464]: Found sda7 Oct 8 20:07:02.064134 extend-filesystems[1464]: Found sda9 Oct 8 20:07:02.064134 extend-filesystems[1464]: Checking size of /dev/sda9 Oct 8 20:07:02.195356 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Oct 8 20:07:02.056299 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 8 20:07:02.069784 dbus-daemon[1462]: [system] SELinux support is enabled Oct 8 20:07:02.204046 update_engine[1476]: I20241008 20:07:02.154183 1476 main.cc:92] Flatcar Update Engine starting Oct 8 20:07:02.204046 update_engine[1476]: I20241008 20:07:02.174376 1476 update_check_scheduler.cc:74] Next update check in 2m11s Oct 8 20:07:02.204364 extend-filesystems[1464]: Resized partition /dev/sda9 Oct 8 20:07:02.207403 jq[1479]: true Oct 8 20:07:02.063788 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 8 20:07:02.215464 extend-filesystems[1491]: resize2fs 1.47.1 (20-May-2024) Oct 8 20:07:02.064527 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 8 20:07:02.219649 tar[1485]: linux-amd64/helm Oct 8 20:07:02.070334 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 8 20:07:02.093457 systemd[1]: motdgen.service: Deactivated successfully. Oct 8 20:07:02.095058 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 8 20:07:02.245190 jq[1494]: true Oct 8 20:07:02.121788 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 8 20:07:02.121820 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 8 20:07:02.132041 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 8 20:07:02.132065 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Oct 8 20:07:02.133539 (ntainerd)[1496]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 8 20:07:02.174119 systemd[1]: Started update-engine.service - Update Engine. Oct 8 20:07:02.186233 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 8 20:07:02.235000 systemd-logind[1474]: New seat seat0. Oct 8 20:07:02.262177 bash[1523]: Updated "/home/core/.ssh/authorized_keys" Oct 8 20:07:02.273409 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 8 20:07:02.278347 systemd-logind[1474]: Watching system buttons on /dev/input/event2 (Power Button) Oct 8 20:07:02.278372 systemd-logind[1474]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 8 20:07:02.284214 systemd[1]: Started systemd-logind.service - User Login Management. Oct 8 20:07:02.300171 systemd[1]: Starting sshkeys.service... Oct 8 20:07:02.319273 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 8 20:07:02.320605 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 8 20:07:02.333568 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1410) Oct 8 20:07:02.391038 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Oct 8 20:07:02.404461 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Oct 8 20:07:02.417600 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Oct 8 20:07:02.431715 extend-filesystems[1491]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Oct 8 20:07:02.431715 extend-filesystems[1491]: old_desc_blocks = 1, new_desc_blocks = 5 Oct 8 20:07:02.431715 extend-filesystems[1491]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Oct 8 20:07:02.436711 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 8 20:07:02.446891 extend-filesystems[1464]: Resized filesystem in /dev/sda9 Oct 8 20:07:02.446891 extend-filesystems[1464]: Found sr0 Oct 8 20:07:02.438138 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 8 20:07:02.475399 coreos-metadata[1544]: Oct 08 20:07:02.474 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Oct 8 20:07:02.477591 coreos-metadata[1544]: Oct 08 20:07:02.476 INFO Fetch successful Oct 8 20:07:02.478029 unknown[1544]: wrote ssh authorized keys file for user: core Oct 8 20:07:02.503823 locksmithd[1507]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 8 20:07:02.510201 sshd_keygen[1501]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 8 20:07:02.523426 update-ssh-keys[1550]: Updated "/home/core/.ssh/authorized_keys" Oct 8 20:07:02.525693 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Oct 8 20:07:02.529767 systemd[1]: Finished sshkeys.service. Oct 8 20:07:02.557213 systemd-networkd[1395]: eth1: Gained IPv6LL Oct 8 20:07:02.557991 systemd-timesyncd[1384]: Network configuration changed, trying to establish connection. Oct 8 20:07:02.565113 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 8 20:07:02.569224 systemd[1]: Reached target network-online.target - Network is Online. 
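For scale, the online resize reported above grows the root filesystem from 1,617,920 to 9,393,147 blocks of 4 KiB each. A small worked computation of what that means in GiB, using the block counts from the EXT4/resize2fs messages:

```python
# Block counts from the EXT4/resize2fs messages above; the filesystem uses 4 KiB blocks.
BLOCK_SIZE = 4096
before, after = 1_617_920, 9_393_147

def to_gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {to_gib(before):6.2f} GiB")          # ~6.17 GiB
print(f"after:  {to_gib(after):6.2f} GiB")           # ~35.83 GiB
print(f"growth: {to_gib(after - before):6.2f} GiB")
```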
Oct 8 20:07:02.581445 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:07:02.591329 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 8 20:07:02.595061 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 8 20:07:02.616459 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 8 20:07:02.667355 containerd[1496]: time="2024-10-08T20:07:02.666286798Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Oct 8 20:07:02.667720 systemd[1]: issuegen.service: Deactivated successfully. Oct 8 20:07:02.667979 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 8 20:07:02.685415 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 8 20:07:02.699651 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 8 20:07:02.708094 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 8 20:07:02.722407 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 8 20:07:02.726789 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 8 20:07:02.727544 systemd[1]: Reached target getty.target - Login Prompts. Oct 8 20:07:02.755380 containerd[1496]: time="2024-10-08T20:07:02.755079513Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:07:02.757471 containerd[1496]: time="2024-10-08T20:07:02.757418754Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:07:02.758156 containerd[1496]: time="2024-10-08T20:07:02.757656200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 8 20:07:02.758156 containerd[1496]: time="2024-10-08T20:07:02.757680936Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 8 20:07:02.758156 containerd[1496]: time="2024-10-08T20:07:02.757904205Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 8 20:07:02.758156 containerd[1496]: time="2024-10-08T20:07:02.757926657Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 8 20:07:02.758156 containerd[1496]: time="2024-10-08T20:07:02.758046382Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:07:02.758156 containerd[1496]: time="2024-10-08T20:07:02.758065297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:07:02.758560 containerd[1496]: time="2024-10-08T20:07:02.758535580Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:07:02.758629 containerd[1496]: time="2024-10-08T20:07:02.758613617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Oct 8 20:07:02.758703 containerd[1496]: time="2024-10-08T20:07:02.758685501Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:07:02.759364 containerd[1496]: time="2024-10-08T20:07:02.758737098Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 8 20:07:02.759364 containerd[1496]: time="2024-10-08T20:07:02.758873173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:07:02.759364 containerd[1496]: time="2024-10-08T20:07:02.759320753Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:07:02.759631 containerd[1496]: time="2024-10-08T20:07:02.759608874Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:07:02.759696 containerd[1496]: time="2024-10-08T20:07:02.759680559Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 8 20:07:02.759893 containerd[1496]: time="2024-10-08T20:07:02.759870004Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 8 20:07:02.760028 containerd[1496]: time="2024-10-08T20:07:02.759992324Z" level=info msg="metadata content store policy set" policy=shared Oct 8 20:07:02.771816 containerd[1496]: time="2024-10-08T20:07:02.771762163Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 8 20:07:02.773232 containerd[1496]: time="2024-10-08T20:07:02.773198189Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 8 20:07:02.773402 containerd[1496]: time="2024-10-08T20:07:02.773377726Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 8 20:07:02.774092 containerd[1496]: time="2024-10-08T20:07:02.773469668Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 8 20:07:02.774092 containerd[1496]: time="2024-10-08T20:07:02.773497811Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 8 20:07:02.774092 containerd[1496]: time="2024-10-08T20:07:02.773721752Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 8 20:07:02.776385 containerd[1496]: time="2024-10-08T20:07:02.776304699Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 8 20:07:02.778718 containerd[1496]: time="2024-10-08T20:07:02.778669017Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 8 20:07:02.779031 containerd[1496]: time="2024-10-08T20:07:02.778991582Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 8 20:07:02.781047 containerd[1496]: time="2024-10-08T20:07:02.779112679Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Oct 8 20:07:02.781047 containerd[1496]: time="2024-10-08T20:07:02.779139109Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 8 20:07:02.781047 containerd[1496]: time="2024-10-08T20:07:02.779160880Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 8 20:07:02.781047 containerd[1496]: time="2024-10-08T20:07:02.779177391Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 8 20:07:02.781047 containerd[1496]: time="2024-10-08T20:07:02.779196867Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 8 20:07:02.781047 containerd[1496]: time="2024-10-08T20:07:02.779220401Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 8 20:07:02.781047 containerd[1496]: time="2024-10-08T20:07:02.779237633Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 8 20:07:02.781047 containerd[1496]: time="2024-10-08T20:07:02.779253854Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 8 20:07:02.781047 containerd[1496]: time="2024-10-08T20:07:02.779270165Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 8 20:07:02.781047 containerd[1496]: time="2024-10-08T20:07:02.779295522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 8 20:07:02.781047 containerd[1496]: time="2024-10-08T20:07:02.779312685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 8 20:07:02.781047 containerd[1496]: time="2024-10-08T20:07:02.779328163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 8 20:07:02.781047 containerd[1496]: time="2024-10-08T20:07:02.779344634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 8 20:07:02.781047 containerd[1496]: time="2024-10-08T20:07:02.779365684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 8 20:07:02.781476 containerd[1496]: time="2024-10-08T20:07:02.779387956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 8 20:07:02.781476 containerd[1496]: time="2024-10-08T20:07:02.779404297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 8 20:07:02.781476 containerd[1496]: time="2024-10-08T20:07:02.779421499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 8 20:07:02.781476 containerd[1496]: time="2024-10-08T20:07:02.779438601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 8 20:07:02.781476 containerd[1496]: time="2024-10-08T20:07:02.779491881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 8 20:07:02.781476 containerd[1496]: time="2024-10-08T20:07:02.779508362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Oct 8 20:07:02.781476 containerd[1496]: time="2024-10-08T20:07:02.779524071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 8 20:07:02.781476 containerd[1496]: time="2024-10-08T20:07:02.779539831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 8 20:07:02.781476 containerd[1496]: time="2024-10-08T20:07:02.779560059Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 8 20:07:02.781476 containerd[1496]: time="2024-10-08T20:07:02.779593191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 8 20:07:02.781476 containerd[1496]: time="2024-10-08T20:07:02.779608720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 8 20:07:02.781476 containerd[1496]: time="2024-10-08T20:07:02.779623728Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 8 20:07:02.781476 containerd[1496]: time="2024-10-08T20:07:02.779691375Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 8 20:07:02.781476 containerd[1496]: time="2024-10-08T20:07:02.779714528Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 8 20:07:02.781839 containerd[1496]: time="2024-10-08T20:07:02.779730899Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 8 20:07:02.781839 containerd[1496]: time="2024-10-08T20:07:02.779747360Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 8 20:07:02.781839 containerd[1496]: time="2024-10-08T20:07:02.779760625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 8 20:07:02.781839 containerd[1496]: time="2024-10-08T20:07:02.779792815Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 8 20:07:02.781839 containerd[1496]: time="2024-10-08T20:07:02.779813424Z" level=info msg="NRI interface is disabled by configuration." Oct 8 20:07:02.781839 containerd[1496]: time="2024-10-08T20:07:02.779828012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 8 20:07:02.782034 containerd[1496]: time="2024-10-08T20:07:02.780200400Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 8 20:07:02.782034 containerd[1496]: time="2024-10-08T20:07:02.780275752Z" level=info msg="Connect containerd service" Oct 8 20:07:02.782034 containerd[1496]: time="2024-10-08T20:07:02.780327589Z" level=info msg="using legacy CRI server" Oct 8 20:07:02.782034 containerd[1496]: time="2024-10-08T20:07:02.780337388Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 8 20:07:02.782034 containerd[1496]: time="2024-10-08T20:07:02.780435642Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 8 20:07:02.783107 containerd[1496]: time="2024-10-08T20:07:02.783064346Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 20:07:02.783808 
containerd[1496]: time="2024-10-08T20:07:02.783334222Z" level=info msg="Start subscribing containerd event" Oct 8 20:07:02.783808 containerd[1496]: time="2024-10-08T20:07:02.783415655Z" level=info msg="Start recovering state" Oct 8 20:07:02.783808 containerd[1496]: time="2024-10-08T20:07:02.783504191Z" level=info msg="Start event monitor" Oct 8 20:07:02.783808 containerd[1496]: time="2024-10-08T20:07:02.783517035Z" level=info msg="Start snapshots syncer" Oct 8 20:07:02.783808 containerd[1496]: time="2024-10-08T20:07:02.783527896Z" level=info msg="Start cni network conf syncer for default" Oct 8 20:07:02.783808 containerd[1496]: time="2024-10-08T20:07:02.783534528Z" level=info msg="Start streaming server" Oct 8 20:07:02.785258 containerd[1496]: time="2024-10-08T20:07:02.784665100Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 8 20:07:02.785258 containerd[1496]: time="2024-10-08T20:07:02.784874503Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 8 20:07:02.785258 containerd[1496]: time="2024-10-08T20:07:02.784959923Z" level=info msg="containerd successfully booted in 0.121165s" Oct 8 20:07:02.785402 systemd[1]: Started containerd.service - containerd container runtime. Oct 8 20:07:02.966217 tar[1485]: linux-amd64/LICENSE Oct 8 20:07:02.966217 tar[1485]: linux-amd64/README.md Oct 8 20:07:02.980325 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 8 20:07:03.133620 systemd-networkd[1395]: eth0: Gained IPv6LL Oct 8 20:07:03.134187 systemd-timesyncd[1384]: Network configuration changed, trying to establish connection. Oct 8 20:07:03.634343 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:07:03.642337 (kubelet)[1592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:07:03.643575 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 8 20:07:03.647358 systemd[1]: Startup finished in 1.497s (kernel) + 8.035s (initrd) + 4.924s (userspace) = 14.457s. Oct 8 20:07:04.374517 kubelet[1592]: E1008 20:07:04.374406 1592 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:07:04.381289 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:07:04.381679 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:07:04.382362 systemd[1]: kubelet.service: Consumed 1.195s CPU time. Oct 8 20:07:14.632282 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 8 20:07:14.639314 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:07:14.847098 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
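Every kubelet restart from here on fails the same way: /var/lib/kubelet/config.yaml does not exist yet, so the service exits with status 1 and systemd reschedules it roughly every ten seconds. A minimal diagnostic sketch for that one condition; the note about kubeadm is general background, not something this log shows:

```python
from pathlib import Path

# The file every kubelet restart in this log trips over.
cfg = Path("/var/lib/kubelet/config.yaml")
if cfg.is_file():
    print(f"{cfg} present, {cfg.stat().st_size} bytes")
else:
    print(f"{cfg} missing - kubelet will keep exiting with status 1 until "
          "something (typically kubeadm init/join) writes it")
```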
Oct 8 20:07:14.873431 (kubelet)[1613]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:07:14.937926 kubelet[1613]: E1008 20:07:14.937768 1613 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:07:14.946050 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:07:14.946367 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:07:25.076431 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 8 20:07:25.083337 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:07:25.243199 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:07:25.244320 (kubelet)[1629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:07:25.293297 kubelet[1629]: E1008 20:07:25.293245 1629 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:07:25.298053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:07:25.298261 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:07:34.045645 systemd-timesyncd[1384]: Contacted time server 168.119.211.223:123 (2.flatcar.pool.ntp.org). Oct 8 20:07:34.045726 systemd-timesyncd[1384]: Initial clock synchronization to Tue 2024-10-08 20:07:34.045425 UTC. Oct 8 20:07:34.045902 systemd-resolved[1352]: Clock change detected. Flushing caches. Oct 8 20:07:35.929542 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 8 20:07:35.935189 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:07:36.099802 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:07:36.104418 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:07:36.150866 kubelet[1645]: E1008 20:07:36.150793 1645 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:07:36.155867 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:07:36.156087 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:07:46.179688 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 8 20:07:46.188293 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:07:46.379773 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 20:07:46.385200 (kubelet)[1661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:07:46.425349 kubelet[1661]: E1008 20:07:46.425269 1661 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:07:46.433214 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:07:46.433429 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:07:47.838436 update_engine[1476]: I20241008 20:07:47.838299 1476 update_attempter.cc:509] Updating boot flags... Oct 8 20:07:47.935873 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1678) Oct 8 20:07:48.002057 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1680) Oct 8 20:07:48.048906 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1680) Oct 8 20:07:56.679566 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Oct 8 20:07:56.686260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:07:56.853880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:07:56.858443 (kubelet)[1698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:07:56.899672 kubelet[1698]: E1008 20:07:56.899583 1698 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:07:56.907021 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:07:56.907272 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:08:06.930565 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Oct 8 20:08:06.939357 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:08:07.163184 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:08:07.168309 (kubelet)[1714]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:08:07.213935 kubelet[1714]: E1008 20:08:07.213708 1714 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:08:07.218511 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:08:07.218719 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:08:17.430221 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Oct 8 20:08:17.437199 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 8 20:08:17.645647 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:08:17.650057 (kubelet)[1731]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:08:17.684192 kubelet[1731]: E1008 20:08:17.684021 1731 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:08:17.692211 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:08:17.692422 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:08:27.930178 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Oct 8 20:08:27.937137 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:08:28.104736 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:08:28.118183 (kubelet)[1747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:08:28.156961 kubelet[1747]: E1008 20:08:28.156873 1747 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:08:28.161792 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:08:28.162034 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:08:38.180233 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Oct 8 20:08:38.192141 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:08:38.385108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:08:38.387998 (kubelet)[1763]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:08:38.449512 kubelet[1763]: E1008 20:08:38.449302 1763 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:08:38.454471 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:08:38.454897 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:08:48.680287 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Oct 8 20:08:48.687148 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:08:48.906812 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 20:08:48.910977 (kubelet)[1779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:08:48.961550 kubelet[1779]: E1008 20:08:48.961392 1779 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:08:48.965148 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:08:48.965557 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:08:59.180392 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Oct 8 20:08:59.187195 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:08:59.364353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:08:59.369035 (kubelet)[1796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:08:59.418546 kubelet[1796]: E1008 20:08:59.418421 1796 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:08:59.424761 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:08:59.425379 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:09:09.430457 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Oct 8 20:09:09.440241 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:09:09.606143 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:09:09.607325 (kubelet)[1813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:09:09.648666 kubelet[1813]: E1008 20:09:09.648590 1813 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:09:09.653818 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:09:09.654048 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:09:13.880408 update_engine[1476]: I20241008 20:09:13.880293 1476 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Oct 8 20:09:13.880408 update_engine[1476]: I20241008 20:09:13.880360 1476 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Oct 8 20:09:13.881233 update_engine[1476]: I20241008 20:09:13.880609 1476 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Oct 8 20:09:13.881449 update_engine[1476]: I20241008 20:09:13.881373 1476 omaha_request_params.cc:62] Current group set to beta Oct 8 20:09:13.881524 update_engine[1476]: I20241008 20:09:13.881508 1476 update_attempter.cc:499] Already updated boot flags. Skipping. 
Oct 8 20:09:13.881571 update_engine[1476]: I20241008 20:09:13.881522 1476 update_attempter.cc:643] Scheduling an action processor start. Oct 8 20:09:13.881571 update_engine[1476]: I20241008 20:09:13.881542 1476 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 8 20:09:13.881655 update_engine[1476]: I20241008 20:09:13.881577 1476 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Oct 8 20:09:13.881702 update_engine[1476]: I20241008 20:09:13.881649 1476 omaha_request_action.cc:271] Posting an Omaha request to disabled Oct 8 20:09:13.881702 update_engine[1476]: I20241008 20:09:13.881662 1476 omaha_request_action.cc:272] Request: Oct 8 20:09:13.881702 update_engine[1476]: Oct 8 20:09:13.881702 update_engine[1476]: Oct 8 20:09:13.881702 update_engine[1476]: Oct 8 20:09:13.881702 update_engine[1476]: Oct 8 20:09:13.881702 update_engine[1476]: Oct 8 20:09:13.881702 update_engine[1476]: Oct 8 20:09:13.881702 update_engine[1476]: Oct 8 20:09:13.881702 update_engine[1476]: Oct 8 20:09:13.881702 update_engine[1476]: I20241008 20:09:13.881672 1476 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 8 20:09:13.882205 locksmithd[1507]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Oct 8 20:09:13.883337 update_engine[1476]: I20241008 20:09:13.883280 1476 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 8 20:09:13.883621 update_engine[1476]: I20241008 20:09:13.883574 1476 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 8 20:09:13.884690 update_engine[1476]: E20241008 20:09:13.884646 1476 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 8 20:09:13.884800 update_engine[1476]: I20241008 20:09:13.884704 1476 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Oct 8 20:09:19.679889 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Oct 8 20:09:19.692159 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:09:19.876902 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:09:19.897335 (kubelet)[1829]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:09:19.937782 kubelet[1829]: E1008 20:09:19.937633 1829 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:09:19.942141 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:09:19.942356 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:09:23.804979 update_engine[1476]: I20241008 20:09:23.804793 1476 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 8 20:09:23.805717 update_engine[1476]: I20241008 20:09:23.805262 1476 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 8 20:09:23.805717 update_engine[1476]: I20241008 20:09:23.805599 1476 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 8 20:09:23.806350 update_engine[1476]: E20241008 20:09:23.806293 1476 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 8 20:09:23.806427 update_engine[1476]: I20241008 20:09:23.806380 1476 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Oct 8 20:09:30.179558 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Oct 8 20:09:30.185983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:09:30.349774 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:09:30.355071 (kubelet)[1845]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:09:30.395875 kubelet[1845]: E1008 20:09:30.395797 1845 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:09:30.404138 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:09:30.404351 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:09:33.804662 update_engine[1476]: I20241008 20:09:33.804494 1476 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 8 20:09:33.805443 update_engine[1476]: I20241008 20:09:33.804863 1476 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 8 20:09:33.805443 update_engine[1476]: I20241008 20:09:33.805043 1476 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 8 20:09:33.805745 update_engine[1476]: E20241008 20:09:33.805706 1476 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 8 20:09:33.805803 update_engine[1476]: I20241008 20:09:33.805748 1476 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Oct 8 20:09:40.430080 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 15. Oct 8 20:09:40.436174 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:09:40.644691 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:09:40.649035 (kubelet)[1862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:09:40.689726 kubelet[1862]: E1008 20:09:40.689533 1862 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:09:40.697612 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:09:40.697843 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:09:43.805006 update_engine[1476]: I20241008 20:09:43.804877 1476 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 8 20:09:43.805805 update_engine[1476]: I20241008 20:09:43.805150 1476 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 8 20:09:43.805805 update_engine[1476]: I20241008 20:09:43.805341 1476 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 8 20:09:43.806061 update_engine[1476]: E20241008 20:09:43.806012 1476 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 8 20:09:43.806061 update_engine[1476]: I20241008 20:09:43.806058 1476 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Oct 8 20:09:43.806182 update_engine[1476]: I20241008 20:09:43.806068 1476 omaha_request_action.cc:617] Omaha request response: Oct 8 20:09:43.806182 update_engine[1476]: E20241008 20:09:43.806159 1476 omaha_request_action.cc:636] Omaha request network transfer failed. Oct 8 20:09:43.806182 update_engine[1476]: I20241008 20:09:43.806177 1476 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Oct 8 20:09:43.806364 update_engine[1476]: I20241008 20:09:43.806185 1476 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 8 20:09:43.806364 update_engine[1476]: I20241008 20:09:43.806192 1476 update_attempter.cc:306] Processing Done. Oct 8 20:09:43.806364 update_engine[1476]: E20241008 20:09:43.806207 1476 update_attempter.cc:619] Update failed. Oct 8 20:09:43.806364 update_engine[1476]: I20241008 20:09:43.806214 1476 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Oct 8 20:09:43.806364 update_engine[1476]: I20241008 20:09:43.806221 1476 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Oct 8 20:09:43.806364 update_engine[1476]: I20241008 20:09:43.806228 1476 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Oct 8 20:09:43.806364 update_engine[1476]: I20241008 20:09:43.806299 1476 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 8 20:09:43.806364 update_engine[1476]: I20241008 20:09:43.806317 1476 omaha_request_action.cc:271] Posting an Omaha request to disabled Oct 8 20:09:43.806364 update_engine[1476]: I20241008 20:09:43.806324 1476 omaha_request_action.cc:272] Request: Oct 8 20:09:43.806364 update_engine[1476]: I20241008 20:09:43.806331 1476 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 8 20:09:43.807491 update_engine[1476]: I20241008 20:09:43.806454 1476 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 8 20:09:43.807491 update_engine[1476]: I20241008 20:09:43.806555 1476 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 8 20:09:43.807491 update_engine[1476]: E20241008 20:09:43.807291 1476 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 8 20:09:43.807491 update_engine[1476]: I20241008 20:09:43.807325 1476 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Oct 8 20:09:43.807491 update_engine[1476]: I20241008 20:09:43.807333 1476 omaha_request_action.cc:617] Omaha request response: Oct 8 20:09:43.807491 update_engine[1476]: I20241008 20:09:43.807340 1476 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 8 20:09:43.807491 update_engine[1476]: I20241008 20:09:43.807348 1476 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 8 20:09:43.807491 update_engine[1476]: I20241008 20:09:43.807353 1476 update_attempter.cc:306] Processing Done. Oct 8 20:09:43.807491 update_engine[1476]: I20241008 20:09:43.807360 1476 update_attempter.cc:310] Error event sent. Oct 8 20:09:43.807491 update_engine[1476]: I20241008 20:09:43.807374 1476 update_check_scheduler.cc:74] Next update check in 45m41s Oct 8 20:09:43.808153 locksmithd[1507]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Oct 8 20:09:43.808153 locksmithd[1507]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Oct 8 20:09:50.929557 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 16. Oct 8 20:09:50.935138 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:09:51.089074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:09:51.093393 (kubelet)[1879]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:09:51.129635 kubelet[1879]: E1008 20:09:51.129594 1879 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:09:51.133880 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:09:51.134088 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:10:01.179821 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 17. Oct 8 20:10:01.189082 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:10:01.349521 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:10:01.354489 (kubelet)[1895]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:10:01.402891 kubelet[1895]: E1008 20:10:01.402759 1895 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:10:01.410605 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:10:01.410923 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 8 20:10:11.429997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 18. Oct 8 20:10:11.438139 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:10:11.632819 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:10:11.647687 (kubelet)[1911]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:10:11.724597 kubelet[1911]: E1008 20:10:11.724455 1911 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:10:11.731647 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:10:11.732079 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:10:21.929703 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 19. Oct 8 20:10:21.935248 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:10:22.148340 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:10:22.152799 (kubelet)[1927]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:10:22.213149 kubelet[1927]: E1008 20:10:22.212927 1927 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:10:22.220108 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:10:22.220556 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:10:32.430287 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20. Oct 8 20:10:32.444159 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:10:32.597259 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:10:32.601495 (kubelet)[1943]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:10:32.638433 kubelet[1943]: E1008 20:10:32.638372 1943 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:10:32.645677 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:10:32.645896 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:10:42.679506 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 21. Oct 8 20:10:42.686170 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:10:42.861572 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 20:10:42.869118 (kubelet)[1960]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:10:42.912947 kubelet[1960]: E1008 20:10:42.912878 1960 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:10:42.917563 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:10:42.917776 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:10:52.930202 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 22. Oct 8 20:10:52.937696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:10:53.097294 (kubelet)[1976]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:10:53.097806 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:10:53.133590 kubelet[1976]: E1008 20:10:53.133448 1976 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:10:53.136549 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:10:53.136950 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:11:03.180178 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 23. Oct 8 20:11:03.187177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:11:03.416125 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:11:03.429251 (kubelet)[1992]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:11:03.489312 kubelet[1992]: E1008 20:11:03.489112 1992 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:11:03.495574 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:11:03.495800 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:11:04.505742 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 8 20:11:04.512379 systemd[1]: Started sshd@0-157.90.145.6:22-147.75.109.163:52706.service - OpenSSH per-connection server daemon (147.75.109.163:52706). Oct 8 20:11:05.516412 sshd[2001]: Accepted publickey for core from 147.75.109.163 port 52706 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:11:05.520591 sshd[2001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:11:05.537913 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 8 20:11:05.549333 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Oct 8 20:11:05.555972 systemd-logind[1474]: New session 1 of user core. Oct 8 20:11:05.578703 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 8 20:11:05.588331 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 8 20:11:05.601466 (systemd)[2005]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 8 20:11:05.754057 systemd[2005]: Queued start job for default target default.target. Oct 8 20:11:05.765235 systemd[2005]: Created slice app.slice - User Application Slice. Oct 8 20:11:05.765261 systemd[2005]: Reached target paths.target - Paths. Oct 8 20:11:05.765274 systemd[2005]: Reached target timers.target - Timers. Oct 8 20:11:05.766987 systemd[2005]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 8 20:11:05.790936 systemd[2005]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 8 20:11:05.791228 systemd[2005]: Reached target sockets.target - Sockets. Oct 8 20:11:05.791268 systemd[2005]: Reached target basic.target - Basic System. Oct 8 20:11:05.791370 systemd[2005]: Reached target default.target - Main User Target. Oct 8 20:11:05.791455 systemd[2005]: Startup finished in 176ms. Oct 8 20:11:05.791700 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 8 20:11:05.798979 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 8 20:11:06.508384 systemd[1]: Started sshd@1-157.90.145.6:22-147.75.109.163:52714.service - OpenSSH per-connection server daemon (147.75.109.163:52714). Oct 8 20:11:07.487814 sshd[2016]: Accepted publickey for core from 147.75.109.163 port 52714 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:11:07.491632 sshd[2016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:11:07.501931 systemd-logind[1474]: New session 2 of user core. Oct 8 20:11:07.510113 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 8 20:11:08.181248 sshd[2016]: pam_unix(sshd:session): session closed for user core Oct 8 20:11:08.189463 systemd[1]: sshd@1-157.90.145.6:22-147.75.109.163:52714.service: Deactivated successfully. Oct 8 20:11:08.192258 systemd[1]: session-2.scope: Deactivated successfully. Oct 8 20:11:08.192783 systemd-logind[1474]: Session 2 logged out. Waiting for processes to exit. Oct 8 20:11:08.193611 systemd-logind[1474]: Removed session 2. Oct 8 20:11:08.348088 systemd[1]: Started sshd@2-157.90.145.6:22-147.75.109.163:35826.service - OpenSSH per-connection server daemon (147.75.109.163:35826). Oct 8 20:11:09.322669 sshd[2023]: Accepted publickey for core from 147.75.109.163 port 35826 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:11:09.325529 sshd[2023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:11:09.333424 systemd-logind[1474]: New session 3 of user core. Oct 8 20:11:09.340124 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 8 20:11:09.999470 sshd[2023]: pam_unix(sshd:session): session closed for user core Oct 8 20:11:10.004227 systemd[1]: sshd@2-157.90.145.6:22-147.75.109.163:35826.service: Deactivated successfully. Oct 8 20:11:10.006732 systemd[1]: session-3.scope: Deactivated successfully. Oct 8 20:11:10.009667 systemd-logind[1474]: Session 3 logged out. Waiting for processes to exit. Oct 8 20:11:10.011292 systemd-logind[1474]: Removed session 3. 
Oct 8 20:11:10.171515 systemd[1]: Started sshd@3-157.90.145.6:22-147.75.109.163:35838.service - OpenSSH per-connection server daemon (147.75.109.163:35838). Oct 8 20:11:11.169657 sshd[2030]: Accepted publickey for core from 147.75.109.163 port 35838 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:11:11.172397 sshd[2030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:11:11.179939 systemd-logind[1474]: New session 4 of user core. Oct 8 20:11:11.188024 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 8 20:11:11.850078 sshd[2030]: pam_unix(sshd:session): session closed for user core Oct 8 20:11:11.855394 systemd[1]: sshd@3-157.90.145.6:22-147.75.109.163:35838.service: Deactivated successfully. Oct 8 20:11:11.859539 systemd[1]: session-4.scope: Deactivated successfully. Oct 8 20:11:11.862346 systemd-logind[1474]: Session 4 logged out. Waiting for processes to exit. Oct 8 20:11:11.863892 systemd-logind[1474]: Removed session 4. Oct 8 20:11:12.033263 systemd[1]: Started sshd@4-157.90.145.6:22-147.75.109.163:35848.service - OpenSSH per-connection server daemon (147.75.109.163:35848). Oct 8 20:11:13.036215 sshd[2037]: Accepted publickey for core from 147.75.109.163 port 35848 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:11:13.039019 sshd[2037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:11:13.045915 systemd-logind[1474]: New session 5 of user core. Oct 8 20:11:13.057030 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 8 20:11:13.589579 sudo[2040]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 8 20:11:13.590418 sudo[2040]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:11:13.592138 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 24. Oct 8 20:11:13.603656 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:11:13.622742 sudo[2040]: pam_unix(sudo:session): session closed for user root Oct 8 20:11:13.787039 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:11:13.788039 (kubelet)[2050]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:11:13.789463 sshd[2037]: pam_unix(sshd:session): session closed for user core Oct 8 20:11:13.793302 systemd[1]: sshd@4-157.90.145.6:22-147.75.109.163:35848.service: Deactivated successfully. Oct 8 20:11:13.796509 systemd[1]: session-5.scope: Deactivated successfully. Oct 8 20:11:13.798734 systemd-logind[1474]: Session 5 logged out. Waiting for processes to exit. Oct 8 20:11:13.800408 systemd-logind[1474]: Removed session 5. Oct 8 20:11:13.831927 kubelet[2050]: E1008 20:11:13.831863 2050 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:11:13.836205 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:11:13.836518 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:11:13.965312 systemd[1]: Started sshd@5-157.90.145.6:22-147.75.109.163:35850.service - OpenSSH per-connection server daemon (147.75.109.163:35850). 
Oct 8 20:11:14.967931 sshd[2061]: Accepted publickey for core from 147.75.109.163 port 35850 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:11:14.970136 sshd[2061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:11:14.977464 systemd-logind[1474]: New session 6 of user core. Oct 8 20:11:14.987127 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 8 20:11:15.505727 sudo[2065]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 8 20:11:15.506482 sudo[2065]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:11:15.513726 sudo[2065]: pam_unix(sudo:session): session closed for user root Oct 8 20:11:15.526600 sudo[2064]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 8 20:11:15.527402 sudo[2064]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:11:15.551158 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 8 20:11:15.557072 auditctl[2068]: No rules Oct 8 20:11:15.558139 systemd[1]: audit-rules.service: Deactivated successfully. Oct 8 20:11:15.558602 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 8 20:11:15.567428 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 20:11:15.617422 augenrules[2086]: No rules Oct 8 20:11:15.618385 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 20:11:15.620961 sudo[2064]: pam_unix(sudo:session): session closed for user root Oct 8 20:11:15.782794 sshd[2061]: pam_unix(sshd:session): session closed for user core Oct 8 20:11:15.786331 systemd[1]: sshd@5-157.90.145.6:22-147.75.109.163:35850.service: Deactivated successfully. Oct 8 20:11:15.788808 systemd[1]: session-6.scope: Deactivated successfully. Oct 8 20:11:15.790708 systemd-logind[1474]: Session 6 logged out. Waiting for processes to exit. Oct 8 20:11:15.792442 systemd-logind[1474]: Removed session 6. Oct 8 20:11:15.963347 systemd[1]: Started sshd@6-157.90.145.6:22-147.75.109.163:35866.service - OpenSSH per-connection server daemon (147.75.109.163:35866). Oct 8 20:11:16.952811 sshd[2094]: Accepted publickey for core from 147.75.109.163 port 35866 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:11:16.955094 sshd[2094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:11:16.961376 systemd-logind[1474]: New session 7 of user core. Oct 8 20:11:16.971026 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 8 20:11:17.488535 sudo[2097]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 8 20:11:17.489257 sudo[2097]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:11:17.901221 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 8 20:11:17.903431 (dockerd)[2113]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 8 20:11:18.389555 dockerd[2113]: time="2024-10-08T20:11:18.389465148Z" level=info msg="Starting up" Oct 8 20:11:18.558513 dockerd[2113]: time="2024-10-08T20:11:18.558435269Z" level=info msg="Loading containers: start." 
Oct 8 20:11:18.699875 kernel: Initializing XFRM netlink socket Oct 8 20:11:18.830061 systemd-networkd[1395]: docker0: Link UP Oct 8 20:11:18.848300 dockerd[2113]: time="2024-10-08T20:11:18.848229070Z" level=info msg="Loading containers: done." Oct 8 20:11:18.864898 dockerd[2113]: time="2024-10-08T20:11:18.864400287Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 8 20:11:18.864898 dockerd[2113]: time="2024-10-08T20:11:18.864506506Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Oct 8 20:11:18.864898 dockerd[2113]: time="2024-10-08T20:11:18.864608588Z" level=info msg="Daemon has completed initialization" Oct 8 20:11:18.864654 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3466862964-merged.mount: Deactivated successfully. Oct 8 20:11:18.905878 dockerd[2113]: time="2024-10-08T20:11:18.905594652Z" level=info msg="API listen on /run/docker.sock" Oct 8 20:11:18.906117 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 8 20:11:20.134278 containerd[1496]: time="2024-10-08T20:11:20.134213600Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\"" Oct 8 20:11:20.843652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1869451796.mount: Deactivated successfully. Oct 8 20:11:22.186986 containerd[1496]: time="2024-10-08T20:11:22.186925932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:11:22.188085 containerd[1496]: time="2024-10-08T20:11:22.188046160Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.5: active requests=0, bytes read=32754189" Oct 8 20:11:22.188703 containerd[1496]: time="2024-10-08T20:11:22.188547591Z" level=info msg="ImageCreate event name:\"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:11:22.191318 containerd[1496]: time="2024-10-08T20:11:22.191281192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:11:22.192685 containerd[1496]: time="2024-10-08T20:11:22.192494697Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.5\" with image id \"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\", size \"32750897\" in 2.058222779s" Oct 8 20:11:22.192685 containerd[1496]: time="2024-10-08T20:11:22.192544820Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\" returns image reference \"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\"" Oct 8 20:11:22.217135 containerd[1496]: time="2024-10-08T20:11:22.217084561Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\"" Oct 8 20:11:23.786398 containerd[1496]: time="2024-10-08T20:11:23.785593307Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.5: active requests=0, bytes read=29591672" Oct 8 20:11:23.786398 containerd[1496]: 
time="2024-10-08T20:11:23.785802829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:11:23.788516 containerd[1496]: time="2024-10-08T20:11:23.788483062Z" level=info msg="ImageCreate event name:\"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:11:23.789469 containerd[1496]: time="2024-10-08T20:11:23.789426289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:11:23.791411 containerd[1496]: time="2024-10-08T20:11:23.790676171Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.5\" with image id \"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\", size \"31122208\" in 1.573555582s" Oct 8 20:11:23.791411 containerd[1496]: time="2024-10-08T20:11:23.790708832Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\" returns image reference \"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\"" Oct 8 20:11:23.813642 containerd[1496]: time="2024-10-08T20:11:23.813593492Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\"" Oct 8 20:11:23.930083 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 25. Oct 8 20:11:23.938111 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:11:24.122027 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:11:24.124116 (kubelet)[2332]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:11:24.167724 kubelet[2332]: E1008 20:11:24.167651 2332 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:11:24.172673 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:11:24.172940 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 8 20:11:24.790708 containerd[1496]: time="2024-10-08T20:11:24.790628053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:11:24.791645 containerd[1496]: time="2024-10-08T20:11:24.791525755Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.5: active requests=0, bytes read=17780007" Oct 8 20:11:24.792226 containerd[1496]: time="2024-10-08T20:11:24.792122453Z" level=info msg="ImageCreate event name:\"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:11:24.794508 containerd[1496]: time="2024-10-08T20:11:24.794475222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:11:24.795602 containerd[1496]: time="2024-10-08T20:11:24.795486687Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.5\" with image id \"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\", size \"19310561\" in 981.852789ms" Oct 8 20:11:24.795602 containerd[1496]: time="2024-10-08T20:11:24.795514750Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\" returns image reference \"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\"" Oct 8 20:11:24.818458 containerd[1496]: time="2024-10-08T20:11:24.817905434Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\"" Oct 8 20:11:26.090042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3532735958.mount: Deactivated successfully. 
Oct 8 20:11:26.469458 containerd[1496]: time="2024-10-08T20:11:26.469367222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:11:26.470455 containerd[1496]: time="2024-10-08T20:11:26.470301072Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.5: active requests=0, bytes read=29039388" Oct 8 20:11:26.471298 containerd[1496]: time="2024-10-08T20:11:26.471251483Z" level=info msg="ImageCreate event name:\"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:11:26.473474 containerd[1496]: time="2024-10-08T20:11:26.473433512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:11:26.474173 containerd[1496]: time="2024-10-08T20:11:26.474138363Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.5\" with image id \"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\", repo tag \"registry.k8s.io/kube-proxy:v1.30.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\", size \"29038381\" in 1.656194176s" Oct 8 20:11:26.474226 containerd[1496]: time="2024-10-08T20:11:26.474177746Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\" returns image reference \"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\"" Oct 8 20:11:26.501300 containerd[1496]: time="2024-10-08T20:11:26.501230678Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 8 20:11:27.087546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1761800954.mount: Deactivated successfully. 
Oct 8 20:11:27.863063 containerd[1496]: time="2024-10-08T20:11:27.862995286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:11:27.864210 containerd[1496]: time="2024-10-08T20:11:27.864162112Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185841" Oct 8 20:11:27.865461 containerd[1496]: time="2024-10-08T20:11:27.865419890Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:11:27.868318 containerd[1496]: time="2024-10-08T20:11:27.868235395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:11:27.869363 containerd[1496]: time="2024-10-08T20:11:27.869233154Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.367965688s" Oct 8 20:11:27.869363 containerd[1496]: time="2024-10-08T20:11:27.869261387Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Oct 8 20:11:27.894333 containerd[1496]: time="2024-10-08T20:11:27.893733975Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Oct 8 20:11:28.416383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3468427959.mount: Deactivated successfully. 
Oct 8 20:11:28.421398 containerd[1496]: time="2024-10-08T20:11:28.421337416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:11:28.422081 containerd[1496]: time="2024-10-08T20:11:28.422035743Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322310" Oct 8 20:11:28.422584 containerd[1496]: time="2024-10-08T20:11:28.422564966Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:11:28.424428 containerd[1496]: time="2024-10-08T20:11:28.424379215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:11:28.425166 containerd[1496]: time="2024-10-08T20:11:28.425140201Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 531.36575ms" Oct 8 20:11:28.425221 containerd[1496]: time="2024-10-08T20:11:28.425167632Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Oct 8 20:11:28.448162 containerd[1496]: time="2024-10-08T20:11:28.447999215Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Oct 8 20:11:29.030206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount670086914.mount: Deactivated successfully. Oct 8 20:11:30.489220 containerd[1496]: time="2024-10-08T20:11:30.489104763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:11:30.490503 containerd[1496]: time="2024-10-08T20:11:30.490457077Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238651" Oct 8 20:11:30.491526 containerd[1496]: time="2024-10-08T20:11:30.491477018Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:11:30.494438 containerd[1496]: time="2024-10-08T20:11:30.494394435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:11:30.496578 containerd[1496]: time="2024-10-08T20:11:30.495627535Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.047595419s" Oct 8 20:11:30.496578 containerd[1496]: time="2024-10-08T20:11:30.495666218Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Oct 8 20:11:33.201936 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 20:11:33.216319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:11:33.253417 systemd[1]: Reloading requested from client PID 2533 ('systemctl') (unit session-7.scope)... Oct 8 20:11:33.253615 systemd[1]: Reloading... Oct 8 20:11:33.401861 zram_generator::config[2574]: No configuration found. Oct 8 20:11:33.532375 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:11:33.628134 systemd[1]: Reloading finished in 373 ms. Oct 8 20:11:33.683809 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 8 20:11:33.683950 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 8 20:11:33.684291 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:11:33.689120 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:11:33.859105 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:11:33.859195 (kubelet)[2626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 20:11:33.907192 kubelet[2626]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:11:33.907192 kubelet[2626]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 20:11:33.907192 kubelet[2626]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:11:33.910724 kubelet[2626]: I1008 20:11:33.909289 2626 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 20:11:34.267504 kubelet[2626]: I1008 20:11:34.267445 2626 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Oct 8 20:11:34.267504 kubelet[2626]: I1008 20:11:34.267473 2626 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 20:11:34.267771 kubelet[2626]: I1008 20:11:34.267654 2626 server.go:927] "Client rotation is on, will bootstrap in background" Oct 8 20:11:34.294531 kubelet[2626]: I1008 20:11:34.294271 2626 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 20:11:34.297197 kubelet[2626]: E1008 20:11:34.296899 2626 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://157.90.145.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 157.90.145.6:6443: connect: connection refused Oct 8 20:11:34.318851 kubelet[2626]: I1008 20:11:34.318779 2626 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 8 20:11:34.320100 kubelet[2626]: I1008 20:11:34.320061 2626 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 20:11:34.320301 kubelet[2626]: I1008 20:11:34.320094 2626 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-1-0-7-2461ba8d61","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 8 20:11:34.320301 kubelet[2626]: I1008 20:11:34.320261 2626 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 20:11:34.320301 kubelet[2626]: I1008 20:11:34.320286 2626 container_manager_linux.go:301] "Creating device plugin manager" Oct 8 20:11:34.320588 kubelet[2626]: I1008 20:11:34.320413 2626 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:11:34.322030 kubelet[2626]: W1008 20:11:34.321917 2626 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://157.90.145.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-7-2461ba8d61&limit=500&resourceVersion=0": dial tcp 157.90.145.6:6443: connect: connection refused Oct 8 20:11:34.322030 kubelet[2626]: E1008 20:11:34.322001 2626 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://157.90.145.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-7-2461ba8d61&limit=500&resourceVersion=0": dial tcp 157.90.145.6:6443: connect: connection refused Oct 8 20:11:34.323007 kubelet[2626]: I1008 20:11:34.322978 2626 kubelet.go:400] "Attempting to sync node with API server" Oct 8 20:11:34.323007 kubelet[2626]: I1008 20:11:34.322999 2626 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 20:11:34.323095 kubelet[2626]: I1008 20:11:34.323029 2626 kubelet.go:312] "Adding apiserver pod source" Oct 8 20:11:34.323095 kubelet[2626]: I1008 20:11:34.323048 2626 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 20:11:34.326954 kubelet[2626]: W1008 20:11:34.326231 2626 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://157.90.145.6:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 157.90.145.6:6443: connect: connection refused Oct 8 20:11:34.326954 kubelet[2626]: E1008 20:11:34.326267 2626 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://157.90.145.6:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 157.90.145.6:6443: connect: connection refused Oct 8 20:11:34.326954 kubelet[2626]: I1008 20:11:34.326665 2626 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 20:11:34.329309 kubelet[2626]: I1008 20:11:34.328731 2626 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 20:11:34.329309 kubelet[2626]: W1008 20:11:34.328811 2626 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 8 20:11:34.329761 kubelet[2626]: I1008 20:11:34.329740 2626 server.go:1264] "Started kubelet" Oct 8 20:11:34.333106 kubelet[2626]: I1008 20:11:34.333086 2626 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 20:11:34.337252 kubelet[2626]: I1008 20:11:34.337237 2626 server.go:455] "Adding debug handlers to kubelet server" Oct 8 20:11:34.341736 kubelet[2626]: I1008 20:11:34.340758 2626 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 20:11:34.341736 kubelet[2626]: I1008 20:11:34.341218 2626 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 20:11:34.341929 kubelet[2626]: E1008 20:11:34.341781 2626 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://157.90.145.6:6443/api/v1/namespaces/default/events\": dial tcp 157.90.145.6:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-1-0-7-2461ba8d61.17fc9347f8cf2a53 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-1-0-7-2461ba8d61,UID:ci-4081-1-0-7-2461ba8d61,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-1-0-7-2461ba8d61,},FirstTimestamp:2024-10-08 20:11:34.329715283 +0000 UTC m=+0.464343242,LastTimestamp:2024-10-08 20:11:34.329715283 +0000 UTC m=+0.464343242,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-1-0-7-2461ba8d61,}" Oct 8 20:11:34.342292 kubelet[2626]: I1008 20:11:34.342280 2626 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 20:11:34.342477 kubelet[2626]: I1008 20:11:34.342455 2626 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 8 20:11:34.352622 kubelet[2626]: I1008 20:11:34.352581 2626 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 8 20:11:34.352754 kubelet[2626]: I1008 20:11:34.352706 2626 reconciler.go:26] "Reconciler: start to sync state" Oct 8 20:11:34.353422 kubelet[2626]: W1008 20:11:34.353184 2626 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://157.90.145.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.90.145.6:6443: connect: connection refused Oct 8 20:11:34.353422 
kubelet[2626]: E1008 20:11:34.353246 2626 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://157.90.145.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.90.145.6:6443: connect: connection refused Oct 8 20:11:34.353422 kubelet[2626]: E1008 20:11:34.353316 2626 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.90.145.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-7-2461ba8d61?timeout=10s\": dial tcp 157.90.145.6:6443: connect: connection refused" interval="200ms" Oct 8 20:11:34.357865 kubelet[2626]: I1008 20:11:34.357688 2626 factory.go:221] Registration of the systemd container factory successfully Oct 8 20:11:34.357865 kubelet[2626]: I1008 20:11:34.357782 2626 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 20:11:34.361092 kubelet[2626]: I1008 20:11:34.361065 2626 factory.go:221] Registration of the containerd container factory successfully Oct 8 20:11:34.373757 kubelet[2626]: I1008 20:11:34.372847 2626 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 20:11:34.376237 kubelet[2626]: I1008 20:11:34.376202 2626 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 8 20:11:34.376339 kubelet[2626]: I1008 20:11:34.376242 2626 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 20:11:34.376366 kubelet[2626]: I1008 20:11:34.376353 2626 kubelet.go:2337] "Starting kubelet main sync loop" Oct 8 20:11:34.376472 kubelet[2626]: E1008 20:11:34.376429 2626 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 20:11:34.384794 kubelet[2626]: W1008 20:11:34.384534 2626 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://157.90.145.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.90.145.6:6443: connect: connection refused Oct 8 20:11:34.384927 kubelet[2626]: E1008 20:11:34.384811 2626 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://157.90.145.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.90.145.6:6443: connect: connection refused Oct 8 20:11:34.385166 kubelet[2626]: E1008 20:11:34.385135 2626 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 20:11:34.397906 kubelet[2626]: I1008 20:11:34.397888 2626 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 20:11:34.398200 kubelet[2626]: I1008 20:11:34.398032 2626 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 20:11:34.398200 kubelet[2626]: I1008 20:11:34.398066 2626 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:11:34.400692 kubelet[2626]: I1008 20:11:34.400619 2626 policy_none.go:49] "None policy: Start" Oct 8 20:11:34.401272 kubelet[2626]: I1008 20:11:34.401243 2626 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 20:11:34.401323 kubelet[2626]: I1008 20:11:34.401288 2626 state_mem.go:35] "Initializing new in-memory state store" Oct 8 20:11:34.408246 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 8 20:11:34.429376 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 8 20:11:34.434060 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 8 20:11:34.441164 kubelet[2626]: I1008 20:11:34.441118 2626 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 20:11:34.442178 kubelet[2626]: I1008 20:11:34.441728 2626 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 20:11:34.442178 kubelet[2626]: I1008 20:11:34.441911 2626 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 20:11:34.444589 kubelet[2626]: E1008 20:11:34.444556 2626 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-1-0-7-2461ba8d61\" not found" Oct 8 20:11:34.448238 kubelet[2626]: I1008 20:11:34.448173 2626 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:34.448713 kubelet[2626]: E1008 20:11:34.448682 2626 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://157.90.145.6:6443/api/v1/nodes\": dial tcp 157.90.145.6:6443: connect: connection refused" node="ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:34.468901 kubelet[2626]: E1008 20:11:34.468782 2626 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://157.90.145.6:6443/api/v1/namespaces/default/events\": dial tcp 157.90.145.6:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-1-0-7-2461ba8d61.17fc9347f8cf2a53 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-1-0-7-2461ba8d61,UID:ci-4081-1-0-7-2461ba8d61,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-1-0-7-2461ba8d61,},FirstTimestamp:2024-10-08 20:11:34.329715283 +0000 UTC m=+0.464343242,LastTimestamp:2024-10-08 20:11:34.329715283 +0000 UTC m=+0.464343242,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-1-0-7-2461ba8d61,}" Oct 8 20:11:34.476991 kubelet[2626]: I1008 20:11:34.476926 2626 topology_manager.go:215] "Topology Admit Handler" podUID="f85c65bc8abac43dc04f2c9226f1cc68" podNamespace="kube-system" podName="kube-apiserver-ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:34.478978 kubelet[2626]: I1008 20:11:34.478940 2626 
topology_manager.go:215] "Topology Admit Handler" podUID="dc406d080da69752b9403bc749922dc4" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:34.480448 kubelet[2626]: I1008 20:11:34.480298 2626 topology_manager.go:215] "Topology Admit Handler" podUID="630670da550b5b80207b889c27115e2e" podNamespace="kube-system" podName="kube-scheduler-ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:34.488649 systemd[1]: Created slice kubepods-burstable-podf85c65bc8abac43dc04f2c9226f1cc68.slice - libcontainer container kubepods-burstable-podf85c65bc8abac43dc04f2c9226f1cc68.slice. Oct 8 20:11:34.514342 systemd[1]: Created slice kubepods-burstable-poddc406d080da69752b9403bc749922dc4.slice - libcontainer container kubepods-burstable-poddc406d080da69752b9403bc749922dc4.slice. Oct 8 20:11:34.528203 systemd[1]: Created slice kubepods-burstable-pod630670da550b5b80207b889c27115e2e.slice - libcontainer container kubepods-burstable-pod630670da550b5b80207b889c27115e2e.slice. Oct 8 20:11:34.553447 kubelet[2626]: I1008 20:11:34.553346 2626 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f85c65bc8abac43dc04f2c9226f1cc68-k8s-certs\") pod \"kube-apiserver-ci-4081-1-0-7-2461ba8d61\" (UID: \"f85c65bc8abac43dc04f2c9226f1cc68\") " pod="kube-system/kube-apiserver-ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:34.553555 kubelet[2626]: I1008 20:11:34.553466 2626 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dc406d080da69752b9403bc749922dc4-k8s-certs\") pod \"kube-controller-manager-ci-4081-1-0-7-2461ba8d61\" (UID: \"dc406d080da69752b9403bc749922dc4\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:34.553594 kubelet[2626]: I1008 20:11:34.553517 2626 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dc406d080da69752b9403bc749922dc4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-1-0-7-2461ba8d61\" (UID: \"dc406d080da69752b9403bc749922dc4\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:34.553630 kubelet[2626]: I1008 20:11:34.553600 2626 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/630670da550b5b80207b889c27115e2e-kubeconfig\") pod \"kube-scheduler-ci-4081-1-0-7-2461ba8d61\" (UID: \"630670da550b5b80207b889c27115e2e\") " pod="kube-system/kube-scheduler-ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:34.553666 kubelet[2626]: I1008 20:11:34.553649 2626 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f85c65bc8abac43dc04f2c9226f1cc68-ca-certs\") pod \"kube-apiserver-ci-4081-1-0-7-2461ba8d61\" (UID: \"f85c65bc8abac43dc04f2c9226f1cc68\") " pod="kube-system/kube-apiserver-ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:34.553957 kubelet[2626]: I1008 20:11:34.553691 2626 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f85c65bc8abac43dc04f2c9226f1cc68-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-1-0-7-2461ba8d61\" (UID: \"f85c65bc8abac43dc04f2c9226f1cc68\") " 
pod="kube-system/kube-apiserver-ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:34.553957 kubelet[2626]: I1008 20:11:34.553739 2626 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc406d080da69752b9403bc749922dc4-ca-certs\") pod \"kube-controller-manager-ci-4081-1-0-7-2461ba8d61\" (UID: \"dc406d080da69752b9403bc749922dc4\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:34.553957 kubelet[2626]: I1008 20:11:34.553780 2626 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dc406d080da69752b9403bc749922dc4-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-1-0-7-2461ba8d61\" (UID: \"dc406d080da69752b9403bc749922dc4\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:34.553957 kubelet[2626]: I1008 20:11:34.553821 2626 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dc406d080da69752b9403bc749922dc4-kubeconfig\") pod \"kube-controller-manager-ci-4081-1-0-7-2461ba8d61\" (UID: \"dc406d080da69752b9403bc749922dc4\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:34.554408 kubelet[2626]: E1008 20:11:34.554349 2626 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.90.145.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-7-2461ba8d61?timeout=10s\": dial tcp 157.90.145.6:6443: connect: connection refused" interval="400ms" Oct 8 20:11:34.653025 kubelet[2626]: I1008 20:11:34.652903 2626 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:34.653405 kubelet[2626]: E1008 20:11:34.653350 2626 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://157.90.145.6:6443/api/v1/nodes\": dial tcp 157.90.145.6:6443: connect: connection refused" node="ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:34.814984 containerd[1496]: time="2024-10-08T20:11:34.814720333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-1-0-7-2461ba8d61,Uid:f85c65bc8abac43dc04f2c9226f1cc68,Namespace:kube-system,Attempt:0,}" Oct 8 20:11:34.826634 containerd[1496]: time="2024-10-08T20:11:34.826565939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-1-0-7-2461ba8d61,Uid:dc406d080da69752b9403bc749922dc4,Namespace:kube-system,Attempt:0,}" Oct 8 20:11:34.831355 containerd[1496]: time="2024-10-08T20:11:34.831322302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-1-0-7-2461ba8d61,Uid:630670da550b5b80207b889c27115e2e,Namespace:kube-system,Attempt:0,}" Oct 8 20:11:34.955941 kubelet[2626]: E1008 20:11:34.955865 2626 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.90.145.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-7-2461ba8d61?timeout=10s\": dial tcp 157.90.145.6:6443: connect: connection refused" interval="800ms" Oct 8 20:11:35.055652 kubelet[2626]: I1008 20:11:35.055584 2626 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:35.055895 kubelet[2626]: E1008 20:11:35.055861 2626 kubelet_node_status.go:96] "Unable to register node with API server" err="Post 
\"https://157.90.145.6:6443/api/v1/nodes\": dial tcp 157.90.145.6:6443: connect: connection refused" node="ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:35.328092 kubelet[2626]: W1008 20:11:35.327984 2626 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://157.90.145.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-7-2461ba8d61&limit=500&resourceVersion=0": dial tcp 157.90.145.6:6443: connect: connection refused Oct 8 20:11:35.328092 kubelet[2626]: E1008 20:11:35.328088 2626 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://157.90.145.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-7-2461ba8d61&limit=500&resourceVersion=0": dial tcp 157.90.145.6:6443: connect: connection refused Oct 8 20:11:35.389287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2933497423.mount: Deactivated successfully. Oct 8 20:11:35.399767 containerd[1496]: time="2024-10-08T20:11:35.399702891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:11:35.401430 containerd[1496]: time="2024-10-08T20:11:35.401374784Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:11:35.403171 containerd[1496]: time="2024-10-08T20:11:35.403000680Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 20:11:35.403462 containerd[1496]: time="2024-10-08T20:11:35.403366326Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 20:11:35.404228 containerd[1496]: time="2024-10-08T20:11:35.404157518Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:11:35.406681 containerd[1496]: time="2024-10-08T20:11:35.406229311Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:11:35.406681 containerd[1496]: time="2024-10-08T20:11:35.406603883Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076" Oct 8 20:11:35.409890 containerd[1496]: time="2024-10-08T20:11:35.409757793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:11:35.415349 containerd[1496]: time="2024-10-08T20:11:35.415163753Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 588.504498ms" Oct 8 20:11:35.419057 containerd[1496]: time="2024-10-08T20:11:35.419008478Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 604.134749ms" Oct 8 20:11:35.423896 containerd[1496]: time="2024-10-08T20:11:35.423756064Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 592.369862ms" Oct 8 20:11:35.555014 kubelet[2626]: W1008 20:11:35.554910 2626 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://157.90.145.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.90.145.6:6443: connect: connection refused Oct 8 20:11:35.555014 kubelet[2626]: E1008 20:11:35.554974 2626 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://157.90.145.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.90.145.6:6443: connect: connection refused Oct 8 20:11:35.604555 containerd[1496]: time="2024-10-08T20:11:35.603602766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:11:35.604555 containerd[1496]: time="2024-10-08T20:11:35.603650426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:11:35.604555 containerd[1496]: time="2024-10-08T20:11:35.603664110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:11:35.604555 containerd[1496]: time="2024-10-08T20:11:35.603736577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:11:35.609458 containerd[1496]: time="2024-10-08T20:11:35.609366617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:11:35.609708 containerd[1496]: time="2024-10-08T20:11:35.609647493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:11:35.609869 containerd[1496]: time="2024-10-08T20:11:35.609811771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:11:35.610247 containerd[1496]: time="2024-10-08T20:11:35.610204797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:11:35.612087 containerd[1496]: time="2024-10-08T20:11:35.611435624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:11:35.612087 containerd[1496]: time="2024-10-08T20:11:35.611493112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:11:35.612087 containerd[1496]: time="2024-10-08T20:11:35.611529499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:11:35.612087 containerd[1496]: time="2024-10-08T20:11:35.611629537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:11:35.632997 systemd[1]: Started cri-containerd-25ea783f9ad16c9ddc09dd50fd8e123057d6c1d5966a9724ae9425880aaeadb1.scope - libcontainer container 25ea783f9ad16c9ddc09dd50fd8e123057d6c1d5966a9724ae9425880aaeadb1. Oct 8 20:11:35.637297 systemd[1]: Started cri-containerd-34bfe054e0905c5f3bdaabc0cd72099b32f661e5d3d1f685e444ff72c3403e01.scope - libcontainer container 34bfe054e0905c5f3bdaabc0cd72099b32f661e5d3d1f685e444ff72c3403e01. Oct 8 20:11:35.643578 systemd[1]: Started cri-containerd-1c9d825e82229f0d0a99304ebd7fac33c3805a4af8aa146bc962cb6498ecd853.scope - libcontainer container 1c9d825e82229f0d0a99304ebd7fac33c3805a4af8aa146bc962cb6498ecd853. Oct 8 20:11:35.706955 containerd[1496]: time="2024-10-08T20:11:35.706914690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-1-0-7-2461ba8d61,Uid:630670da550b5b80207b889c27115e2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"25ea783f9ad16c9ddc09dd50fd8e123057d6c1d5966a9724ae9425880aaeadb1\"" Oct 8 20:11:35.711708 containerd[1496]: time="2024-10-08T20:11:35.711680922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-1-0-7-2461ba8d61,Uid:dc406d080da69752b9403bc749922dc4,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c9d825e82229f0d0a99304ebd7fac33c3805a4af8aa146bc962cb6498ecd853\"" Oct 8 20:11:35.718856 containerd[1496]: time="2024-10-08T20:11:35.718494700Z" level=info msg="CreateContainer within sandbox \"1c9d825e82229f0d0a99304ebd7fac33c3805a4af8aa146bc962cb6498ecd853\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 8 20:11:35.719049 containerd[1496]: time="2024-10-08T20:11:35.719030655Z" level=info msg="CreateContainer within sandbox \"25ea783f9ad16c9ddc09dd50fd8e123057d6c1d5966a9724ae9425880aaeadb1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 8 20:11:35.720849 containerd[1496]: time="2024-10-08T20:11:35.720807724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-1-0-7-2461ba8d61,Uid:f85c65bc8abac43dc04f2c9226f1cc68,Namespace:kube-system,Attempt:0,} returns sandbox id \"34bfe054e0905c5f3bdaabc0cd72099b32f661e5d3d1f685e444ff72c3403e01\"" Oct 8 20:11:35.724493 containerd[1496]: time="2024-10-08T20:11:35.724461442Z" level=info msg="CreateContainer within sandbox \"34bfe054e0905c5f3bdaabc0cd72099b32f661e5d3d1f685e444ff72c3403e01\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 8 20:11:35.738162 containerd[1496]: time="2024-10-08T20:11:35.738117842Z" level=info msg="CreateContainer within sandbox \"25ea783f9ad16c9ddc09dd50fd8e123057d6c1d5966a9724ae9425880aaeadb1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5514b5d549969197cb2b43a6de5dc64749b5a17a138fd60fad6391b15136117c\"" Oct 8 20:11:35.738767 containerd[1496]: time="2024-10-08T20:11:35.738734076Z" level=info msg="StartContainer for \"5514b5d549969197cb2b43a6de5dc64749b5a17a138fd60fad6391b15136117c\"" Oct 8 20:11:35.742773 containerd[1496]: time="2024-10-08T20:11:35.742674760Z" level=info msg="CreateContainer within sandbox \"1c9d825e82229f0d0a99304ebd7fac33c3805a4af8aa146bc962cb6498ecd853\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"571220c0ab9dd7d8a7e3ab3911c59f1df24e754383b4b4cf21a52999b6041630\"" Oct 8 20:11:35.743709 containerd[1496]: time="2024-10-08T20:11:35.743574948Z" level=info msg="StartContainer for \"571220c0ab9dd7d8a7e3ab3911c59f1df24e754383b4b4cf21a52999b6041630\"" Oct 8 20:11:35.748174 containerd[1496]: time="2024-10-08T20:11:35.747986815Z" level=info msg="CreateContainer within sandbox \"34bfe054e0905c5f3bdaabc0cd72099b32f661e5d3d1f685e444ff72c3403e01\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5d49b9a8d10eee7a80330d3f2dbfca678ad8979061c21ea3d3f644a2f54189fc\"" Oct 8 20:11:35.749012 containerd[1496]: time="2024-10-08T20:11:35.748953757Z" level=info msg="StartContainer for \"5d49b9a8d10eee7a80330d3f2dbfca678ad8979061c21ea3d3f644a2f54189fc\"" Oct 8 20:11:35.757181 kubelet[2626]: E1008 20:11:35.757032 2626 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.90.145.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-7-2461ba8d61?timeout=10s\": dial tcp 157.90.145.6:6443: connect: connection refused" interval="1.6s" Oct 8 20:11:35.774082 systemd[1]: Started cri-containerd-5514b5d549969197cb2b43a6de5dc64749b5a17a138fd60fad6391b15136117c.scope - libcontainer container 5514b5d549969197cb2b43a6de5dc64749b5a17a138fd60fad6391b15136117c. Oct 8 20:11:35.786218 systemd[1]: Started cri-containerd-5d49b9a8d10eee7a80330d3f2dbfca678ad8979061c21ea3d3f644a2f54189fc.scope - libcontainer container 5d49b9a8d10eee7a80330d3f2dbfca678ad8979061c21ea3d3f644a2f54189fc. Oct 8 20:11:35.791220 systemd[1]: Started cri-containerd-571220c0ab9dd7d8a7e3ab3911c59f1df24e754383b4b4cf21a52999b6041630.scope - libcontainer container 571220c0ab9dd7d8a7e3ab3911c59f1df24e754383b4b4cf21a52999b6041630. Oct 8 20:11:35.835765 containerd[1496]: time="2024-10-08T20:11:35.835711251Z" level=info msg="StartContainer for \"5514b5d549969197cb2b43a6de5dc64749b5a17a138fd60fad6391b15136117c\" returns successfully" Oct 8 20:11:35.845767 kubelet[2626]: W1008 20:11:35.845699 2626 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://157.90.145.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.90.145.6:6443: connect: connection refused Oct 8 20:11:35.845767 kubelet[2626]: E1008 20:11:35.845765 2626 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://157.90.145.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.90.145.6:6443: connect: connection refused Oct 8 20:11:35.857993 containerd[1496]: time="2024-10-08T20:11:35.857903256Z" level=info msg="StartContainer for \"5d49b9a8d10eee7a80330d3f2dbfca678ad8979061c21ea3d3f644a2f54189fc\" returns successfully" Oct 8 20:11:35.860235 kubelet[2626]: I1008 20:11:35.860209 2626 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:35.860544 kubelet[2626]: E1008 20:11:35.860521 2626 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://157.90.145.6:6443/api/v1/nodes\": dial tcp 157.90.145.6:6443: connect: connection refused" node="ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:35.874882 containerd[1496]: time="2024-10-08T20:11:35.874588071Z" level=info msg="StartContainer for \"571220c0ab9dd7d8a7e3ab3911c59f1df24e754383b4b4cf21a52999b6041630\" returns successfully" Oct 8 20:11:35.914128 kubelet[2626]: W1008 20:11:35.914015 2626 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://157.90.145.6:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 157.90.145.6:6443: connect: connection refused Oct 8 20:11:35.914128 kubelet[2626]: E1008 20:11:35.914088 2626 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://157.90.145.6:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 157.90.145.6:6443: connect: connection refused Oct 8 20:11:37.464093 kubelet[2626]: I1008 20:11:37.464024 2626 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:37.680388 kubelet[2626]: E1008 20:11:37.680334 2626 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-1-0-7-2461ba8d61\" not found" node="ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:37.841592 kubelet[2626]: I1008 20:11:37.840989 2626 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:38.326656 kubelet[2626]: I1008 20:11:38.326583 2626 apiserver.go:52] "Watching apiserver" Oct 8 20:11:38.353355 kubelet[2626]: I1008 20:11:38.353263 2626 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Oct 8 20:11:39.843880 systemd[1]: Reloading requested from client PID 2898 ('systemctl') (unit session-7.scope)... Oct 8 20:11:39.843909 systemd[1]: Reloading... Oct 8 20:11:39.979933 zram_generator::config[2953]: No configuration found. Oct 8 20:11:40.074983 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:11:40.171959 systemd[1]: Reloading finished in 327 ms. Oct 8 20:11:40.219416 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:11:40.219698 kubelet[2626]: E1008 20:11:40.219338 2626 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4081-1-0-7-2461ba8d61.17fc9347f8cf2a53 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-1-0-7-2461ba8d61,UID:ci-4081-1-0-7-2461ba8d61,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-1-0-7-2461ba8d61,},FirstTimestamp:2024-10-08 20:11:34.329715283 +0000 UTC m=+0.464343242,LastTimestamp:2024-10-08 20:11:34.329715283 +0000 UTC m=+0.464343242,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-1-0-7-2461ba8d61,}" Oct 8 20:11:40.220236 kubelet[2626]: I1008 20:11:40.219816 2626 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 20:11:40.240861 systemd[1]: kubelet.service: Deactivated successfully. Oct 8 20:11:40.241188 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:11:40.246584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:11:40.458173 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
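The "connection refused" entries above are the kubelet's client-go reflectors retrying their initial List calls against https://157.90.145.6:6443 while the static kube-apiserver pod is still starting; once that container starts successfully, node registration and the lease requests go through. A minimal client-go sketch of the same List the CSIDriver reflector performs (illustrative only; the kubeconfig path is an assumption, the kubelet itself authenticates with its rotated client certificate rather than an admin kubeconfig):

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed admin kubeconfig path, for illustration only.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // Same resource the reflector lists; while the apiserver is down this
        // fails with the "dial tcp ... connect: connection refused" seen above.
        drivers, err := cs.StorageV1().CSIDrivers().List(context.TODO(), metav1.ListOptions{Limit: 500})
        if err != nil {
            log.Fatalf("list csidrivers: %v", err)
        }
        fmt.Println("csidrivers:", len(drivers.Items))
    }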
Oct 8 20:11:40.461656 (kubelet)[2989]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 20:11:40.539440 kubelet[2989]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:11:40.539440 kubelet[2989]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 20:11:40.539440 kubelet[2989]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:11:40.542373 kubelet[2989]: I1008 20:11:40.542299 2989 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 20:11:40.547312 kubelet[2989]: I1008 20:11:40.547280 2989 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Oct 8 20:11:40.548783 kubelet[2989]: I1008 20:11:40.547411 2989 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 20:11:40.548783 kubelet[2989]: I1008 20:11:40.547746 2989 server.go:927] "Client rotation is on, will bootstrap in background" Oct 8 20:11:40.549887 kubelet[2989]: I1008 20:11:40.549864 2989 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 8 20:11:40.551730 kubelet[2989]: I1008 20:11:40.551684 2989 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 20:11:40.566593 kubelet[2989]: I1008 20:11:40.566560 2989 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 8 20:11:40.567145 kubelet[2989]: I1008 20:11:40.567101 2989 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 20:11:40.567460 kubelet[2989]: I1008 20:11:40.567244 2989 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-1-0-7-2461ba8d61","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 8 20:11:40.567646 kubelet[2989]: I1008 20:11:40.567629 2989 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 20:11:40.567747 kubelet[2989]: I1008 20:11:40.567732 2989 container_manager_linux.go:301] "Creating device plugin manager" Oct 8 20:11:40.567924 kubelet[2989]: I1008 20:11:40.567906 2989 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:11:40.569259 kubelet[2989]: I1008 20:11:40.569236 2989 kubelet.go:400] "Attempting to sync node with API server" Oct 8 20:11:40.569383 kubelet[2989]: I1008 20:11:40.569366 2989 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 20:11:40.571354 kubelet[2989]: I1008 20:11:40.571335 2989 kubelet.go:312] "Adding apiserver pod source" Oct 8 20:11:40.571476 kubelet[2989]: I1008 20:11:40.571462 2989 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 20:11:40.577850 kubelet[2989]: I1008 20:11:40.577801 2989 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 20:11:40.578194 kubelet[2989]: I1008 20:11:40.578176 2989 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 20:11:40.578797 kubelet[2989]: I1008 20:11:40.578776 2989 server.go:1264] "Started kubelet" Oct 8 20:11:40.583869 kubelet[2989]: I1008 20:11:40.581657 2989 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 20:11:40.583869 kubelet[2989]: I1008 20:11:40.582181 2989 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 
20:11:40.583869 kubelet[2989]: I1008 20:11:40.582213 2989 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 20:11:40.583869 kubelet[2989]: I1008 20:11:40.583019 2989 server.go:455] "Adding debug handlers to kubelet server" Oct 8 20:11:40.584472 kubelet[2989]: E1008 20:11:40.584448 2989 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 20:11:40.586259 kubelet[2989]: I1008 20:11:40.586047 2989 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 20:11:40.591804 kubelet[2989]: I1008 20:11:40.591345 2989 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 8 20:11:40.591804 kubelet[2989]: I1008 20:11:40.591442 2989 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 8 20:11:40.591804 kubelet[2989]: I1008 20:11:40.591588 2989 reconciler.go:26] "Reconciler: start to sync state" Oct 8 20:11:40.593270 kubelet[2989]: I1008 20:11:40.593237 2989 factory.go:221] Registration of the systemd container factory successfully Oct 8 20:11:40.593476 kubelet[2989]: I1008 20:11:40.593324 2989 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 20:11:40.595805 kubelet[2989]: I1008 20:11:40.595777 2989 factory.go:221] Registration of the containerd container factory successfully Oct 8 20:11:40.607794 kubelet[2989]: I1008 20:11:40.607342 2989 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 20:11:40.609600 kubelet[2989]: I1008 20:11:40.609573 2989 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 8 20:11:40.610114 kubelet[2989]: I1008 20:11:40.609757 2989 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 20:11:40.610114 kubelet[2989]: I1008 20:11:40.609789 2989 kubelet.go:2337] "Starting kubelet main sync loop" Oct 8 20:11:40.610114 kubelet[2989]: E1008 20:11:40.609886 2989 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 20:11:40.654178 kubelet[2989]: I1008 20:11:40.654134 2989 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 20:11:40.654178 kubelet[2989]: I1008 20:11:40.654161 2989 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 20:11:40.654178 kubelet[2989]: I1008 20:11:40.654181 2989 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:11:40.654408 kubelet[2989]: I1008 20:11:40.654385 2989 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 8 20:11:40.654438 kubelet[2989]: I1008 20:11:40.654415 2989 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 8 20:11:40.654438 kubelet[2989]: I1008 20:11:40.654437 2989 policy_none.go:49] "None policy: Start" Oct 8 20:11:40.655248 kubelet[2989]: I1008 20:11:40.655216 2989 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 20:11:40.655248 kubelet[2989]: I1008 20:11:40.655238 2989 state_mem.go:35] "Initializing new in-memory state store" Oct 8 20:11:40.655369 kubelet[2989]: I1008 20:11:40.655345 2989 state_mem.go:75] "Updated machine memory state" Oct 8 20:11:40.660303 kubelet[2989]: I1008 20:11:40.660277 2989 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 20:11:40.660507 kubelet[2989]: I1008 
20:11:40.660464 2989 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 20:11:40.663498 kubelet[2989]: I1008 20:11:40.662292 2989 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 20:11:40.710719 kubelet[2989]: I1008 20:11:40.710534 2989 topology_manager.go:215] "Topology Admit Handler" podUID="f85c65bc8abac43dc04f2c9226f1cc68" podNamespace="kube-system" podName="kube-apiserver-ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:40.710719 kubelet[2989]: I1008 20:11:40.710659 2989 topology_manager.go:215] "Topology Admit Handler" podUID="dc406d080da69752b9403bc749922dc4" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:40.710719 kubelet[2989]: I1008 20:11:40.710708 2989 topology_manager.go:215] "Topology Admit Handler" podUID="630670da550b5b80207b889c27115e2e" podNamespace="kube-system" podName="kube-scheduler-ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:40.767983 kubelet[2989]: I1008 20:11:40.767945 2989 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:40.775373 kubelet[2989]: I1008 20:11:40.775322 2989 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:40.775558 kubelet[2989]: I1008 20:11:40.775414 2989 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:40.792960 kubelet[2989]: I1008 20:11:40.792895 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dc406d080da69752b9403bc749922dc4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-1-0-7-2461ba8d61\" (UID: \"dc406d080da69752b9403bc749922dc4\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:40.792960 kubelet[2989]: I1008 20:11:40.792928 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f85c65bc8abac43dc04f2c9226f1cc68-ca-certs\") pod \"kube-apiserver-ci-4081-1-0-7-2461ba8d61\" (UID: \"f85c65bc8abac43dc04f2c9226f1cc68\") " pod="kube-system/kube-apiserver-ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:40.792960 kubelet[2989]: I1008 20:11:40.792947 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f85c65bc8abac43dc04f2c9226f1cc68-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-1-0-7-2461ba8d61\" (UID: \"f85c65bc8abac43dc04f2c9226f1cc68\") " pod="kube-system/kube-apiserver-ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:40.792960 kubelet[2989]: I1008 20:11:40.792961 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dc406d080da69752b9403bc749922dc4-k8s-certs\") pod \"kube-controller-manager-ci-4081-1-0-7-2461ba8d61\" (UID: \"dc406d080da69752b9403bc749922dc4\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:40.793175 kubelet[2989]: I1008 20:11:40.792977 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dc406d080da69752b9403bc749922dc4-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-1-0-7-2461ba8d61\" (UID: \"dc406d080da69752b9403bc749922dc4\") " 
pod="kube-system/kube-controller-manager-ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:40.793175 kubelet[2989]: I1008 20:11:40.792993 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dc406d080da69752b9403bc749922dc4-kubeconfig\") pod \"kube-controller-manager-ci-4081-1-0-7-2461ba8d61\" (UID: \"dc406d080da69752b9403bc749922dc4\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:40.793175 kubelet[2989]: I1008 20:11:40.793008 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/630670da550b5b80207b889c27115e2e-kubeconfig\") pod \"kube-scheduler-ci-4081-1-0-7-2461ba8d61\" (UID: \"630670da550b5b80207b889c27115e2e\") " pod="kube-system/kube-scheduler-ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:40.793175 kubelet[2989]: I1008 20:11:40.793021 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f85c65bc8abac43dc04f2c9226f1cc68-k8s-certs\") pod \"kube-apiserver-ci-4081-1-0-7-2461ba8d61\" (UID: \"f85c65bc8abac43dc04f2c9226f1cc68\") " pod="kube-system/kube-apiserver-ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:40.793175 kubelet[2989]: I1008 20:11:40.793035 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc406d080da69752b9403bc749922dc4-ca-certs\") pod \"kube-controller-manager-ci-4081-1-0-7-2461ba8d61\" (UID: \"dc406d080da69752b9403bc749922dc4\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:40.837858 sudo[3021]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 8 20:11:40.838359 sudo[3021]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Oct 8 20:11:41.461016 sudo[3021]: pam_unix(sudo:session): session closed for user root Oct 8 20:11:41.579796 kubelet[2989]: I1008 20:11:41.579722 2989 apiserver.go:52] "Watching apiserver" Oct 8 20:11:41.592348 kubelet[2989]: I1008 20:11:41.592240 2989 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Oct 8 20:11:41.641858 kubelet[2989]: E1008 20:11:41.641144 2989 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-1-0-7-2461ba8d61\" already exists" pod="kube-system/kube-apiserver-ci-4081-1-0-7-2461ba8d61" Oct 8 20:11:41.660824 kubelet[2989]: I1008 20:11:41.660770 2989 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-1-0-7-2461ba8d61" podStartSLOduration=1.660750314 podStartE2EDuration="1.660750314s" podCreationTimestamp="2024-10-08 20:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:11:41.653401723 +0000 UTC m=+1.181515703" watchObservedRunningTime="2024-10-08 20:11:41.660750314 +0000 UTC m=+1.188864303" Oct 8 20:11:41.661171 kubelet[2989]: I1008 20:11:41.661076 2989 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-1-0-7-2461ba8d61" podStartSLOduration=1.6610705540000001 podStartE2EDuration="1.661070554s" podCreationTimestamp="2024-10-08 20:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:11:41.659868001 +0000 UTC m=+1.187981981" watchObservedRunningTime="2024-10-08 20:11:41.661070554 +0000 UTC m=+1.189184534" Oct 8 20:11:41.676378 kubelet[2989]: I1008 20:11:41.676242 2989 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-1-0-7-2461ba8d61" podStartSLOduration=1.67622373 podStartE2EDuration="1.67622373s" podCreationTimestamp="2024-10-08 20:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:11:41.667747226 +0000 UTC m=+1.195861215" watchObservedRunningTime="2024-10-08 20:11:41.67622373 +0000 UTC m=+1.204337720" Oct 8 20:11:42.946961 sudo[2097]: pam_unix(sudo:session): session closed for user root Oct 8 20:11:43.109647 sshd[2094]: pam_unix(sshd:session): session closed for user core Oct 8 20:11:43.113048 systemd[1]: sshd@6-157.90.145.6:22-147.75.109.163:35866.service: Deactivated successfully. Oct 8 20:11:43.115126 systemd[1]: session-7.scope: Deactivated successfully. Oct 8 20:11:43.115373 systemd[1]: session-7.scope: Consumed 5.226s CPU time, 191.0M memory peak, 0B memory swap peak. Oct 8 20:11:43.117261 systemd-logind[1474]: Session 7 logged out. Waiting for processes to exit. Oct 8 20:11:43.118747 systemd-logind[1474]: Removed session 7. Oct 8 20:11:54.601224 kubelet[2989]: I1008 20:11:54.601166 2989 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 8 20:11:54.601905 containerd[1496]: time="2024-10-08T20:11:54.601744469Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 8 20:11:54.602847 kubelet[2989]: I1008 20:11:54.601930 2989 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 8 20:11:55.261358 kubelet[2989]: I1008 20:11:55.261138 2989 topology_manager.go:215] "Topology Admit Handler" podUID="a2ced1b5-347f-4548-91ea-643a129232db" podNamespace="kube-system" podName="kube-proxy-nhcrs" Oct 8 20:11:55.275041 systemd[1]: Created slice kubepods-besteffort-poda2ced1b5_347f_4548_91ea_643a129232db.slice - libcontainer container kubepods-besteffort-poda2ced1b5_347f_4548_91ea_643a129232db.slice. 
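The container_manager_linux.go entry earlier in this boot ("Creating Container Manager object based on Node Config") embeds the kubelet's effective NodeConfig as JSON, including the default hard eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, imagefs.available < 15%, and so on). A small log-analysis sketch for pulling those thresholds back out of a captured journal line (field names are taken from the logged JSON; the sample line here is truncated for illustration, this is not kubelet code):

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "strings"
    )

    // Only the NodeConfig fields of interest for this sketch.
    type nodeConfig struct {
        NodeName               string `json:"NodeName"`
        CgroupDriver           string `json:"CgroupDriver"`
        HardEvictionThresholds []struct {
            Signal   string `json:"Signal"`
            Operator string `json:"Operator"`
            Value    struct {
                Quantity   *string `json:"Quantity"`
                Percentage float64 `json:"Percentage"`
            } `json:"Value"`
        } `json:"HardEvictionThresholds"`
    }

    func main() {
        // A truncated stand-in for the journal line captured above.
        journalLine := `... nodeConfig={"NodeName":"ci-4081-1-0-7-2461ba8d61","CgroupDriver":"systemd","HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}}]} ...`
        start := strings.Index(journalLine, "nodeConfig=")
        if start < 0 {
            log.Fatal("no nodeConfig= in line")
        }
        // json.Decoder stops after the first complete JSON value, so any
        // trailing log text after the object is ignored.
        dec := json.NewDecoder(strings.NewReader(journalLine[start+len("nodeConfig="):]))
        var cfg nodeConfig
        if err := dec.Decode(&cfg); err != nil {
            log.Fatal(err)
        }
        for _, t := range cfg.HardEvictionThresholds {
            fmt.Printf("%s %s quantity=%v pct=%v\n", t.Signal, t.Operator, t.Value.Quantity, t.Value.Percentage)
        }
    }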
Oct 8 20:11:55.289695 kubelet[2989]: I1008 20:11:55.289388 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a2ced1b5-347f-4548-91ea-643a129232db-kube-proxy\") pod \"kube-proxy-nhcrs\" (UID: \"a2ced1b5-347f-4548-91ea-643a129232db\") " pod="kube-system/kube-proxy-nhcrs" Oct 8 20:11:55.289695 kubelet[2989]: I1008 20:11:55.289426 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2ced1b5-347f-4548-91ea-643a129232db-xtables-lock\") pod \"kube-proxy-nhcrs\" (UID: \"a2ced1b5-347f-4548-91ea-643a129232db\") " pod="kube-system/kube-proxy-nhcrs" Oct 8 20:11:55.289695 kubelet[2989]: I1008 20:11:55.289451 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2ced1b5-347f-4548-91ea-643a129232db-lib-modules\") pod \"kube-proxy-nhcrs\" (UID: \"a2ced1b5-347f-4548-91ea-643a129232db\") " pod="kube-system/kube-proxy-nhcrs" Oct 8 20:11:55.289695 kubelet[2989]: I1008 20:11:55.289470 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lclbm\" (UniqueName: \"kubernetes.io/projected/a2ced1b5-347f-4548-91ea-643a129232db-kube-api-access-lclbm\") pod \"kube-proxy-nhcrs\" (UID: \"a2ced1b5-347f-4548-91ea-643a129232db\") " pod="kube-system/kube-proxy-nhcrs" Oct 8 20:11:55.301264 kubelet[2989]: I1008 20:11:55.301120 2989 topology_manager.go:215] "Topology Admit Handler" podUID="a41cab81-f22e-4fd0-8151-ff2a8186038d" podNamespace="kube-system" podName="cilium-gfmrp" Oct 8 20:11:55.306723 systemd[1]: Created slice kubepods-burstable-poda41cab81_f22e_4fd0_8151_ff2a8186038d.slice - libcontainer container kubepods-burstable-poda41cab81_f22e_4fd0_8151_ff2a8186038d.slice. 
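The "Created slice" entries show how the kubelet's systemd cgroup driver names pod slices: the dashes in the pod UID are escaped to underscores and the QoS class becomes part of the prefix, which is why pod a41cab81-f22e-4fd0-8151-ff2a8186038d lands in kubepods-burstable-poda41cab81_f22e_4fd0_8151_ff2a8186038d.slice. A rough sketch of that mapping (illustrative; the real logic lives in the kubelet's container manager):

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName mimics the systemd cgroup driver's pod slice naming:
    // UID dashes become underscores, and guaranteed pods sit directly
    // under kubepods.slice without a QoS sub-slice.
    func sliceName(qos, podUID string) string {
        escaped := strings.ReplaceAll(podUID, "-", "_")
        if qos == "guaranteed" {
            return fmt.Sprintf("kubepods-pod%s.slice", escaped)
        }
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
    }

    func main() {
        // UIDs taken from the journal above.
        fmt.Println(sliceName("burstable", "a41cab81-f22e-4fd0-8151-ff2a8186038d"))
        // -> kubepods-burstable-poda41cab81_f22e_4fd0_8151_ff2a8186038d.slice
        fmt.Println(sliceName("besteffort", "a2ced1b5-347f-4548-91ea-643a129232db"))
        // -> kubepods-besteffort-poda2ced1b5_347f_4548_91ea_643a129232db.slice
    }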
Oct 8 20:11:55.389884 kubelet[2989]: I1008 20:11:55.389794 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-cilium-run\") pod \"cilium-gfmrp\" (UID: \"a41cab81-f22e-4fd0-8151-ff2a8186038d\") " pod="kube-system/cilium-gfmrp" Oct 8 20:11:55.389884 kubelet[2989]: I1008 20:11:55.389846 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a41cab81-f22e-4fd0-8151-ff2a8186038d-hubble-tls\") pod \"cilium-gfmrp\" (UID: \"a41cab81-f22e-4fd0-8151-ff2a8186038d\") " pod="kube-system/cilium-gfmrp" Oct 8 20:11:55.389884 kubelet[2989]: I1008 20:11:55.389875 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-hostproc\") pod \"cilium-gfmrp\" (UID: \"a41cab81-f22e-4fd0-8151-ff2a8186038d\") " pod="kube-system/cilium-gfmrp" Oct 8 20:11:55.389884 kubelet[2989]: I1008 20:11:55.389888 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a41cab81-f22e-4fd0-8151-ff2a8186038d-cilium-config-path\") pod \"cilium-gfmrp\" (UID: \"a41cab81-f22e-4fd0-8151-ff2a8186038d\") " pod="kube-system/cilium-gfmrp" Oct 8 20:11:55.389884 kubelet[2989]: I1008 20:11:55.389902 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-host-proc-sys-kernel\") pod \"cilium-gfmrp\" (UID: \"a41cab81-f22e-4fd0-8151-ff2a8186038d\") " pod="kube-system/cilium-gfmrp" Oct 8 20:11:55.390298 kubelet[2989]: I1008 20:11:55.389940 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-lib-modules\") pod \"cilium-gfmrp\" (UID: \"a41cab81-f22e-4fd0-8151-ff2a8186038d\") " pod="kube-system/cilium-gfmrp" Oct 8 20:11:55.390298 kubelet[2989]: I1008 20:11:55.389954 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a41cab81-f22e-4fd0-8151-ff2a8186038d-clustermesh-secrets\") pod \"cilium-gfmrp\" (UID: \"a41cab81-f22e-4fd0-8151-ff2a8186038d\") " pod="kube-system/cilium-gfmrp" Oct 8 20:11:55.390298 kubelet[2989]: I1008 20:11:55.389968 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-host-proc-sys-net\") pod \"cilium-gfmrp\" (UID: \"a41cab81-f22e-4fd0-8151-ff2a8186038d\") " pod="kube-system/cilium-gfmrp" Oct 8 20:11:55.390298 kubelet[2989]: I1008 20:11:55.389981 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-cilium-cgroup\") pod \"cilium-gfmrp\" (UID: \"a41cab81-f22e-4fd0-8151-ff2a8186038d\") " pod="kube-system/cilium-gfmrp" Oct 8 20:11:55.390298 kubelet[2989]: I1008 20:11:55.389995 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-bpf-maps\") pod \"cilium-gfmrp\" (UID: \"a41cab81-f22e-4fd0-8151-ff2a8186038d\") " pod="kube-system/cilium-gfmrp" Oct 8 20:11:55.390298 kubelet[2989]: I1008 20:11:55.390009 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-xtables-lock\") pod \"cilium-gfmrp\" (UID: \"a41cab81-f22e-4fd0-8151-ff2a8186038d\") " pod="kube-system/cilium-gfmrp" Oct 8 20:11:55.390646 kubelet[2989]: I1008 20:11:55.390022 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-cni-path\") pod \"cilium-gfmrp\" (UID: \"a41cab81-f22e-4fd0-8151-ff2a8186038d\") " pod="kube-system/cilium-gfmrp" Oct 8 20:11:55.390646 kubelet[2989]: I1008 20:11:55.390036 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdqt9\" (UniqueName: \"kubernetes.io/projected/a41cab81-f22e-4fd0-8151-ff2a8186038d-kube-api-access-vdqt9\") pod \"cilium-gfmrp\" (UID: \"a41cab81-f22e-4fd0-8151-ff2a8186038d\") " pod="kube-system/cilium-gfmrp" Oct 8 20:11:55.390646 kubelet[2989]: I1008 20:11:55.390053 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-etc-cni-netd\") pod \"cilium-gfmrp\" (UID: \"a41cab81-f22e-4fd0-8151-ff2a8186038d\") " pod="kube-system/cilium-gfmrp" Oct 8 20:11:55.396708 kubelet[2989]: E1008 20:11:55.396665 2989 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 8 20:11:55.396708 kubelet[2989]: E1008 20:11:55.396698 2989 projected.go:200] Error preparing data for projected volume kube-api-access-lclbm for pod kube-system/kube-proxy-nhcrs: configmap "kube-root-ca.crt" not found Oct 8 20:11:55.396881 kubelet[2989]: E1008 20:11:55.396746 2989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a2ced1b5-347f-4548-91ea-643a129232db-kube-api-access-lclbm podName:a2ced1b5-347f-4548-91ea-643a129232db nodeName:}" failed. No retries permitted until 2024-10-08 20:11:55.896730244 +0000 UTC m=+15.424844234 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lclbm" (UniqueName: "kubernetes.io/projected/a2ced1b5-347f-4548-91ea-643a129232db-kube-api-access-lclbm") pod "kube-proxy-nhcrs" (UID: "a2ced1b5-347f-4548-91ea-643a129232db") : configmap "kube-root-ca.crt" not found Oct 8 20:11:55.611881 containerd[1496]: time="2024-10-08T20:11:55.611660262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gfmrp,Uid:a41cab81-f22e-4fd0-8151-ff2a8186038d,Namespace:kube-system,Attempt:0,}" Oct 8 20:11:55.664267 containerd[1496]: time="2024-10-08T20:11:55.663398521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:11:55.664267 containerd[1496]: time="2024-10-08T20:11:55.663485675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:11:55.664267 containerd[1496]: time="2024-10-08T20:11:55.663504330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:11:55.664267 containerd[1496]: time="2024-10-08T20:11:55.663661866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:11:55.700467 kubelet[2989]: I1008 20:11:55.700399 2989 topology_manager.go:215] "Topology Admit Handler" podUID="f502ddf9-ed33-470d-b6a5-3d14c016c73f" podNamespace="kube-system" podName="cilium-operator-599987898-csbd2" Oct 8 20:11:55.704778 systemd[1]: Started cri-containerd-2a2221c792abfc0dda9cdded8ee9073096d54c18b16de006ef9a23a96bef3935.scope - libcontainer container 2a2221c792abfc0dda9cdded8ee9073096d54c18b16de006ef9a23a96bef3935. Oct 8 20:11:55.714120 systemd[1]: Created slice kubepods-besteffort-podf502ddf9_ed33_470d_b6a5_3d14c016c73f.slice - libcontainer container kubepods-besteffort-podf502ddf9_ed33_470d_b6a5_3d14c016c73f.slice. Oct 8 20:11:55.754542 containerd[1496]: time="2024-10-08T20:11:55.754440959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gfmrp,Uid:a41cab81-f22e-4fd0-8151-ff2a8186038d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a2221c792abfc0dda9cdded8ee9073096d54c18b16de006ef9a23a96bef3935\"" Oct 8 20:11:55.758237 containerd[1496]: time="2024-10-08T20:11:55.758061023Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 8 20:11:55.792095 kubelet[2989]: I1008 20:11:55.792050 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59zzr\" (UniqueName: \"kubernetes.io/projected/f502ddf9-ed33-470d-b6a5-3d14c016c73f-kube-api-access-59zzr\") pod \"cilium-operator-599987898-csbd2\" (UID: \"f502ddf9-ed33-470d-b6a5-3d14c016c73f\") " pod="kube-system/cilium-operator-599987898-csbd2" Oct 8 20:11:55.792260 kubelet[2989]: I1008 20:11:55.792119 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f502ddf9-ed33-470d-b6a5-3d14c016c73f-cilium-config-path\") pod \"cilium-operator-599987898-csbd2\" (UID: \"f502ddf9-ed33-470d-b6a5-3d14c016c73f\") " pod="kube-system/cilium-operator-599987898-csbd2" Oct 8 20:11:56.017248 containerd[1496]: time="2024-10-08T20:11:56.017184383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-csbd2,Uid:f502ddf9-ed33-470d-b6a5-3d14c016c73f,Namespace:kube-system,Attempt:0,}" Oct 8 20:11:56.057295 containerd[1496]: time="2024-10-08T20:11:56.056796798Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:11:56.057295 containerd[1496]: time="2024-10-08T20:11:56.056973399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:11:56.057295 containerd[1496]: time="2024-10-08T20:11:56.057012712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:11:56.059112 containerd[1496]: time="2024-10-08T20:11:56.058997972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:11:56.097215 systemd[1]: Started cri-containerd-033415dc293951089fb29de4585e13aa99255c599f7d41d7578ac39ecbd158d6.scope - libcontainer container 033415dc293951089fb29de4585e13aa99255c599f7d41d7578ac39ecbd158d6. Oct 8 20:11:56.172052 containerd[1496]: time="2024-10-08T20:11:56.171910627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-csbd2,Uid:f502ddf9-ed33-470d-b6a5-3d14c016c73f,Namespace:kube-system,Attempt:0,} returns sandbox id \"033415dc293951089fb29de4585e13aa99255c599f7d41d7578ac39ecbd158d6\"" Oct 8 20:11:56.183302 containerd[1496]: time="2024-10-08T20:11:56.183227024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nhcrs,Uid:a2ced1b5-347f-4548-91ea-643a129232db,Namespace:kube-system,Attempt:0,}" Oct 8 20:11:56.208735 containerd[1496]: time="2024-10-08T20:11:56.208562624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:11:56.208735 containerd[1496]: time="2024-10-08T20:11:56.208649798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:11:56.208735 containerd[1496]: time="2024-10-08T20:11:56.208671198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:11:56.209119 containerd[1496]: time="2024-10-08T20:11:56.208792786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:11:56.231048 systemd[1]: Started cri-containerd-35e38649658d14b88fa1f7c20621cf90c4df8ca3ca6d7adff4106a1ea21dda3e.scope - libcontainer container 35e38649658d14b88fa1f7c20621cf90c4df8ca3ca6d7adff4106a1ea21dda3e. Oct 8 20:11:56.263100 containerd[1496]: time="2024-10-08T20:11:56.263042933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nhcrs,Uid:a2ced1b5-347f-4548-91ea-643a129232db,Namespace:kube-system,Attempt:0,} returns sandbox id \"35e38649658d14b88fa1f7c20621cf90c4df8ca3ca6d7adff4106a1ea21dda3e\"" Oct 8 20:11:56.267471 containerd[1496]: time="2024-10-08T20:11:56.267355685Z" level=info msg="CreateContainer within sandbox \"35e38649658d14b88fa1f7c20621cf90c4df8ca3ca6d7adff4106a1ea21dda3e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 8 20:11:56.285081 containerd[1496]: time="2024-10-08T20:11:56.285034748Z" level=info msg="CreateContainer within sandbox \"35e38649658d14b88fa1f7c20621cf90c4df8ca3ca6d7adff4106a1ea21dda3e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"df1986dafd20784451a601b4496069e36883222fa2acc4fc4d6c7cf0a1e036c3\"" Oct 8 20:11:56.286339 containerd[1496]: time="2024-10-08T20:11:56.285542388Z" level=info msg="StartContainer for \"df1986dafd20784451a601b4496069e36883222fa2acc4fc4d6c7cf0a1e036c3\"" Oct 8 20:11:56.324003 systemd[1]: Started cri-containerd-df1986dafd20784451a601b4496069e36883222fa2acc4fc4d6c7cf0a1e036c3.scope - libcontainer container df1986dafd20784451a601b4496069e36883222fa2acc4fc4d6c7cf0a1e036c3. 
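The RunPodSandbox / CreateContainer / StartContainer sequence above is the kubelet driving containerd over the CRI gRPC API for the kube-proxy pod. A small sketch that lists the resulting sandboxes over the same API (the socket path is an assumption matching the default containerd endpoint; crictl pods gives the equivalent view from the command line):

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumed containerd CRI endpoint (the kubelet's --container-runtime-endpoint).
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        // Lists the sandboxes created by the RunPodSandbox calls logged above.
        resp, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, sb := range resp.Items {
            log.Printf("%s %s/%s %s", sb.Id, sb.Metadata.Namespace, sb.Metadata.Name, sb.State)
        }
    }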
Oct 8 20:11:56.358017 containerd[1496]: time="2024-10-08T20:11:56.357978277Z" level=info msg="StartContainer for \"df1986dafd20784451a601b4496069e36883222fa2acc4fc4d6c7cf0a1e036c3\" returns successfully" Oct 8 20:11:56.684258 kubelet[2989]: I1008 20:11:56.684186 2989 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nhcrs" podStartSLOduration=1.684166901 podStartE2EDuration="1.684166901s" podCreationTimestamp="2024-10-08 20:11:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:11:56.683986553 +0000 UTC m=+16.212100544" watchObservedRunningTime="2024-10-08 20:11:56.684166901 +0000 UTC m=+16.212280891" Oct 8 20:12:00.615099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2708062700.mount: Deactivated successfully. Oct 8 20:12:02.398019 containerd[1496]: time="2024-10-08T20:12:02.397944580Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:12:02.399645 containerd[1496]: time="2024-10-08T20:12:02.399609119Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735379" Oct 8 20:12:02.401108 containerd[1496]: time="2024-10-08T20:12:02.401067793Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:12:02.402916 containerd[1496]: time="2024-10-08T20:12:02.402880230Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.644785784s" Oct 8 20:12:02.402965 containerd[1496]: time="2024-10-08T20:12:02.402919503Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Oct 8 20:12:02.404590 containerd[1496]: time="2024-10-08T20:12:02.404553526Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 8 20:12:02.405715 containerd[1496]: time="2024-10-08T20:12:02.405653256Z" level=info msg="CreateContainer within sandbox \"2a2221c792abfc0dda9cdded8ee9073096d54c18b16de006ef9a23a96bef3935\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 8 20:12:02.488876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount410970000.mount: Deactivated successfully. 
Oct 8 20:12:02.492103 containerd[1496]: time="2024-10-08T20:12:02.492062686Z" level=info msg="CreateContainer within sandbox \"2a2221c792abfc0dda9cdded8ee9073096d54c18b16de006ef9a23a96bef3935\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d63c97050bfd756ffd5d4c06ceba676bd2fe7dae02de498d63f40444c14d019e\"" Oct 8 20:12:02.493938 containerd[1496]: time="2024-10-08T20:12:02.492950790Z" level=info msg="StartContainer for \"d63c97050bfd756ffd5d4c06ceba676bd2fe7dae02de498d63f40444c14d019e\"" Oct 8 20:12:02.641014 systemd[1]: Started cri-containerd-d63c97050bfd756ffd5d4c06ceba676bd2fe7dae02de498d63f40444c14d019e.scope - libcontainer container d63c97050bfd756ffd5d4c06ceba676bd2fe7dae02de498d63f40444c14d019e. Oct 8 20:12:02.676341 containerd[1496]: time="2024-10-08T20:12:02.676228118Z" level=info msg="StartContainer for \"d63c97050bfd756ffd5d4c06ceba676bd2fe7dae02de498d63f40444c14d019e\" returns successfully" Oct 8 20:12:02.689347 systemd[1]: cri-containerd-d63c97050bfd756ffd5d4c06ceba676bd2fe7dae02de498d63f40444c14d019e.scope: Deactivated successfully. Oct 8 20:12:02.775681 containerd[1496]: time="2024-10-08T20:12:02.758967188Z" level=info msg="shim disconnected" id=d63c97050bfd756ffd5d4c06ceba676bd2fe7dae02de498d63f40444c14d019e namespace=k8s.io Oct 8 20:12:02.775681 containerd[1496]: time="2024-10-08T20:12:02.775663038Z" level=warning msg="cleaning up after shim disconnected" id=d63c97050bfd756ffd5d4c06ceba676bd2fe7dae02de498d63f40444c14d019e namespace=k8s.io Oct 8 20:12:02.775681 containerd[1496]: time="2024-10-08T20:12:02.775679629Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:12:03.482645 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d63c97050bfd756ffd5d4c06ceba676bd2fe7dae02de498d63f40444c14d019e-rootfs.mount: Deactivated successfully. Oct 8 20:12:03.708197 containerd[1496]: time="2024-10-08T20:12:03.708007798Z" level=info msg="CreateContainer within sandbox \"2a2221c792abfc0dda9cdded8ee9073096d54c18b16de006ef9a23a96bef3935\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 8 20:12:03.735155 containerd[1496]: time="2024-10-08T20:12:03.735025532Z" level=info msg="CreateContainer within sandbox \"2a2221c792abfc0dda9cdded8ee9073096d54c18b16de006ef9a23a96bef3935\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"58a56a16035ee72fde735de6739912670d096222e8fc302cea2797226dd15733\"" Oct 8 20:12:03.735909 containerd[1496]: time="2024-10-08T20:12:03.735861529Z" level=info msg="StartContainer for \"58a56a16035ee72fde735de6739912670d096222e8fc302cea2797226dd15733\"" Oct 8 20:12:03.773247 systemd[1]: Started cri-containerd-58a56a16035ee72fde735de6739912670d096222e8fc302cea2797226dd15733.scope - libcontainer container 58a56a16035ee72fde735de6739912670d096222e8fc302cea2797226dd15733. Oct 8 20:12:03.821659 containerd[1496]: time="2024-10-08T20:12:03.821221681Z" level=info msg="StartContainer for \"58a56a16035ee72fde735de6739912670d096222e8fc302cea2797226dd15733\" returns successfully" Oct 8 20:12:03.840075 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 8 20:12:03.840331 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:12:03.840399 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Oct 8 20:12:03.848154 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Oct 8 20:12:03.850492 systemd[1]: cri-containerd-58a56a16035ee72fde735de6739912670d096222e8fc302cea2797226dd15733.scope: Deactivated successfully. Oct 8 20:12:03.897523 containerd[1496]: time="2024-10-08T20:12:03.897337881Z" level=info msg="shim disconnected" id=58a56a16035ee72fde735de6739912670d096222e8fc302cea2797226dd15733 namespace=k8s.io Oct 8 20:12:03.897523 containerd[1496]: time="2024-10-08T20:12:03.897391832Z" level=warning msg="cleaning up after shim disconnected" id=58a56a16035ee72fde735de6739912670d096222e8fc302cea2797226dd15733 namespace=k8s.io Oct 8 20:12:03.897523 containerd[1496]: time="2024-10-08T20:12:03.897400007Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:12:03.904526 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:12:03.912422 containerd[1496]: time="2024-10-08T20:12:03.912335477Z" level=warning msg="cleanup warnings time=\"2024-10-08T20:12:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 8 20:12:04.258184 containerd[1496]: time="2024-10-08T20:12:04.258122677Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:12:04.259086 containerd[1496]: time="2024-10-08T20:12:04.259022173Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907229" Oct 8 20:12:04.260160 containerd[1496]: time="2024-10-08T20:12:04.260116634Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:12:04.261859 containerd[1496]: time="2024-10-08T20:12:04.261365385Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.856780129s" Oct 8 20:12:04.261859 containerd[1496]: time="2024-10-08T20:12:04.261425778Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Oct 8 20:12:04.264176 containerd[1496]: time="2024-10-08T20:12:04.264144352Z" level=info msg="CreateContainer within sandbox \"033415dc293951089fb29de4585e13aa99255c599f7d41d7578ac39ecbd158d6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 8 20:12:04.308611 containerd[1496]: time="2024-10-08T20:12:04.308548530Z" level=info msg="CreateContainer within sandbox \"033415dc293951089fb29de4585e13aa99255c599f7d41d7578ac39ecbd158d6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8be6fbe4a38084217285c7ad19fc1bcb19f0f2662f119b420f4e51768abf4c4d\"" Oct 8 20:12:04.310136 containerd[1496]: time="2024-10-08T20:12:04.309487801Z" level=info msg="StartContainer for \"8be6fbe4a38084217285c7ad19fc1bcb19f0f2662f119b420f4e51768abf4c4d\"" Oct 8 20:12:04.340157 systemd[1]: Started 
cri-containerd-8be6fbe4a38084217285c7ad19fc1bcb19f0f2662f119b420f4e51768abf4c4d.scope - libcontainer container 8be6fbe4a38084217285c7ad19fc1bcb19f0f2662f119b420f4e51768abf4c4d. Oct 8 20:12:04.369069 containerd[1496]: time="2024-10-08T20:12:04.369005493Z" level=info msg="StartContainer for \"8be6fbe4a38084217285c7ad19fc1bcb19f0f2662f119b420f4e51768abf4c4d\" returns successfully" Oct 8 20:12:04.487957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58a56a16035ee72fde735de6739912670d096222e8fc302cea2797226dd15733-rootfs.mount: Deactivated successfully. Oct 8 20:12:04.711794 containerd[1496]: time="2024-10-08T20:12:04.711744370Z" level=info msg="CreateContainer within sandbox \"2a2221c792abfc0dda9cdded8ee9073096d54c18b16de006ef9a23a96bef3935\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 8 20:12:04.740868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3981307158.mount: Deactivated successfully. Oct 8 20:12:04.749793 containerd[1496]: time="2024-10-08T20:12:04.749612097Z" level=info msg="CreateContainer within sandbox \"2a2221c792abfc0dda9cdded8ee9073096d54c18b16de006ef9a23a96bef3935\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"11e40afb4ede528cf4525528933186d84ee909dec0b31c0639c4d08941e8287d\"" Oct 8 20:12:04.751211 containerd[1496]: time="2024-10-08T20:12:04.751081852Z" level=info msg="StartContainer for \"11e40afb4ede528cf4525528933186d84ee909dec0b31c0639c4d08941e8287d\"" Oct 8 20:12:04.815013 systemd[1]: Started cri-containerd-11e40afb4ede528cf4525528933186d84ee909dec0b31c0639c4d08941e8287d.scope - libcontainer container 11e40afb4ede528cf4525528933186d84ee909dec0b31c0639c4d08941e8287d. Oct 8 20:12:04.817300 kubelet[2989]: I1008 20:12:04.809208 2989 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-csbd2" podStartSLOduration=1.720316365 podStartE2EDuration="9.809184032s" podCreationTimestamp="2024-10-08 20:11:55 +0000 UTC" firstStartedPulling="2024-10-08 20:11:56.173471261 +0000 UTC m=+15.701585252" lastFinishedPulling="2024-10-08 20:12:04.262338929 +0000 UTC m=+23.790452919" observedRunningTime="2024-10-08 20:12:04.730010823 +0000 UTC m=+24.258124813" watchObservedRunningTime="2024-10-08 20:12:04.809184032 +0000 UTC m=+24.337298021" Oct 8 20:12:04.905785 containerd[1496]: time="2024-10-08T20:12:04.905669114Z" level=info msg="StartContainer for \"11e40afb4ede528cf4525528933186d84ee909dec0b31c0639c4d08941e8287d\" returns successfully" Oct 8 20:12:04.937117 systemd[1]: cri-containerd-11e40afb4ede528cf4525528933186d84ee909dec0b31c0639c4d08941e8287d.scope: Deactivated successfully. Oct 8 20:12:05.016293 containerd[1496]: time="2024-10-08T20:12:05.016124780Z" level=info msg="shim disconnected" id=11e40afb4ede528cf4525528933186d84ee909dec0b31c0639c4d08941e8287d namespace=k8s.io Oct 8 20:12:05.016293 containerd[1496]: time="2024-10-08T20:12:05.016210800Z" level=warning msg="cleaning up after shim disconnected" id=11e40afb4ede528cf4525528933186d84ee909dec0b31c0639c4d08941e8287d namespace=k8s.io Oct 8 20:12:05.016293 containerd[1496]: time="2024-10-08T20:12:05.016251246Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:12:05.486520 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11e40afb4ede528cf4525528933186d84ee909dec0b31c0639c4d08941e8287d-rootfs.mount: Deactivated successfully. 
Oct 8 20:12:05.716033 containerd[1496]: time="2024-10-08T20:12:05.715991294Z" level=info msg="CreateContainer within sandbox \"2a2221c792abfc0dda9cdded8ee9073096d54c18b16de006ef9a23a96bef3935\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 8 20:12:05.747625 containerd[1496]: time="2024-10-08T20:12:05.747512899Z" level=info msg="CreateContainer within sandbox \"2a2221c792abfc0dda9cdded8ee9073096d54c18b16de006ef9a23a96bef3935\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f80c90893a8f5eb907e01005c5be4b3318a565eb661f661859c43ebb4946d769\"" Oct 8 20:12:05.748569 containerd[1496]: time="2024-10-08T20:12:05.748546165Z" level=info msg="StartContainer for \"f80c90893a8f5eb907e01005c5be4b3318a565eb661f661859c43ebb4946d769\"" Oct 8 20:12:05.786964 systemd[1]: Started cri-containerd-f80c90893a8f5eb907e01005c5be4b3318a565eb661f661859c43ebb4946d769.scope - libcontainer container f80c90893a8f5eb907e01005c5be4b3318a565eb661f661859c43ebb4946d769. Oct 8 20:12:05.823070 systemd[1]: cri-containerd-f80c90893a8f5eb907e01005c5be4b3318a565eb661f661859c43ebb4946d769.scope: Deactivated successfully. Oct 8 20:12:05.823870 containerd[1496]: time="2024-10-08T20:12:05.823799628Z" level=info msg="StartContainer for \"f80c90893a8f5eb907e01005c5be4b3318a565eb661f661859c43ebb4946d769\" returns successfully" Oct 8 20:12:05.857878 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f80c90893a8f5eb907e01005c5be4b3318a565eb661f661859c43ebb4946d769-rootfs.mount: Deactivated successfully. Oct 8 20:12:05.869029 containerd[1496]: time="2024-10-08T20:12:05.868946518Z" level=info msg="shim disconnected" id=f80c90893a8f5eb907e01005c5be4b3318a565eb661f661859c43ebb4946d769 namespace=k8s.io Oct 8 20:12:05.869315 containerd[1496]: time="2024-10-08T20:12:05.869002413Z" level=warning msg="cleaning up after shim disconnected" id=f80c90893a8f5eb907e01005c5be4b3318a565eb661f661859c43ebb4946d769 namespace=k8s.io Oct 8 20:12:05.869315 containerd[1496]: time="2024-10-08T20:12:05.869118521Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:12:06.720946 containerd[1496]: time="2024-10-08T20:12:06.720082471Z" level=info msg="CreateContainer within sandbox \"2a2221c792abfc0dda9cdded8ee9073096d54c18b16de006ef9a23a96bef3935\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 8 20:12:06.751995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4089868735.mount: Deactivated successfully. Oct 8 20:12:06.762137 containerd[1496]: time="2024-10-08T20:12:06.761791950Z" level=info msg="CreateContainer within sandbox \"2a2221c792abfc0dda9cdded8ee9073096d54c18b16de006ef9a23a96bef3935\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"acf4e5d45f4eb2bc6d532c2b4a1a0f86654662f64cb9aa3e5ccd1b4b090801a1\"" Oct 8 20:12:06.765358 containerd[1496]: time="2024-10-08T20:12:06.765069542Z" level=info msg="StartContainer for \"acf4e5d45f4eb2bc6d532c2b4a1a0f86654662f64cb9aa3e5ccd1b4b090801a1\"" Oct 8 20:12:06.838021 systemd[1]: Started cri-containerd-acf4e5d45f4eb2bc6d532c2b4a1a0f86654662f64cb9aa3e5ccd1b4b090801a1.scope - libcontainer container acf4e5d45f4eb2bc6d532c2b4a1a0f86654662f64cb9aa3e5ccd1b4b090801a1. 
Oct 8 20:12:06.880123 containerd[1496]: time="2024-10-08T20:12:06.880013750Z" level=info msg="StartContainer for \"acf4e5d45f4eb2bc6d532c2b4a1a0f86654662f64cb9aa3e5ccd1b4b090801a1\" returns successfully" Oct 8 20:12:07.068429 kubelet[2989]: I1008 20:12:07.067484 2989 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 8 20:12:07.093357 kubelet[2989]: I1008 20:12:07.093317 2989 topology_manager.go:215] "Topology Admit Handler" podUID="3c3d22c4-3d9b-4ec8-975e-23067e589195" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gmf9b" Oct 8 20:12:07.097230 kubelet[2989]: I1008 20:12:07.097202 2989 topology_manager.go:215] "Topology Admit Handler" podUID="c7994a9d-5653-4181-ae3c-14c6494ef44c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8z4xs" Oct 8 20:12:07.105364 systemd[1]: Created slice kubepods-burstable-pod3c3d22c4_3d9b_4ec8_975e_23067e589195.slice - libcontainer container kubepods-burstable-pod3c3d22c4_3d9b_4ec8_975e_23067e589195.slice. Oct 8 20:12:07.113779 kubelet[2989]: W1008 20:12:07.113753 2989 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081-1-0-7-2461ba8d61" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-1-0-7-2461ba8d61' and this object Oct 8 20:12:07.114540 kubelet[2989]: E1008 20:12:07.114518 2989 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081-1-0-7-2461ba8d61" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-1-0-7-2461ba8d61' and this object Oct 8 20:12:07.119310 systemd[1]: Created slice kubepods-burstable-podc7994a9d_5653_4181_ae3c_14c6494ef44c.slice - libcontainer container kubepods-burstable-podc7994a9d_5653_4181_ae3c_14c6494ef44c.slice. 
Oct 8 20:12:07.272568 kubelet[2989]: I1008 20:12:07.272495 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c3d22c4-3d9b-4ec8-975e-23067e589195-config-volume\") pod \"coredns-7db6d8ff4d-gmf9b\" (UID: \"3c3d22c4-3d9b-4ec8-975e-23067e589195\") " pod="kube-system/coredns-7db6d8ff4d-gmf9b" Oct 8 20:12:07.272755 kubelet[2989]: I1008 20:12:07.272578 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k96b\" (UniqueName: \"kubernetes.io/projected/c7994a9d-5653-4181-ae3c-14c6494ef44c-kube-api-access-2k96b\") pod \"coredns-7db6d8ff4d-8z4xs\" (UID: \"c7994a9d-5653-4181-ae3c-14c6494ef44c\") " pod="kube-system/coredns-7db6d8ff4d-8z4xs" Oct 8 20:12:07.272755 kubelet[2989]: I1008 20:12:07.272655 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpnmz\" (UniqueName: \"kubernetes.io/projected/3c3d22c4-3d9b-4ec8-975e-23067e589195-kube-api-access-qpnmz\") pod \"coredns-7db6d8ff4d-gmf9b\" (UID: \"3c3d22c4-3d9b-4ec8-975e-23067e589195\") " pod="kube-system/coredns-7db6d8ff4d-gmf9b" Oct 8 20:12:07.272755 kubelet[2989]: I1008 20:12:07.272711 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7994a9d-5653-4181-ae3c-14c6494ef44c-config-volume\") pod \"coredns-7db6d8ff4d-8z4xs\" (UID: \"c7994a9d-5653-4181-ae3c-14c6494ef44c\") " pod="kube-system/coredns-7db6d8ff4d-8z4xs" Oct 8 20:12:08.374438 kubelet[2989]: E1008 20:12:08.374339 2989 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Oct 8 20:12:08.374438 kubelet[2989]: E1008 20:12:08.374366 2989 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Oct 8 20:12:08.374438 kubelet[2989]: E1008 20:12:08.374442 2989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3c3d22c4-3d9b-4ec8-975e-23067e589195-config-volume podName:3c3d22c4-3d9b-4ec8-975e-23067e589195 nodeName:}" failed. No retries permitted until 2024-10-08 20:12:08.874422891 +0000 UTC m=+28.402536881 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3c3d22c4-3d9b-4ec8-975e-23067e589195-config-volume") pod "coredns-7db6d8ff4d-gmf9b" (UID: "3c3d22c4-3d9b-4ec8-975e-23067e589195") : failed to sync configmap cache: timed out waiting for the condition Oct 8 20:12:08.374438 kubelet[2989]: E1008 20:12:08.374456 2989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c7994a9d-5653-4181-ae3c-14c6494ef44c-config-volume podName:c7994a9d-5653-4181-ae3c-14c6494ef44c nodeName:}" failed. No retries permitted until 2024-10-08 20:12:08.874450122 +0000 UTC m=+28.402564112 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c7994a9d-5653-4181-ae3c-14c6494ef44c-config-volume") pod "coredns-7db6d8ff4d-8z4xs" (UID: "c7994a9d-5653-4181-ae3c-14c6494ef44c") : failed to sync configmap cache: timed out waiting for the condition Oct 8 20:12:08.915400 containerd[1496]: time="2024-10-08T20:12:08.915341900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gmf9b,Uid:3c3d22c4-3d9b-4ec8-975e-23067e589195,Namespace:kube-system,Attempt:0,}" Oct 8 20:12:08.933899 containerd[1496]: time="2024-10-08T20:12:08.932781023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8z4xs,Uid:c7994a9d-5653-4181-ae3c-14c6494ef44c,Namespace:kube-system,Attempt:0,}" Oct 8 20:12:09.333853 systemd-networkd[1395]: cilium_host: Link UP Oct 8 20:12:09.334288 systemd-networkd[1395]: cilium_net: Link UP Oct 8 20:12:09.334295 systemd-networkd[1395]: cilium_net: Gained carrier Oct 8 20:12:09.334698 systemd-networkd[1395]: cilium_host: Gained carrier Oct 8 20:12:09.335188 systemd-networkd[1395]: cilium_host: Gained IPv6LL Oct 8 20:12:09.476100 systemd-networkd[1395]: cilium_vxlan: Link UP Oct 8 20:12:09.476116 systemd-networkd[1395]: cilium_vxlan: Gained carrier Oct 8 20:12:09.722988 systemd-networkd[1395]: cilium_net: Gained IPv6LL Oct 8 20:12:09.921139 kernel: NET: Registered PF_ALG protocol family Oct 8 20:12:10.656731 systemd-networkd[1395]: lxc_health: Link UP Oct 8 20:12:10.672583 systemd-networkd[1395]: lxc_health: Gained carrier Oct 8 20:12:11.014747 systemd-networkd[1395]: lxc04f5528467f8: Link UP Oct 8 20:12:11.021952 kernel: eth0: renamed from tmpf90cf Oct 8 20:12:11.032929 systemd-networkd[1395]: lxc04f5528467f8: Gained carrier Oct 8 20:12:11.033415 systemd-networkd[1395]: lxc1553f8cc4dfa: Link UP Oct 8 20:12:11.037196 kernel: eth0: renamed from tmpc87a1 Oct 8 20:12:11.041234 systemd-networkd[1395]: lxc1553f8cc4dfa: Gained carrier Oct 8 20:12:11.257016 systemd-networkd[1395]: cilium_vxlan: Gained IPv6LL Oct 8 20:12:11.629014 kubelet[2989]: I1008 20:12:11.628953 2989 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gfmrp" podStartSLOduration=9.981872473 podStartE2EDuration="16.628818986s" podCreationTimestamp="2024-10-08 20:11:55 +0000 UTC" firstStartedPulling="2024-10-08 20:11:55.757472792 +0000 UTC m=+15.285586781" lastFinishedPulling="2024-10-08 20:12:02.404419304 +0000 UTC m=+21.932533294" observedRunningTime="2024-10-08 20:12:07.741216875 +0000 UTC m=+27.269330866" watchObservedRunningTime="2024-10-08 20:12:11.628818986 +0000 UTC m=+31.156932977" Oct 8 20:12:12.153095 systemd-networkd[1395]: lxc_health: Gained IPv6LL Oct 8 20:12:12.217072 systemd-networkd[1395]: lxc1553f8cc4dfa: Gained IPv6LL Oct 8 20:12:12.922133 systemd-networkd[1395]: lxc04f5528467f8: Gained IPv6LL Oct 8 20:12:14.793322 containerd[1496]: time="2024-10-08T20:12:14.793018374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:12:14.793322 containerd[1496]: time="2024-10-08T20:12:14.793061775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:12:14.793322 containerd[1496]: time="2024-10-08T20:12:14.793083426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:12:14.796523 containerd[1496]: time="2024-10-08T20:12:14.796013327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:12:14.836878 containerd[1496]: time="2024-10-08T20:12:14.829904804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:12:14.836878 containerd[1496]: time="2024-10-08T20:12:14.830746111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:12:14.836878 containerd[1496]: time="2024-10-08T20:12:14.830764576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:12:14.840026 containerd[1496]: time="2024-10-08T20:12:14.830898568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:12:14.869135 systemd[1]: Started cri-containerd-c87a161f812ade7e0974ea46f1865a031ebabf4452677828c7d451baf2b5b3a9.scope - libcontainer container c87a161f812ade7e0974ea46f1865a031ebabf4452677828c7d451baf2b5b3a9. Oct 8 20:12:14.889016 systemd[1]: run-containerd-runc-k8s.io-f90cf7bc37b7f4ac2b22d6bb9be495942c3b1358db4cc797e6151a853aa65d8a-runc.ioERDe.mount: Deactivated successfully. Oct 8 20:12:14.904415 systemd[1]: Started cri-containerd-f90cf7bc37b7f4ac2b22d6bb9be495942c3b1358db4cc797e6151a853aa65d8a.scope - libcontainer container f90cf7bc37b7f4ac2b22d6bb9be495942c3b1358db4cc797e6151a853aa65d8a. Oct 8 20:12:14.971265 containerd[1496]: time="2024-10-08T20:12:14.971223859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8z4xs,Uid:c7994a9d-5653-4181-ae3c-14c6494ef44c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c87a161f812ade7e0974ea46f1865a031ebabf4452677828c7d451baf2b5b3a9\"" Oct 8 20:12:14.977091 containerd[1496]: time="2024-10-08T20:12:14.976957794Z" level=info msg="CreateContainer within sandbox \"c87a161f812ade7e0974ea46f1865a031ebabf4452677828c7d451baf2b5b3a9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 20:12:15.011719 containerd[1496]: time="2024-10-08T20:12:15.011570123Z" level=info msg="CreateContainer within sandbox \"c87a161f812ade7e0974ea46f1865a031ebabf4452677828c7d451baf2b5b3a9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f816654baf4b5233e43ad696bf1e4c2c20b0f7cb3cb01657514ef2747d4685c1\"" Oct 8 20:12:15.013225 containerd[1496]: time="2024-10-08T20:12:15.012428612Z" level=info msg="StartContainer for \"f816654baf4b5233e43ad696bf1e4c2c20b0f7cb3cb01657514ef2747d4685c1\"" Oct 8 20:12:15.033354 containerd[1496]: time="2024-10-08T20:12:15.033298706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gmf9b,Uid:3c3d22c4-3d9b-4ec8-975e-23067e589195,Namespace:kube-system,Attempt:0,} returns sandbox id \"f90cf7bc37b7f4ac2b22d6bb9be495942c3b1358db4cc797e6151a853aa65d8a\"" Oct 8 20:12:15.040734 containerd[1496]: time="2024-10-08T20:12:15.040557610Z" level=info msg="CreateContainer within sandbox \"f90cf7bc37b7f4ac2b22d6bb9be495942c3b1358db4cc797e6151a853aa65d8a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 20:12:15.054238 containerd[1496]: time="2024-10-08T20:12:15.054064163Z" level=info msg="CreateContainer within sandbox 
\"f90cf7bc37b7f4ac2b22d6bb9be495942c3b1358db4cc797e6151a853aa65d8a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"960d35d78cc237ddbbe627c926bb745d4a0d80246466ea18b628a489367b6c16\"" Oct 8 20:12:15.056053 containerd[1496]: time="2024-10-08T20:12:15.056008819Z" level=info msg="StartContainer for \"960d35d78cc237ddbbe627c926bb745d4a0d80246466ea18b628a489367b6c16\"" Oct 8 20:12:15.075509 systemd[1]: Started cri-containerd-f816654baf4b5233e43ad696bf1e4c2c20b0f7cb3cb01657514ef2747d4685c1.scope - libcontainer container f816654baf4b5233e43ad696bf1e4c2c20b0f7cb3cb01657514ef2747d4685c1. Oct 8 20:12:15.091005 systemd[1]: Started cri-containerd-960d35d78cc237ddbbe627c926bb745d4a0d80246466ea18b628a489367b6c16.scope - libcontainer container 960d35d78cc237ddbbe627c926bb745d4a0d80246466ea18b628a489367b6c16. Oct 8 20:12:15.135451 containerd[1496]: time="2024-10-08T20:12:15.135316193Z" level=info msg="StartContainer for \"f816654baf4b5233e43ad696bf1e4c2c20b0f7cb3cb01657514ef2747d4685c1\" returns successfully" Oct 8 20:12:15.135451 containerd[1496]: time="2024-10-08T20:12:15.135355116Z" level=info msg="StartContainer for \"960d35d78cc237ddbbe627c926bb745d4a0d80246466ea18b628a489367b6c16\" returns successfully" Oct 8 20:12:15.771497 kubelet[2989]: I1008 20:12:15.771410 2989 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gmf9b" podStartSLOduration=20.771389125 podStartE2EDuration="20.771389125s" podCreationTimestamp="2024-10-08 20:11:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:12:15.769689139 +0000 UTC m=+35.297803149" watchObservedRunningTime="2024-10-08 20:12:15.771389125 +0000 UTC m=+35.299503135" Oct 8 20:12:15.827558 kubelet[2989]: I1008 20:12:15.826295 2989 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8z4xs" podStartSLOduration=20.826270421 podStartE2EDuration="20.826270421s" podCreationTimestamp="2024-10-08 20:11:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:12:15.823653617 +0000 UTC m=+35.351767627" watchObservedRunningTime="2024-10-08 20:12:15.826270421 +0000 UTC m=+35.354384432" Oct 8 20:12:22.630366 kubelet[2989]: I1008 20:12:22.622576 2989 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 20:13:20.127963 systemd[1]: Started sshd@7-157.90.145.6:22-80.64.30.138:38758.service - OpenSSH per-connection server daemon (80.64.30.138:38758). Oct 8 20:13:20.987425 sshd[4372]: Invalid user user from 80.64.30.138 port 38758 Oct 8 20:13:21.055456 sshd[4372]: Connection closed by invalid user user 80.64.30.138 port 38758 [preauth] Oct 8 20:13:21.061134 systemd[1]: sshd@7-157.90.145.6:22-80.64.30.138:38758.service: Deactivated successfully. Oct 8 20:15:45.839421 systemd[1]: Started sshd@8-157.90.145.6:22-213.6.203.226:54114.service - OpenSSH per-connection server daemon (213.6.203.226:54114). Oct 8 20:15:46.285553 sshd[4393]: Invalid user arjan from 213.6.203.226 port 54114 Oct 8 20:15:46.357330 sshd[4393]: Received disconnect from 213.6.203.226 port 54114:11: Bye Bye [preauth] Oct 8 20:15:46.357330 sshd[4393]: Disconnected from invalid user arjan 213.6.203.226 port 54114 [preauth] Oct 8 20:15:46.363079 systemd[1]: sshd@8-157.90.145.6:22-213.6.203.226:54114.service: Deactivated successfully. 
Oct 8 20:16:29.023444 systemd[1]: Started sshd@9-157.90.145.6:22-147.75.109.163:38418.service - OpenSSH per-connection server daemon (147.75.109.163:38418). Oct 8 20:16:30.008350 sshd[4402]: Accepted publickey for core from 147.75.109.163 port 38418 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:16:30.013151 sshd[4402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:16:30.023210 systemd-logind[1474]: New session 8 of user core. Oct 8 20:16:30.032146 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 8 20:16:31.190942 sshd[4402]: pam_unix(sshd:session): session closed for user core Oct 8 20:16:31.203137 systemd-logind[1474]: Session 8 logged out. Waiting for processes to exit. Oct 8 20:16:31.203992 systemd[1]: sshd@9-157.90.145.6:22-147.75.109.163:38418.service: Deactivated successfully. Oct 8 20:16:31.208892 systemd[1]: session-8.scope: Deactivated successfully. Oct 8 20:16:31.211906 systemd-logind[1474]: Removed session 8. Oct 8 20:16:36.375223 systemd[1]: Started sshd@10-157.90.145.6:22-147.75.109.163:38424.service - OpenSSH per-connection server daemon (147.75.109.163:38424). Oct 8 20:16:37.373588 sshd[4416]: Accepted publickey for core from 147.75.109.163 port 38424 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:16:37.377263 sshd[4416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:16:37.387891 systemd-logind[1474]: New session 9 of user core. Oct 8 20:16:37.393250 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 8 20:16:38.116455 sshd[4416]: pam_unix(sshd:session): session closed for user core Oct 8 20:16:38.128130 systemd[1]: sshd@10-157.90.145.6:22-147.75.109.163:38424.service: Deactivated successfully. Oct 8 20:16:38.133904 systemd[1]: session-9.scope: Deactivated successfully. Oct 8 20:16:38.135234 systemd-logind[1474]: Session 9 logged out. Waiting for processes to exit. Oct 8 20:16:38.136823 systemd-logind[1474]: Removed session 9. Oct 8 20:16:43.296964 systemd[1]: Started sshd@11-157.90.145.6:22-147.75.109.163:48768.service - OpenSSH per-connection server daemon (147.75.109.163:48768). Oct 8 20:16:44.275625 sshd[4432]: Accepted publickey for core from 147.75.109.163 port 48768 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:16:44.278419 sshd[4432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:16:44.284003 systemd-logind[1474]: New session 10 of user core. Oct 8 20:16:44.288997 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 8 20:16:45.056881 sshd[4432]: pam_unix(sshd:session): session closed for user core Oct 8 20:16:45.062463 systemd-logind[1474]: Session 10 logged out. Waiting for processes to exit. Oct 8 20:16:45.063098 systemd[1]: sshd@11-157.90.145.6:22-147.75.109.163:48768.service: Deactivated successfully. Oct 8 20:16:45.066199 systemd[1]: session-10.scope: Deactivated successfully. Oct 8 20:16:45.067460 systemd-logind[1474]: Removed session 10. Oct 8 20:16:45.226236 systemd[1]: Started sshd@12-157.90.145.6:22-147.75.109.163:48772.service - OpenSSH per-connection server daemon (147.75.109.163:48772). Oct 8 20:16:46.185523 sshd[4446]: Accepted publickey for core from 147.75.109.163 port 48772 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:16:46.188143 sshd[4446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:16:46.195185 systemd-logind[1474]: New session 11 of user core. 
Oct 8 20:16:46.202092 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 8 20:16:46.992566 sshd[4446]: pam_unix(sshd:session): session closed for user core Oct 8 20:16:46.998088 systemd[1]: sshd@12-157.90.145.6:22-147.75.109.163:48772.service: Deactivated successfully. Oct 8 20:16:47.002140 systemd[1]: session-11.scope: Deactivated successfully. Oct 8 20:16:47.005603 systemd-logind[1474]: Session 11 logged out. Waiting for processes to exit. Oct 8 20:16:47.006988 systemd-logind[1474]: Removed session 11. Oct 8 20:16:47.168971 systemd[1]: Started sshd@13-157.90.145.6:22-147.75.109.163:48776.service - OpenSSH per-connection server daemon (147.75.109.163:48776). Oct 8 20:16:48.190540 sshd[4457]: Accepted publickey for core from 147.75.109.163 port 48776 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:16:48.193994 sshd[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:16:48.203597 systemd-logind[1474]: New session 12 of user core. Oct 8 20:16:48.211148 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 8 20:16:48.947659 sshd[4457]: pam_unix(sshd:session): session closed for user core Oct 8 20:16:48.950853 systemd[1]: sshd@13-157.90.145.6:22-147.75.109.163:48776.service: Deactivated successfully. Oct 8 20:16:48.952768 systemd[1]: session-12.scope: Deactivated successfully. Oct 8 20:16:48.955261 systemd-logind[1474]: Session 12 logged out. Waiting for processes to exit. Oct 8 20:16:48.956431 systemd-logind[1474]: Removed session 12. Oct 8 20:16:54.116667 systemd[1]: Started sshd@14-157.90.145.6:22-147.75.109.163:38930.service - OpenSSH per-connection server daemon (147.75.109.163:38930). Oct 8 20:16:55.103427 sshd[4471]: Accepted publickey for core from 147.75.109.163 port 38930 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:16:55.105789 sshd[4471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:16:55.111949 systemd-logind[1474]: New session 13 of user core. Oct 8 20:16:55.119181 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 8 20:16:55.880747 sshd[4471]: pam_unix(sshd:session): session closed for user core Oct 8 20:16:55.889887 systemd[1]: sshd@14-157.90.145.6:22-147.75.109.163:38930.service: Deactivated successfully. Oct 8 20:16:55.894618 systemd[1]: session-13.scope: Deactivated successfully. Oct 8 20:16:55.896410 systemd-logind[1474]: Session 13 logged out. Waiting for processes to exit. Oct 8 20:16:55.898934 systemd-logind[1474]: Removed session 13. Oct 8 20:16:56.053272 systemd[1]: Started sshd@15-157.90.145.6:22-147.75.109.163:38938.service - OpenSSH per-connection server daemon (147.75.109.163:38938). Oct 8 20:16:57.040461 sshd[4484]: Accepted publickey for core from 147.75.109.163 port 38938 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:16:57.044145 sshd[4484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:16:57.054958 systemd-logind[1474]: New session 14 of user core. Oct 8 20:16:57.060396 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 8 20:16:58.024664 sshd[4484]: pam_unix(sshd:session): session closed for user core Oct 8 20:16:58.033807 systemd[1]: sshd@15-157.90.145.6:22-147.75.109.163:38938.service: Deactivated successfully. Oct 8 20:16:58.038268 systemd[1]: session-14.scope: Deactivated successfully. Oct 8 20:16:58.042275 systemd-logind[1474]: Session 14 logged out. Waiting for processes to exit. 
Oct 8 20:16:58.043584 systemd-logind[1474]: Removed session 14. Oct 8 20:16:58.201555 systemd[1]: Started sshd@16-157.90.145.6:22-147.75.109.163:46934.service - OpenSSH per-connection server daemon (147.75.109.163:46934). Oct 8 20:16:59.211640 sshd[4498]: Accepted publickey for core from 147.75.109.163 port 46934 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:16:59.215438 sshd[4498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:16:59.226041 systemd-logind[1474]: New session 15 of user core. Oct 8 20:16:59.233510 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 8 20:17:01.708553 sshd[4498]: pam_unix(sshd:session): session closed for user core Oct 8 20:17:01.711938 systemd[1]: sshd@16-157.90.145.6:22-147.75.109.163:46934.service: Deactivated successfully. Oct 8 20:17:01.714402 systemd[1]: session-15.scope: Deactivated successfully. Oct 8 20:17:01.716501 systemd-logind[1474]: Session 15 logged out. Waiting for processes to exit. Oct 8 20:17:01.718561 systemd-logind[1474]: Removed session 15. Oct 8 20:17:01.872179 systemd[1]: Started sshd@17-157.90.145.6:22-147.75.109.163:46948.service - OpenSSH per-connection server daemon (147.75.109.163:46948). Oct 8 20:17:02.844707 sshd[4517]: Accepted publickey for core from 147.75.109.163 port 46948 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:17:02.847348 sshd[4517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:17:02.853755 systemd-logind[1474]: New session 16 of user core. Oct 8 20:17:02.860130 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 8 20:17:03.853479 sshd[4517]: pam_unix(sshd:session): session closed for user core Oct 8 20:17:03.857975 systemd-logind[1474]: Session 16 logged out. Waiting for processes to exit. Oct 8 20:17:03.858610 systemd[1]: sshd@17-157.90.145.6:22-147.75.109.163:46948.service: Deactivated successfully. Oct 8 20:17:03.861272 systemd[1]: session-16.scope: Deactivated successfully. Oct 8 20:17:03.862244 systemd-logind[1474]: Removed session 16. Oct 8 20:17:04.031315 systemd[1]: Started sshd@18-157.90.145.6:22-147.75.109.163:46952.service - OpenSSH per-connection server daemon (147.75.109.163:46952). Oct 8 20:17:05.011334 sshd[4528]: Accepted publickey for core from 147.75.109.163 port 46952 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:17:05.014239 sshd[4528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:17:05.022482 systemd-logind[1474]: New session 17 of user core. Oct 8 20:17:05.028075 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 8 20:17:05.807056 sshd[4528]: pam_unix(sshd:session): session closed for user core Oct 8 20:17:05.814415 systemd[1]: sshd@18-157.90.145.6:22-147.75.109.163:46952.service: Deactivated successfully. Oct 8 20:17:05.820626 systemd[1]: session-17.scope: Deactivated successfully. Oct 8 20:17:05.822102 systemd-logind[1474]: Session 17 logged out. Waiting for processes to exit. Oct 8 20:17:05.823929 systemd-logind[1474]: Removed session 17. Oct 8 20:17:10.998623 systemd[1]: Started sshd@19-157.90.145.6:22-147.75.109.163:41630.service - OpenSSH per-connection server daemon (147.75.109.163:41630). 
Oct 8 20:17:12.024052 sshd[4544]: Accepted publickey for core from 147.75.109.163 port 41630 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:17:12.027394 sshd[4544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:17:12.041736 systemd-logind[1474]: New session 18 of user core. Oct 8 20:17:12.046620 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 8 20:17:12.834226 sshd[4544]: pam_unix(sshd:session): session closed for user core Oct 8 20:17:12.840742 systemd[1]: sshd@19-157.90.145.6:22-147.75.109.163:41630.service: Deactivated successfully. Oct 8 20:17:12.847164 systemd[1]: session-18.scope: Deactivated successfully. Oct 8 20:17:12.851324 systemd-logind[1474]: Session 18 logged out. Waiting for processes to exit. Oct 8 20:17:12.854027 systemd-logind[1474]: Removed session 18. Oct 8 20:17:18.015025 systemd[1]: Started sshd@20-157.90.145.6:22-147.75.109.163:60416.service - OpenSSH per-connection server daemon (147.75.109.163:60416). Oct 8 20:17:19.012068 sshd[4559]: Accepted publickey for core from 147.75.109.163 port 60416 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:17:19.014252 sshd[4559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:17:19.020483 systemd-logind[1474]: New session 19 of user core. Oct 8 20:17:19.027085 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 8 20:17:19.814342 sshd[4559]: pam_unix(sshd:session): session closed for user core Oct 8 20:17:19.819386 systemd-logind[1474]: Session 19 logged out. Waiting for processes to exit. Oct 8 20:17:19.820205 systemd[1]: sshd@20-157.90.145.6:22-147.75.109.163:60416.service: Deactivated successfully. Oct 8 20:17:19.822450 systemd[1]: session-19.scope: Deactivated successfully. Oct 8 20:17:19.823355 systemd-logind[1474]: Removed session 19. Oct 8 20:17:19.999549 systemd[1]: Started sshd@21-157.90.145.6:22-147.75.109.163:60432.service - OpenSSH per-connection server daemon (147.75.109.163:60432). Oct 8 20:17:20.995368 sshd[4575]: Accepted publickey for core from 147.75.109.163 port 60432 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:17:20.997575 sshd[4575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:17:21.002752 systemd-logind[1474]: New session 20 of user core. Oct 8 20:17:21.012064 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 8 20:17:22.863768 containerd[1496]: time="2024-10-08T20:17:22.863634012Z" level=info msg="StopContainer for \"8be6fbe4a38084217285c7ad19fc1bcb19f0f2662f119b420f4e51768abf4c4d\" with timeout 30 (s)" Oct 8 20:17:22.865391 containerd[1496]: time="2024-10-08T20:17:22.864114452Z" level=info msg="Stop container \"8be6fbe4a38084217285c7ad19fc1bcb19f0f2662f119b420f4e51768abf4c4d\" with signal terminated" Oct 8 20:17:22.973058 systemd[1]: cri-containerd-8be6fbe4a38084217285c7ad19fc1bcb19f0f2662f119b420f4e51768abf4c4d.scope: Deactivated successfully. 
Oct 8 20:17:22.987116 containerd[1496]: time="2024-10-08T20:17:22.987055631Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 20:17:23.000387 containerd[1496]: time="2024-10-08T20:17:23.000244032Z" level=info msg="StopContainer for \"acf4e5d45f4eb2bc6d532c2b4a1a0f86654662f64cb9aa3e5ccd1b4b090801a1\" with timeout 2 (s)" Oct 8 20:17:23.000579 containerd[1496]: time="2024-10-08T20:17:23.000549996Z" level=info msg="Stop container \"acf4e5d45f4eb2bc6d532c2b4a1a0f86654662f64cb9aa3e5ccd1b4b090801a1\" with signal terminated" Oct 8 20:17:23.007430 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8be6fbe4a38084217285c7ad19fc1bcb19f0f2662f119b420f4e51768abf4c4d-rootfs.mount: Deactivated successfully. Oct 8 20:17:23.013942 systemd-networkd[1395]: lxc_health: Link DOWN Oct 8 20:17:23.013953 systemd-networkd[1395]: lxc_health: Lost carrier Oct 8 20:17:23.047218 systemd[1]: cri-containerd-acf4e5d45f4eb2bc6d532c2b4a1a0f86654662f64cb9aa3e5ccd1b4b090801a1.scope: Deactivated successfully. Oct 8 20:17:23.047470 systemd[1]: cri-containerd-acf4e5d45f4eb2bc6d532c2b4a1a0f86654662f64cb9aa3e5ccd1b4b090801a1.scope: Consumed 8.557s CPU time. Oct 8 20:17:23.064995 containerd[1496]: time="2024-10-08T20:17:23.064815369Z" level=info msg="shim disconnected" id=8be6fbe4a38084217285c7ad19fc1bcb19f0f2662f119b420f4e51768abf4c4d namespace=k8s.io Oct 8 20:17:23.064995 containerd[1496]: time="2024-10-08T20:17:23.064979287Z" level=warning msg="cleaning up after shim disconnected" id=8be6fbe4a38084217285c7ad19fc1bcb19f0f2662f119b420f4e51768abf4c4d namespace=k8s.io Oct 8 20:17:23.064995 containerd[1496]: time="2024-10-08T20:17:23.064988434Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:17:23.071516 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-acf4e5d45f4eb2bc6d532c2b4a1a0f86654662f64cb9aa3e5ccd1b4b090801a1-rootfs.mount: Deactivated successfully. Oct 8 20:17:23.080947 containerd[1496]: time="2024-10-08T20:17:23.080895350Z" level=info msg="shim disconnected" id=acf4e5d45f4eb2bc6d532c2b4a1a0f86654662f64cb9aa3e5ccd1b4b090801a1 namespace=k8s.io Oct 8 20:17:23.081443 containerd[1496]: time="2024-10-08T20:17:23.081290703Z" level=warning msg="cleaning up after shim disconnected" id=acf4e5d45f4eb2bc6d532c2b4a1a0f86654662f64cb9aa3e5ccd1b4b090801a1 namespace=k8s.io Oct 8 20:17:23.081443 containerd[1496]: time="2024-10-08T20:17:23.081309187Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:17:23.087464 containerd[1496]: time="2024-10-08T20:17:23.084735740Z" level=info msg="StopContainer for \"8be6fbe4a38084217285c7ad19fc1bcb19f0f2662f119b420f4e51768abf4c4d\" returns successfully" Oct 8 20:17:23.087795 containerd[1496]: time="2024-10-08T20:17:23.087771289Z" level=info msg="StopPodSandbox for \"033415dc293951089fb29de4585e13aa99255c599f7d41d7578ac39ecbd158d6\"" Oct 8 20:17:23.087860 containerd[1496]: time="2024-10-08T20:17:23.087807006Z" level=info msg="Container to stop \"8be6fbe4a38084217285c7ad19fc1bcb19f0f2662f119b420f4e51768abf4c4d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:17:23.091806 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-033415dc293951089fb29de4585e13aa99255c599f7d41d7578ac39ecbd158d6-shm.mount: Deactivated successfully. 
Oct 8 20:17:23.100860 systemd[1]: cri-containerd-033415dc293951089fb29de4585e13aa99255c599f7d41d7578ac39ecbd158d6.scope: Deactivated successfully. Oct 8 20:17:23.120854 containerd[1496]: time="2024-10-08T20:17:23.120162035Z" level=info msg="StopContainer for \"acf4e5d45f4eb2bc6d532c2b4a1a0f86654662f64cb9aa3e5ccd1b4b090801a1\" returns successfully" Oct 8 20:17:23.120854 containerd[1496]: time="2024-10-08T20:17:23.120617529Z" level=info msg="StopPodSandbox for \"2a2221c792abfc0dda9cdded8ee9073096d54c18b16de006ef9a23a96bef3935\"" Oct 8 20:17:23.120854 containerd[1496]: time="2024-10-08T20:17:23.120647075Z" level=info msg="Container to stop \"d63c97050bfd756ffd5d4c06ceba676bd2fe7dae02de498d63f40444c14d019e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:17:23.120854 containerd[1496]: time="2024-10-08T20:17:23.120658316Z" level=info msg="Container to stop \"f80c90893a8f5eb907e01005c5be4b3318a565eb661f661859c43ebb4946d769\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:17:23.120854 containerd[1496]: time="2024-10-08T20:17:23.120668535Z" level=info msg="Container to stop \"11e40afb4ede528cf4525528933186d84ee909dec0b31c0639c4d08941e8287d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:17:23.120854 containerd[1496]: time="2024-10-08T20:17:23.120678644Z" level=info msg="Container to stop \"acf4e5d45f4eb2bc6d532c2b4a1a0f86654662f64cb9aa3e5ccd1b4b090801a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:17:23.120854 containerd[1496]: time="2024-10-08T20:17:23.120687390Z" level=info msg="Container to stop \"58a56a16035ee72fde735de6739912670d096222e8fc302cea2797226dd15733\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:17:23.130317 systemd[1]: cri-containerd-2a2221c792abfc0dda9cdded8ee9073096d54c18b16de006ef9a23a96bef3935.scope: Deactivated successfully. 
Oct 8 20:17:23.140805 containerd[1496]: time="2024-10-08T20:17:23.139787482Z" level=info msg="shim disconnected" id=033415dc293951089fb29de4585e13aa99255c599f7d41d7578ac39ecbd158d6 namespace=k8s.io Oct 8 20:17:23.140805 containerd[1496]: time="2024-10-08T20:17:23.139947823Z" level=warning msg="cleaning up after shim disconnected" id=033415dc293951089fb29de4585e13aa99255c599f7d41d7578ac39ecbd158d6 namespace=k8s.io Oct 8 20:17:23.140805 containerd[1496]: time="2024-10-08T20:17:23.139959255Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:17:23.168128 containerd[1496]: time="2024-10-08T20:17:23.167881483Z" level=info msg="shim disconnected" id=2a2221c792abfc0dda9cdded8ee9073096d54c18b16de006ef9a23a96bef3935 namespace=k8s.io Oct 8 20:17:23.168128 containerd[1496]: time="2024-10-08T20:17:23.167935074Z" level=warning msg="cleaning up after shim disconnected" id=2a2221c792abfc0dda9cdded8ee9073096d54c18b16de006ef9a23a96bef3935 namespace=k8s.io Oct 8 20:17:23.168128 containerd[1496]: time="2024-10-08T20:17:23.167943660Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:17:23.173812 containerd[1496]: time="2024-10-08T20:17:23.173493613Z" level=info msg="TearDown network for sandbox \"033415dc293951089fb29de4585e13aa99255c599f7d41d7578ac39ecbd158d6\" successfully" Oct 8 20:17:23.173812 containerd[1496]: time="2024-10-08T20:17:23.173538037Z" level=info msg="StopPodSandbox for \"033415dc293951089fb29de4585e13aa99255c599f7d41d7578ac39ecbd158d6\" returns successfully" Oct 8 20:17:23.191793 containerd[1496]: time="2024-10-08T20:17:23.191661128Z" level=info msg="TearDown network for sandbox \"2a2221c792abfc0dda9cdded8ee9073096d54c18b16de006ef9a23a96bef3935\" successfully" Oct 8 20:17:23.191793 containerd[1496]: time="2024-10-08T20:17:23.191694872Z" level=info msg="StopPodSandbox for \"2a2221c792abfc0dda9cdded8ee9073096d54c18b16de006ef9a23a96bef3935\" returns successfully" Oct 8 20:17:23.313163 kubelet[2989]: I1008 20:17:23.313090 2989 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a41cab81-f22e-4fd0-8151-ff2a8186038d-clustermesh-secrets\") pod \"a41cab81-f22e-4fd0-8151-ff2a8186038d\" (UID: \"a41cab81-f22e-4fd0-8151-ff2a8186038d\") " Oct 8 20:17:23.313163 kubelet[2989]: I1008 20:17:23.313157 2989 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59zzr\" (UniqueName: \"kubernetes.io/projected/f502ddf9-ed33-470d-b6a5-3d14c016c73f-kube-api-access-59zzr\") pod \"f502ddf9-ed33-470d-b6a5-3d14c016c73f\" (UID: \"f502ddf9-ed33-470d-b6a5-3d14c016c73f\") " Oct 8 20:17:23.313763 kubelet[2989]: I1008 20:17:23.313194 2989 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a41cab81-f22e-4fd0-8151-ff2a8186038d-hubble-tls\") pod \"a41cab81-f22e-4fd0-8151-ff2a8186038d\" (UID: \"a41cab81-f22e-4fd0-8151-ff2a8186038d\") " Oct 8 20:17:23.313763 kubelet[2989]: I1008 20:17:23.313226 2989 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-host-proc-sys-net\") pod \"a41cab81-f22e-4fd0-8151-ff2a8186038d\" (UID: \"a41cab81-f22e-4fd0-8151-ff2a8186038d\") " Oct 8 20:17:23.313763 kubelet[2989]: I1008 20:17:23.313256 2989 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-lib-modules\") pod \"a41cab81-f22e-4fd0-8151-ff2a8186038d\" (UID: \"a41cab81-f22e-4fd0-8151-ff2a8186038d\") " Oct 8 20:17:23.313763 kubelet[2989]: I1008 20:17:23.313288 2989 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a41cab81-f22e-4fd0-8151-ff2a8186038d-cilium-config-path\") pod \"a41cab81-f22e-4fd0-8151-ff2a8186038d\" (UID: \"a41cab81-f22e-4fd0-8151-ff2a8186038d\") " Oct 8 20:17:23.313763 kubelet[2989]: I1008 20:17:23.313314 2989 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-bpf-maps\") pod \"a41cab81-f22e-4fd0-8151-ff2a8186038d\" (UID: \"a41cab81-f22e-4fd0-8151-ff2a8186038d\") " Oct 8 20:17:23.313763 kubelet[2989]: I1008 20:17:23.313343 2989 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f502ddf9-ed33-470d-b6a5-3d14c016c73f-cilium-config-path\") pod \"f502ddf9-ed33-470d-b6a5-3d14c016c73f\" (UID: \"f502ddf9-ed33-470d-b6a5-3d14c016c73f\") " Oct 8 20:17:23.314012 kubelet[2989]: I1008 20:17:23.313370 2989 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-etc-cni-netd\") pod \"a41cab81-f22e-4fd0-8151-ff2a8186038d\" (UID: \"a41cab81-f22e-4fd0-8151-ff2a8186038d\") " Oct 8 20:17:23.314012 kubelet[2989]: I1008 20:17:23.313451 2989 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-host-proc-sys-kernel\") pod \"a41cab81-f22e-4fd0-8151-ff2a8186038d\" (UID: \"a41cab81-f22e-4fd0-8151-ff2a8186038d\") " Oct 8 20:17:23.314245 kubelet[2989]: I1008 20:17:23.314205 2989 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-cilium-run\") pod \"a41cab81-f22e-4fd0-8151-ff2a8186038d\" (UID: \"a41cab81-f22e-4fd0-8151-ff2a8186038d\") " Oct 8 20:17:23.314411 kubelet[2989]: I1008 20:17:23.314377 2989 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-hostproc\") pod \"a41cab81-f22e-4fd0-8151-ff2a8186038d\" (UID: \"a41cab81-f22e-4fd0-8151-ff2a8186038d\") " Oct 8 20:17:23.314980 kubelet[2989]: I1008 20:17:23.314446 2989 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-cni-path\") pod \"a41cab81-f22e-4fd0-8151-ff2a8186038d\" (UID: \"a41cab81-f22e-4fd0-8151-ff2a8186038d\") " Oct 8 20:17:23.314980 kubelet[2989]: I1008 20:17:23.314646 2989 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdqt9\" (UniqueName: \"kubernetes.io/projected/a41cab81-f22e-4fd0-8151-ff2a8186038d-kube-api-access-vdqt9\") pod \"a41cab81-f22e-4fd0-8151-ff2a8186038d\" (UID: \"a41cab81-f22e-4fd0-8151-ff2a8186038d\") " Oct 8 20:17:23.314980 kubelet[2989]: I1008 20:17:23.314717 2989 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-xtables-lock\") pod \"a41cab81-f22e-4fd0-8151-ff2a8186038d\" (UID: \"a41cab81-f22e-4fd0-8151-ff2a8186038d\") " Oct 8 20:17:23.314980 kubelet[2989]: I1008 20:17:23.314757 2989 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-cilium-cgroup\") pod \"a41cab81-f22e-4fd0-8151-ff2a8186038d\" (UID: \"a41cab81-f22e-4fd0-8151-ff2a8186038d\") " Oct 8 20:17:23.319224 kubelet[2989]: I1008 20:17:23.317492 2989 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a41cab81-f22e-4fd0-8151-ff2a8186038d" (UID: "a41cab81-f22e-4fd0-8151-ff2a8186038d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:17:23.322259 kubelet[2989]: I1008 20:17:23.317512 2989 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a41cab81-f22e-4fd0-8151-ff2a8186038d" (UID: "a41cab81-f22e-4fd0-8151-ff2a8186038d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:17:23.348856 kubelet[2989]: I1008 20:17:23.347571 2989 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a41cab81-f22e-4fd0-8151-ff2a8186038d" (UID: "a41cab81-f22e-4fd0-8151-ff2a8186038d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:17:23.353369 kubelet[2989]: I1008 20:17:23.353336 2989 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f502ddf9-ed33-470d-b6a5-3d14c016c73f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f502ddf9-ed33-470d-b6a5-3d14c016c73f" (UID: "f502ddf9-ed33-470d-b6a5-3d14c016c73f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 8 20:17:23.354335 kubelet[2989]: I1008 20:17:23.354320 2989 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a41cab81-f22e-4fd0-8151-ff2a8186038d" (UID: "a41cab81-f22e-4fd0-8151-ff2a8186038d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:17:23.369499 kubelet[2989]: I1008 20:17:23.356897 2989 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a41cab81-f22e-4fd0-8151-ff2a8186038d" (UID: "a41cab81-f22e-4fd0-8151-ff2a8186038d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:17:23.369884 kubelet[2989]: I1008 20:17:23.356914 2989 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a41cab81-f22e-4fd0-8151-ff2a8186038d" (UID: "a41cab81-f22e-4fd0-8151-ff2a8186038d"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:17:23.369948 kubelet[2989]: I1008 20:17:23.356927 2989 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a41cab81-f22e-4fd0-8151-ff2a8186038d" (UID: "a41cab81-f22e-4fd0-8151-ff2a8186038d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:17:23.370005 kubelet[2989]: I1008 20:17:23.356940 2989 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-hostproc" (OuterVolumeSpecName: "hostproc") pod "a41cab81-f22e-4fd0-8151-ff2a8186038d" (UID: "a41cab81-f22e-4fd0-8151-ff2a8186038d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:17:23.370112 kubelet[2989]: I1008 20:17:23.356951 2989 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-cni-path" (OuterVolumeSpecName: "cni-path") pod "a41cab81-f22e-4fd0-8151-ff2a8186038d" (UID: "a41cab81-f22e-4fd0-8151-ff2a8186038d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:17:23.370112 kubelet[2989]: I1008 20:17:23.360176 2989 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a41cab81-f22e-4fd0-8151-ff2a8186038d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a41cab81-f22e-4fd0-8151-ff2a8186038d" (UID: "a41cab81-f22e-4fd0-8151-ff2a8186038d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 8 20:17:23.370112 kubelet[2989]: I1008 20:17:23.368776 2989 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a41cab81-f22e-4fd0-8151-ff2a8186038d-kube-api-access-vdqt9" (OuterVolumeSpecName: "kube-api-access-vdqt9") pod "a41cab81-f22e-4fd0-8151-ff2a8186038d" (UID: "a41cab81-f22e-4fd0-8151-ff2a8186038d"). InnerVolumeSpecName "kube-api-access-vdqt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 8 20:17:23.370112 kubelet[2989]: I1008 20:17:23.368816 2989 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a41cab81-f22e-4fd0-8151-ff2a8186038d" (UID: "a41cab81-f22e-4fd0-8151-ff2a8186038d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:17:23.370112 kubelet[2989]: I1008 20:17:23.369698 2989 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f502ddf9-ed33-470d-b6a5-3d14c016c73f-kube-api-access-59zzr" (OuterVolumeSpecName: "kube-api-access-59zzr") pod "f502ddf9-ed33-470d-b6a5-3d14c016c73f" (UID: "f502ddf9-ed33-470d-b6a5-3d14c016c73f"). InnerVolumeSpecName "kube-api-access-59zzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 8 20:17:23.370235 kubelet[2989]: I1008 20:17:23.369766 2989 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a41cab81-f22e-4fd0-8151-ff2a8186038d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a41cab81-f22e-4fd0-8151-ff2a8186038d" (UID: "a41cab81-f22e-4fd0-8151-ff2a8186038d"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 8 20:17:23.370367 kubelet[2989]: I1008 20:17:23.370322 2989 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a41cab81-f22e-4fd0-8151-ff2a8186038d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a41cab81-f22e-4fd0-8151-ff2a8186038d" (UID: "a41cab81-f22e-4fd0-8151-ff2a8186038d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 8 20:17:23.449862 kubelet[2989]: I1008 20:17:23.449823 2989 scope.go:117] "RemoveContainer" containerID="8be6fbe4a38084217285c7ad19fc1bcb19f0f2662f119b420f4e51768abf4c4d" Oct 8 20:17:23.452440 containerd[1496]: time="2024-10-08T20:17:23.452049949Z" level=info msg="RemoveContainer for \"8be6fbe4a38084217285c7ad19fc1bcb19f0f2662f119b420f4e51768abf4c4d\"" Oct 8 20:17:23.459039 containerd[1496]: time="2024-10-08T20:17:23.457745434Z" level=info msg="RemoveContainer for \"8be6fbe4a38084217285c7ad19fc1bcb19f0f2662f119b420f4e51768abf4c4d\" returns successfully" Oct 8 20:17:23.462236 systemd[1]: Removed slice kubepods-besteffort-podf502ddf9_ed33_470d_b6a5_3d14c016c73f.slice - libcontainer container kubepods-besteffort-podf502ddf9_ed33_470d_b6a5_3d14c016c73f.slice. Oct 8 20:17:23.466482 kubelet[2989]: I1008 20:17:23.465553 2989 scope.go:117] "RemoveContainer" containerID="8be6fbe4a38084217285c7ad19fc1bcb19f0f2662f119b420f4e51768abf4c4d" Oct 8 20:17:23.478480 kubelet[2989]: I1008 20:17:23.476541 2989 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-xtables-lock\") on node \"ci-4081-1-0-7-2461ba8d61\" DevicePath \"\"" Oct 8 20:17:23.478480 kubelet[2989]: I1008 20:17:23.476602 2989 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vdqt9\" (UniqueName: \"kubernetes.io/projected/a41cab81-f22e-4fd0-8151-ff2a8186038d-kube-api-access-vdqt9\") on node \"ci-4081-1-0-7-2461ba8d61\" DevicePath \"\"" Oct 8 20:17:23.478480 kubelet[2989]: I1008 20:17:23.476627 2989 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-cilium-cgroup\") on node \"ci-4081-1-0-7-2461ba8d61\" DevicePath \"\"" Oct 8 20:17:23.478480 kubelet[2989]: I1008 20:17:23.476650 2989 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-lib-modules\") on node \"ci-4081-1-0-7-2461ba8d61\" DevicePath \"\"" Oct 8 20:17:23.478480 kubelet[2989]: I1008 20:17:23.476673 2989 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a41cab81-f22e-4fd0-8151-ff2a8186038d-clustermesh-secrets\") on node \"ci-4081-1-0-7-2461ba8d61\" DevicePath \"\"" Oct 8 20:17:23.478480 kubelet[2989]: I1008 20:17:23.476694 2989 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-59zzr\" (UniqueName: \"kubernetes.io/projected/f502ddf9-ed33-470d-b6a5-3d14c016c73f-kube-api-access-59zzr\") on node \"ci-4081-1-0-7-2461ba8d61\" DevicePath \"\"" Oct 8 20:17:23.478480 kubelet[2989]: I1008 20:17:23.476714 2989 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a41cab81-f22e-4fd0-8151-ff2a8186038d-hubble-tls\") on node \"ci-4081-1-0-7-2461ba8d61\" DevicePath \"\"" Oct 8 20:17:23.478480 kubelet[2989]: I1008 20:17:23.476735 2989 reconciler_common.go:289] "Volume detached for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-host-proc-sys-net\") on node \"ci-4081-1-0-7-2461ba8d61\" DevicePath \"\"" Oct 8 20:17:23.479018 kubelet[2989]: I1008 20:17:23.476756 2989 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a41cab81-f22e-4fd0-8151-ff2a8186038d-cilium-config-path\") on node \"ci-4081-1-0-7-2461ba8d61\" DevicePath \"\"" Oct 8 20:17:23.479018 kubelet[2989]: I1008 20:17:23.476776 2989 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-host-proc-sys-kernel\") on node \"ci-4081-1-0-7-2461ba8d61\" DevicePath \"\"" Oct 8 20:17:23.479018 kubelet[2989]: I1008 20:17:23.476797 2989 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-bpf-maps\") on node \"ci-4081-1-0-7-2461ba8d61\" DevicePath \"\"" Oct 8 20:17:23.479018 kubelet[2989]: I1008 20:17:23.476818 2989 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f502ddf9-ed33-470d-b6a5-3d14c016c73f-cilium-config-path\") on node \"ci-4081-1-0-7-2461ba8d61\" DevicePath \"\"" Oct 8 20:17:23.479018 kubelet[2989]: I1008 20:17:23.476879 2989 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-etc-cni-netd\") on node \"ci-4081-1-0-7-2461ba8d61\" DevicePath \"\"" Oct 8 20:17:23.479018 kubelet[2989]: I1008 20:17:23.476903 2989 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-cilium-run\") on node \"ci-4081-1-0-7-2461ba8d61\" DevicePath \"\"" Oct 8 20:17:23.479018 kubelet[2989]: I1008 20:17:23.476922 2989 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-hostproc\") on node \"ci-4081-1-0-7-2461ba8d61\" DevicePath \"\"" Oct 8 20:17:23.479018 kubelet[2989]: I1008 20:17:23.476941 2989 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a41cab81-f22e-4fd0-8151-ff2a8186038d-cni-path\") on node \"ci-4081-1-0-7-2461ba8d61\" DevicePath \"\"" Oct 8 20:17:23.478987 systemd[1]: Removed slice kubepods-burstable-poda41cab81_f22e_4fd0_8151_ff2a8186038d.slice - libcontainer container kubepods-burstable-poda41cab81_f22e_4fd0_8151_ff2a8186038d.slice. Oct 8 20:17:23.479165 systemd[1]: kubepods-burstable-poda41cab81_f22e_4fd0_8151_ff2a8186038d.slice: Consumed 8.666s CPU time. 
Oct 8 20:17:23.502611 containerd[1496]: time="2024-10-08T20:17:23.474450858Z" level=error msg="ContainerStatus for \"8be6fbe4a38084217285c7ad19fc1bcb19f0f2662f119b420f4e51768abf4c4d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8be6fbe4a38084217285c7ad19fc1bcb19f0f2662f119b420f4e51768abf4c4d\": not found" Oct 8 20:17:23.522321 kubelet[2989]: E1008 20:17:23.521846 2989 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8be6fbe4a38084217285c7ad19fc1bcb19f0f2662f119b420f4e51768abf4c4d\": not found" containerID="8be6fbe4a38084217285c7ad19fc1bcb19f0f2662f119b420f4e51768abf4c4d" Oct 8 20:17:23.532078 kubelet[2989]: I1008 20:17:23.521928 2989 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8be6fbe4a38084217285c7ad19fc1bcb19f0f2662f119b420f4e51768abf4c4d"} err="failed to get container status \"8be6fbe4a38084217285c7ad19fc1bcb19f0f2662f119b420f4e51768abf4c4d\": rpc error: code = NotFound desc = an error occurred when try to find container \"8be6fbe4a38084217285c7ad19fc1bcb19f0f2662f119b420f4e51768abf4c4d\": not found" Oct 8 20:17:23.532078 kubelet[2989]: I1008 20:17:23.532077 2989 scope.go:117] "RemoveContainer" containerID="acf4e5d45f4eb2bc6d532c2b4a1a0f86654662f64cb9aa3e5ccd1b4b090801a1" Oct 8 20:17:23.533314 containerd[1496]: time="2024-10-08T20:17:23.533258943Z" level=info msg="RemoveContainer for \"acf4e5d45f4eb2bc6d532c2b4a1a0f86654662f64cb9aa3e5ccd1b4b090801a1\"" Oct 8 20:17:23.536865 containerd[1496]: time="2024-10-08T20:17:23.536802986Z" level=info msg="RemoveContainer for \"acf4e5d45f4eb2bc6d532c2b4a1a0f86654662f64cb9aa3e5ccd1b4b090801a1\" returns successfully" Oct 8 20:17:23.537074 kubelet[2989]: I1008 20:17:23.537046 2989 scope.go:117] "RemoveContainer" containerID="f80c90893a8f5eb907e01005c5be4b3318a565eb661f661859c43ebb4946d769" Oct 8 20:17:23.538969 containerd[1496]: time="2024-10-08T20:17:23.538920114Z" level=info msg="RemoveContainer for \"f80c90893a8f5eb907e01005c5be4b3318a565eb661f661859c43ebb4946d769\"" Oct 8 20:17:23.542624 containerd[1496]: time="2024-10-08T20:17:23.542564415Z" level=info msg="RemoveContainer for \"f80c90893a8f5eb907e01005c5be4b3318a565eb661f661859c43ebb4946d769\" returns successfully" Oct 8 20:17:23.542913 kubelet[2989]: I1008 20:17:23.542860 2989 scope.go:117] "RemoveContainer" containerID="11e40afb4ede528cf4525528933186d84ee909dec0b31c0639c4d08941e8287d" Oct 8 20:17:23.544292 containerd[1496]: time="2024-10-08T20:17:23.543993063Z" level=info msg="RemoveContainer for \"11e40afb4ede528cf4525528933186d84ee909dec0b31c0639c4d08941e8287d\"" Oct 8 20:17:23.547196 containerd[1496]: time="2024-10-08T20:17:23.547175399Z" level=info msg="RemoveContainer for \"11e40afb4ede528cf4525528933186d84ee909dec0b31c0639c4d08941e8287d\" returns successfully" Oct 8 20:17:23.547465 kubelet[2989]: I1008 20:17:23.547413 2989 scope.go:117] "RemoveContainer" containerID="58a56a16035ee72fde735de6739912670d096222e8fc302cea2797226dd15733" Oct 8 20:17:23.548711 containerd[1496]: time="2024-10-08T20:17:23.548613183Z" level=info msg="RemoveContainer for \"58a56a16035ee72fde735de6739912670d096222e8fc302cea2797226dd15733\"" Oct 8 20:17:23.552135 containerd[1496]: time="2024-10-08T20:17:23.552117352Z" level=info msg="RemoveContainer for \"58a56a16035ee72fde735de6739912670d096222e8fc302cea2797226dd15733\" returns successfully" Oct 8 20:17:23.552416 kubelet[2989]: I1008 20:17:23.552353 2989 
scope.go:117] "RemoveContainer" containerID="d63c97050bfd756ffd5d4c06ceba676bd2fe7dae02de498d63f40444c14d019e" Oct 8 20:17:23.553648 containerd[1496]: time="2024-10-08T20:17:23.553611723Z" level=info msg="RemoveContainer for \"d63c97050bfd756ffd5d4c06ceba676bd2fe7dae02de498d63f40444c14d019e\"" Oct 8 20:17:23.559119 containerd[1496]: time="2024-10-08T20:17:23.559090403Z" level=info msg="RemoveContainer for \"d63c97050bfd756ffd5d4c06ceba676bd2fe7dae02de498d63f40444c14d019e\" returns successfully" Oct 8 20:17:23.559277 kubelet[2989]: I1008 20:17:23.559232 2989 scope.go:117] "RemoveContainer" containerID="acf4e5d45f4eb2bc6d532c2b4a1a0f86654662f64cb9aa3e5ccd1b4b090801a1" Oct 8 20:17:23.559612 containerd[1496]: time="2024-10-08T20:17:23.559549814Z" level=error msg="ContainerStatus for \"acf4e5d45f4eb2bc6d532c2b4a1a0f86654662f64cb9aa3e5ccd1b4b090801a1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"acf4e5d45f4eb2bc6d532c2b4a1a0f86654662f64cb9aa3e5ccd1b4b090801a1\": not found" Oct 8 20:17:23.559710 kubelet[2989]: E1008 20:17:23.559683 2989 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"acf4e5d45f4eb2bc6d532c2b4a1a0f86654662f64cb9aa3e5ccd1b4b090801a1\": not found" containerID="acf4e5d45f4eb2bc6d532c2b4a1a0f86654662f64cb9aa3e5ccd1b4b090801a1" Oct 8 20:17:23.559754 kubelet[2989]: I1008 20:17:23.559708 2989 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"acf4e5d45f4eb2bc6d532c2b4a1a0f86654662f64cb9aa3e5ccd1b4b090801a1"} err="failed to get container status \"acf4e5d45f4eb2bc6d532c2b4a1a0f86654662f64cb9aa3e5ccd1b4b090801a1\": rpc error: code = NotFound desc = an error occurred when try to find container \"acf4e5d45f4eb2bc6d532c2b4a1a0f86654662f64cb9aa3e5ccd1b4b090801a1\": not found" Oct 8 20:17:23.559754 kubelet[2989]: I1008 20:17:23.559728 2989 scope.go:117] "RemoveContainer" containerID="f80c90893a8f5eb907e01005c5be4b3318a565eb661f661859c43ebb4946d769" Oct 8 20:17:23.559891 containerd[1496]: time="2024-10-08T20:17:23.559860676Z" level=error msg="ContainerStatus for \"f80c90893a8f5eb907e01005c5be4b3318a565eb661f661859c43ebb4946d769\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f80c90893a8f5eb907e01005c5be4b3318a565eb661f661859c43ebb4946d769\": not found" Oct 8 20:17:23.560064 kubelet[2989]: E1008 20:17:23.559987 2989 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f80c90893a8f5eb907e01005c5be4b3318a565eb661f661859c43ebb4946d769\": not found" containerID="f80c90893a8f5eb907e01005c5be4b3318a565eb661f661859c43ebb4946d769" Oct 8 20:17:23.560105 kubelet[2989]: I1008 20:17:23.560059 2989 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f80c90893a8f5eb907e01005c5be4b3318a565eb661f661859c43ebb4946d769"} err="failed to get container status \"f80c90893a8f5eb907e01005c5be4b3318a565eb661f661859c43ebb4946d769\": rpc error: code = NotFound desc = an error occurred when try to find container \"f80c90893a8f5eb907e01005c5be4b3318a565eb661f661859c43ebb4946d769\": not found" Oct 8 20:17:23.560105 kubelet[2989]: I1008 20:17:23.560073 2989 scope.go:117] "RemoveContainer" containerID="11e40afb4ede528cf4525528933186d84ee909dec0b31c0639c4d08941e8287d" Oct 8 20:17:23.560343 containerd[1496]: 
time="2024-10-08T20:17:23.560300401Z" level=error msg="ContainerStatus for \"11e40afb4ede528cf4525528933186d84ee909dec0b31c0639c4d08941e8287d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"11e40afb4ede528cf4525528933186d84ee909dec0b31c0639c4d08941e8287d\": not found" Oct 8 20:17:23.560386 kubelet[2989]: E1008 20:17:23.560375 2989 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"11e40afb4ede528cf4525528933186d84ee909dec0b31c0639c4d08941e8287d\": not found" containerID="11e40afb4ede528cf4525528933186d84ee909dec0b31c0639c4d08941e8287d" Oct 8 20:17:23.560421 kubelet[2989]: I1008 20:17:23.560389 2989 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"11e40afb4ede528cf4525528933186d84ee909dec0b31c0639c4d08941e8287d"} err="failed to get container status \"11e40afb4ede528cf4525528933186d84ee909dec0b31c0639c4d08941e8287d\": rpc error: code = NotFound desc = an error occurred when try to find container \"11e40afb4ede528cf4525528933186d84ee909dec0b31c0639c4d08941e8287d\": not found" Oct 8 20:17:23.560421 kubelet[2989]: I1008 20:17:23.560400 2989 scope.go:117] "RemoveContainer" containerID="58a56a16035ee72fde735de6739912670d096222e8fc302cea2797226dd15733" Oct 8 20:17:23.560639 containerd[1496]: time="2024-10-08T20:17:23.560541112Z" level=error msg="ContainerStatus for \"58a56a16035ee72fde735de6739912670d096222e8fc302cea2797226dd15733\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"58a56a16035ee72fde735de6739912670d096222e8fc302cea2797226dd15733\": not found" Oct 8 20:17:23.560733 kubelet[2989]: E1008 20:17:23.560712 2989 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"58a56a16035ee72fde735de6739912670d096222e8fc302cea2797226dd15733\": not found" containerID="58a56a16035ee72fde735de6739912670d096222e8fc302cea2797226dd15733" Oct 8 20:17:23.560780 kubelet[2989]: I1008 20:17:23.560731 2989 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"58a56a16035ee72fde735de6739912670d096222e8fc302cea2797226dd15733"} err="failed to get container status \"58a56a16035ee72fde735de6739912670d096222e8fc302cea2797226dd15733\": rpc error: code = NotFound desc = an error occurred when try to find container \"58a56a16035ee72fde735de6739912670d096222e8fc302cea2797226dd15733\": not found" Oct 8 20:17:23.560780 kubelet[2989]: I1008 20:17:23.560742 2989 scope.go:117] "RemoveContainer" containerID="d63c97050bfd756ffd5d4c06ceba676bd2fe7dae02de498d63f40444c14d019e" Oct 8 20:17:23.560889 containerd[1496]: time="2024-10-08T20:17:23.560862946Z" level=error msg="ContainerStatus for \"d63c97050bfd756ffd5d4c06ceba676bd2fe7dae02de498d63f40444c14d019e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d63c97050bfd756ffd5d4c06ceba676bd2fe7dae02de498d63f40444c14d019e\": not found" Oct 8 20:17:23.560973 kubelet[2989]: E1008 20:17:23.560934 2989 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d63c97050bfd756ffd5d4c06ceba676bd2fe7dae02de498d63f40444c14d019e\": not found" containerID="d63c97050bfd756ffd5d4c06ceba676bd2fe7dae02de498d63f40444c14d019e" Oct 8 20:17:23.561024 kubelet[2989]: 
I1008 20:17:23.560970 2989 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d63c97050bfd756ffd5d4c06ceba676bd2fe7dae02de498d63f40444c14d019e"} err="failed to get container status \"d63c97050bfd756ffd5d4c06ceba676bd2fe7dae02de498d63f40444c14d019e\": rpc error: code = NotFound desc = an error occurred when try to find container \"d63c97050bfd756ffd5d4c06ceba676bd2fe7dae02de498d63f40444c14d019e\": not found" Oct 8 20:17:23.961020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-033415dc293951089fb29de4585e13aa99255c599f7d41d7578ac39ecbd158d6-rootfs.mount: Deactivated successfully. Oct 8 20:17:23.961228 systemd[1]: var-lib-kubelet-pods-f502ddf9\x2ded33\x2d470d\x2db6a5\x2d3d14c016c73f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d59zzr.mount: Deactivated successfully. Oct 8 20:17:23.961428 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a2221c792abfc0dda9cdded8ee9073096d54c18b16de006ef9a23a96bef3935-rootfs.mount: Deactivated successfully. Oct 8 20:17:23.961610 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2a2221c792abfc0dda9cdded8ee9073096d54c18b16de006ef9a23a96bef3935-shm.mount: Deactivated successfully. Oct 8 20:17:23.961777 systemd[1]: var-lib-kubelet-pods-a41cab81\x2df22e\x2d4fd0\x2d8151\x2dff2a8186038d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvdqt9.mount: Deactivated successfully. Oct 8 20:17:23.962012 systemd[1]: var-lib-kubelet-pods-a41cab81\x2df22e\x2d4fd0\x2d8151\x2dff2a8186038d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 8 20:17:23.962178 systemd[1]: var-lib-kubelet-pods-a41cab81\x2df22e\x2d4fd0\x2d8151\x2dff2a8186038d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 8 20:17:24.614749 kubelet[2989]: I1008 20:17:24.614678 2989 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a41cab81-f22e-4fd0-8151-ff2a8186038d" path="/var/lib/kubelet/pods/a41cab81-f22e-4fd0-8151-ff2a8186038d/volumes" Oct 8 20:17:24.616273 kubelet[2989]: I1008 20:17:24.616220 2989 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f502ddf9-ed33-470d-b6a5-3d14c016c73f" path="/var/lib/kubelet/pods/f502ddf9-ed33-470d-b6a5-3d14c016c73f/volumes" Oct 8 20:17:24.951579 sshd[4575]: pam_unix(sshd:session): session closed for user core Oct 8 20:17:24.958105 systemd[1]: sshd@21-157.90.145.6:22-147.75.109.163:60432.service: Deactivated successfully. Oct 8 20:17:24.962496 systemd[1]: session-20.scope: Deactivated successfully. Oct 8 20:17:24.965626 systemd-logind[1474]: Session 20 logged out. Waiting for processes to exit. Oct 8 20:17:24.967574 systemd-logind[1474]: Removed session 20. Oct 8 20:17:25.133221 systemd[1]: Started sshd@22-157.90.145.6:22-147.75.109.163:60448.service - OpenSSH per-connection server daemon (147.75.109.163:60448). Oct 8 20:17:25.773724 kubelet[2989]: E1008 20:17:25.773628 2989 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 8 20:17:26.147188 sshd[4740]: Accepted publickey for core from 147.75.109.163 port 60448 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:17:26.149314 sshd[4740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:17:26.156728 systemd-logind[1474]: New session 21 of user core. 
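The mount units systemd reports as deactivated above (var-lib-kubelet-pods-…-volumes-….mount and the containerd rootfs/shm mounts) are the corresponding filesystem paths run through systemd's unit-name escaping: "/" becomes "-", while characters such as "-" and "~" become \x2d and \x7e. A rough Python approximation of that rule, enough to reproduce the names seen in this log (a sketch only, not a substitute for systemd-escape --path, which also handles leading dots and empty paths):

def systemd_escape_path(path: str) -> str:
    """Approximate systemd path escaping: trim slashes, map '/' to '-',
    keep alphanumerics plus ':', '_' and '.', hex-escape everything else."""
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in ":_.":
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

# Reproduces the kube-api-access mount unit deactivated above:
print(systemd_escape_path(
    "/var/lib/kubelet/pods/f502ddf9-ed33-470d-b6a5-3d14c016c73f"
    "/volumes/kubernetes.io~projected/kube-api-access-59zzr") + ".mount")
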
Oct 8 20:17:26.166107 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 8 20:17:27.596541 kubelet[2989]: I1008 20:17:27.594439 2989 topology_manager.go:215] "Topology Admit Handler" podUID="a1937810-8edb-44fb-986d-e564e3476c5f" podNamespace="kube-system" podName="cilium-b52wm" Oct 8 20:17:27.596541 kubelet[2989]: E1008 20:17:27.594515 2989 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a41cab81-f22e-4fd0-8151-ff2a8186038d" containerName="mount-cgroup" Oct 8 20:17:27.596541 kubelet[2989]: E1008 20:17:27.594529 2989 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a41cab81-f22e-4fd0-8151-ff2a8186038d" containerName="apply-sysctl-overwrites" Oct 8 20:17:27.596541 kubelet[2989]: E1008 20:17:27.594540 2989 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a41cab81-f22e-4fd0-8151-ff2a8186038d" containerName="clean-cilium-state" Oct 8 20:17:27.596541 kubelet[2989]: E1008 20:17:27.594550 2989 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f502ddf9-ed33-470d-b6a5-3d14c016c73f" containerName="cilium-operator" Oct 8 20:17:27.596541 kubelet[2989]: E1008 20:17:27.594560 2989 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a41cab81-f22e-4fd0-8151-ff2a8186038d" containerName="mount-bpf-fs" Oct 8 20:17:27.596541 kubelet[2989]: E1008 20:17:27.594569 2989 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a41cab81-f22e-4fd0-8151-ff2a8186038d" containerName="cilium-agent" Oct 8 20:17:27.596541 kubelet[2989]: I1008 20:17:27.594624 2989 memory_manager.go:354] "RemoveStaleState removing state" podUID="f502ddf9-ed33-470d-b6a5-3d14c016c73f" containerName="cilium-operator" Oct 8 20:17:27.596541 kubelet[2989]: I1008 20:17:27.594637 2989 memory_manager.go:354] "RemoveStaleState removing state" podUID="a41cab81-f22e-4fd0-8151-ff2a8186038d" containerName="cilium-agent" Oct 8 20:17:27.616573 systemd[1]: Created slice kubepods-burstable-poda1937810_8edb_44fb_986d_e564e3476c5f.slice - libcontainer container kubepods-burstable-poda1937810_8edb_44fb_986d_e564e3476c5f.slice. 
Oct 8 20:17:27.702031 kubelet[2989]: I1008 20:17:27.701963 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1937810-8edb-44fb-986d-e564e3476c5f-xtables-lock\") pod \"cilium-b52wm\" (UID: \"a1937810-8edb-44fb-986d-e564e3476c5f\") " pod="kube-system/cilium-b52wm" Oct 8 20:17:27.702031 kubelet[2989]: I1008 20:17:27.702022 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a1937810-8edb-44fb-986d-e564e3476c5f-hubble-tls\") pod \"cilium-b52wm\" (UID: \"a1937810-8edb-44fb-986d-e564e3476c5f\") " pod="kube-system/cilium-b52wm" Oct 8 20:17:27.702031 kubelet[2989]: I1008 20:17:27.702040 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a1937810-8edb-44fb-986d-e564e3476c5f-bpf-maps\") pod \"cilium-b52wm\" (UID: \"a1937810-8edb-44fb-986d-e564e3476c5f\") " pod="kube-system/cilium-b52wm" Oct 8 20:17:27.702233 kubelet[2989]: I1008 20:17:27.702056 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a1937810-8edb-44fb-986d-e564e3476c5f-cilium-cgroup\") pod \"cilium-b52wm\" (UID: \"a1937810-8edb-44fb-986d-e564e3476c5f\") " pod="kube-system/cilium-b52wm" Oct 8 20:17:27.702233 kubelet[2989]: I1008 20:17:27.702077 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-652cm\" (UniqueName: \"kubernetes.io/projected/a1937810-8edb-44fb-986d-e564e3476c5f-kube-api-access-652cm\") pod \"cilium-b52wm\" (UID: \"a1937810-8edb-44fb-986d-e564e3476c5f\") " pod="kube-system/cilium-b52wm" Oct 8 20:17:27.702233 kubelet[2989]: I1008 20:17:27.702095 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a1937810-8edb-44fb-986d-e564e3476c5f-cilium-ipsec-secrets\") pod \"cilium-b52wm\" (UID: \"a1937810-8edb-44fb-986d-e564e3476c5f\") " pod="kube-system/cilium-b52wm" Oct 8 20:17:27.702233 kubelet[2989]: I1008 20:17:27.702114 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1937810-8edb-44fb-986d-e564e3476c5f-cilium-config-path\") pod \"cilium-b52wm\" (UID: \"a1937810-8edb-44fb-986d-e564e3476c5f\") " pod="kube-system/cilium-b52wm" Oct 8 20:17:27.702233 kubelet[2989]: I1008 20:17:27.702132 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1937810-8edb-44fb-986d-e564e3476c5f-lib-modules\") pod \"cilium-b52wm\" (UID: \"a1937810-8edb-44fb-986d-e564e3476c5f\") " pod="kube-system/cilium-b52wm" Oct 8 20:17:27.702349 kubelet[2989]: I1008 20:17:27.702148 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1937810-8edb-44fb-986d-e564e3476c5f-etc-cni-netd\") pod \"cilium-b52wm\" (UID: \"a1937810-8edb-44fb-986d-e564e3476c5f\") " pod="kube-system/cilium-b52wm" Oct 8 20:17:27.702349 kubelet[2989]: I1008 20:17:27.702165 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/a1937810-8edb-44fb-986d-e564e3476c5f-host-proc-sys-kernel\") pod \"cilium-b52wm\" (UID: \"a1937810-8edb-44fb-986d-e564e3476c5f\") " pod="kube-system/cilium-b52wm" Oct 8 20:17:27.702349 kubelet[2989]: I1008 20:17:27.702186 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a1937810-8edb-44fb-986d-e564e3476c5f-cilium-run\") pod \"cilium-b52wm\" (UID: \"a1937810-8edb-44fb-986d-e564e3476c5f\") " pod="kube-system/cilium-b52wm" Oct 8 20:17:27.702349 kubelet[2989]: I1008 20:17:27.702203 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a1937810-8edb-44fb-986d-e564e3476c5f-hostproc\") pod \"cilium-b52wm\" (UID: \"a1937810-8edb-44fb-986d-e564e3476c5f\") " pod="kube-system/cilium-b52wm" Oct 8 20:17:27.702349 kubelet[2989]: I1008 20:17:27.702221 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a1937810-8edb-44fb-986d-e564e3476c5f-cni-path\") pod \"cilium-b52wm\" (UID: \"a1937810-8edb-44fb-986d-e564e3476c5f\") " pod="kube-system/cilium-b52wm" Oct 8 20:17:27.702349 kubelet[2989]: I1008 20:17:27.702239 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a1937810-8edb-44fb-986d-e564e3476c5f-clustermesh-secrets\") pod \"cilium-b52wm\" (UID: \"a1937810-8edb-44fb-986d-e564e3476c5f\") " pod="kube-system/cilium-b52wm" Oct 8 20:17:27.702493 kubelet[2989]: I1008 20:17:27.702256 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a1937810-8edb-44fb-986d-e564e3476c5f-host-proc-sys-net\") pod \"cilium-b52wm\" (UID: \"a1937810-8edb-44fb-986d-e564e3476c5f\") " pod="kube-system/cilium-b52wm" Oct 8 20:17:27.813171 sshd[4740]: pam_unix(sshd:session): session closed for user core Oct 8 20:17:27.873601 systemd[1]: sshd@22-157.90.145.6:22-147.75.109.163:60448.service: Deactivated successfully. Oct 8 20:17:27.877459 systemd[1]: session-21.scope: Deactivated successfully. Oct 8 20:17:27.880962 systemd-logind[1474]: Session 21 logged out. Waiting for processes to exit. Oct 8 20:17:27.885297 systemd-logind[1474]: Removed session 21. Oct 8 20:17:27.924568 containerd[1496]: time="2024-10-08T20:17:27.924525497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b52wm,Uid:a1937810-8edb-44fb-986d-e564e3476c5f,Namespace:kube-system,Attempt:0,}" Oct 8 20:17:27.949601 containerd[1496]: time="2024-10-08T20:17:27.949288274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:17:27.949601 containerd[1496]: time="2024-10-08T20:17:27.949350971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:17:27.949601 containerd[1496]: time="2024-10-08T20:17:27.949362773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:17:27.949601 containerd[1496]: time="2024-10-08T20:17:27.949441561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:17:27.966037 systemd[1]: Started cri-containerd-3f857a47fbd8fc728ef5e9530e7286ab7b909791ff106b14dcebb842b77e5cd4.scope - libcontainer container 3f857a47fbd8fc728ef5e9530e7286ab7b909791ff106b14dcebb842b77e5cd4. Oct 8 20:17:27.984282 systemd[1]: Started sshd@23-157.90.145.6:22-147.75.109.163:59400.service - OpenSSH per-connection server daemon (147.75.109.163:59400). Oct 8 20:17:28.003420 containerd[1496]: time="2024-10-08T20:17:28.003364949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b52wm,Uid:a1937810-8edb-44fb-986d-e564e3476c5f,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f857a47fbd8fc728ef5e9530e7286ab7b909791ff106b14dcebb842b77e5cd4\"" Oct 8 20:17:28.007732 containerd[1496]: time="2024-10-08T20:17:28.007633701Z" level=info msg="CreateContainer within sandbox \"3f857a47fbd8fc728ef5e9530e7286ab7b909791ff106b14dcebb842b77e5cd4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 8 20:17:28.020572 containerd[1496]: time="2024-10-08T20:17:28.020518563Z" level=info msg="CreateContainer within sandbox \"3f857a47fbd8fc728ef5e9530e7286ab7b909791ff106b14dcebb842b77e5cd4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"da5043a41c4edee35f775237c1ee5f046ce3fa82e86476cb029e62a68adca9e0\"" Oct 8 20:17:28.022055 containerd[1496]: time="2024-10-08T20:17:28.021431605Z" level=info msg="StartContainer for \"da5043a41c4edee35f775237c1ee5f046ce3fa82e86476cb029e62a68adca9e0\"" Oct 8 20:17:28.051069 systemd[1]: Started cri-containerd-da5043a41c4edee35f775237c1ee5f046ce3fa82e86476cb029e62a68adca9e0.scope - libcontainer container da5043a41c4edee35f775237c1ee5f046ce3fa82e86476cb029e62a68adca9e0. Oct 8 20:17:28.055845 kubelet[2989]: I1008 20:17:28.055775 2989 setters.go:580] "Node became not ready" node="ci-4081-1-0-7-2461ba8d61" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-10-08T20:17:28Z","lastTransitionTime":"2024-10-08T20:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Oct 8 20:17:28.087340 containerd[1496]: time="2024-10-08T20:17:28.086817678Z" level=info msg="StartContainer for \"da5043a41c4edee35f775237c1ee5f046ce3fa82e86476cb029e62a68adca9e0\" returns successfully" Oct 8 20:17:28.101985 systemd[1]: cri-containerd-da5043a41c4edee35f775237c1ee5f046ce3fa82e86476cb029e62a68adca9e0.scope: Deactivated successfully. 
Oct 8 20:17:28.146341 containerd[1496]: time="2024-10-08T20:17:28.146148523Z" level=info msg="shim disconnected" id=da5043a41c4edee35f775237c1ee5f046ce3fa82e86476cb029e62a68adca9e0 namespace=k8s.io Oct 8 20:17:28.146341 containerd[1496]: time="2024-10-08T20:17:28.146257608Z" level=warning msg="cleaning up after shim disconnected" id=da5043a41c4edee35f775237c1ee5f046ce3fa82e86476cb029e62a68adca9e0 namespace=k8s.io Oct 8 20:17:28.146341 containerd[1496]: time="2024-10-08T20:17:28.146274800Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:17:28.478279 containerd[1496]: time="2024-10-08T20:17:28.478194092Z" level=info msg="CreateContainer within sandbox \"3f857a47fbd8fc728ef5e9530e7286ab7b909791ff106b14dcebb842b77e5cd4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 8 20:17:28.496775 containerd[1496]: time="2024-10-08T20:17:28.496680174Z" level=info msg="CreateContainer within sandbox \"3f857a47fbd8fc728ef5e9530e7286ab7b909791ff106b14dcebb842b77e5cd4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9e007951c1d2656688904cf7e4392b0434380d65f8c3196919f109674d5601b9\"" Oct 8 20:17:28.499541 containerd[1496]: time="2024-10-08T20:17:28.499466427Z" level=info msg="StartContainer for \"9e007951c1d2656688904cf7e4392b0434380d65f8c3196919f109674d5601b9\"" Oct 8 20:17:28.556011 systemd[1]: Started cri-containerd-9e007951c1d2656688904cf7e4392b0434380d65f8c3196919f109674d5601b9.scope - libcontainer container 9e007951c1d2656688904cf7e4392b0434380d65f8c3196919f109674d5601b9. Oct 8 20:17:28.601103 containerd[1496]: time="2024-10-08T20:17:28.600925339Z" level=info msg="StartContainer for \"9e007951c1d2656688904cf7e4392b0434380d65f8c3196919f109674d5601b9\" returns successfully" Oct 8 20:17:28.612613 systemd[1]: cri-containerd-9e007951c1d2656688904cf7e4392b0434380d65f8c3196919f109674d5601b9.scope: Deactivated successfully. Oct 8 20:17:28.647218 containerd[1496]: time="2024-10-08T20:17:28.647124658Z" level=info msg="shim disconnected" id=9e007951c1d2656688904cf7e4392b0434380d65f8c3196919f109674d5601b9 namespace=k8s.io Oct 8 20:17:28.647218 containerd[1496]: time="2024-10-08T20:17:28.647183278Z" level=warning msg="cleaning up after shim disconnected" id=9e007951c1d2656688904cf7e4392b0434380d65f8c3196919f109674d5601b9 namespace=k8s.io Oct 8 20:17:28.647218 containerd[1496]: time="2024-10-08T20:17:28.647192265Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:17:29.001039 sshd[4792]: Accepted publickey for core from 147.75.109.163 port 59400 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:17:29.003620 sshd[4792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:17:29.009886 systemd-logind[1474]: New session 22 of user core. Oct 8 20:17:29.014979 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 8 20:17:29.489170 containerd[1496]: time="2024-10-08T20:17:29.489113751Z" level=info msg="CreateContainer within sandbox \"3f857a47fbd8fc728ef5e9530e7286ab7b909791ff106b14dcebb842b77e5cd4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 8 20:17:29.520651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount150134530.mount: Deactivated successfully. 
Oct 8 20:17:29.527782 containerd[1496]: time="2024-10-08T20:17:29.527531835Z" level=info msg="CreateContainer within sandbox \"3f857a47fbd8fc728ef5e9530e7286ab7b909791ff106b14dcebb842b77e5cd4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"af0b8b06d0413ef2a50483eaa23d263e41c3a1edc113383fa65790de07eb6a22\"" Oct 8 20:17:29.530467 containerd[1496]: time="2024-10-08T20:17:29.528385926Z" level=info msg="StartContainer for \"af0b8b06d0413ef2a50483eaa23d263e41c3a1edc113383fa65790de07eb6a22\"" Oct 8 20:17:29.570275 systemd[1]: Started cri-containerd-af0b8b06d0413ef2a50483eaa23d263e41c3a1edc113383fa65790de07eb6a22.scope - libcontainer container af0b8b06d0413ef2a50483eaa23d263e41c3a1edc113383fa65790de07eb6a22. Oct 8 20:17:29.605516 containerd[1496]: time="2024-10-08T20:17:29.605322911Z" level=info msg="StartContainer for \"af0b8b06d0413ef2a50483eaa23d263e41c3a1edc113383fa65790de07eb6a22\" returns successfully" Oct 8 20:17:29.612771 systemd[1]: cri-containerd-af0b8b06d0413ef2a50483eaa23d263e41c3a1edc113383fa65790de07eb6a22.scope: Deactivated successfully. Oct 8 20:17:29.653442 containerd[1496]: time="2024-10-08T20:17:29.653380694Z" level=info msg="shim disconnected" id=af0b8b06d0413ef2a50483eaa23d263e41c3a1edc113383fa65790de07eb6a22 namespace=k8s.io Oct 8 20:17:29.653442 containerd[1496]: time="2024-10-08T20:17:29.653430699Z" level=warning msg="cleaning up after shim disconnected" id=af0b8b06d0413ef2a50483eaa23d263e41c3a1edc113383fa65790de07eb6a22 namespace=k8s.io Oct 8 20:17:29.653442 containerd[1496]: time="2024-10-08T20:17:29.653438613Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:17:29.700352 sshd[4792]: pam_unix(sshd:session): session closed for user core Oct 8 20:17:29.704109 systemd[1]: sshd@23-157.90.145.6:22-147.75.109.163:59400.service: Deactivated successfully. Oct 8 20:17:29.705995 systemd[1]: session-22.scope: Deactivated successfully. Oct 8 20:17:29.707347 systemd-logind[1474]: Session 22 logged out. Waiting for processes to exit. Oct 8 20:17:29.708586 systemd-logind[1474]: Removed session 22. Oct 8 20:17:29.821769 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af0b8b06d0413ef2a50483eaa23d263e41c3a1edc113383fa65790de07eb6a22-rootfs.mount: Deactivated successfully. Oct 8 20:17:29.870190 systemd[1]: Started sshd@24-157.90.145.6:22-147.75.109.163:59410.service - OpenSSH per-connection server daemon (147.75.109.163:59410). Oct 8 20:17:30.483983 containerd[1496]: time="2024-10-08T20:17:30.483713418Z" level=info msg="CreateContainer within sandbox \"3f857a47fbd8fc728ef5e9530e7286ab7b909791ff106b14dcebb842b77e5cd4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 8 20:17:30.501677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3223235988.mount: Deactivated successfully. 
Oct 8 20:17:30.503044 containerd[1496]: time="2024-10-08T20:17:30.502995602Z" level=info msg="CreateContainer within sandbox \"3f857a47fbd8fc728ef5e9530e7286ab7b909791ff106b14dcebb842b77e5cd4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6545fe25b5c2fa0e8a05977de47b286cda41da5d057fb88a6aa15c90e7a91773\"" Oct 8 20:17:30.504505 containerd[1496]: time="2024-10-08T20:17:30.503622998Z" level=info msg="StartContainer for \"6545fe25b5c2fa0e8a05977de47b286cda41da5d057fb88a6aa15c90e7a91773\"" Oct 8 20:17:30.532112 systemd[1]: Started cri-containerd-6545fe25b5c2fa0e8a05977de47b286cda41da5d057fb88a6aa15c90e7a91773.scope - libcontainer container 6545fe25b5c2fa0e8a05977de47b286cda41da5d057fb88a6aa15c90e7a91773. Oct 8 20:17:30.559297 systemd[1]: cri-containerd-6545fe25b5c2fa0e8a05977de47b286cda41da5d057fb88a6aa15c90e7a91773.scope: Deactivated successfully. Oct 8 20:17:30.561214 containerd[1496]: time="2024-10-08T20:17:30.561124352Z" level=info msg="StartContainer for \"6545fe25b5c2fa0e8a05977de47b286cda41da5d057fb88a6aa15c90e7a91773\" returns successfully" Oct 8 20:17:30.583744 containerd[1496]: time="2024-10-08T20:17:30.583653643Z" level=info msg="shim disconnected" id=6545fe25b5c2fa0e8a05977de47b286cda41da5d057fb88a6aa15c90e7a91773 namespace=k8s.io Oct 8 20:17:30.583744 containerd[1496]: time="2024-10-08T20:17:30.583730598Z" level=warning msg="cleaning up after shim disconnected" id=6545fe25b5c2fa0e8a05977de47b286cda41da5d057fb88a6aa15c90e7a91773 namespace=k8s.io Oct 8 20:17:30.583744 containerd[1496]: time="2024-10-08T20:17:30.583743612Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:17:30.774882 kubelet[2989]: E1008 20:17:30.774716 2989 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 8 20:17:30.821583 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6545fe25b5c2fa0e8a05977de47b286cda41da5d057fb88a6aa15c90e7a91773-rootfs.mount: Deactivated successfully. Oct 8 20:17:30.832509 sshd[4986]: Accepted publickey for core from 147.75.109.163 port 59410 ssh2: RSA SHA256:8pb/X5i1efUvJi8sgU2/AQBt50OQJsXEcuFpDNAus+I Oct 8 20:17:30.834520 sshd[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:17:30.839567 systemd-logind[1474]: New session 23 of user core. Oct 8 20:17:30.845039 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 8 20:17:31.488333 containerd[1496]: time="2024-10-08T20:17:31.488180482Z" level=info msg="CreateContainer within sandbox \"3f857a47fbd8fc728ef5e9530e7286ab7b909791ff106b14dcebb842b77e5cd4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 8 20:17:31.507767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount582789887.mount: Deactivated successfully. 
Oct 8 20:17:31.510104 containerd[1496]: time="2024-10-08T20:17:31.510050536Z" level=info msg="CreateContainer within sandbox \"3f857a47fbd8fc728ef5e9530e7286ab7b909791ff106b14dcebb842b77e5cd4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fe0c0a63ceffac2ffbdf927ac069ee04ff6c3752dc0101a08a6bf85a1a676608\"" Oct 8 20:17:31.511145 containerd[1496]: time="2024-10-08T20:17:31.511112888Z" level=info msg="StartContainer for \"fe0c0a63ceffac2ffbdf927ac069ee04ff6c3752dc0101a08a6bf85a1a676608\"" Oct 8 20:17:31.546184 systemd[1]: Started cri-containerd-fe0c0a63ceffac2ffbdf927ac069ee04ff6c3752dc0101a08a6bf85a1a676608.scope - libcontainer container fe0c0a63ceffac2ffbdf927ac069ee04ff6c3752dc0101a08a6bf85a1a676608. Oct 8 20:17:31.580220 containerd[1496]: time="2024-10-08T20:17:31.580178018Z" level=info msg="StartContainer for \"fe0c0a63ceffac2ffbdf927ac069ee04ff6c3752dc0101a08a6bf85a1a676608\" returns successfully" Oct 8 20:17:32.184987 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Oct 8 20:17:32.221888 kernel: jitterentropy: Initialization failed with host not compliant with requirements: 9 Oct 8 20:17:32.242987 kernel: DRBG: Continuing without Jitter RNG Oct 8 20:17:32.509958 kubelet[2989]: I1008 20:17:32.508646 2989 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-b52wm" podStartSLOduration=5.508623601 podStartE2EDuration="5.508623601s" podCreationTimestamp="2024-10-08 20:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:17:32.507279952 +0000 UTC m=+352.035393952" watchObservedRunningTime="2024-10-08 20:17:32.508623601 +0000 UTC m=+352.036737601" Oct 8 20:17:33.863634 systemd[1]: run-containerd-runc-k8s.io-fe0c0a63ceffac2ffbdf927ac069ee04ff6c3752dc0101a08a6bf85a1a676608-runc.Sk0zk1.mount: Deactivated successfully. Oct 8 20:17:33.944873 kubelet[2989]: E1008 20:17:33.944638 2989 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:38558->127.0.0.1:45071: write tcp 127.0.0.1:38558->127.0.0.1:45071: write: connection reset by peer Oct 8 20:17:35.554766 systemd-networkd[1395]: lxc_health: Link UP Oct 8 20:17:35.569971 systemd-networkd[1395]: lxc_health: Gained carrier Oct 8 20:17:36.106580 systemd[1]: run-containerd-runc-k8s.io-fe0c0a63ceffac2ffbdf927ac069ee04ff6c3752dc0101a08a6bf85a1a676608-runc.g6Gbgi.mount: Deactivated successfully. Oct 8 20:17:37.081071 systemd-networkd[1395]: lxc_health: Gained IPv6LL Oct 8 20:17:37.523984 systemd[1]: Started sshd@25-157.90.145.6:22-194.65.144.243:57123.service - OpenSSH per-connection server daemon (194.65.144.243:57123). Oct 8 20:17:37.880075 sshd[5678]: Invalid user alekseymop from 194.65.144.243 port 57123 Oct 8 20:17:37.938878 sshd[5678]: Received disconnect from 194.65.144.243 port 57123:11: Bye Bye [preauth] Oct 8 20:17:37.938878 sshd[5678]: Disconnected from invalid user alekseymop 194.65.144.243 port 57123 [preauth] Oct 8 20:17:37.944316 systemd[1]: sshd@25-157.90.145.6:22-194.65.144.243:57123.service: Deactivated successfully. 
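The pod_startup_latency_tracker entry above reports podStartSLOduration=5.508623601s for cilium-b52wm, with both image-pulling timestamps left at the zero value (no pull was observed). In this particular entry the figure lines up with watchObservedRunningTime minus podCreationTimestamp, which can be checked directly from the timestamps as printed (the creation time is logged with whole-second precision; Python's datetime keeps only microseconds, so the nanoseconds are rounded):

from datetime import datetime, timezone

created = datetime(2024, 10, 8, 20, 17, 27, tzinfo=timezone.utc)                 # podCreationTimestamp
watch_running = datetime(2024, 10, 8, 20, 17, 32, 508624, tzinfo=timezone.utc)   # 20:17:32.508623601
print((watch_running - created).total_seconds())   # 5.508624, matching the reported 5.508623601
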
Oct 8 20:17:40.604259 containerd[1496]: time="2024-10-08T20:17:40.604084720Z" level=info msg="StopPodSandbox for \"033415dc293951089fb29de4585e13aa99255c599f7d41d7578ac39ecbd158d6\"" Oct 8 20:17:40.604259 containerd[1496]: time="2024-10-08T20:17:40.604168348Z" level=info msg="TearDown network for sandbox \"033415dc293951089fb29de4585e13aa99255c599f7d41d7578ac39ecbd158d6\" successfully" Oct 8 20:17:40.604259 containerd[1496]: time="2024-10-08T20:17:40.604184378Z" level=info msg="StopPodSandbox for \"033415dc293951089fb29de4585e13aa99255c599f7d41d7578ac39ecbd158d6\" returns successfully" Oct 8 20:17:40.611258 containerd[1496]: time="2024-10-08T20:17:40.611168318Z" level=info msg="RemovePodSandbox for \"033415dc293951089fb29de4585e13aa99255c599f7d41d7578ac39ecbd158d6\"" Oct 8 20:17:40.613861 containerd[1496]: time="2024-10-08T20:17:40.613391616Z" level=info msg="Forcibly stopping sandbox \"033415dc293951089fb29de4585e13aa99255c599f7d41d7578ac39ecbd158d6\"" Oct 8 20:17:40.613861 containerd[1496]: time="2024-10-08T20:17:40.613457008Z" level=info msg="TearDown network for sandbox \"033415dc293951089fb29de4585e13aa99255c599f7d41d7578ac39ecbd158d6\" successfully" Oct 8 20:17:40.618085 containerd[1496]: time="2024-10-08T20:17:40.618057411Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"033415dc293951089fb29de4585e13aa99255c599f7d41d7578ac39ecbd158d6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 20:17:40.618148 containerd[1496]: time="2024-10-08T20:17:40.618099080Z" level=info msg="RemovePodSandbox \"033415dc293951089fb29de4585e13aa99255c599f7d41d7578ac39ecbd158d6\" returns successfully" Oct 8 20:17:40.618420 containerd[1496]: time="2024-10-08T20:17:40.618404833Z" level=info msg="StopPodSandbox for \"2a2221c792abfc0dda9cdded8ee9073096d54c18b16de006ef9a23a96bef3935\"" Oct 8 20:17:40.618474 containerd[1496]: time="2024-10-08T20:17:40.618459886Z" level=info msg="TearDown network for sandbox \"2a2221c792abfc0dda9cdded8ee9073096d54c18b16de006ef9a23a96bef3935\" successfully" Oct 8 20:17:40.618474 containerd[1496]: time="2024-10-08T20:17:40.618470516Z" level=info msg="StopPodSandbox for \"2a2221c792abfc0dda9cdded8ee9073096d54c18b16de006ef9a23a96bef3935\" returns successfully" Oct 8 20:17:40.618735 containerd[1496]: time="2024-10-08T20:17:40.618695579Z" level=info msg="RemovePodSandbox for \"2a2221c792abfc0dda9cdded8ee9073096d54c18b16de006ef9a23a96bef3935\"" Oct 8 20:17:40.618735 containerd[1496]: time="2024-10-08T20:17:40.618714755Z" level=info msg="Forcibly stopping sandbox \"2a2221c792abfc0dda9cdded8ee9073096d54c18b16de006ef9a23a96bef3935\"" Oct 8 20:17:40.618795 containerd[1496]: time="2024-10-08T20:17:40.618757335Z" level=info msg="TearDown network for sandbox \"2a2221c792abfc0dda9cdded8ee9073096d54c18b16de006ef9a23a96bef3935\" successfully" Oct 8 20:17:40.622069 containerd[1496]: time="2024-10-08T20:17:40.622032213Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2a2221c792abfc0dda9cdded8ee9073096d54c18b16de006ef9a23a96bef3935\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 20:17:40.622126 containerd[1496]: time="2024-10-08T20:17:40.622066257Z" level=info msg="RemovePodSandbox \"2a2221c792abfc0dda9cdded8ee9073096d54c18b16de006ef9a23a96bef3935\" returns successfully" Oct 8 20:17:42.582420 systemd[1]: run-containerd-runc-k8s.io-fe0c0a63ceffac2ffbdf927ac069ee04ff6c3752dc0101a08a6bf85a1a676608-runc.fND9mk.mount: Deactivated successfully. Oct 8 20:17:42.794790 sshd[4986]: pam_unix(sshd:session): session closed for user core Oct 8 20:17:42.798901 systemd[1]: sshd@24-157.90.145.6:22-147.75.109.163:59410.service: Deactivated successfully. Oct 8 20:17:42.801039 systemd[1]: session-23.scope: Deactivated successfully. Oct 8 20:17:42.802799 systemd-logind[1474]: Session 23 logged out. Waiting for processes to exit. Oct 8 20:17:42.804024 systemd-logind[1474]: Removed session 23.