Jan 17 12:27:13.949156 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025
Jan 17 12:27:13.949190 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:27:13.949203 kernel: BIOS-provided physical RAM map:
Jan 17 12:27:13.949214 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 17 12:27:13.949222 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 17 12:27:13.949230 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 17 12:27:13.949240 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Jan 17 12:27:13.949249 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Jan 17 12:27:13.949263 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 17 12:27:13.949272 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 17 12:27:13.949282 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 17 12:27:13.949292 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 17 12:27:13.949301 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 17 12:27:13.949311 kernel: NX (Execute Disable) protection: active
Jan 17 12:27:13.949327 kernel: APIC: Static calls initialized
Jan 17 12:27:13.949338 kernel: SMBIOS 3.0.0 present.
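
The e820 map above is the firmware's account of guest RAM. A minimal sketch that tallies the usable ranges from text in this format (the sample lines are copied from the map above; the regex is an assumption matching that layout):

    # Tally usable guest RAM from BIOS-e820 lines in dmesg-style text.
    import re

    sample = """\
    BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
    BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
    BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
    """

    E820 = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

    usable = 0
    for m in E820.finditer(sample):
        start, end, kind = int(m.group(1), 16), int(m.group(2), 16), m.group(3)
        if kind == "usable":
            usable += end - start + 1  # ranges are inclusive

    print(f"usable RAM: {usable / 2**20:.1f} MiB")  # ~1999 MiB for this 2 GiB guest
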
Jan 17 12:27:13.949348 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Jan 17 12:27:13.949357 kernel: Hypervisor detected: KVM
Jan 17 12:27:13.949367 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 12:27:13.949376 kernel: kvm-clock: using sched offset of 3039252466 cycles
Jan 17 12:27:13.949387 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 12:27:13.949397 kernel: tsc: Detected 2445.404 MHz processor
Jan 17 12:27:13.949408 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 12:27:13.949422 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 12:27:13.949432 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Jan 17 12:27:13.949442 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 17 12:27:13.949452 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 12:27:13.949462 kernel: Using GB pages for direct mapping
Jan 17 12:27:13.949471 kernel: ACPI: Early table checksum verification disabled
Jan 17 12:27:13.949481 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS )
Jan 17 12:27:13.949490 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:27:13.949500 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:27:13.949513 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:27:13.949522 kernel: ACPI: FACS 0x000000007CFE0000 000040
Jan 17 12:27:13.949532 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:27:13.949542 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:27:13.949552 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:27:13.949563 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:27:13.949574 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Jan 17 12:27:13.949584 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Jan 17 12:27:13.949603 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Jan 17 12:27:13.949614 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Jan 17 12:27:13.949624 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Jan 17 12:27:13.949635 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Jan 17 12:27:13.949645 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Jan 17 12:27:13.949656 kernel: No NUMA configuration found
Jan 17 12:27:13.949670 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Jan 17 12:27:13.949680 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Jan 17 12:27:13.949691 kernel: Zone ranges:
Jan 17 12:27:13.949702 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 12:27:13.949713 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Jan 17 12:27:13.949723 kernel: Normal empty
Jan 17 12:27:13.949734 kernel: Movable zone start for each node
Jan 17 12:27:13.949744 kernel: Early memory node ranges
Jan 17 12:27:13.949755 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 17 12:27:13.949765 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Jan 17 12:27:13.949780 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Jan 17 12:27:13.949791 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 12:27:13.949801 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 17 12:27:13.949809 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 17 12:27:13.949815 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 17 12:27:13.949821 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 12:27:13.949826 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 12:27:13.949832 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 17 12:27:13.949843 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 12:27:13.949858 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 12:27:13.949870 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 12:27:13.949880 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 12:27:13.949888 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 12:27:13.949894 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 12:27:13.949900 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 12:27:13.949906 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 12:27:13.949938 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 17 12:27:13.949946 kernel: Booting paravirtualized kernel on KVM
Jan 17 12:27:13.949961 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 12:27:13.949973 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 12:27:13.949984 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 17 12:27:13.949994 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 17 12:27:13.950004 kernel: pcpu-alloc: [0] 0 1
Jan 17 12:27:13.950010 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 17 12:27:13.950017 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:27:13.950023 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 17 12:27:13.950035 kernel: random: crng init done
Jan 17 12:27:13.950057 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 12:27:13.950064 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 12:27:13.950070 kernel: Fallback order for Node 0: 0
Jan 17 12:27:13.950076 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Jan 17 12:27:13.950082 kernel: Policy zone: DMA32
Jan 17 12:27:13.950093 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 12:27:13.950104 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 125152K reserved, 0K cma-reserved)
Jan 17 12:27:13.950116 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 12:27:13.950127 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 17 12:27:13.950143 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 12:27:13.950153 kernel: Dynamic Preempt: voluntary
Jan 17 12:27:13.950164 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 12:27:13.950175 kernel: rcu: RCU event tracing is enabled.
Jan 17 12:27:13.950187 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 12:27:13.950198 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 12:27:13.950209 kernel: Rude variant of Tasks RCU enabled.
Jan 17 12:27:13.950220 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 12:27:13.950231 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 12:27:13.950242 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 12:27:13.950248 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 17 12:27:13.950254 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 12:27:13.950260 kernel: Console: colour VGA+ 80x25
Jan 17 12:27:13.950266 kernel: printk: console [tty0] enabled
Jan 17 12:27:13.950276 kernel: printk: console [ttyS0] enabled
Jan 17 12:27:13.950287 kernel: ACPI: Core revision 20230628
Jan 17 12:27:13.950298 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 17 12:27:13.950309 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 12:27:13.950323 kernel: x2apic enabled
Jan 17 12:27:13.950334 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 12:27:13.950345 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 17 12:27:13.950356 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 17 12:27:13.950366 kernel: Calibrating delay loop (skipped) preset value.. 4890.80 BogoMIPS (lpj=2445404)
Jan 17 12:27:13.950377 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 17 12:27:13.950388 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 17 12:27:13.950400 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 17 12:27:13.950427 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 12:27:13.950439 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 12:27:13.950450 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 17 12:27:13.950462 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 17 12:27:13.950478 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 17 12:27:13.950484 kernel: RETBleed: Mitigation: untrained return thunk
Jan 17 12:27:13.950491 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 17 12:27:13.950497 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 17 12:27:13.950503 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 17 12:27:13.950513 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 17 12:27:13.950519 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
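
kvm-clock is registered above and the TSC is marked unstable; later in this log the kernel switches to kvm-clock as the active clocksource. A small sketch, assuming only the standard sysfs layout, to check this from userspace after boot:

    # Inspect the active and available clocksources via sysfs.
    from pathlib import Path

    base = Path("/sys/devices/system/clocksource/clocksource0")
    current = (base / "current_clocksource").read_text().strip()
    available = (base / "available_clocksource").read_text().split()

    print(f"current:   {current}")    # expected on this guest: kvm-clock
    print(f"available: {available}")  # e.g. ['kvm-clock', 'hpet', 'acpi_pm']
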
Jan 17 12:27:13.950526 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 12:27:13.950536 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 12:27:13.950547 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 12:27:13.950559 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 12:27:13.950570 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 17 12:27:13.950582 kernel: Freeing SMP alternatives memory: 32K
Jan 17 12:27:13.950596 kernel: pid_max: default: 32768 minimum: 301
Jan 17 12:27:13.950607 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 12:27:13.950618 kernel: landlock: Up and running.
Jan 17 12:27:13.950629 kernel: SELinux: Initializing.
Jan 17 12:27:13.950641 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 12:27:13.950653 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 12:27:13.950662 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 17 12:27:13.950673 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:27:13.950684 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:27:13.950698 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:27:13.950709 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 17 12:27:13.950720 kernel: ... version: 0
Jan 17 12:27:13.950732 kernel: ... bit width: 48
Jan 17 12:27:13.950741 kernel: ... generic registers: 6
Jan 17 12:27:13.950747 kernel: ... value mask: 0000ffffffffffff
Jan 17 12:27:13.950754 kernel: ... max period: 00007fffffffffff
Jan 17 12:27:13.950760 kernel: ... fixed-purpose events: 0
Jan 17 12:27:13.950766 kernel: ... event mask: 000000000000003f
Jan 17 12:27:13.950780 kernel: signal: max sigframe size: 1776
Jan 17 12:27:13.950792 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 12:27:13.950804 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 12:27:13.950816 kernel: smp: Bringing up secondary CPUs ...
Jan 17 12:27:13.950827 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 12:27:13.950838 kernel: .... node #0, CPUs: #1
Jan 17 12:27:13.950850 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 12:27:13.950861 kernel: smpboot: Max logical packages: 1
Jan 17 12:27:13.950874 kernel: smpboot: Total of 2 processors activated (9781.61 BogoMIPS)
Jan 17 12:27:13.950891 kernel: devtmpfs: initialized
Jan 17 12:27:13.950901 kernel: x86/mm: Memory block size: 128MB
Jan 17 12:27:13.951973 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 12:27:13.952014 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 12:27:13.952036 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 12:27:13.952060 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 12:27:13.952072 kernel: audit: initializing netlink subsys (disabled)
Jan 17 12:27:13.952084 kernel: audit: type=2000 audit(1737116832.414:1): state=initialized audit_enabled=0 res=1
Jan 17 12:27:13.952093 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 12:27:13.952104 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 12:27:13.952111 kernel: cpuidle: using governor menu
Jan 17 12:27:13.952117 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 12:27:13.952123 kernel: dca service started, version 1.12.1
Jan 17 12:27:13.952130 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 17 12:27:13.952137 kernel: PCI: Using configuration type 1 for base access
Jan 17 12:27:13.952143 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 12:27:13.952153 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 12:27:13.952164 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 12:27:13.952180 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 12:27:13.952191 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 12:27:13.952202 kernel: ACPI: Added _OSI(Module Device)
Jan 17 12:27:13.952211 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 12:27:13.952222 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 17 12:27:13.952233 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 12:27:13.952244 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 12:27:13.952256 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 12:27:13.952266 kernel: ACPI: Interpreter enabled
Jan 17 12:27:13.952281 kernel: ACPI: PM: (supports S0 S5)
Jan 17 12:27:13.952293 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 12:27:13.952304 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 12:27:13.952316 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 12:27:13.952327 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 17 12:27:13.952337 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 12:27:13.952547 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 12:27:13.952705 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 17 12:27:13.952879 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 17 12:27:13.952897 kernel: PCI host bridge to bus 0000:00
Jan 17 12:27:13.954079 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 12:27:13.954241 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 12:27:13.954364 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 12:27:13.954511 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Jan 17 12:27:13.954668 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 17 12:27:13.954831 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 17 12:27:13.955009 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 12:27:13.955219 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 17 12:27:13.955409 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Jan 17 12:27:13.955559 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Jan 17 12:27:13.955720 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Jan 17 12:27:13.955864 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Jan 17 12:27:13.958063 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Jan 17 12:27:13.958236 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 12:27:13.958452 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 17 12:27:13.958618 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Jan 17 12:27:13.958760 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 17 12:27:13.958870 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Jan 17 12:27:13.961102 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 17 12:27:13.961267 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Jan 17 12:27:13.961433 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 17 12:27:13.961583 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Jan 17 12:27:13.961741 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 17 12:27:13.961883 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Jan 17 12:27:13.962037 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 17 12:27:13.962184 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Jan 17 12:27:13.962367 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 17 12:27:13.962502 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Jan 17 12:27:13.962687 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 17 12:27:13.962857 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Jan 17 12:27:13.965036 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 17 12:27:13.965217 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Jan 17 12:27:13.965382 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 17 12:27:13.965498 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 17 12:27:13.965636 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 17 12:27:13.965783 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Jan 17 12:27:13.965966 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Jan 17 12:27:13.966141 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 17 12:27:13.966270 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 17 12:27:13.966397 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 17 12:27:13.966559 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Jan 17 12:27:13.966738 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 17 12:27:13.966935 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Jan 17 12:27:13.967084 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 17 12:27:13.967267 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 17 12:27:13.967378 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Jan 17 12:27:13.967732 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 17 12:27:13.970066 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Jan 17 12:27:13.970203 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 17 12:27:13.970369 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 17 12:27:13.970548 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 17 12:27:13.970731 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 17 12:27:13.971001 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Jan 17 12:27:13.971143 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Jan 17 12:27:13.971250 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 17 12:27:13.971390 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 17 12:27:13.971575 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 17 12:27:13.971785 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 17 12:27:13.971983 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 17 12:27:13.972192 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 17 12:27:13.972331 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 17 12:27:13.972443 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 17 12:27:13.972561 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 17 12:27:13.972759 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
Jan 17 12:27:13.972958 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Jan 17 12:27:13.973165 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 17 12:27:13.973333 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 17 12:27:13.973483 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 17 12:27:13.973649 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 17 12:27:13.973806 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Jan 17 12:27:13.973970 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Jan 17 12:27:13.974150 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 17 12:27:13.974364 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Jan 17 12:27:13.974513 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 17 12:27:13.974525 kernel: acpiphp: Slot [0] registered
Jan 17 12:27:13.974765 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 17 12:27:13.974964 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Jan 17 12:27:13.975153 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Jan 17 12:27:13.975322 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Jan 17 12:27:13.975471 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 17 12:27:13.975625 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 17 12:27:13.975770 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 17 12:27:13.975788 kernel: acpiphp: Slot [0-2] registered
Jan 17 12:27:13.975996 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 17 12:27:13.976163 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Jan 17 12:27:13.976313 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 17 12:27:13.976336 kernel: acpiphp: Slot [0-3] registered
Jan 17 12:27:13.976512 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 17 12:27:13.976665 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 17 12:27:13.976772 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 17 12:27:13.976783 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 12:27:13.976790 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 12:27:13.976802 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 12:27:13.976815 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 12:27:13.976826 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 17 12:27:13.976842 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 17 12:27:13.976849 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 17 12:27:13.976855 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 17 12:27:13.976862 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 17 12:27:13.976868 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 17 12:27:13.976874 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 17 12:27:13.976881 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 17 12:27:13.976887 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 17 12:27:13.976895 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 17 12:27:13.976973 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 17 12:27:13.976987 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 17 12:27:13.976998 kernel: iommu: Default domain type: Translated
Jan 17 12:27:13.977010 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 12:27:13.977020 kernel: PCI: Using ACPI for IRQ routing
Jan 17 12:27:13.977026 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 12:27:13.977034 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 17 12:27:13.977056 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Jan 17 12:27:13.977238 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 17 12:27:13.977458 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 17 12:27:13.977615 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 12:27:13.977633 kernel: vgaarb: loaded
Jan 17 12:27:13.977645 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 17 12:27:13.977656 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 17 12:27:13.977668 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 12:27:13.977679 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 12:27:13.977691 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 12:27:13.977702 kernel: pnp: PnP ACPI init
Jan 17 12:27:13.977862 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 17 12:27:13.977882 kernel: pnp: PnP ACPI: found 5 devices
Jan 17 12:27:13.977894 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 12:27:13.977905 kernel: NET: Registered PF_INET protocol family
Jan 17 12:27:13.977933 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 12:27:13.977940 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 17 12:27:13.977946 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 12:27:13.977953 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 12:27:13.977964 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 17 12:27:13.977970 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 17 12:27:13.977985 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 12:27:13.977992 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 12:27:13.977998 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 12:27:13.978005 kernel: NET: Registered PF_XDP protocol family
Jan 17 12:27:13.978148 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 17 12:27:13.978275 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 17 12:27:13.978399 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 17 12:27:13.978545 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Jan 17 12:27:13.978687 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Jan 17 12:27:13.978829 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Jan 17 12:27:13.979041 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 17 12:27:13.979212 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 17 12:27:13.979355 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Jan 17 12:27:13.979463 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 17 12:27:13.979619 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 17 12:27:13.979746 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 17 12:27:13.979870 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 17 12:27:13.980063 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 17 12:27:13.980178 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 17 12:27:13.980340 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 17 12:27:13.980509 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 17 12:27:13.980655 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 17 12:27:13.980797 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 17 12:27:13.981024 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 17 12:27:13.981193 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 17 12:27:13.981361 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 17 12:27:13.981483 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Jan 17 12:27:13.981589 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 17 12:27:13.981738 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 17 12:27:13.981875 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Jan 17 12:27:13.982015 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 17 12:27:13.982164 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 17 12:27:13.982271 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 17 12:27:13.982417 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Jan 17 12:27:13.982577 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Jan 17 12:27:13.982743 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 17 12:27:13.982876 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 17 12:27:13.983074 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Jan 17 12:27:13.983229 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 17 12:27:13.983393 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 17 12:27:13.983531 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 12:27:13.983689 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 12:27:13.983826 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 12:27:13.983952 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Jan 17 12:27:13.984110 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 17 12:27:13.984259 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 17 12:27:13.984395 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 17 12:27:13.984533 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Jan 17 12:27:13.984709 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 17 12:27:13.984869 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 17 12:27:13.985145 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 17 12:27:13.985292 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 17 12:27:13.985438 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 17 12:27:13.985565 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 17 12:27:13.985709 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 17 12:27:13.985833 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 17 12:27:13.986003 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 17 12:27:13.986154 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 17 12:27:13.986288 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Jan 17 12:27:13.986412 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 17 12:27:13.986541 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 17 12:27:13.986671 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Jan 17 12:27:13.986794 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Jan 17 12:27:13.986973 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 17 12:27:13.987131 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Jan 17 12:27:13.987232 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 17 12:27:13.987351 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 17 12:27:13.987368 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 17 12:27:13.987375 kernel: PCI: CLS 0 bytes, default 64
Jan 17 12:27:13.987382 kernel: Initialise system trusted keyrings
Jan 17 12:27:13.987389 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 17 12:27:13.987396 kernel: Key type asymmetric registered
Jan 17 12:27:13.987403 kernel: Asymmetric key parser 'x509' registered
Jan 17 12:27:13.987410 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 12:27:13.987417 kernel: io scheduler mq-deadline registered
Jan 17 12:27:13.987423 kernel: io scheduler kyber registered
Jan 17 12:27:13.987430 kernel: io scheduler bfq registered
Jan 17 12:27:13.987539 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Jan 17 12:27:13.987644 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Jan 17 12:27:13.987760 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Jan 17 12:27:13.987866 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Jan 17 12:27:13.988029 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jan 17 12:27:13.988154 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jan 17 12:27:13.988259 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jan 17 12:27:13.988389 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jan 17 12:27:13.988501 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jan 17 12:27:13.988604 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jan 17 12:27:13.988707 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jan 17 12:27:13.988810 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jan 17 12:27:13.988959 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jan 17 12:27:13.989080 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jan 17 12:27:13.989184 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jan 17 12:27:13.989285 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jan 17 12:27:13.989299 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 17 12:27:13.989399 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Jan 17 12:27:13.989500 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Jan 17 12:27:13.989509 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 12:27:13.989516 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Jan 17 12:27:13.989523 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 12:27:13.989530 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 12:27:13.989536 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 12:27:13.989543 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 12:27:13.989552 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 12:27:13.989658 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 17 12:27:13.989668 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 12:27:13.989767 kernel: rtc_cmos 00:03: registered as rtc0
Jan 17 12:27:13.989862 kernel: rtc_cmos 00:03: setting system clock to 2025-01-17T12:27:13 UTC (1737116833)
Jan 17 12:27:13.989989 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 17 12:27:13.990000 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 17 12:27:13.990010 kernel: NET: Registered PF_INET6 protocol family
Jan 17 12:27:13.990017 kernel: Segment Routing with IPv6
Jan 17 12:27:13.990023 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 12:27:13.990030 kernel: NET: Registered PF_PACKET protocol family
Jan 17 12:27:13.990036 kernel: Key type dns_resolver registered
Jan 17 12:27:13.990054 kernel: IPI shorthand broadcast: enabled
Jan 17 12:27:13.990061 kernel: sched_clock: Marking stable (1146008204, 134550967)->(1287972782, -7413611)
Jan 17 12:27:13.990068 kernel: registered taskstats version 1
Jan 17 12:27:13.990074 kernel: Loading compiled-in X.509 certificates
Jan 17 12:27:13.990081 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80'
Jan 17 12:27:13.990091 kernel: Key type .fscrypt registered
Jan 17 12:27:13.990097 kernel: Key type fscrypt-provisioning registered
Jan 17 12:27:13.990106 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 12:27:13.990112 kernel: ima: Allocated hash algorithm: sha1
Jan 17 12:27:13.990119 kernel: ima: No architecture policies found
Jan 17 12:27:13.990125 kernel: clk: Disabling unused clocks
Jan 17 12:27:13.990132 kernel: Freeing unused kernel image (initmem) memory: 42848K
Jan 17 12:27:13.990139 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 12:27:13.990147 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 17 12:27:13.990154 kernel: Run /init as init process
Jan 17 12:27:13.990160 kernel: with arguments:
Jan 17 12:27:13.990167 kernel: /init
Jan 17 12:27:13.990173 kernel: with environment:
Jan 17 12:27:13.990180 kernel: HOME=/
Jan 17 12:27:13.990186 kernel: TERM=linux
Jan 17 12:27:13.990192 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 17 12:27:13.990201 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:27:13.990212 systemd[1]: Detected virtualization kvm.
Jan 17 12:27:13.990219 systemd[1]: Detected architecture x86-64.
Jan 17 12:27:13.990225 systemd[1]: Running in initrd.
Jan 17 12:27:13.990232 systemd[1]: No hostname configured, using default hostname.
Jan 17 12:27:13.990239 systemd[1]: Hostname set to .
Jan 17 12:27:13.990246 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 12:27:13.990253 systemd[1]: Queued start job for default target initrd.target.
Jan 17 12:27:13.990260 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:27:13.990269 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:27:13.990276 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 12:27:13.990283 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:27:13.990290 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 12:27:13.990298 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 12:27:13.990306 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 12:27:13.990315 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 12:27:13.990322 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:27:13.990330 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:27:13.990337 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:27:13.990343 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:27:13.990350 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:27:13.990357 systemd[1]: Reached target timers.target - Timer Units.
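
The kernel above hands BOOT_IMAGE=/flatcar/vmlinuz-a to /init via the environment because it is not a recognized kernel parameter. A minimal sketch of how early userspace commonly splits /proc/cmdline into flags and key=value pairs (duplicate keys, such as rootflags=rw appearing twice above, are typically resolved last-one-wins):

    # Parse /proc/cmdline into bare flags and key=value parameters.
    import shlex

    cmdline = open("/proc/cmdline").read()
    flags, params = [], {}
    for token in shlex.split(cmdline):  # shlex handles quoted values
        key, sep, value = token.partition("=")
        if sep:
            params[key] = value  # later occurrences overwrite earlier ones
        else:
            flags.append(token)

    print(params.get("root"))            # LABEL=ROOT on this machine
    print(params.get("verity.usrhash"))  # the dm-verity root hash for /usr
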
Jan 17 12:27:13.990364 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:27:13.990371 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:27:13.990380 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 12:27:13.990387 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 12:27:13.990394 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:27:13.990401 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:27:13.990408 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:27:13.990415 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 12:27:13.990422 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 12:27:13.990429 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:27:13.990438 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 12:27:13.990445 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 12:27:13.990452 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:27:13.990459 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:27:13.990466 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:27:13.990473 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 12:27:13.990480 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:27:13.990487 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 12:27:13.990515 systemd-journald[187]: Collecting audit messages is disabled.
Jan 17 12:27:13.990547 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 12:27:13.990561 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 12:27:13.990573 kernel: Bridge firewalling registered
Jan 17 12:27:13.990580 systemd-journald[187]: Journal started
Jan 17 12:27:13.990596 systemd-journald[187]: Runtime Journal (/run/log/journal/7d853b771fa1400393ef17a553565e2c) is 4.8M, max 38.4M, 33.6M free.
Jan 17 12:27:13.945684 systemd-modules-load[188]: Inserted module 'overlay'
Jan 17 12:27:13.970206 systemd-modules-load[188]: Inserted module 'br_netfilter'
Jan 17 12:27:14.005138 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:27:14.005119 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:27:14.006455 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:27:14.008409 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:27:14.014088 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:27:14.015986 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:27:14.019068 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:27:14.022377 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:27:14.032549 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:27:14.037230 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:27:14.040172 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:27:14.045090 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 12:27:14.045704 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:27:14.050031 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:27:14.056274 dracut-cmdline[220]: dracut-dracut-053
Jan 17 12:27:14.059641 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:27:14.079404 systemd-resolved[224]: Positive Trust Anchors:
Jan 17 12:27:14.079419 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:27:14.079444 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 12:27:14.085334 systemd-resolved[224]: Defaulting to hostname 'linux'.
Jan 17 12:27:14.086356 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 12:27:14.087153 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:27:14.131939 kernel: SCSI subsystem initialized
Jan 17 12:27:14.139933 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 12:27:14.149958 kernel: iscsi: registered transport (tcp)
Jan 17 12:27:14.168964 kernel: iscsi: registered transport (qla4xxx)
Jan 17 12:27:14.168999 kernel: QLogic iSCSI HBA Driver
Jan 17 12:27:14.208138 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:27:14.213056 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 12:27:14.234942 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 12:27:14.234976 kernel: device-mapper: uevent: version 1.0.3
Jan 17 12:27:14.237948 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 12:27:14.274936 kernel: raid6: avx2x4 gen() 31778 MB/s
Jan 17 12:27:14.291936 kernel: raid6: avx2x2 gen() 29102 MB/s
Jan 17 12:27:14.309031 kernel: raid6: avx2x1 gen() 24802 MB/s
Jan 17 12:27:14.309069 kernel: raid6: using algorithm avx2x4 gen() 31778 MB/s
Jan 17 12:27:14.327128 kernel: raid6: .... xor() 4447 MB/s, rmw enabled
Jan 17 12:27:14.327152 kernel: raid6: using avx2x2 recovery algorithm
Jan 17 12:27:14.345939 kernel: xor: automatically using best checksumming function avx
Jan 17 12:27:14.472954 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 12:27:14.484303 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
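
The negative trust anchor list printed by systemd-resolved above marks DNS subtrees where DNSSEC validation is not attempted. A sketch of the suffix matching involved (per DNS label, not substring; the four anchors are taken from the list above):

    # Check whether a name falls at or below a negative trust anchor.
    NEGATIVE_ANCHORS = {"home.arpa", "local", "test", "168.192.in-addr.arpa"}

    def under_negative_anchor(name: str, anchors=NEGATIVE_ANCHORS) -> bool:
        labels = name.rstrip(".").lower().split(".")
        # Try every suffix of the name on label boundaries.
        return any(".".join(labels[i:]) in anchors for i in range(len(labels)))

    assert under_negative_anchor("printer.local")
    assert under_negative_anchor("5.1.168.192.in-addr.arpa")
    assert not under_negative_anchor("example.com")
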
Jan 17 12:27:14.490079 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:27:14.502942 systemd-udevd[407]: Using default interface naming scheme 'v255'.
Jan 17 12:27:14.506812 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:27:14.514101 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 12:27:14.525358 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation
Jan 17 12:27:14.552635 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:27:14.557067 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:27:14.621997 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:27:14.632116 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 12:27:14.648995 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:27:14.650403 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:27:14.652410 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:27:14.653652 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:27:14.663077 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 12:27:14.675016 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:27:14.741160 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 12:27:14.745103 kernel: scsi host0: Virtio SCSI HBA
Jan 17 12:27:14.752140 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 17 12:27:14.749848 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:27:14.749989 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:27:14.755425 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:27:14.756850 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:27:14.757627 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:27:14.759486 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:27:14.761752 kernel: libata version 3.00 loaded.
Jan 17 12:27:14.765343 kernel: ACPI: bus type USB registered
Jan 17 12:27:14.765368 kernel: usbcore: registered new interface driver usbfs
Jan 17 12:27:14.767082 kernel: usbcore: registered new interface driver hub
Jan 17 12:27:14.768412 kernel: usbcore: registered new device driver usb
Jan 17 12:27:14.771138 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:27:14.794952 kernel: ahci 0000:00:1f.2: version 3.0
Jan 17 12:27:14.828005 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 17 12:27:14.828021 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 12:27:14.828030 kernel: AES CTR mode by8 optimization enabled
Jan 17 12:27:14.828038 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 17 12:27:14.828203 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 17 12:27:14.828328 kernel: scsi host1: ahci
Jan 17 12:27:14.828461 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 17 12:27:14.828599 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 17 12:27:14.828725 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 17 12:27:14.828847 kernel: scsi host2: ahci
Jan 17 12:27:14.829002 kernel: scsi host3: ahci
Jan 17 12:27:14.829150 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 17 12:27:14.829276 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 17 12:27:14.829397 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 17 12:27:14.829536 kernel: hub 1-0:1.0: USB hub found
Jan 17 12:27:14.829684 kernel: hub 1-0:1.0: 4 ports detected
Jan 17 12:27:14.829817 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 17 12:27:14.831135 kernel: hub 2-0:1.0: USB hub found
Jan 17 12:27:14.831298 kernel: hub 2-0:1.0: 4 ports detected
Jan 17 12:27:14.831435 kernel: scsi host4: ahci
Jan 17 12:27:14.831568 kernel: scsi host5: ahci
Jan 17 12:27:14.831692 kernel: scsi host6: ahci
Jan 17 12:27:14.831812 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48
Jan 17 12:27:14.831822 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48
Jan 17 12:27:14.831830 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48
Jan 17 12:27:14.831838 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48
Jan 17 12:27:14.831846 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48
Jan 17 12:27:14.831857 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48
Jan 17 12:27:14.878362 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:27:14.884070 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:27:14.898700 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:27:15.057952 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 17 12:27:15.145329 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 17 12:27:15.145419 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 17 12:27:15.145441 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 17 12:27:15.145460 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 17 12:27:15.149108 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 17 12:27:15.149947 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 17 12:27:15.150941 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 17 12:27:15.153170 kernel: ata1.00: applying bridge limits
Jan 17 12:27:15.153266 kernel: ata1.00: configured for UDMA/100
Jan 17 12:27:15.155344 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 17 12:27:15.189767 kernel: sd 0:0:0:0: Power-on or device reset occurred
Jan 17 12:27:15.211853 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jan 17 12:27:15.212040 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 17 12:27:15.212190 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Jan 17 12:27:15.212320 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 17 12:27:15.212455 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 12:27:15.212465 kernel: GPT:17805311 != 80003071
Jan 17 12:27:15.212473 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 12:27:15.212482 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 17 12:27:15.212491 kernel: GPT:17805311 != 80003071
Jan 17 12:27:15.212499 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 12:27:15.212506 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 12:27:15.212514 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 17 12:27:15.218954 kernel: usbcore: registered new interface driver usbhid
Jan 17 12:27:15.218980 kernel: usbhid: USB HID core driver
Jan 17 12:27:15.224951 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3
Jan 17 12:27:15.224977 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 17 12:27:15.235615 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Jan 17 12:27:15.235786 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 17 12:27:15.235797 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Jan 17 12:27:15.252934 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (465)
Jan 17 12:27:15.258937 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (466)
Jan 17 12:27:15.265474 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 17 12:27:15.276035 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 17 12:27:15.281493 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 17 12:27:15.286014 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 17 12:27:15.287263 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 17 12:27:15.294038 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
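
The GPT complaints above mean the backup header sits at LBA 17805311 while the disk's last sector is 80003071, i.e. the image was written for a smaller disk; disk-uuid.service repairs this next. A sketch that reproduces the check by reading the primary header at LBA 1 (field offsets per the UEFI spec; /dev/sda and root privileges are assumptions):

    # Compare the GPT primary header's AlternateLBA with the real last sector.
    import os, struct

    DISK, SECTOR = "/dev/sda", 512

    with open(DISK, "rb") as f:
        size = f.seek(0, os.SEEK_END)  # disk size in bytes
        f.seek(SECTOR)                 # primary GPT header lives at LBA 1
        hdr = f.read(92)

    sig, _rev, _hsz, _crc, _res, my_lba, alt_lba = struct.unpack_from("<8s4IQQ", hdr)
    assert sig == b"EFI PART"

    last_lba = size // SECTOR - 1
    if alt_lba != last_lba:
        # On this machine this prints "GPT:17805311 != 80003071": the backup
        # header is no longer at the end of the (grown) disk.
        print(f"GPT:{alt_lba} != {last_lba}")
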
Jan 17 12:27:15.299275 disk-uuid[576]: Primary Header is updated. Jan 17 12:27:15.299275 disk-uuid[576]: Secondary Entries is updated. Jan 17 12:27:15.299275 disk-uuid[576]: Secondary Header is updated. Jan 17 12:27:15.303936 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:27:15.310937 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:27:15.316955 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:27:16.318051 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:27:16.319746 disk-uuid[577]: The operation has completed successfully. Jan 17 12:27:16.378298 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:27:16.378406 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:27:16.384101 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:27:16.387277 sh[596]: Success Jan 17 12:27:16.399015 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 17 12:27:16.448037 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:27:16.456994 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:27:16.458834 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 12:27:16.484031 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85 Jan 17 12:27:16.484102 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:27:16.484123 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:27:16.487896 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:27:16.487948 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:27:16.497941 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 17 12:27:16.499659 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:27:16.500962 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 12:27:16.507045 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 12:27:16.509088 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 12:27:16.527369 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:27:16.527411 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:27:16.527432 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:27:16.533158 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:27:16.533190 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:27:16.546933 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:27:16.545322 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 12:27:16.553260 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 12:27:16.561164 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 12:27:16.622340 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:27:16.632273 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 17 12:27:16.649768 ignition[698]: Ignition 2.19.0 Jan 17 12:27:16.649782 ignition[698]: Stage: fetch-offline Jan 17 12:27:16.652097 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:27:16.649818 ignition[698]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:27:16.649828 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 12:27:16.650459 ignition[698]: parsed url from cmdline: "" Jan 17 12:27:16.650467 ignition[698]: no config URL provided Jan 17 12:27:16.650473 ignition[698]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:27:16.650484 ignition[698]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:27:16.650489 ignition[698]: failed to fetch config: resource requires networking Jan 17 12:27:16.650653 ignition[698]: Ignition finished successfully Jan 17 12:27:16.669160 systemd-networkd[778]: lo: Link UP Jan 17 12:27:16.669173 systemd-networkd[778]: lo: Gained carrier Jan 17 12:27:16.672804 systemd-networkd[778]: Enumeration completed Jan 17 12:27:16.672907 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:27:16.674182 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:27:16.674187 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:27:16.675251 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:27:16.675256 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:27:16.675294 systemd[1]: Reached target network.target - Network. Jan 17 12:27:16.675988 systemd-networkd[778]: eth0: Link UP Jan 17 12:27:16.675993 systemd-networkd[778]: eth0: Gained carrier Jan 17 12:27:16.676002 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:27:16.682098 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 17 12:27:16.683313 systemd-networkd[778]: eth1: Link UP Jan 17 12:27:16.683319 systemd-networkd[778]: eth1: Gained carrier Jan 17 12:27:16.683329 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 17 12:27:16.698787 ignition[786]: Ignition 2.19.0 Jan 17 12:27:16.698800 ignition[786]: Stage: fetch Jan 17 12:27:16.699012 ignition[786]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:27:16.699025 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 12:27:16.699153 ignition[786]: parsed url from cmdline: "" Jan 17 12:27:16.699158 ignition[786]: no config URL provided Jan 17 12:27:16.699167 ignition[786]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:27:16.699178 ignition[786]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:27:16.699203 ignition[786]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 17 12:27:16.699412 ignition[786]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 17 12:27:16.713973 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 12:27:16.741957 systemd-networkd[778]: eth0: DHCPv4 address 138.199.154.203/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 17 12:27:16.900046 ignition[786]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 17 12:27:16.904832 ignition[786]: GET result: OK Jan 17 12:27:16.904896 ignition[786]: parsing config with SHA512: c06c4b9b9cfc078233afb7182a5ea65d582dd8a3a588c4333dfda5801bd0dc4146ddb5b233d817f2eade3d48a02ed382c1ef0d2f811cd3cff932598dbeb10441 Jan 17 12:27:16.908819 unknown[786]: fetched base config from "system" Jan 17 12:27:16.908830 unknown[786]: fetched base config from "system" Jan 17 12:27:16.909184 ignition[786]: fetch: fetch complete Jan 17 12:27:16.908844 unknown[786]: fetched user config from "hetzner" Jan 17 12:27:16.909190 ignition[786]: fetch: fetch passed Jan 17 12:27:16.911991 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 12:27:16.909234 ignition[786]: Ignition finished successfully Jan 17 12:27:16.919081 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 12:27:16.933997 ignition[793]: Ignition 2.19.0 Jan 17 12:27:16.934005 ignition[793]: Stage: kargs Jan 17 12:27:16.934230 ignition[793]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:27:16.934249 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 12:27:16.935085 ignition[793]: kargs: kargs passed Jan 17 12:27:16.937214 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 12:27:16.935133 ignition[793]: Ignition finished successfully Jan 17 12:27:16.948092 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 12:27:16.960384 ignition[799]: Ignition 2.19.0 Jan 17 12:27:16.960406 ignition[799]: Stage: disks Jan 17 12:27:16.960607 ignition[799]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:27:16.963441 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:27:16.960627 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 12:27:16.964156 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:27:16.961545 ignition[799]: disks: disks passed Jan 17 12:27:16.964603 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:27:16.961592 ignition[799]: Ignition finished successfully Jan 17 12:27:16.965103 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:27:16.966284 systemd[1]: Reached target sysinit.target - System Initialization. 
Jan 17 12:27:16.967710 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:27:16.975165 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 12:27:16.989479 systemd-fsck[807]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 17 12:27:16.992591 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:27:16.998017 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 12:27:17.077187 kernel: EXT4-fs (sda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none. Jan 17 12:27:17.077616 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:27:17.078533 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:27:17.083973 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:27:17.086710 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:27:17.089513 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 17 12:27:17.090977 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:27:17.092075 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:27:17.097930 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (815) Jan 17 12:27:17.102288 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:27:17.106471 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:27:17.106489 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:27:17.106497 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:27:17.111714 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:27:17.111739 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:27:17.118045 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 12:27:17.120341 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:27:17.155490 coreos-metadata[817]: Jan 17 12:27:17.155 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 17 12:27:17.157700 coreos-metadata[817]: Jan 17 12:27:17.156 INFO Fetch successful Jan 17 12:27:17.157700 coreos-metadata[817]: Jan 17 12:27:17.156 INFO wrote hostname ci-4081-3-0-0-e492bbae02 to /sysroot/etc/hostname Jan 17 12:27:17.159948 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:27:17.160018 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 12:27:17.163126 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:27:17.167823 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:27:17.170989 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:27:17.253351 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:27:17.256994 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:27:17.261875 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:27:17.268938 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:27:17.291724 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 17 12:27:17.294758 ignition[931]: INFO : Ignition 2.19.0 Jan 17 12:27:17.294758 ignition[931]: INFO : Stage: mount Jan 17 12:27:17.296144 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:27:17.296144 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 12:27:17.296144 ignition[931]: INFO : mount: mount passed Jan 17 12:27:17.296144 ignition[931]: INFO : Ignition finished successfully Jan 17 12:27:17.297486 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:27:17.304055 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:27:17.482740 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:27:17.492476 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:27:17.520971 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (944) Jan 17 12:27:17.526529 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:27:17.526579 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:27:17.529521 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:27:17.540459 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:27:17.540507 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:27:17.545465 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:27:17.579309 ignition[960]: INFO : Ignition 2.19.0 Jan 17 12:27:17.579309 ignition[960]: INFO : Stage: files Jan 17 12:27:17.581677 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:27:17.581677 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 12:27:17.581677 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:27:17.585811 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:27:17.585811 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:27:17.588586 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:27:17.588586 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:27:17.588586 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:27:17.587044 unknown[960]: wrote ssh authorized keys file for user: core Jan 17 12:27:17.593993 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:27:17.593993 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 17 12:27:17.681181 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 12:27:18.006480 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:27:18.006480 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 12:27:18.010295 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 17 12:27:18.518213 systemd-networkd[778]: eth1: Gained IPv6LL Jan 17 
12:27:18.552426 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 12:27:18.661866 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 12:27:18.664519 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:27:18.664519 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:27:18.664519 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:27:18.670211 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:27:18.670211 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:27:18.670211 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:27:18.670211 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:27:18.670211 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:27:18.670211 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:27:18.670211 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:27:18.670211 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:27:18.670211 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:27:18.670211 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:27:18.670211 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 17 12:27:18.710173 systemd-networkd[778]: eth0: Gained IPv6LL Jan 17 12:27:19.179788 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 17 12:27:19.569577 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:27:19.569577 ignition[960]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 17 12:27:19.572739 ignition[960]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:27:19.572739 ignition[960]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:27:19.572739 ignition[960]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 17 
12:27:19.572739 ignition[960]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 17 12:27:19.572739 ignition[960]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 17 12:27:19.572739 ignition[960]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 17 12:27:19.572739 ignition[960]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 17 12:27:19.572739 ignition[960]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:27:19.572739 ignition[960]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:27:19.572739 ignition[960]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:27:19.572739 ignition[960]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:27:19.572739 ignition[960]: INFO : files: files passed Jan 17 12:27:19.572739 ignition[960]: INFO : Ignition finished successfully Jan 17 12:27:19.574775 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:27:19.587888 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:27:19.590463 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:27:19.591522 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:27:19.591640 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:27:19.614252 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:27:19.614252 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:27:19.617010 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:27:19.618567 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:27:19.620291 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:27:19.626083 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:27:19.656046 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:27:19.656221 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:27:19.657816 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:27:19.658483 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:27:19.659669 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:27:19.666053 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:27:19.681287 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:27:19.687138 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:27:19.698038 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Jan 17 12:27:19.699249 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:27:19.700470 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:27:19.701481 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:27:19.701596 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:27:19.702818 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:27:19.703999 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:27:19.704996 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:27:19.706003 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:27:19.707157 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:27:19.708235 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:27:19.709664 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:27:19.710897 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:27:19.712148 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:27:19.713282 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:27:19.714224 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:27:19.714470 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:27:19.715695 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:27:19.717123 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:27:19.718257 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:27:19.718512 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:27:19.719554 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:27:19.719774 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:27:19.721139 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:27:19.721307 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:27:19.722524 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:27:19.722670 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:27:19.723770 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 17 12:27:19.723991 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 12:27:19.734434 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:27:19.738113 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:27:19.738591 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:27:19.738699 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:27:19.739621 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:27:19.739715 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:27:19.751353 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:27:19.751470 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 17 12:27:19.757750 ignition[1014]: INFO : Ignition 2.19.0 Jan 17 12:27:19.758878 ignition[1014]: INFO : Stage: umount Jan 17 12:27:19.759655 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:27:19.760837 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 12:27:19.762998 ignition[1014]: INFO : umount: umount passed Jan 17 12:27:19.762998 ignition[1014]: INFO : Ignition finished successfully Jan 17 12:27:19.764624 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:27:19.765302 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:27:19.766397 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:27:19.766484 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:27:19.768101 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:27:19.768162 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:27:19.770314 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 12:27:19.770359 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 12:27:19.777457 systemd[1]: Stopped target network.target - Network. Jan 17 12:27:19.778063 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:27:19.778137 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:27:19.778629 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:27:19.779039 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:27:19.780984 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:27:19.782017 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:27:19.788789 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:27:19.789298 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:27:19.789356 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:27:19.791929 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:27:19.791973 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:27:19.797029 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:27:19.797106 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:27:19.804051 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:27:19.804131 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:27:19.805656 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:27:19.806217 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:27:19.808977 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:27:19.809728 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:27:19.809834 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:27:19.811132 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:27:19.811210 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:27:19.812008 systemd-networkd[778]: eth0: DHCPv6 lease lost Jan 17 12:27:19.815667 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:27:19.815820 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Jan 17 12:27:19.816198 systemd-networkd[778]: eth1: DHCPv6 lease lost Jan 17 12:27:19.818331 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:27:19.818462 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:27:19.820701 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:27:19.820762 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:27:19.826050 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:27:19.826616 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:27:19.826681 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:27:19.827230 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:27:19.827280 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:27:19.827747 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:27:19.827794 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:27:19.828315 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:27:19.828359 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:27:19.831889 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:27:19.847633 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:27:19.847844 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:27:19.849509 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:27:19.849732 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:27:19.851479 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:27:19.851552 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:27:19.852279 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:27:19.852334 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:27:19.853458 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:27:19.853524 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:27:19.855209 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:27:19.855270 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:27:19.856510 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:27:19.856573 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:27:19.864101 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:27:19.865354 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:27:19.866006 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:27:19.866522 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 12:27:19.866567 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:27:19.868994 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:27:19.869040 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Jan 17 12:27:19.869615 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:27:19.869659 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:27:19.873222 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:27:19.873340 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:27:19.875224 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:27:19.885192 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:27:19.892860 systemd[1]: Switching root. Jan 17 12:27:19.921610 systemd-journald[187]: Journal stopped Jan 17 12:27:20.906706 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). Jan 17 12:27:20.906783 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:27:20.906797 kernel: SELinux: policy capability open_perms=1 Jan 17 12:27:20.906807 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:27:20.906821 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:27:20.906830 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:27:20.906839 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:27:20.906852 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:27:20.906861 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:27:20.906871 kernel: audit: type=1403 audit(1737116840.059:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:27:20.906881 systemd[1]: Successfully loaded SELinux policy in 45.788ms. Jan 17 12:27:20.906902 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.246ms. Jan 17 12:27:20.907130 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:27:20.907152 systemd[1]: Detected virtualization kvm. Jan 17 12:27:20.907163 systemd[1]: Detected architecture x86-64. Jan 17 12:27:20.907176 systemd[1]: Detected first boot. Jan 17 12:27:20.907187 systemd[1]: Hostname set to <ci-4081-3-0-0-e492bbae02>. Jan 17 12:27:20.907197 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:27:20.907207 zram_generator::config[1056]: No configuration found. Jan 17 12:27:20.907218 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:27:20.907233 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 12:27:20.907243 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 12:27:20.907253 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 12:27:20.907266 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:27:20.907276 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:27:20.907287 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:27:20.909015 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:27:20.909032 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:27:20.909044 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:27:20.909060 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 17 12:27:20.909071 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:27:20.909095 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:27:20.909109 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:27:20.909119 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:27:20.909130 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:27:20.909140 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 12:27:20.909150 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:27:20.909160 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 12:27:20.909170 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:27:20.909181 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 12:27:20.909193 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 12:27:20.909203 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 12:27:20.909213 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:27:20.909224 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:27:20.909234 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:27:20.909244 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:27:20.909254 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:27:20.909267 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:27:20.909277 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:27:20.909292 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:27:20.909304 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:27:20.909314 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:27:20.909324 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:27:20.909336 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:27:20.909346 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:27:20.909356 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:27:20.909366 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:27:20.909376 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:27:20.909387 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:27:20.909397 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:27:20.909407 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:27:20.909419 systemd[1]: Reached target machines.target - Containers. Jan 17 12:27:20.909429 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jan 17 12:27:20.909439 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:27:20.909450 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:27:20.909460 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:27:20.909471 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:27:20.909481 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:27:20.909491 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:27:20.909501 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:27:20.909513 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:27:20.909524 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:27:20.909534 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 12:27:20.909544 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 12:27:20.909554 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 12:27:20.909564 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 12:27:20.909574 kernel: fuse: init (API version 7.39) Jan 17 12:27:20.909584 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:27:20.909594 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:27:20.909606 kernel: loop: module loaded Jan 17 12:27:20.909634 systemd-journald[1146]: Collecting audit messages is disabled. Jan 17 12:27:20.909654 kernel: ACPI: bus type drm_connector registered Jan 17 12:27:20.909665 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:27:20.909676 systemd-journald[1146]: Journal started Jan 17 12:27:20.909694 systemd-journald[1146]: Runtime Journal (/run/log/journal/7d853b771fa1400393ef17a553565e2c) is 4.8M, max 38.4M, 33.6M free. Jan 17 12:27:20.628529 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:27:20.657606 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 17 12:27:20.658089 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 12:27:20.921973 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:27:20.922018 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:27:20.922036 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 12:27:20.923472 systemd[1]: Stopped verity-setup.service. Jan 17 12:27:20.927946 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:27:20.931620 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:27:20.931829 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:27:20.932523 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:27:20.933198 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:27:20.933831 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:27:20.934495 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Jan 17 12:27:20.935119 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:27:20.935959 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:27:20.936735 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:27:20.937556 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:27:20.937769 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:27:20.938771 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:27:20.938931 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:27:20.939730 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:27:20.939957 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:27:20.940712 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:27:20.941006 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:27:20.941811 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:27:20.942045 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:27:20.943057 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:27:20.943276 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:27:20.944092 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:27:20.944825 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:27:20.945838 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:27:20.960484 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:27:20.966950 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:27:20.969980 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:27:20.972006 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:27:20.972034 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:27:20.973938 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:27:20.979602 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:27:20.984016 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:27:20.985548 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:27:20.994627 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:27:20.998051 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:27:20.998687 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:27:21.001139 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:27:21.002460 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:27:21.004121 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 17 12:27:21.015134 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:27:21.018994 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:27:21.021698 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:27:21.023451 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:27:21.024761 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:27:21.038045 systemd-journald[1146]: Time spent on flushing to /var/log/journal/7d853b771fa1400393ef17a553565e2c is 29.346ms for 1142 entries. Jan 17 12:27:21.038045 systemd-journald[1146]: System Journal (/var/log/journal/7d853b771fa1400393ef17a553565e2c) is 8.0M, max 584.8M, 576.8M free. Jan 17 12:27:21.108698 systemd-journald[1146]: Received client request to flush runtime journal. Jan 17 12:27:21.108742 kernel: loop0: detected capacity change from 0 to 8 Jan 17 12:27:21.108761 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:27:21.065477 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:27:21.067210 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:27:21.079499 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:27:21.110877 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:27:21.115162 kernel: loop1: detected capacity change from 0 to 210664 Jan 17 12:27:21.129103 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:27:21.132027 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:27:21.135825 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:27:21.140177 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jan 17 12:27:21.140191 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jan 17 12:27:21.156543 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:27:21.164061 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:27:21.165331 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:27:21.176949 kernel: loop2: detected capacity change from 0 to 142488 Jan 17 12:27:21.179116 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:27:21.200020 udevadm[1196]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 17 12:27:21.209610 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:27:21.218028 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:27:21.231006 kernel: loop3: detected capacity change from 0 to 140768 Jan 17 12:27:21.233340 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. Jan 17 12:27:21.233355 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. Jan 17 12:27:21.238883 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 17 12:27:21.282734 kernel: loop4: detected capacity change from 0 to 8 Jan 17 12:27:21.285959 kernel: loop5: detected capacity change from 0 to 210664 Jan 17 12:27:21.304947 kernel: loop6: detected capacity change from 0 to 142488 Jan 17 12:27:21.320977 kernel: loop7: detected capacity change from 0 to 140768 Jan 17 12:27:21.343009 (sd-merge)[1205]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 17 12:27:21.343711 (sd-merge)[1205]: Merged extensions into '/usr'. Jan 17 12:27:21.353021 systemd[1]: Reloading requested from client PID 1176 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:27:21.353041 systemd[1]: Reloading... Jan 17 12:27:21.449021 zram_generator::config[1232]: No configuration found. Jan 17 12:27:21.558937 ldconfig[1171]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:27:21.578604 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:27:21.622668 systemd[1]: Reloading finished in 269 ms. Jan 17 12:27:21.647746 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:27:21.648638 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:27:21.649565 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:27:21.659050 systemd[1]: Starting ensure-sysext.service... Jan 17 12:27:21.661072 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:27:21.669033 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:27:21.672714 systemd[1]: Reloading requested from client PID 1275 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:27:21.672723 systemd[1]: Reloading... Jan 17 12:27:21.697436 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:27:21.697760 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:27:21.699057 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:27:21.699433 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. Jan 17 12:27:21.699560 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. Jan 17 12:27:21.702582 systemd-udevd[1277]: Using default interface naming scheme 'v255'. Jan 17 12:27:21.705248 systemd-tmpfiles[1276]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:27:21.705255 systemd-tmpfiles[1276]: Skipping /boot Jan 17 12:27:21.719207 systemd-tmpfiles[1276]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:27:21.719220 systemd-tmpfiles[1276]: Skipping /boot Jan 17 12:27:21.751949 zram_generator::config[1304]: No configuration found. Jan 17 12:27:21.889946 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 17 12:27:21.898936 kernel: ACPI: button: Power Button [PWRF] Jan 17 12:27:21.910338 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 17 12:27:21.939937 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 12:27:21.975019 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 12:27:21.975429 systemd[1]: Reloading finished in 302 ms. Jan 17 12:27:21.992971 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jan 17 12:27:21.993536 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:27:21.994522 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:27:21.996850 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jan 17 12:27:22.000042 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 17 12:27:22.000278 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 17 12:27:22.000452 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 17 12:27:22.000934 kernel: Console: switching to colour dummy device 80x25 Jan 17 12:27:22.006570 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 17 12:27:22.006750 kernel: [drm] features: -context_init Jan 17 12:27:22.006773 kernel: [drm] number of scanouts: 1 Jan 17 12:27:22.006793 kernel: [drm] number of cap sets: 0 Jan 17 12:27:22.014947 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 17 12:27:22.025788 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 17 12:27:22.032260 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:27:22.035008 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Jan 17 12:27:22.038270 kernel: EDAC MC: Ver: 3.0.0 Jan 17 12:27:22.055430 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:27:22.057002 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1313) Jan 17 12:27:22.057168 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:27:22.057352 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:27:22.059848 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:27:22.062855 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:27:22.064843 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:27:22.065197 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:27:22.068103 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:27:22.077202 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:27:22.084165 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:27:22.087154 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:27:22.087261 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:27:22.102541 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Jan 17 12:27:22.105401 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:27:22.105561 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:27:22.105706 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:27:22.115839 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:27:22.120003 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:27:22.120099 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:27:22.129077 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 17 12:27:22.130759 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:27:22.131018 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:27:22.135895 kernel: Console: switching to colour frame buffer device 160x50 Jan 17 12:27:22.137215 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:27:22.199548 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 17 12:27:22.203415 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:27:22.203808 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:27:22.206515 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:27:22.210117 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:27:22.211257 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:27:22.213907 augenrules[1416]: No rules Jan 17 12:27:22.217231 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:27:22.217405 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:27:22.220898 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:27:22.221760 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:27:22.224052 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:27:22.225323 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:27:22.245396 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:27:22.245598 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:27:22.278973 systemd[1]: Finished ensure-sysext.service. Jan 17 12:27:22.284677 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:27:22.295333 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:27:22.300262 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 17 12:27:22.300352 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:27:22.307146 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 12:27:22.315129 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:27:22.324218 systemd-networkd[1391]: lo: Link UP Jan 17 12:27:22.324231 systemd-networkd[1391]: lo: Gained carrier Jan 17 12:27:22.326215 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:27:22.329902 systemd-resolved[1392]: Positive Trust Anchors: Jan 17 12:27:22.331017 systemd-resolved[1392]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:27:22.331161 systemd-resolved[1392]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:27:22.334095 systemd-networkd[1391]: Enumeration completed Jan 17 12:27:22.334513 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:27:22.337093 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:27:22.337104 systemd-networkd[1391]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:27:22.338502 systemd-networkd[1391]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:27:22.338512 systemd-networkd[1391]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:27:22.341620 systemd-networkd[1391]: eth0: Link UP Jan 17 12:27:22.341627 systemd-networkd[1391]: eth0: Gained carrier Jan 17 12:27:22.341650 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:27:22.344546 systemd-resolved[1392]: Using system hostname 'ci-4081-3-0-0-e492bbae02'. Jan 17 12:27:22.346209 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:27:22.347031 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:27:22.347494 systemd-networkd[1391]: eth1: Link UP Jan 17 12:27:22.347635 systemd-networkd[1391]: eth1: Gained carrier Jan 17 12:27:22.347715 systemd-networkd[1391]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:27:22.349613 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:27:22.355537 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:27:22.355872 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:27:22.357993 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
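
[Annotation] Both NICs above match /usr/lib/systemd/network/zz-default.network, the lowest-priority catch-all, which is why networkd notes the "potentially unpredictable interface name". Illustratively, such a catch-all boils down to something like the following (standard systemd.network options; the file Flatcar actually ships may differ in detail):

    [Match]
    Name=*

    [Network]
    DHCP=yes
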
Jan 17 12:27:22.358222 systemd[1]: Reached target network.target - Network. Jan 17 12:27:22.358305 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:27:22.366075 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:27:22.372972 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:27:22.378180 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:27:22.384007 systemd-networkd[1391]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 12:27:22.386022 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:27:22.407218 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:27:22.426005 systemd-networkd[1391]: eth0: DHCPv4 address 138.199.154.203/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 17 12:27:22.432830 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:27:22.433554 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:27:22.437327 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:27:22.440564 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 12:27:22.441619 systemd-timesyncd[1434]: Contacted time server 178.63.67.56:123 (0.flatcar.pool.ntp.org). Jan 17 12:27:22.442044 systemd-timesyncd[1434]: Initial clock synchronization to Fri 2025-01-17 12:27:22.398023 UTC. Jan 17 12:27:22.442274 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:27:22.453625 lvm[1447]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:27:22.466388 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:27:22.469811 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:27:22.470627 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:27:22.471263 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:27:22.472407 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:27:22.474824 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:27:22.475357 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:27:22.475826 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:27:22.475869 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:27:22.478607 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:27:22.482685 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:27:22.485329 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:27:22.491814 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:27:22.493270 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:27:22.494422 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:27:22.497550 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:27:22.501104 systemd[1]: Reached target basic.target - Basic System. 
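
[Annotation] timesyncd reached 0.flatcar.pool.ntp.org from its built-in fallback list, as logged above. To pin servers explicitly, one would use a standard timesyncd drop-in, e.g. (hypothetical file name; [Time]/NTP= are the standard options):

    # /etc/systemd/timesyncd.conf.d/10-ntp.conf
    [Time]
    NTP=0.flatcar.pool.ntp.org 1.flatcar.pool.ntp.org
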
Jan 17 12:27:22.501610 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:27:22.501636 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:27:22.508005 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:27:22.511728 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 12:27:22.517110 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:27:22.520881 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:27:22.526073 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:27:22.528224 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:27:22.529727 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:27:22.535022 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:27:22.538135 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 17 12:27:22.540994 jq[1460]: false Jan 17 12:27:22.541057 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:27:22.545834 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:27:22.552477 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:27:22.559375 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:27:22.559853 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:27:22.563067 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:27:22.569793 coreos-metadata[1456]: Jan 17 12:27:22.565 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 17 12:27:22.566658 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:27:22.571666 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:27:22.574954 coreos-metadata[1456]: Jan 17 12:27:22.572 INFO Fetch successful Jan 17 12:27:22.574954 coreos-metadata[1456]: Jan 17 12:27:22.573 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 17 12:27:22.573023 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:27:22.580038 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:27:22.580307 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 12:27:22.599538 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:27:22.603028 coreos-metadata[1456]: Jan 17 12:27:22.600 INFO Fetch successful Jan 17 12:27:22.600988 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
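
[Annotation] coreos-metadata is fetching Hetzner's link-local metadata endpoints shown above; the same documents can be pulled by hand from inside the instance to inspect what the agent saw:

    curl http://169.254.169.254/hetzner/v1/metadata
    curl http://169.254.169.254/hetzner/v1/metadata/private-networks
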
Jan 17 12:27:22.603985 extend-filesystems[1461]: Found loop4 Jan 17 12:27:22.604739 extend-filesystems[1461]: Found loop5 Jan 17 12:27:22.605285 extend-filesystems[1461]: Found loop6 Jan 17 12:27:22.607569 extend-filesystems[1461]: Found loop7 Jan 17 12:27:22.607569 extend-filesystems[1461]: Found sda Jan 17 12:27:22.607569 extend-filesystems[1461]: Found sda1 Jan 17 12:27:22.607569 extend-filesystems[1461]: Found sda2 Jan 17 12:27:22.607569 extend-filesystems[1461]: Found sda3 Jan 17 12:27:22.607569 extend-filesystems[1461]: Found usr Jan 17 12:27:22.607569 extend-filesystems[1461]: Found sda4 Jan 17 12:27:22.607569 extend-filesystems[1461]: Found sda6 Jan 17 12:27:22.607569 extend-filesystems[1461]: Found sda7 Jan 17 12:27:22.607569 extend-filesystems[1461]: Found sda9 Jan 17 12:27:22.607569 extend-filesystems[1461]: Checking size of /dev/sda9 Jan 17 12:27:22.630495 (ntainerd)[1485]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:27:22.656555 jq[1471]: true Jan 17 12:27:22.674345 tar[1483]: linux-amd64/helm Jan 17 12:27:22.677107 extend-filesystems[1461]: Resized partition /dev/sda9 Jan 17 12:27:22.684865 dbus-daemon[1457]: [system] SELinux support is enabled Jan 17 12:27:22.691190 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:27:22.703693 jq[1493]: true Jan 17 12:27:22.707776 extend-filesystems[1499]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:27:22.722465 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:27:22.722518 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:27:22.725473 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:27:22.725513 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:27:22.740931 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 17 12:27:22.742882 update_engine[1469]: I20250117 12:27:22.742784 1469 main.cc:92] Flatcar Update Engine starting Jan 17 12:27:22.760006 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:27:22.771543 update_engine[1469]: I20250117 12:27:22.761073 1469 update_check_scheduler.cc:74] Next update check in 9m57s Jan 17 12:27:22.768106 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:27:22.794621 systemd-logind[1467]: New seat seat0. Jan 17 12:27:22.810464 systemd-logind[1467]: Watching system buttons on /dev/input/event2 (Power Button) Jan 17 12:27:22.810491 systemd-logind[1467]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 12:27:22.810720 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:27:22.873555 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 12:27:22.887416 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1323) Jan 17 12:27:22.906763 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
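
[Annotation] extend-filesystems has found the root filesystem on /dev/sda9 smaller than its partition (the kernel logs a grow from 1617920 to 9393147 blocks) and drives the online resize that completes just below. The manual equivalent of what the service does with resize2fs would be roughly:

    # grow the mounted ext4 root to fill /dev/sda9 (online resize, run as root)
    resize2fs /dev/sda9
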
Jan 17 12:27:22.926506 bash[1524]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:27:22.927953 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:27:22.948251 systemd[1]: Starting sshkeys.service... Jan 17 12:27:22.961141 sshd_keygen[1487]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:27:22.970227 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 12:27:22.987391 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 12:27:23.042154 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 17 12:27:23.061185 extend-filesystems[1499]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 17 12:27:23.061185 extend-filesystems[1499]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 17 12:27:23.061185 extend-filesystems[1499]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 17 12:27:23.075181 extend-filesystems[1461]: Resized filesystem in /dev/sda9 Jan 17 12:27:23.075181 extend-filesystems[1461]: Found sr0 Jan 17 12:27:23.086732 coreos-metadata[1536]: Jan 17 12:27:23.065 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 17 12:27:23.086732 coreos-metadata[1536]: Jan 17 12:27:23.067 INFO Fetch successful Jan 17 12:27:23.087088 containerd[1485]: time="2025-01-17T12:27:23.063447038Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:27:23.064293 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:27:23.064727 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:27:23.079549 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:27:23.091394 unknown[1536]: wrote ssh authorized keys file for user: core Jan 17 12:27:23.097325 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:27:23.109061 locksmithd[1508]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:27:23.115370 containerd[1485]: time="2025-01-17T12:27:23.115015802Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:27:23.118996 containerd[1485]: time="2025-01-17T12:27:23.117734407Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:27:23.118996 containerd[1485]: time="2025-01-17T12:27:23.117762273Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:27:23.118996 containerd[1485]: time="2025-01-17T12:27:23.117788891Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:27:23.118996 containerd[1485]: time="2025-01-17T12:27:23.117986214Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:27:23.118996 containerd[1485]: time="2025-01-17T12:27:23.118002670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:27:23.118996 containerd[1485]: time="2025-01-17T12:27:23.118065637Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:27:23.118996 containerd[1485]: time="2025-01-17T12:27:23.118077167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:27:23.118996 containerd[1485]: time="2025-01-17T12:27:23.118241908Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:27:23.118996 containerd[1485]: time="2025-01-17T12:27:23.118255906Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:27:23.118996 containerd[1485]: time="2025-01-17T12:27:23.118267017Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:27:23.118996 containerd[1485]: time="2025-01-17T12:27:23.118275121Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:27:23.119208 containerd[1485]: time="2025-01-17T12:27:23.118355892Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:27:23.119208 containerd[1485]: time="2025-01-17T12:27:23.118559111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:27:23.119208 containerd[1485]: time="2025-01-17T12:27:23.118662603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:27:23.119208 containerd[1485]: time="2025-01-17T12:27:23.118674284Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:27:23.119208 containerd[1485]: time="2025-01-17T12:27:23.118758881Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 12:27:23.119208 containerd[1485]: time="2025-01-17T12:27:23.118830082Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:27:23.120296 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:27:23.120496 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:27:23.128079 containerd[1485]: time="2025-01-17T12:27:23.127954798Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:27:23.128079 containerd[1485]: time="2025-01-17T12:27:23.128027627Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:27:23.128275 containerd[1485]: time="2025-01-17T12:27:23.128257782Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:27:23.128463 containerd[1485]: time="2025-01-17T12:27:23.128360666Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:27:23.128463 containerd[1485]: time="2025-01-17T12:27:23.128382137Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:27:23.128606 containerd[1485]: time="2025-01-17T12:27:23.128590073Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:27:23.129069 containerd[1485]: time="2025-01-17T12:27:23.128860045Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:27:23.131212 containerd[1485]: time="2025-01-17T12:27:23.129690883Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:27:23.131212 containerd[1485]: time="2025-01-17T12:27:23.129711825Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:27:23.131212 containerd[1485]: time="2025-01-17T12:27:23.129723775Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:27:23.131212 containerd[1485]: time="2025-01-17T12:27:23.129747484Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:27:23.131212 containerd[1485]: time="2025-01-17T12:27:23.129758955Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:27:23.131212 containerd[1485]: time="2025-01-17T12:27:23.129769446Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:27:23.131212 containerd[1485]: time="2025-01-17T12:27:23.129780557Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:27:23.131212 containerd[1485]: time="2025-01-17T12:27:23.129792457Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:27:23.131212 containerd[1485]: time="2025-01-17T12:27:23.129803637Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:27:23.131212 containerd[1485]: time="2025-01-17T12:27:23.129813489Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:27:23.131212 containerd[1485]: time="2025-01-17T12:27:23.129823031Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:27:23.131212 containerd[1485]: time="2025-01-17T12:27:23.129838998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:27:23.131212 containerd[1485]: time="2025-01-17T12:27:23.129850169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:27:23.131212 containerd[1485]: time="2025-01-17T12:27:23.129864286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:27:23.131497 containerd[1485]: time="2025-01-17T12:27:23.129875966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:27:23.131358 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 17 12:27:23.134468 containerd[1485]: time="2025-01-17T12:27:23.129886358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:27:23.134468 containerd[1485]: time="2025-01-17T12:27:23.133060448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:27:23.134468 containerd[1485]: time="2025-01-17T12:27:23.133075255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:27:23.134468 containerd[1485]: time="2025-01-17T12:27:23.133087845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:27:23.134468 containerd[1485]: time="2025-01-17T12:27:23.133106289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:27:23.134468 containerd[1485]: time="2025-01-17T12:27:23.133120557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:27:23.134468 containerd[1485]: time="2025-01-17T12:27:23.133131218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:27:23.134468 containerd[1485]: time="2025-01-17T12:27:23.133141310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:27:23.134468 containerd[1485]: time="2025-01-17T12:27:23.133153110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:27:23.134468 containerd[1485]: time="2025-01-17T12:27:23.133166339Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:27:23.134468 containerd[1485]: time="2025-01-17T12:27:23.133184834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:27:23.134468 containerd[1485]: time="2025-01-17T12:27:23.133195764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:27:23.134468 containerd[1485]: time="2025-01-17T12:27:23.133206634Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:27:23.134468 containerd[1485]: time="2025-01-17T12:27:23.133266674Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:27:23.136270 containerd[1485]: time="2025-01-17T12:27:23.133287836Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:27:23.136270 containerd[1485]: time="2025-01-17T12:27:23.133297099Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:27:23.136270 containerd[1485]: time="2025-01-17T12:27:23.133308358Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:27:23.136270 containerd[1485]: time="2025-01-17T12:27:23.133316871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:27:23.136270 containerd[1485]: time="2025-01-17T12:27:23.133328272Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:27:23.136270 containerd[1485]: time="2025-01-17T12:27:23.133337684Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:27:23.136270 containerd[1485]: time="2025-01-17T12:27:23.133347576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 12:27:23.136384 containerd[1485]: time="2025-01-17T12:27:23.133586534Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:27:23.136384 containerd[1485]: time="2025-01-17T12:27:23.133659673Z" level=info msg="Connect containerd service" Jan 17 12:27:23.136384 containerd[1485]: time="2025-01-17T12:27:23.133710280Z" level=info msg="using legacy CRI server" Jan 17 12:27:23.136384 containerd[1485]: time="2025-01-17T12:27:23.133719272Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:27:23.136384 containerd[1485]: time="2025-01-17T12:27:23.133849362Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:27:23.137283 containerd[1485]: time="2025-01-17T12:27:23.137261202Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:27:23.137774 update-ssh-keys[1554]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:27:23.140508 containerd[1485]: time="2025-01-17T12:27:23.137639292Z" level=info msg="Start subscribing containerd event" Jan 17 12:27:23.140508 containerd[1485]: time="2025-01-17T12:27:23.137688401Z" level=info msg="Start recovering state" Jan 17 12:27:23.140508 containerd[1485]: time="2025-01-17T12:27:23.137762877Z" level=info msg="Start event monitor" Jan 17 12:27:23.140508 containerd[1485]: time="2025-01-17T12:27:23.137781901Z" level=info msg="Start snapshots syncer" Jan 17 12:27:23.140508 containerd[1485]: time="2025-01-17T12:27:23.137791005Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:27:23.140508 containerd[1485]: time="2025-01-17T12:27:23.137798628Z" level=info msg="Start streaming server" Jan 17 12:27:23.140508 containerd[1485]: time="2025-01-17T12:27:23.138004135Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:27:23.140508 containerd[1485]: time="2025-01-17T12:27:23.138059958Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:27:23.140508 containerd[1485]: time="2025-01-17T12:27:23.138104509Z" level=info msg="containerd successfully booted in 0.078505s" Jan 17 12:27:23.140451 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:27:23.142979 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 12:27:23.147485 systemd[1]: Finished sshkeys.service. Jan 17 12:27:23.165315 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:27:23.173314 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:27:23.183251 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:27:23.185112 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:27:23.433219 tar[1483]: linux-amd64/LICENSE Jan 17 12:27:23.433219 tar[1483]: linux-amd64/README.md Jan 17 12:27:23.446045 systemd-networkd[1391]: eth1: Gained IPv6LL Jan 17 12:27:23.449405 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:27:23.453272 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:27:23.457507 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:27:23.467121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:27:23.471222 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:27:23.498201 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:27:23.958452 systemd-networkd[1391]: eth0: Gained IPv6LL Jan 17 12:27:24.279478 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:27:24.284008 (kubelet)[1586]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:27:24.284612 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:27:24.287719 systemd[1]: Startup finished in 1.279s (kernel) + 6.351s (initrd) + 4.272s (userspace) = 11.903s.
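
[Annotation] The CRI config dump above shows runc registered as Type:io.containerd.runc.v2 with Options:map[SystemdCgroup:true]. Expressed in containerd's own /etc/containerd/config.toml (containerd 1.7 key names), that setting corresponds to:

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
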
Jan 17 12:27:24.886303 kubelet[1586]: E0117 12:27:24.886222 1586 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:27:24.890704 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:27:24.890897 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:27:35.141522 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:27:35.149106 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:27:35.286958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:27:35.292308 (kubelet)[1606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:27:35.341118 kubelet[1606]: E0117 12:27:35.341037 1606 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:27:35.347727 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:27:35.348069 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:27:45.420628 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 12:27:45.427374 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:27:45.564457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:27:45.577190 (kubelet)[1622]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:27:45.617174 kubelet[1622]: E0117 12:27:45.617103 1622 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:27:45.621259 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:27:45.621445 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:27:55.670654 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 12:27:55.676172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:27:55.810960 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
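
[Annotation] The crash loop here (scheduled restarts 1 through 13 in what follows) is kubelet exiting because /var/lib/kubelet/config.yaml does not exist yet; kubeadm normally writes that file during init/join, so the loop is expected on a node that has not been bootstrapped. For orientation, a minimal hand-written KubeletConfiguration has this shape (illustrative sketch only, not the file this node eventually receives):

    # /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd   # matches the SystemdCgroup=true runc option in the CRI config above
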
Jan 17 12:27:55.822204 (kubelet)[1638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:27:55.858853 kubelet[1638]: E0117 12:27:55.858794 1638 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:27:55.862959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:27:55.863144 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:28:05.920728 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 17 12:28:05.926137 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:28:06.055672 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:28:06.059743 (kubelet)[1654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:28:06.096222 kubelet[1654]: E0117 12:28:06.096154 1654 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:28:06.099945 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:28:06.100174 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:28:08.426247 update_engine[1469]: I20250117 12:28:08.426146 1469 update_attempter.cc:509] Updating boot flags... Jan 17 12:28:08.467958 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1672) Jan 17 12:28:08.529736 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1668) Jan 17 12:28:08.572985 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1668) Jan 17 12:28:16.170512 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 17 12:28:16.176112 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:28:16.301968 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:28:16.305943 (kubelet)[1692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:28:16.339198 kubelet[1692]: E0117 12:28:16.339121 1692 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:28:16.342732 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:28:16.342999 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:28:26.420888 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 17 12:28:26.426338 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 17 12:28:26.572394 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:28:26.576594 (kubelet)[1708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:28:26.612622 kubelet[1708]: E0117 12:28:26.612581 1708 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:28:26.616976 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:28:26.617164 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:28:36.670679 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 17 12:28:36.676111 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:28:36.814282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:28:36.818404 (kubelet)[1725]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:28:36.856541 kubelet[1725]: E0117 12:28:36.856474 1725 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:28:36.860597 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:28:36.860792 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:28:46.920585 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 17 12:28:46.926098 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:28:47.074175 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:28:47.078504 (kubelet)[1741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:28:47.114399 kubelet[1741]: E0117 12:28:47.114338 1741 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:28:47.118738 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:28:47.118946 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:28:57.170691 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 17 12:28:57.177120 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:28:57.310494 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 12:28:57.314505 (kubelet)[1758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:28:57.348318 kubelet[1758]: E0117 12:28:57.348278 1758 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:28:57.352295 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:28:57.352472 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:29:07.420663 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 17 12:29:07.426449 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:29:07.562621 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:29:07.573224 (kubelet)[1774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:29:07.611597 kubelet[1774]: E0117 12:29:07.611556 1774 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:29:07.615417 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:29:07.615600 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:29:17.671019 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 17 12:29:17.684654 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:29:17.836436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:29:17.849204 (kubelet)[1790]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:29:17.882772 kubelet[1790]: E0117 12:29:17.882699 1790 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:29:17.886992 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:29:17.887178 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:29:22.426834 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:29:22.433339 systemd[1]: Started sshd@0-138.199.154.203:22-139.178.89.65:43608.service - OpenSSH per-connection server daemon (139.178.89.65:43608). Jan 17 12:29:23.412116 sshd[1799]: Accepted publickey for core from 139.178.89.65 port 43608 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc Jan 17 12:29:23.414362 sshd[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:29:23.423266 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:29:23.435175 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Jan 17 12:29:23.437237 systemd-logind[1467]: New session 1 of user core. Jan 17 12:29:23.448299 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:29:23.454299 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:29:23.469764 (systemd)[1803]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:29:23.564929 systemd[1803]: Queued start job for default target default.target. Jan 17 12:29:23.576077 systemd[1803]: Created slice app.slice - User Application Slice. Jan 17 12:29:23.576103 systemd[1803]: Reached target paths.target - Paths. Jan 17 12:29:23.576115 systemd[1803]: Reached target timers.target - Timers. Jan 17 12:29:23.577486 systemd[1803]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:29:23.591165 systemd[1803]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:29:23.591301 systemd[1803]: Reached target sockets.target - Sockets. Jan 17 12:29:23.591319 systemd[1803]: Reached target basic.target - Basic System. Jan 17 12:29:23.591365 systemd[1803]: Reached target default.target - Main User Target. Jan 17 12:29:23.591402 systemd[1803]: Startup finished in 114ms. Jan 17 12:29:23.591566 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:29:23.601045 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:29:24.289175 systemd[1]: Started sshd@1-138.199.154.203:22-139.178.89.65:43618.service - OpenSSH per-connection server daemon (139.178.89.65:43618). Jan 17 12:29:25.251890 sshd[1814]: Accepted publickey for core from 139.178.89.65 port 43618 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc Jan 17 12:29:25.254664 sshd[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:29:25.262145 systemd-logind[1467]: New session 2 of user core. Jan 17 12:29:25.269228 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:29:25.928498 sshd[1814]: pam_unix(sshd:session): session closed for user core Jan 17 12:29:25.933376 systemd[1]: sshd@1-138.199.154.203:22-139.178.89.65:43618.service: Deactivated successfully. Jan 17 12:29:25.937470 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:29:25.940028 systemd-logind[1467]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:29:25.942378 systemd-logind[1467]: Removed session 2. Jan 17 12:29:26.108434 systemd[1]: Started sshd@2-138.199.154.203:22-139.178.89.65:43634.service - OpenSSH per-connection server daemon (139.178.89.65:43634). Jan 17 12:29:27.086436 sshd[1821]: Accepted publickey for core from 139.178.89.65 port 43634 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc Jan 17 12:29:27.088599 sshd[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:29:27.093397 systemd-logind[1467]: New session 3 of user core. Jan 17 12:29:27.103092 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:29:27.765265 sshd[1821]: pam_unix(sshd:session): session closed for user core Jan 17 12:29:27.770115 systemd[1]: sshd@2-138.199.154.203:22-139.178.89.65:43634.service: Deactivated successfully. Jan 17 12:29:27.772812 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:29:27.773646 systemd-logind[1467]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:29:27.775272 systemd-logind[1467]: Removed session 3. 
Jan 17 12:29:27.920654 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 17 12:29:27.927430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:29:27.938090 systemd[1]: Started sshd@3-138.199.154.203:22-139.178.89.65:43642.service - OpenSSH per-connection server daemon (139.178.89.65:43642). Jan 17 12:29:28.064085 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:29:28.065603 (kubelet)[1838]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:29:28.097973 kubelet[1838]: E0117 12:29:28.097903 1838 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:29:28.101818 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:29:28.102018 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:29:28.926463 sshd[1831]: Accepted publickey for core from 139.178.89.65 port 43642 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc Jan 17 12:29:28.928732 sshd[1831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:29:28.934024 systemd-logind[1467]: New session 4 of user core. Jan 17 12:29:28.945095 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:29:29.605618 sshd[1831]: pam_unix(sshd:session): session closed for user core Jan 17 12:29:29.608525 systemd[1]: sshd@3-138.199.154.203:22-139.178.89.65:43642.service: Deactivated successfully. Jan 17 12:29:29.610958 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:29:29.612471 systemd-logind[1467]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:29:29.613645 systemd-logind[1467]: Removed session 4. Jan 17 12:29:29.776339 systemd[1]: Started sshd@4-138.199.154.203:22-139.178.89.65:43644.service - OpenSSH per-connection server daemon (139.178.89.65:43644). Jan 17 12:29:30.765390 sshd[1851]: Accepted publickey for core from 139.178.89.65 port 43644 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc Jan 17 12:29:30.767160 sshd[1851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:29:30.772222 systemd-logind[1467]: New session 5 of user core. Jan 17 12:29:30.781070 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:29:31.303043 sudo[1854]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:29:31.303442 sudo[1854]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:29:31.317117 sudo[1854]: pam_unix(sudo:session): session closed for user root Jan 17 12:29:31.478610 sshd[1851]: pam_unix(sshd:session): session closed for user core Jan 17 12:29:31.482979 systemd-logind[1467]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:29:31.483846 systemd[1]: sshd@4-138.199.154.203:22-139.178.89.65:43644.service: Deactivated successfully. Jan 17 12:29:31.486143 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:29:31.487083 systemd-logind[1467]: Removed session 5. 
Jan 17 12:29:31.650172 systemd[1]: Started sshd@5-138.199.154.203:22-139.178.89.65:51762.service - OpenSSH per-connection server daemon (139.178.89.65:51762).
Jan 17 12:29:32.622822 sshd[1859]: Accepted publickey for core from 139.178.89.65 port 51762 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc
Jan 17 12:29:32.624700 sshd[1859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:29:32.629835 systemd-logind[1467]: New session 6 of user core.
Jan 17 12:29:32.640102 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 17 12:29:33.144980 sudo[1863]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 17 12:29:33.145480 sudo[1863]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 12:29:33.150537 sudo[1863]: pam_unix(sudo:session): session closed for user root
Jan 17 12:29:33.158026 sudo[1862]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 17 12:29:33.158402 sudo[1862]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 12:29:33.172149 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 17 12:29:33.176445 auditctl[1866]: No rules
Jan 17 12:29:33.176894 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 17 12:29:33.177203 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 17 12:29:33.183226 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 12:29:33.214183 augenrules[1884]: No rules
Jan 17 12:29:33.215137 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 12:29:33.216575 sudo[1862]: pam_unix(sudo:session): session closed for user root
Jan 17 12:29:33.375431 sshd[1859]: pam_unix(sshd:session): session closed for user core
Jan 17 12:29:33.379771 systemd[1]: sshd@5-138.199.154.203:22-139.178.89.65:51762.service: Deactivated successfully.
Jan 17 12:29:33.382033 systemd[1]: session-6.scope: Deactivated successfully.
Jan 17 12:29:33.382827 systemd-logind[1467]: Session 6 logged out. Waiting for processes to exit.
Jan 17 12:29:33.384239 systemd-logind[1467]: Removed session 6.
Jan 17 12:29:33.545228 systemd[1]: Started sshd@6-138.199.154.203:22-139.178.89.65:51768.service - OpenSSH per-connection server daemon (139.178.89.65:51768).
Jan 17 12:29:34.516074 sshd[1892]: Accepted publickey for core from 139.178.89.65 port 51768 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc
Jan 17 12:29:34.517862 sshd[1892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:29:34.523010 systemd-logind[1467]: New session 7 of user core.
Jan 17 12:29:34.533114 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 17 12:29:35.034433 sudo[1895]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 17 12:29:35.034811 sudo[1895]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 12:29:35.301133 systemd[1]: Starting docker.service - Docker Application Container Engine...
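The audit-rules restart above is a stop/start pair: on stop, auditctl flushes the loaded rule set (hence "No rules"), and on start augenrules rebuilds it from /etc/audit/rules.d — which, after the two files removed via sudo, is itself empty. A sketch of the equivalent manual sequence:

    sudo auditctl -D        # delete all loaded rules (the auditctl "No rules" entry)
    sudo augenrules --load  # merge /etc/audit/rules.d/*.rules and load the result
    sudo auditctl -l        # list what the kernel now enforces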
Jan 17 12:29:35.303380 (dockerd)[1911]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 17 12:29:35.542490 dockerd[1911]: time="2025-01-17T12:29:35.542423938Z" level=info msg="Starting up"
Jan 17 12:29:35.624232 systemd[1]: var-lib-docker-metacopy\x2dcheck300521644-merged.mount: Deactivated successfully.
Jan 17 12:29:35.645420 dockerd[1911]: time="2025-01-17T12:29:35.645153751Z" level=info msg="Loading containers: start."
Jan 17 12:29:35.744946 kernel: Initializing XFRM netlink socket
Jan 17 12:29:35.827468 systemd-networkd[1391]: docker0: Link UP
Jan 17 12:29:35.842352 dockerd[1911]: time="2025-01-17T12:29:35.842315721Z" level=info msg="Loading containers: done."
Jan 17 12:29:35.854938 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck538582735-merged.mount: Deactivated successfully.
Jan 17 12:29:35.858539 dockerd[1911]: time="2025-01-17T12:29:35.858490663Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 17 12:29:35.858666 dockerd[1911]: time="2025-01-17T12:29:35.858599195Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 17 12:29:35.858742 dockerd[1911]: time="2025-01-17T12:29:35.858715763Z" level=info msg="Daemon has completed initialization"
Jan 17 12:29:35.891713 dockerd[1911]: time="2025-01-17T12:29:35.890411283Z" level=info msg="API listen on /run/docker.sock"
Jan 17 12:29:35.890528 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 17 12:29:37.042373 containerd[1485]: time="2025-01-17T12:29:37.042329837Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\""
Jan 17 12:29:37.625729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount408618360.mount: Deactivated successfully.
Jan 17 12:29:38.170653 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Jan 17 12:29:38.180453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:29:38.319112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:29:38.324481 (kubelet)[2116]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 12:29:38.370675 kubelet[2116]: E0117 12:29:38.370257 2116 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 12:29:38.374194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 12:29:38.374378 systemd[1]: kubelet.service: Failed with result 'exit-code'.
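The (dockerd) warning at the top of this span — DOCKER_CGROUPS, DOCKER_OPTS, and friends referenced but unset — means the unit file expands environment variables that nothing defines; the daemon simply sees empty strings. If those variables are actually wanted, a systemd drop-in is the usual mechanism; a sketch (the option value is a placeholder):

    sudo mkdir -p /etc/systemd/system/docker.service.d
    sudo tee /etc/systemd/system/docker.service.d/10-env.conf <<'EOF' >/dev/null
    [Service]
    Environment="DOCKER_OPTS=--log-level=warn"
    EOF
    sudo systemctl daemon-reload
    sudo systemctl restart docker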
Jan 17 12:29:38.741078 containerd[1485]: time="2025-01-17T12:29:38.741022766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:29:38.742016 containerd[1485]: time="2025-01-17T12:29:38.741945680Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677104"
Jan 17 12:29:38.742723 containerd[1485]: time="2025-01-17T12:29:38.742663231Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:29:38.745071 containerd[1485]: time="2025-01-17T12:29:38.745033791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:29:38.746393 containerd[1485]: time="2025-01-17T12:29:38.746150788Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 1.703781558s"
Jan 17 12:29:38.746393 containerd[1485]: time="2025-01-17T12:29:38.746191164Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\""
Jan 17 12:29:38.766158 containerd[1485]: time="2025-01-17T12:29:38.766131818Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\""
Jan 17 12:29:40.235575 containerd[1485]: time="2025-01-17T12:29:40.235513050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:29:40.236796 containerd[1485]: time="2025-01-17T12:29:40.236746466Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605765"
Jan 17 12:29:40.238088 containerd[1485]: time="2025-01-17T12:29:40.237945967Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:29:40.241570 containerd[1485]: time="2025-01-17T12:29:40.241479632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:29:40.242496 containerd[1485]: time="2025-01-17T12:29:40.242258957Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.476002306s"
Jan 17 12:29:40.242496 containerd[1485]: time="2025-01-17T12:29:40.242292981Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\""
Jan 17 12:29:40.266884 containerd[1485]: time="2025-01-17T12:29:40.266841873Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\""
Jan 17 12:29:41.339590 containerd[1485]: time="2025-01-17T12:29:41.339538423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:29:41.340534 containerd[1485]: time="2025-01-17T12:29:41.340484031Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783084"
Jan 17 12:29:41.341430 containerd[1485]: time="2025-01-17T12:29:41.341386930Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:29:41.343862 containerd[1485]: time="2025-01-17T12:29:41.343813224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:29:41.344997 containerd[1485]: time="2025-01-17T12:29:41.344855182Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.077976451s"
Jan 17 12:29:41.344997 containerd[1485]: time="2025-01-17T12:29:41.344884196Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\""
Jan 17 12:29:41.367576 containerd[1485]: time="2025-01-17T12:29:41.367462555Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\""
Jan 17 12:29:42.342988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1638568489.mount: Deactivated successfully.
Jan 17 12:29:42.665301 containerd[1485]: time="2025-01-17T12:29:42.665160226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:29:42.666424 containerd[1485]: time="2025-01-17T12:29:42.666371870Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058363"
Jan 17 12:29:42.667295 containerd[1485]: time="2025-01-17T12:29:42.667242488Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:29:42.668955 containerd[1485]: time="2025-01-17T12:29:42.668902190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:29:42.669438 containerd[1485]: time="2025-01-17T12:29:42.669393218Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.301595705s"
Jan 17 12:29:42.669478 containerd[1485]: time="2025-01-17T12:29:42.669438833Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\""
Jan 17 12:29:42.691783 containerd[1485]: time="2025-01-17T12:29:42.691748753Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 17 12:29:43.255885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3653916161.mount: Deactivated successfully.
Jan 17 12:29:43.885928 containerd[1485]: time="2025-01-17T12:29:43.885857119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:29:43.886939 containerd[1485]: time="2025-01-17T12:29:43.886885642Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185841"
Jan 17 12:29:43.887851 containerd[1485]: time="2025-01-17T12:29:43.887812694Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:29:43.890365 containerd[1485]: time="2025-01-17T12:29:43.890319691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:29:43.891276 containerd[1485]: time="2025-01-17T12:29:43.891170250Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.199387574s"
Jan 17 12:29:43.891276 containerd[1485]: time="2025-01-17T12:29:43.891196590Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 17 12:29:43.912300 containerd[1485]: time="2025-01-17T12:29:43.912229415Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 17 12:29:44.405139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3432334648.mount: Deactivated successfully.
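The PullImage/ImageCreate/"Pulled image" triplets through this stretch are containerd fetching the v1.30.9 control-plane images plus coredns. The same pulls can be reproduced, or pre-warmed before the kubelet needs them, through the CRI socket with crictl; a sketch (the socket path is an assumption, it is not shown in the log):

    export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
    sudo -E crictl pull registry.k8s.io/kube-apiserver:v1.30.9
    sudo -E crictl images   # image IDs match the "with image id" entries above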
Jan 17 12:29:44.411458 containerd[1485]: time="2025-01-17T12:29:44.411398909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:29:44.412332 containerd[1485]: time="2025-01-17T12:29:44.412282061Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322310"
Jan 17 12:29:44.413199 containerd[1485]: time="2025-01-17T12:29:44.413152146Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:29:44.415377 containerd[1485]: time="2025-01-17T12:29:44.415332302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:29:44.416520 containerd[1485]: time="2025-01-17T12:29:44.416338903Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 503.925884ms"
Jan 17 12:29:44.416520 containerd[1485]: time="2025-01-17T12:29:44.416384679Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jan 17 12:29:44.442298 containerd[1485]: time="2025-01-17T12:29:44.441531377Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jan 17 12:29:45.027156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2911576126.mount: Deactivated successfully.
Jan 17 12:29:48.209178 containerd[1485]: time="2025-01-17T12:29:48.209093725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:29:48.210346 containerd[1485]: time="2025-01-17T12:29:48.210309488Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238651"
Jan 17 12:29:48.211484 containerd[1485]: time="2025-01-17T12:29:48.211432748Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:29:48.213848 containerd[1485]: time="2025-01-17T12:29:48.213826905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:29:48.215106 containerd[1485]: time="2025-01-17T12:29:48.214940525Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.77338189s"
Jan 17 12:29:48.215106 containerd[1485]: time="2025-01-17T12:29:48.214971835Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Jan 17 12:29:48.420459 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14.
Jan 17 12:29:48.426369 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:29:48.581140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:29:48.589212 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 12:29:48.637311 kubelet[2281]: E0117 12:29:48.637255 2281 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 12:29:48.641607 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 12:29:48.641848 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 12:29:50.765948 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:29:50.774124 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:29:50.795747 systemd[1]: Reloading requested from client PID 2346 ('systemctl') (unit session-7.scope)...
Jan 17 12:29:50.795865 systemd[1]: Reloading...
Jan 17 12:29:50.927943 zram_generator::config[2389]: No configuration found.
Jan 17 12:29:51.024805 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:29:51.093306 systemd[1]: Reloading finished in 296 ms.
Jan 17 12:29:51.143776 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 17 12:29:51.143887 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 17 12:29:51.144260 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:29:51.150191 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:29:51.274440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:29:51.280570 (kubelet)[2440]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 17 12:29:51.324719 kubelet[2440]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 12:29:51.324719 kubelet[2440]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 17 12:29:51.324719 kubelet[2440]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
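The three deprecation warnings at this kubelet start all point the same way: --container-runtime-endpoint and --volume-plugin-dir belong in the config file, while the pod-infra (pause) image is now taken from the container runtime's own configuration rather than a kubelet flag. As a sketch, the config-file equivalents would be KubeletConfiguration fields like these — the volume plugin path is the one named in the Flexvolume entry below, the runtime endpoint is an assumption:

    # Hypothetical config-file equivalents of the deprecated flags:
    cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml >/dev/null
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF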
Jan 17 12:29:51.325056 kubelet[2440]: I0117 12:29:51.324762 2440 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 17 12:29:51.629564 kubelet[2440]: I0117 12:29:51.629441 2440 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 17 12:29:51.629564 kubelet[2440]: I0117 12:29:51.629471 2440 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 12:29:51.629721 kubelet[2440]: I0117 12:29:51.629684 2440 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 17 12:29:51.649282 kubelet[2440]: I0117 12:29:51.649133 2440 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 12:29:51.651880 kubelet[2440]: E0117 12:29:51.651784 2440 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://138.199.154.203:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 138.199.154.203:6443: connect: connection refused
Jan 17 12:29:51.663349 kubelet[2440]: I0117 12:29:51.663311 2440 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 17 12:29:51.664586 kubelet[2440]: I0117 12:29:51.664543 2440 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 17 12:29:51.665865 kubelet[2440]: I0117 12:29:51.664572 2440 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-0-e492bbae02","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 17 12:29:51.666329 kubelet[2440]: I0117 12:29:51.666290 2440 topology_manager.go:138] "Creating topology manager with none policy"
Jan 17 12:29:51.666329 kubelet[2440]: I0117 12:29:51.666310 2440 container_manager_linux.go:301] "Creating device plugin manager"
Jan 17 12:29:51.666447 kubelet[2440]: I0117 12:29:51.666426 2440 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 12:29:51.668546 kubelet[2440]: I0117 12:29:51.668423 2440 kubelet.go:400] "Attempting to sync node with API server"
Jan 17 12:29:51.668546 kubelet[2440]: I0117 12:29:51.668445 2440 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 17 12:29:51.668546 kubelet[2440]: I0117 12:29:51.668467 2440 kubelet.go:312] "Adding apiserver pod source"
Jan 17 12:29:51.670291 kubelet[2440]: I0117 12:29:51.670063 2440 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 17 12:29:51.671492 kubelet[2440]: W0117 12:29:51.670659 2440 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.154.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-0-e492bbae02&limit=500&resourceVersion=0": dial tcp 138.199.154.203:6443: connect: connection refused
Jan 17 12:29:51.671492 kubelet[2440]: E0117 12:29:51.670717 2440 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://138.199.154.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-0-e492bbae02&limit=500&resourceVersion=0": dial tcp 138.199.154.203:6443: connect: connection refused
Jan 17 12:29:51.673741 kubelet[2440]: W0117 12:29:51.673370 2440 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.199.154.203:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.154.203:6443: connect: connection refused
Jan 17 12:29:51.673741 kubelet[2440]: E0117 12:29:51.673429 2440 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://138.199.154.203:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.154.203:6443: connect: connection refused
Jan 17 12:29:51.675431 kubelet[2440]: I0117 12:29:51.675410 2440 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 17 12:29:51.677346 kubelet[2440]: I0117 12:29:51.677327 2440 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 17 12:29:51.677485 kubelet[2440]: W0117 12:29:51.677469 2440 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
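Every reflector and certificate-manager error in this phase is one symptom: nothing is listening on 138.199.154.203:6443 yet, because the API server this kubelet is about to launch as a static pod has not started — the normal control-plane bootstrap chicken-and-egg. Two quick node-local checks, as a sketch:

    ss -tlnp | grep 6443                          # is anything listening on the apiserver port?
    curl -k https://138.199.154.203:6443/healthz  # endpoint taken from the log; -k skips cert verification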
Jan 17 12:29:51.678257 kubelet[2440]: I0117 12:29:51.678218 2440 server.go:1264] "Started kubelet"
Jan 17 12:29:51.681844 kubelet[2440]: I0117 12:29:51.681389 2440 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 17 12:29:51.686264 kubelet[2440]: I0117 12:29:51.686198 2440 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 17 12:29:51.686704 kubelet[2440]: I0117 12:29:51.686607 2440 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 17 12:29:51.687144 kubelet[2440]: I0117 12:29:51.687105 2440 server.go:455] "Adding debug handlers to kubelet server"
Jan 17 12:29:51.688081 kubelet[2440]: E0117 12:29:51.687894 2440 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://138.199.154.203:6443/api/v1/namespaces/default/events\": dial tcp 138.199.154.203:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-0-e492bbae02.181b7ab3464638e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-0-e492bbae02,UID:ci-4081-3-0-0-e492bbae02,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-0-e492bbae02,},FirstTimestamp:2025-01-17 12:29:51.678200039 +0000 UTC m=+0.394103455,LastTimestamp:2025-01-17 12:29:51.678200039 +0000 UTC m=+0.394103455,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-0-e492bbae02,}"
Jan 17 12:29:51.689608 kubelet[2440]: I0117 12:29:51.689568 2440 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 17 12:29:51.694413 kubelet[2440]: I0117 12:29:51.693612 2440 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 17 12:29:51.698167 kubelet[2440]: E0117 12:29:51.698050 2440 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://138.199.154.203:6443/api/v1/namespaces/default/events\": dial tcp 138.199.154.203:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-0-e492bbae02.181b7ab3464638e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-0-e492bbae02,UID:ci-4081-3-0-0-e492bbae02,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-0-e492bbae02,},FirstTimestamp:2025-01-17 12:29:51.678200039 +0000 UTC m=+0.394103455,LastTimestamp:2025-01-17 12:29:51.678200039 +0000 UTC m=+0.394103455,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-0-e492bbae02,}"
Jan 17 12:29:51.698246 kubelet[2440]: I0117 12:29:51.698210 2440 factory.go:221] Registration of the systemd container factory successfully
Jan 17 12:29:51.698341 kubelet[2440]: I0117 12:29:51.698308 2440 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 17 12:29:51.699853 kubelet[2440]: E0117 12:29:51.698734 2440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.154.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-0-e492bbae02?timeout=10s\": dial tcp 138.199.154.203:6443: connect: connection refused" interval="200ms"
Jan 17 12:29:51.699943 kubelet[2440]: I0117 12:29:51.699893 2440 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 17 12:29:51.700994 kubelet[2440]: W0117 12:29:51.700900 2440 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.199.154.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.154.203:6443: connect: connection refused
Jan 17 12:29:51.700994 kubelet[2440]: E0117 12:29:51.700988 2440 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://138.199.154.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.154.203:6443: connect: connection refused
Jan 17 12:29:51.701268 kubelet[2440]: I0117 12:29:51.701250 2440 reconciler.go:26] "Reconciler: start to sync state"
Jan 17 12:29:51.702982 kubelet[2440]: I0117 12:29:51.702955 2440 factory.go:221] Registration of the containerd container factory successfully
Jan 17 12:29:51.715013 kubelet[2440]: I0117 12:29:51.714964 2440 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 17 12:29:51.716426 kubelet[2440]: I0117 12:29:51.716393 2440 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 17 12:29:51.716478 kubelet[2440]: I0117 12:29:51.716431 2440 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 17 12:29:51.716478 kubelet[2440]: I0117 12:29:51.716455 2440 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 17 12:29:51.716523 kubelet[2440]: E0117 12:29:51.716501 2440 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 17 12:29:51.724951 kubelet[2440]: W0117 12:29:51.724864 2440 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.154.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.154.203:6443: connect: connection refused
Jan 17 12:29:51.724951 kubelet[2440]: E0117 12:29:51.724941 2440 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://138.199.154.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.154.203:6443: connect: connection refused
Jan 17 12:29:51.729649 kubelet[2440]: E0117 12:29:51.729531 2440 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 17 12:29:51.734842 kubelet[2440]: I0117 12:29:51.734794 2440 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 17 12:29:51.734842 kubelet[2440]: I0117 12:29:51.734808 2440 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 17 12:29:51.735042 kubelet[2440]: I0117 12:29:51.734875 2440 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 12:29:51.736339 kubelet[2440]: I0117 12:29:51.736310 2440 policy_none.go:49] "None policy: Start"
Jan 17 12:29:51.736923 kubelet[2440]: I0117 12:29:51.736890 2440 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 17 12:29:51.736971 kubelet[2440]: I0117 12:29:51.736956 2440 state_mem.go:35] "Initializing new in-memory state store"
Jan 17 12:29:51.742566 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 17 12:29:51.752132 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 17 12:29:51.765532 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 17 12:29:51.767190 kubelet[2440]: I0117 12:29:51.767162 2440 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 17 12:29:51.767367 kubelet[2440]: I0117 12:29:51.767319 2440 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 17 12:29:51.769838 kubelet[2440]: E0117 12:29:51.769739 2440 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-0-e492bbae02\" not found"
Jan 17 12:29:51.770840 kubelet[2440]: I0117 12:29:51.770814 2440 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 17 12:29:51.795285 kubelet[2440]: I0117 12:29:51.795247 2440 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-0-e492bbae02"
Jan 17 12:29:51.795553 kubelet[2440]: E0117 12:29:51.795514 2440 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.154.203:6443/api/v1/nodes\": dial tcp 138.199.154.203:6443: connect: connection refused" node="ci-4081-3-0-0-e492bbae02"
Jan 17 12:29:51.817248 kubelet[2440]: I0117 12:29:51.817185 2440 topology_manager.go:215] "Topology Admit Handler" podUID="14b19a8e952fc29107e822864b425fe8" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-0-e492bbae02"
Jan 17 12:29:51.819260 kubelet[2440]: I0117 12:29:51.819076 2440 topology_manager.go:215] "Topology Admit Handler" podUID="d34720d7a968bc1e7f41b8ea1ec9c748" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-0-e492bbae02"
Jan 17 12:29:51.821445 kubelet[2440]: I0117 12:29:51.821398 2440 topology_manager.go:215] "Topology Admit Handler" podUID="7ab56a5aa14a2e11c4012cb2164c84d3" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-0-e492bbae02"
Jan 17 12:29:51.832490 systemd[1]: Created slice kubepods-burstable-pod14b19a8e952fc29107e822864b425fe8.slice - libcontainer container kubepods-burstable-pod14b19a8e952fc29107e822864b425fe8.slice.
Jan 17 12:29:51.858449 systemd[1]: Created slice kubepods-burstable-podd34720d7a968bc1e7f41b8ea1ec9c748.slice - libcontainer container kubepods-burstable-podd34720d7a968bc1e7f41b8ea1ec9c748.slice.
Jan 17 12:29:51.877638 systemd[1]: Created slice kubepods-burstable-pod7ab56a5aa14a2e11c4012cb2164c84d3.slice - libcontainer container kubepods-burstable-pod7ab56a5aa14a2e11c4012cb2164c84d3.slice.
Jan 17 12:29:51.899729 kubelet[2440]: E0117 12:29:51.899579 2440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.154.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-0-e492bbae02?timeout=10s\": dial tcp 138.199.154.203:6443: connect: connection refused" interval="400ms"
Jan 17 12:29:51.902282 kubelet[2440]: I0117 12:29:51.902212 2440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14b19a8e952fc29107e822864b425fe8-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-0-e492bbae02\" (UID: \"14b19a8e952fc29107e822864b425fe8\") " pod="kube-system/kube-apiserver-ci-4081-3-0-0-e492bbae02"
Jan 17 12:29:51.902282 kubelet[2440]: I0117 12:29:51.902249 2440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14b19a8e952fc29107e822864b425fe8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-0-e492bbae02\" (UID: \"14b19a8e952fc29107e822864b425fe8\") " pod="kube-system/kube-apiserver-ci-4081-3-0-0-e492bbae02"
Jan 17 12:29:51.902282 kubelet[2440]: I0117 12:29:51.902273 2440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d34720d7a968bc1e7f41b8ea1ec9c748-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-0-e492bbae02\" (UID: \"d34720d7a968bc1e7f41b8ea1ec9c748\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-e492bbae02"
Jan 17 12:29:51.902546 kubelet[2440]: I0117 12:29:51.902316 2440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d34720d7a968bc1e7f41b8ea1ec9c748-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-0-e492bbae02\" (UID: \"d34720d7a968bc1e7f41b8ea1ec9c748\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-e492bbae02"
Jan 17 12:29:51.902546 kubelet[2440]: I0117 12:29:51.902343 2440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14b19a8e952fc29107e822864b425fe8-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-0-e492bbae02\" (UID: \"14b19a8e952fc29107e822864b425fe8\") " pod="kube-system/kube-apiserver-ci-4081-3-0-0-e492bbae02"
Jan 17 12:29:51.902546 kubelet[2440]: I0117 12:29:51.902373 2440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d34720d7a968bc1e7f41b8ea1ec9c748-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-0-e492bbae02\" (UID: \"d34720d7a968bc1e7f41b8ea1ec9c748\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-e492bbae02"
Jan 17 12:29:51.902644 kubelet[2440]: I0117 12:29:51.902598 2440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d34720d7a968bc1e7f41b8ea1ec9c748-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-0-e492bbae02\" (UID: \"d34720d7a968bc1e7f41b8ea1ec9c748\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-e492bbae02"
Jan 17 12:29:51.902644 kubelet[2440]: I0117 12:29:51.902633 2440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d34720d7a968bc1e7f41b8ea1ec9c748-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-0-e492bbae02\" (UID: \"d34720d7a968bc1e7f41b8ea1ec9c748\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-e492bbae02"
Jan 17 12:29:51.902698 kubelet[2440]: I0117 12:29:51.902651 2440 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7ab56a5aa14a2e11c4012cb2164c84d3-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-0-e492bbae02\" (UID: \"7ab56a5aa14a2e11c4012cb2164c84d3\") " pod="kube-system/kube-scheduler-ci-4081-3-0-0-e492bbae02"
Jan 17 12:29:51.998630 kubelet[2440]: I0117 12:29:51.998559 2440 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-0-e492bbae02"
Jan 17 12:29:51.999090 kubelet[2440]: E0117 12:29:51.999047 2440 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.154.203:6443/api/v1/nodes\": dial tcp 138.199.154.203:6443: connect: connection refused" node="ci-4081-3-0-0-e492bbae02"
Jan 17 12:29:52.156120 containerd[1485]: time="2025-01-17T12:29:52.155990516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-0-e492bbae02,Uid:14b19a8e952fc29107e822864b425fe8,Namespace:kube-system,Attempt:0,}"
Jan 17 12:29:52.173839 containerd[1485]: time="2025-01-17T12:29:52.173768650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-0-e492bbae02,Uid:d34720d7a968bc1e7f41b8ea1ec9c748,Namespace:kube-system,Attempt:0,}"
Jan 17 12:29:52.182213 containerd[1485]: time="2025-01-17T12:29:52.181763029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-0-e492bbae02,Uid:7ab56a5aa14a2e11c4012cb2164c84d3,Namespace:kube-system,Attempt:0,}"
Jan 17 12:29:52.300401 kubelet[2440]: E0117 12:29:52.300308 2440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.154.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-0-e492bbae02?timeout=10s\": dial tcp 138.199.154.203:6443: connect: connection refused" interval="800ms"
Jan 17 12:29:52.402874 kubelet[2440]: I0117 12:29:52.402801 2440 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-0-e492bbae02"
Jan 17 12:29:52.403550 kubelet[2440]: E0117 12:29:52.403397 2440 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.154.203:6443/api/v1/nodes\": dial tcp 138.199.154.203:6443: connect: connection refused" node="ci-4081-3-0-0-e492bbae02"
Jan 17 12:29:52.681066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2631445549.mount: Deactivated successfully.
Jan 17 12:29:52.686866 containerd[1485]: time="2025-01-17T12:29:52.686799124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:29:52.688452 containerd[1485]: time="2025-01-17T12:29:52.688304018Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 17 12:29:52.688452 containerd[1485]: time="2025-01-17T12:29:52.688397173Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:29:52.689472 containerd[1485]: time="2025-01-17T12:29:52.689407812Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:29:52.691083 containerd[1485]: time="2025-01-17T12:29:52.690792712Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:29:52.691083 containerd[1485]: time="2025-01-17T12:29:52.690968400Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 17 12:29:52.691956 containerd[1485]: time="2025-01-17T12:29:52.691906284Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076"
Jan 17 12:29:52.695412 containerd[1485]: time="2025-01-17T12:29:52.695387374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:29:52.698902 containerd[1485]: time="2025-01-17T12:29:52.698871108Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 517.014935ms"
Jan 17 12:29:52.703771 containerd[1485]: time="2025-01-17T12:29:52.703438378Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 529.572377ms"
Jan 17 12:29:52.714794 containerd[1485]: time="2025-01-17T12:29:52.714535048Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 558.461096ms"
Jan 17 12:29:52.742794 kubelet[2440]: W0117 12:29:52.742602 2440 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.199.154.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.154.203:6443: connect: connection refused
Jan 17 12:29:52.742794 kubelet[2440]: E0117 12:29:52.742665 2440 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://138.199.154.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.154.203:6443: connect: connection refused
Jan 17 12:29:52.755327 kubelet[2440]: W0117 12:29:52.755299 2440 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.154.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.154.203:6443: connect: connection refused
Jan 17 12:29:52.755499 kubelet[2440]: E0117 12:29:52.755486 2440 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://138.199.154.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.154.203:6443: connect: connection refused
Jan 17 12:29:52.841606 containerd[1485]: time="2025-01-17T12:29:52.841309145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:29:52.841606 containerd[1485]: time="2025-01-17T12:29:52.841350623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:29:52.841606 containerd[1485]: time="2025-01-17T12:29:52.841371482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:29:52.841606 containerd[1485]: time="2025-01-17T12:29:52.841446733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:29:52.848136 containerd[1485]: time="2025-01-17T12:29:52.847898948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:29:52.848136 containerd[1485]: time="2025-01-17T12:29:52.847955513Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:29:52.848136 containerd[1485]: time="2025-01-17T12:29:52.847973396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:29:52.848136 containerd[1485]: time="2025-01-17T12:29:52.848042396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:29:52.858574 containerd[1485]: time="2025-01-17T12:29:52.858386779Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:29:52.858574 containerd[1485]: time="2025-01-17T12:29:52.858427535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:29:52.858574 containerd[1485]: time="2025-01-17T12:29:52.858437804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:29:52.858574 containerd[1485]: time="2025-01-17T12:29:52.858497246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:29:52.870059 systemd[1]: Started cri-containerd-a8e8fe258d70582366df9716d281ac261240afbe688a3dea56d5a73d05d7d196.scope - libcontainer container a8e8fe258d70582366df9716d281ac261240afbe688a3dea56d5a73d05d7d196.
Jan 17 12:29:52.887270 systemd[1]: Started cri-containerd-6ad757793bd1c88b335235daf40a68f7e826645cc388fe1588b231f2b5f95b3a.scope - libcontainer container 6ad757793bd1c88b335235daf40a68f7e826645cc388fe1588b231f2b5f95b3a.
Jan 17 12:29:52.895126 systemd[1]: Started cri-containerd-a6a84b650c6334ce0dae4fe3d62760e9696083062f2325f61850615caeb8a767.scope - libcontainer container a6a84b650c6334ce0dae4fe3d62760e9696083062f2325f61850615caeb8a767.
Jan 17 12:29:52.905708 kubelet[2440]: W0117 12:29:52.905633 2440 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.199.154.203:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.154.203:6443: connect: connection refused
Jan 17 12:29:52.905708 kubelet[2440]: E0117 12:29:52.905688 2440 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://138.199.154.203:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.154.203:6443: connect: connection refused
Jan 17 12:29:52.947429 containerd[1485]: time="2025-01-17T12:29:52.946476549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-0-e492bbae02,Uid:d34720d7a968bc1e7f41b8ea1ec9c748,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8e8fe258d70582366df9716d281ac261240afbe688a3dea56d5a73d05d7d196\""
Jan 17 12:29:52.948972 containerd[1485]: time="2025-01-17T12:29:52.948891446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-0-e492bbae02,Uid:7ab56a5aa14a2e11c4012cb2164c84d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ad757793bd1c88b335235daf40a68f7e826645cc388fe1588b231f2b5f95b3a\""
Jan 17 12:29:52.957444 containerd[1485]: time="2025-01-17T12:29:52.957233063Z" level=info msg="CreateContainer within sandbox \"6ad757793bd1c88b335235daf40a68f7e826645cc388fe1588b231f2b5f95b3a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 17 12:29:52.960435 containerd[1485]: time="2025-01-17T12:29:52.960406788Z" level=info msg="CreateContainer within sandbox \"a8e8fe258d70582366df9716d281ac261240afbe688a3dea56d5a73d05d7d196\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 17 12:29:52.972978 containerd[1485]: time="2025-01-17T12:29:52.972895420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-0-e492bbae02,Uid:14b19a8e952fc29107e822864b425fe8,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6a84b650c6334ce0dae4fe3d62760e9696083062f2325f61850615caeb8a767\""
Jan 17 12:29:52.976472 containerd[1485]: time="2025-01-17T12:29:52.976449768Z" level=info msg="CreateContainer within sandbox \"a6a84b650c6334ce0dae4fe3d62760e9696083062f2325f61850615caeb8a767\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 17 12:29:52.980057 containerd[1485]: time="2025-01-17T12:29:52.980028829Z" level=info msg="CreateContainer within sandbox \"6ad757793bd1c88b335235daf40a68f7e826645cc388fe1588b231f2b5f95b3a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"384bc2b16140e2bbfc412c2922b845c7299683550779ea4f29504b4453544ea3\""
Jan 17 12:29:52.980761 containerd[1485]: time="2025-01-17T12:29:52.980725683Z" level=info msg="StartContainer for \"384bc2b16140e2bbfc412c2922b845c7299683550779ea4f29504b4453544ea3\""
Jan 17 12:29:52.985878 containerd[1485]: time="2025-01-17T12:29:52.985810321Z" level=info msg="CreateContainer within sandbox \"a8e8fe258d70582366df9716d281ac261240afbe688a3dea56d5a73d05d7d196\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9eede87efd255e6aa794e76d8aac2c63460c4af66f79b2c766386af239409c4f\""
Jan 17 12:29:52.986238 containerd[1485]: time="2025-01-17T12:29:52.986217802Z" level=info msg="StartContainer for \"9eede87efd255e6aa794e76d8aac2c63460c4af66f79b2c766386af239409c4f\""
Jan 17 12:29:52.997445 containerd[1485]: time="2025-01-17T12:29:52.997422505Z" level=info msg="CreateContainer within sandbox \"a6a84b650c6334ce0dae4fe3d62760e9696083062f2325f61850615caeb8a767\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"883ca1018b4479d518bbff89329b49861bb1ba52870a52cad1ab3519201d6582\""
Jan 17 12:29:52.999935 containerd[1485]: time="2025-01-17T12:29:52.998428676Z" level=info msg="StartContainer for \"883ca1018b4479d518bbff89329b49861bb1ba52870a52cad1ab3519201d6582\""
Jan 17 12:29:53.011213 systemd[1]: Started cri-containerd-384bc2b16140e2bbfc412c2922b845c7299683550779ea4f29504b4453544ea3.scope - libcontainer container 384bc2b16140e2bbfc412c2922b845c7299683550779ea4f29504b4453544ea3.
Jan 17 12:29:53.021316 systemd[1]: Started cri-containerd-9eede87efd255e6aa794e76d8aac2c63460c4af66f79b2c766386af239409c4f.scope - libcontainer container 9eede87efd255e6aa794e76d8aac2c63460c4af66f79b2c766386af239409c4f.
Jan 17 12:29:53.054146 systemd[1]: Started cri-containerd-883ca1018b4479d518bbff89329b49861bb1ba52870a52cad1ab3519201d6582.scope - libcontainer container 883ca1018b4479d518bbff89329b49861bb1ba52870a52cad1ab3519201d6582.
Jan 17 12:29:53.092612 kubelet[2440]: W0117 12:29:53.092532 2440 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.154.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-0-e492bbae02&limit=500&resourceVersion=0": dial tcp 138.199.154.203:6443: connect: connection refused
Jan 17 12:29:53.092612 kubelet[2440]: E0117 12:29:53.092600 2440 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://138.199.154.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-0-e492bbae02&limit=500&resourceVersion=0": dial tcp 138.199.154.203:6443: connect: connection refused
Jan 17 12:29:53.101036 kubelet[2440]: E0117 12:29:53.100945 2440 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.154.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-0-e492bbae02?timeout=10s\": dial tcp 138.199.154.203:6443: connect: connection refused" interval="1.6s"
Jan 17 12:29:53.109516 containerd[1485]: time="2025-01-17T12:29:53.109470141Z" level=info msg="StartContainer for \"9eede87efd255e6aa794e76d8aac2c63460c4af66f79b2c766386af239409c4f\" returns successfully"
Jan 17 12:29:53.113796 containerd[1485]: time="2025-01-17T12:29:53.113755916Z" level=info msg="StartContainer for \"384bc2b16140e2bbfc412c2922b845c7299683550779ea4f29504b4453544ea3\" returns successfully"
Jan 17 12:29:53.138091 containerd[1485]: time="2025-01-17T12:29:53.138055455Z" level=info msg="StartContainer for \"883ca1018b4479d518bbff89329b49861bb1ba52870a52cad1ab3519201d6582\" returns successfully"
Jan 17 12:29:53.205498 kubelet[2440]: I0117 12:29:53.205395 2440 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-0-e492bbae02"
Jan 17 12:29:53.206496 kubelet[2440]: E0117 12:29:53.206405 2440 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.154.203:6443/api/v1/nodes\": dial tcp 138.199.154.203:6443: connect: connection refused" node="ci-4081-3-0-0-e492bbae02"
Jan 17 12:29:54.703980 kubelet[2440]: E0117 12:29:54.703905 2440 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-0-e492bbae02\" not found" node="ci-4081-3-0-0-e492bbae02"
Jan 17 12:29:54.808980 kubelet[2440]: I0117 12:29:54.808891 2440 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-0-e492bbae02"
Jan 17 12:29:54.821729 kubelet[2440]: I0117 12:29:54.821687 2440 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-0-e492bbae02"
Jan 17 12:29:54.831196 kubelet[2440]: E0117 12:29:54.831145 2440 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-0-0-e492bbae02\" not found"
Jan 17 12:29:55.674359 kubelet[2440]: I0117 12:29:55.674322 2440 apiserver.go:52] "Watching apiserver"
Jan 17 12:29:55.700666 kubelet[2440]: I0117 12:29:55.700606 2440 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 17 12:29:56.728428 systemd[1]: Reloading requested from client PID 2712 ('systemctl') (unit session-7.scope)...
Jan 17 12:29:56.728461 systemd[1]: Reloading...
Jan 17 12:29:56.852963 zram_generator::config[2755]: No configuration found.
Jan 17 12:29:56.949335 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:29:57.036769 systemd[1]: Reloading finished in 307 ms. Jan 17 12:29:57.095513 kubelet[2440]: I0117 12:29:57.095318 2440 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:29:57.095382 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:29:57.115547 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:29:57.115810 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:29:57.122630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:29:57.263504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:29:57.272513 (kubelet)[2803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:29:57.331771 kubelet[2803]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:29:57.332137 kubelet[2803]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:29:57.332198 kubelet[2803]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:29:57.332331 kubelet[2803]: I0117 12:29:57.332303 2803 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:29:57.337202 kubelet[2803]: I0117 12:29:57.337186 2803 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 17 12:29:57.337291 kubelet[2803]: I0117 12:29:57.337279 2803 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:29:57.337471 kubelet[2803]: I0117 12:29:57.337459 2803 server.go:927] "Client rotation is on, will bootstrap in background" Jan 17 12:29:57.338711 kubelet[2803]: I0117 12:29:57.338694 2803 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 12:29:57.339820 kubelet[2803]: I0117 12:29:57.339786 2803 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:29:57.347981 kubelet[2803]: I0117 12:29:57.346387 2803 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:29:57.347981 kubelet[2803]: I0117 12:29:57.346600 2803 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:29:57.347981 kubelet[2803]: I0117 12:29:57.346637 2803 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-0-e492bbae02","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:29:57.347981 kubelet[2803]: I0117 12:29:57.346842 2803 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:29:57.348173 kubelet[2803]: I0117 12:29:57.346851 2803 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:29:57.348173 kubelet[2803]: I0117 12:29:57.346895 2803 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:29:57.348173 kubelet[2803]: I0117 12:29:57.346991 2803 kubelet.go:400] "Attempting to sync node with API server" Jan 17 12:29:57.348173 kubelet[2803]: I0117 12:29:57.347002 2803 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:29:57.348173 kubelet[2803]: I0117 12:29:57.347025 2803 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:29:57.348173 kubelet[2803]: I0117 12:29:57.347041 2803 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:29:57.353798 kubelet[2803]: I0117 12:29:57.353781 2803 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:29:57.355522 kubelet[2803]: I0117 12:29:57.355494 2803 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:29:57.356051 kubelet[2803]: I0117 12:29:57.356036 2803 server.go:1264] "Started kubelet" Jan 17 12:29:57.359468 kubelet[2803]: I0117 12:29:57.359369 2803 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:29:57.359560 kubelet[2803]: I0117 12:29:57.359535 2803 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:29:57.359837 kubelet[2803]: I0117 
12:29:57.359810 2803 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:29:57.361036 kubelet[2803]: I0117 12:29:57.361010 2803 server.go:455] "Adding debug handlers to kubelet server" Jan 17 12:29:57.363393 kubelet[2803]: I0117 12:29:57.363379 2803 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:29:57.368296 kubelet[2803]: I0117 12:29:57.368246 2803 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:29:57.368478 kubelet[2803]: I0117 12:29:57.368446 2803 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 17 12:29:57.369455 kubelet[2803]: I0117 12:29:57.369441 2803 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:29:57.370866 kubelet[2803]: I0117 12:29:57.370799 2803 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:29:57.371088 kubelet[2803]: I0117 12:29:57.371038 2803 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:29:57.371606 kubelet[2803]: E0117 12:29:57.371585 2803 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:29:57.375119 kubelet[2803]: I0117 12:29:57.375033 2803 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:29:57.378367 kubelet[2803]: I0117 12:29:57.378281 2803 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:29:57.379689 kubelet[2803]: I0117 12:29:57.379674 2803 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 12:29:57.380007 kubelet[2803]: I0117 12:29:57.379753 2803 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:29:57.380007 kubelet[2803]: I0117 12:29:57.379772 2803 kubelet.go:2337] "Starting kubelet main sync loop" Jan 17 12:29:57.380007 kubelet[2803]: E0117 12:29:57.379806 2803 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:29:57.434394 kubelet[2803]: I0117 12:29:57.434372 2803 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:29:57.434766 kubelet[2803]: I0117 12:29:57.434519 2803 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:29:57.434766 kubelet[2803]: I0117 12:29:57.434540 2803 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:29:57.434766 kubelet[2803]: I0117 12:29:57.434675 2803 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:29:57.434766 kubelet[2803]: I0117 12:29:57.434684 2803 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:29:57.434766 kubelet[2803]: I0117 12:29:57.434710 2803 policy_none.go:49] "None policy: Start" Jan 17 12:29:57.435553 kubelet[2803]: I0117 12:29:57.435393 2803 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:29:57.435553 kubelet[2803]: I0117 12:29:57.435443 2803 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:29:57.435553 kubelet[2803]: I0117 12:29:57.435538 2803 state_mem.go:75] "Updated machine memory state" Jan 17 12:29:57.443071 kubelet[2803]: I0117 12:29:57.443037 2803 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:29:57.443245 kubelet[2803]: I0117 12:29:57.443201 2803 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:29:57.445140 kubelet[2803]: I0117 12:29:57.444239 2803 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:29:57.480390 kubelet[2803]: I0117 12:29:57.480331 2803 topology_manager.go:215] "Topology Admit Handler" podUID="14b19a8e952fc29107e822864b425fe8" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-0-e492bbae02" Jan 17 12:29:57.480529 kubelet[2803]: I0117 12:29:57.480422 2803 topology_manager.go:215] "Topology Admit Handler" podUID="d34720d7a968bc1e7f41b8ea1ec9c748" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-0-e492bbae02" Jan 17 12:29:57.480529 kubelet[2803]: I0117 12:29:57.480470 2803 topology_manager.go:215] "Topology Admit Handler" podUID="7ab56a5aa14a2e11c4012cb2164c84d3" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-0-e492bbae02" Jan 17 12:29:57.548545 kubelet[2803]: I0117 12:29:57.548503 2803 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-0-e492bbae02" Jan 17 12:29:57.555668 kubelet[2803]: I0117 12:29:57.555632 2803 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-0-0-e492bbae02" Jan 17 12:29:57.555781 kubelet[2803]: I0117 12:29:57.555728 2803 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-0-e492bbae02" Jan 17 12:29:57.570546 kubelet[2803]: I0117 12:29:57.570477 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d34720d7a968bc1e7f41b8ea1ec9c748-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-0-e492bbae02\" (UID: 
\"d34720d7a968bc1e7f41b8ea1ec9c748\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-e492bbae02" Jan 17 12:29:57.570546 kubelet[2803]: I0117 12:29:57.570514 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d34720d7a968bc1e7f41b8ea1ec9c748-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-0-e492bbae02\" (UID: \"d34720d7a968bc1e7f41b8ea1ec9c748\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-e492bbae02" Jan 17 12:29:57.570546 kubelet[2803]: I0117 12:29:57.570534 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7ab56a5aa14a2e11c4012cb2164c84d3-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-0-e492bbae02\" (UID: \"7ab56a5aa14a2e11c4012cb2164c84d3\") " pod="kube-system/kube-scheduler-ci-4081-3-0-0-e492bbae02" Jan 17 12:29:57.570546 kubelet[2803]: I0117 12:29:57.570553 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14b19a8e952fc29107e822864b425fe8-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-0-e492bbae02\" (UID: \"14b19a8e952fc29107e822864b425fe8\") " pod="kube-system/kube-apiserver-ci-4081-3-0-0-e492bbae02" Jan 17 12:29:57.570954 kubelet[2803]: I0117 12:29:57.570576 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14b19a8e952fc29107e822864b425fe8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-0-e492bbae02\" (UID: \"14b19a8e952fc29107e822864b425fe8\") " pod="kube-system/kube-apiserver-ci-4081-3-0-0-e492bbae02" Jan 17 12:29:57.570954 kubelet[2803]: I0117 12:29:57.570593 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d34720d7a968bc1e7f41b8ea1ec9c748-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-0-e492bbae02\" (UID: \"d34720d7a968bc1e7f41b8ea1ec9c748\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-e492bbae02" Jan 17 12:29:57.570954 kubelet[2803]: I0117 12:29:57.570608 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14b19a8e952fc29107e822864b425fe8-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-0-e492bbae02\" (UID: \"14b19a8e952fc29107e822864b425fe8\") " pod="kube-system/kube-apiserver-ci-4081-3-0-0-e492bbae02" Jan 17 12:29:57.570954 kubelet[2803]: I0117 12:29:57.570635 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d34720d7a968bc1e7f41b8ea1ec9c748-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-0-e492bbae02\" (UID: \"d34720d7a968bc1e7f41b8ea1ec9c748\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-e492bbae02" Jan 17 12:29:57.570954 kubelet[2803]: I0117 12:29:57.570668 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d34720d7a968bc1e7f41b8ea1ec9c748-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-0-e492bbae02\" (UID: \"d34720d7a968bc1e7f41b8ea1ec9c748\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-e492bbae02" Jan 17 
12:29:57.732283 sudo[2837]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 17 12:29:57.732645 sudo[2837]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 17 12:29:58.227312 sudo[2837]: pam_unix(sudo:session): session closed for user root Jan 17 12:29:58.350540 kubelet[2803]: I0117 12:29:58.350486 2803 apiserver.go:52] "Watching apiserver" Jan 17 12:29:58.368832 kubelet[2803]: I0117 12:29:58.368738 2803 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 17 12:29:58.416664 kubelet[2803]: E0117 12:29:58.416053 2803 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-0-e492bbae02\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-0-e492bbae02" Jan 17 12:29:58.445105 kubelet[2803]: I0117 12:29:58.444954 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-0-e492bbae02" podStartSLOduration=1.444939229 podStartE2EDuration="1.444939229s" podCreationTimestamp="2025-01-17 12:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:29:58.438135223 +0000 UTC m=+1.152169953" watchObservedRunningTime="2025-01-17 12:29:58.444939229 +0000 UTC m=+1.158973959" Jan 17 12:29:58.453337 kubelet[2803]: I0117 12:29:58.453284 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-0-e492bbae02" podStartSLOduration=1.453271312 podStartE2EDuration="1.453271312s" podCreationTimestamp="2025-01-17 12:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:29:58.445768539 +0000 UTC m=+1.159803270" watchObservedRunningTime="2025-01-17 12:29:58.453271312 +0000 UTC m=+1.167306042" Jan 17 12:29:58.453933 kubelet[2803]: I0117 12:29:58.453512 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-0-e492bbae02" podStartSLOduration=1.453507043 podStartE2EDuration="1.453507043s" podCreationTimestamp="2025-01-17 12:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:29:58.452591591 +0000 UTC m=+1.166626321" watchObservedRunningTime="2025-01-17 12:29:58.453507043 +0000 UTC m=+1.167541772" Jan 17 12:29:59.586889 sudo[1895]: pam_unix(sudo:session): session closed for user root Jan 17 12:29:59.746027 sshd[1892]: pam_unix(sshd:session): session closed for user core Jan 17 12:29:59.750280 systemd[1]: sshd@6-138.199.154.203:22-139.178.89.65:51768.service: Deactivated successfully. Jan 17 12:29:59.752891 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:29:59.753121 systemd[1]: session-7.scope: Consumed 4.201s CPU time, 189.0M memory peak, 0B memory swap peak. Jan 17 12:29:59.754792 systemd-logind[1467]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:29:59.756626 systemd-logind[1467]: Removed session 7. 
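
[Editor's note] The kubelet restart above (PID 2803) logs three flag deprecation warnings, two of which (--container-runtime-endpoint and --volume-plugin-dir) point at the file given by --config. As a hedged sketch, the equivalent KubeletConfiguration would carry the values this boot already shows; the static pod path and systemd cgroup driver below are taken from the log, while the runtime endpoint and volume plugin dir are typical placeholders, not settings read from this host:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    staticPodPath: /etc/kubernetes/manifests   # "Adding static pod path" above
    cgroupDriver: systemd                      # CgroupDriver in the nodeConfig dump
    # Config-file counterparts of the deprecated flags; both values are
    # common defaults, assumed rather than read from this host:
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /var/lib/kubelet/volumeplugins

The third flag, --pod-infra-container-image, has no config-file counterpart; as the warning says, the sandbox image information moves to the CRI runtime's own configuration.
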
Jan 17 12:30:12.346442 kubelet[2803]: I0117 12:30:12.346404 2803 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:30:12.346849 containerd[1485]: time="2025-01-17T12:30:12.346766849Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 12:30:12.348255 kubelet[2803]: I0117 12:30:12.347321 2803 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:30:12.379011 kubelet[2803]: I0117 12:30:12.378588 2803 topology_manager.go:215] "Topology Admit Handler" podUID="9c150322-227c-4e0b-84ec-c7418e0cb6ec" podNamespace="kube-system" podName="cilium-operator-599987898-vjfcb" Jan 17 12:30:12.389237 systemd[1]: Created slice kubepods-besteffort-pod9c150322_227c_4e0b_84ec_c7418e0cb6ec.slice - libcontainer container kubepods-besteffort-pod9c150322_227c_4e0b_84ec_c7418e0cb6ec.slice. Jan 17 12:30:12.392176 kubelet[2803]: W0117 12:30:12.392077 2803 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-0-0-e492bbae02" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-0-e492bbae02' and this object Jan 17 12:30:12.392176 kubelet[2803]: E0117 12:30:12.392108 2803 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-0-0-e492bbae02" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-0-e492bbae02' and this object Jan 17 12:30:12.392176 kubelet[2803]: W0117 12:30:12.392138 2803 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4081-3-0-0-e492bbae02" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-0-e492bbae02' and this object Jan 17 12:30:12.392176 kubelet[2803]: E0117 12:30:12.392148 2803 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4081-3-0-0-e492bbae02" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-0-e492bbae02' and this object Jan 17 12:30:12.469404 kubelet[2803]: I0117 12:30:12.469345 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59s4s\" (UniqueName: \"kubernetes.io/projected/9c150322-227c-4e0b-84ec-c7418e0cb6ec-kube-api-access-59s4s\") pod \"cilium-operator-599987898-vjfcb\" (UID: \"9c150322-227c-4e0b-84ec-c7418e0cb6ec\") " pod="kube-system/cilium-operator-599987898-vjfcb" Jan 17 12:30:12.469404 kubelet[2803]: I0117 12:30:12.469399 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c150322-227c-4e0b-84ec-c7418e0cb6ec-cilium-config-path\") pod \"cilium-operator-599987898-vjfcb\" (UID: \"9c150322-227c-4e0b-84ec-c7418e0cb6ec\") " pod="kube-system/cilium-operator-599987898-vjfcb" Jan 17 12:30:12.551971 kubelet[2803]: I0117 12:30:12.550439 2803 topology_manager.go:215] "Topology 
Admit Handler" podUID="f90fe678-0f21-40b3-aecf-9a6a84c9f642" podNamespace="kube-system" podName="kube-proxy-t7692" Jan 17 12:30:12.561654 systemd[1]: Created slice kubepods-besteffort-podf90fe678_0f21_40b3_aecf_9a6a84c9f642.slice - libcontainer container kubepods-besteffort-podf90fe678_0f21_40b3_aecf_9a6a84c9f642.slice. Jan 17 12:30:12.566520 kubelet[2803]: I0117 12:30:12.566490 2803 topology_manager.go:215] "Topology Admit Handler" podUID="0cd4201d-6182-45c3-b96d-10ca1338b05b" podNamespace="kube-system" podName="cilium-cdhsv" Jan 17 12:30:12.572068 kubelet[2803]: I0117 12:30:12.572041 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f90fe678-0f21-40b3-aecf-9a6a84c9f642-xtables-lock\") pod \"kube-proxy-t7692\" (UID: \"f90fe678-0f21-40b3-aecf-9a6a84c9f642\") " pod="kube-system/kube-proxy-t7692" Jan 17 12:30:12.572068 kubelet[2803]: I0117 12:30:12.572074 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f90fe678-0f21-40b3-aecf-9a6a84c9f642-lib-modules\") pod \"kube-proxy-t7692\" (UID: \"f90fe678-0f21-40b3-aecf-9a6a84c9f642\") " pod="kube-system/kube-proxy-t7692" Jan 17 12:30:12.572196 kubelet[2803]: I0117 12:30:12.572111 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f90fe678-0f21-40b3-aecf-9a6a84c9f642-kube-proxy\") pod \"kube-proxy-t7692\" (UID: \"f90fe678-0f21-40b3-aecf-9a6a84c9f642\") " pod="kube-system/kube-proxy-t7692" Jan 17 12:30:12.572196 kubelet[2803]: I0117 12:30:12.572131 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q2sq\" (UniqueName: \"kubernetes.io/projected/f90fe678-0f21-40b3-aecf-9a6a84c9f642-kube-api-access-5q2sq\") pod \"kube-proxy-t7692\" (UID: \"f90fe678-0f21-40b3-aecf-9a6a84c9f642\") " pod="kube-system/kube-proxy-t7692" Jan 17 12:30:12.592821 systemd[1]: Created slice kubepods-burstable-pod0cd4201d_6182_45c3_b96d_10ca1338b05b.slice - libcontainer container kubepods-burstable-pod0cd4201d_6182_45c3_b96d_10ca1338b05b.slice. 
Jan 17 12:30:12.672345 kubelet[2803]: I0117 12:30:12.672283 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-bpf-maps\") pod \"cilium-cdhsv\" (UID: \"0cd4201d-6182-45c3-b96d-10ca1338b05b\") " pod="kube-system/cilium-cdhsv" Jan 17 12:30:12.672345 kubelet[2803]: I0117 12:30:12.672339 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-cni-path\") pod \"cilium-cdhsv\" (UID: \"0cd4201d-6182-45c3-b96d-10ca1338b05b\") " pod="kube-system/cilium-cdhsv" Jan 17 12:30:12.672498 kubelet[2803]: I0117 12:30:12.672356 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-lib-modules\") pod \"cilium-cdhsv\" (UID: \"0cd4201d-6182-45c3-b96d-10ca1338b05b\") " pod="kube-system/cilium-cdhsv" Jan 17 12:30:12.672498 kubelet[2803]: I0117 12:30:12.672378 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-cilium-run\") pod \"cilium-cdhsv\" (UID: \"0cd4201d-6182-45c3-b96d-10ca1338b05b\") " pod="kube-system/cilium-cdhsv" Jan 17 12:30:12.672498 kubelet[2803]: I0117 12:30:12.672400 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-host-proc-sys-kernel\") pod \"cilium-cdhsv\" (UID: \"0cd4201d-6182-45c3-b96d-10ca1338b05b\") " pod="kube-system/cilium-cdhsv" Jan 17 12:30:12.672498 kubelet[2803]: I0117 12:30:12.672414 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljp65\" (UniqueName: \"kubernetes.io/projected/0cd4201d-6182-45c3-b96d-10ca1338b05b-kube-api-access-ljp65\") pod \"cilium-cdhsv\" (UID: \"0cd4201d-6182-45c3-b96d-10ca1338b05b\") " pod="kube-system/cilium-cdhsv" Jan 17 12:30:12.672498 kubelet[2803]: I0117 12:30:12.672455 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-hostproc\") pod \"cilium-cdhsv\" (UID: \"0cd4201d-6182-45c3-b96d-10ca1338b05b\") " pod="kube-system/cilium-cdhsv" Jan 17 12:30:12.672498 kubelet[2803]: I0117 12:30:12.672471 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0cd4201d-6182-45c3-b96d-10ca1338b05b-clustermesh-secrets\") pod \"cilium-cdhsv\" (UID: \"0cd4201d-6182-45c3-b96d-10ca1338b05b\") " pod="kube-system/cilium-cdhsv" Jan 17 12:30:12.672863 kubelet[2803]: I0117 12:30:12.672485 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0cd4201d-6182-45c3-b96d-10ca1338b05b-cilium-config-path\") pod \"cilium-cdhsv\" (UID: \"0cd4201d-6182-45c3-b96d-10ca1338b05b\") " pod="kube-system/cilium-cdhsv" Jan 17 12:30:12.672863 kubelet[2803]: I0117 12:30:12.672498 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-host-proc-sys-net\") pod \"cilium-cdhsv\" (UID: \"0cd4201d-6182-45c3-b96d-10ca1338b05b\") " pod="kube-system/cilium-cdhsv" Jan 17 12:30:12.672863 kubelet[2803]: I0117 12:30:12.672524 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-cilium-cgroup\") pod \"cilium-cdhsv\" (UID: \"0cd4201d-6182-45c3-b96d-10ca1338b05b\") " pod="kube-system/cilium-cdhsv" Jan 17 12:30:12.672863 kubelet[2803]: I0117 12:30:12.672537 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-etc-cni-netd\") pod \"cilium-cdhsv\" (UID: \"0cd4201d-6182-45c3-b96d-10ca1338b05b\") " pod="kube-system/cilium-cdhsv" Jan 17 12:30:12.672863 kubelet[2803]: I0117 12:30:12.672581 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-xtables-lock\") pod \"cilium-cdhsv\" (UID: \"0cd4201d-6182-45c3-b96d-10ca1338b05b\") " pod="kube-system/cilium-cdhsv" Jan 17 12:30:12.672863 kubelet[2803]: I0117 12:30:12.672596 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0cd4201d-6182-45c3-b96d-10ca1338b05b-hubble-tls\") pod \"cilium-cdhsv\" (UID: \"0cd4201d-6182-45c3-b96d-10ca1338b05b\") " pod="kube-system/cilium-cdhsv" Jan 17 12:30:13.585032 kubelet[2803]: E0117 12:30:13.584976 2803 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 17 12:30:13.585579 kubelet[2803]: E0117 12:30:13.585099 2803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c150322-227c-4e0b-84ec-c7418e0cb6ec-cilium-config-path podName:9c150322-227c-4e0b-84ec-c7418e0cb6ec nodeName:}" failed. No retries permitted until 2025-01-17 12:30:14.085073223 +0000 UTC m=+16.799107974 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/9c150322-227c-4e0b-84ec-c7418e0cb6ec-cilium-config-path") pod "cilium-operator-599987898-vjfcb" (UID: "9c150322-227c-4e0b-84ec-c7418e0cb6ec") : failed to sync configmap cache: timed out waiting for the condition Jan 17 12:30:13.598973 kubelet[2803]: E0117 12:30:13.598895 2803 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 17 12:30:13.598973 kubelet[2803]: E0117 12:30:13.598960 2803 projected.go:200] Error preparing data for projected volume kube-api-access-59s4s for pod kube-system/cilium-operator-599987898-vjfcb: failed to sync configmap cache: timed out waiting for the condition Jan 17 12:30:13.599167 kubelet[2803]: E0117 12:30:13.599036 2803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c150322-227c-4e0b-84ec-c7418e0cb6ec-kube-api-access-59s4s podName:9c150322-227c-4e0b-84ec-c7418e0cb6ec nodeName:}" failed. No retries permitted until 2025-01-17 12:30:14.0990162 +0000 UTC m=+16.813050929 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-59s4s" (UniqueName: "kubernetes.io/projected/9c150322-227c-4e0b-84ec-c7418e0cb6ec-kube-api-access-59s4s") pod "cilium-operator-599987898-vjfcb" (UID: "9c150322-227c-4e0b-84ec-c7418e0cb6ec") : failed to sync configmap cache: timed out waiting for the condition Jan 17 12:30:13.678668 kubelet[2803]: E0117 12:30:13.678593 2803 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 17 12:30:13.678668 kubelet[2803]: E0117 12:30:13.678662 2803 projected.go:200] Error preparing data for projected volume kube-api-access-5q2sq for pod kube-system/kube-proxy-t7692: failed to sync configmap cache: timed out waiting for the condition Jan 17 12:30:13.678864 kubelet[2803]: E0117 12:30:13.678729 2803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f90fe678-0f21-40b3-aecf-9a6a84c9f642-kube-api-access-5q2sq podName:f90fe678-0f21-40b3-aecf-9a6a84c9f642 nodeName:}" failed. No retries permitted until 2025-01-17 12:30:14.178710944 +0000 UTC m=+16.892745685 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5q2sq" (UniqueName: "kubernetes.io/projected/f90fe678-0f21-40b3-aecf-9a6a84c9f642-kube-api-access-5q2sq") pod "kube-proxy-t7692" (UID: "f90fe678-0f21-40b3-aecf-9a6a84c9f642") : failed to sync configmap cache: timed out waiting for the condition Jan 17 12:30:13.774367 kubelet[2803]: E0117 12:30:13.774327 2803 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 17 12:30:13.774480 kubelet[2803]: E0117 12:30:13.774398 2803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0cd4201d-6182-45c3-b96d-10ca1338b05b-cilium-config-path podName:0cd4201d-6182-45c3-b96d-10ca1338b05b nodeName:}" failed. No retries permitted until 2025-01-17 12:30:14.274381148 +0000 UTC m=+16.988415878 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/0cd4201d-6182-45c3-b96d-10ca1338b05b-cilium-config-path") pod "cilium-cdhsv" (UID: "0cd4201d-6182-45c3-b96d-10ca1338b05b") : failed to sync configmap cache: timed out waiting for the condition Jan 17 12:30:14.199572 containerd[1485]: time="2025-01-17T12:30:14.199505716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-vjfcb,Uid:9c150322-227c-4e0b-84ec-c7418e0cb6ec,Namespace:kube-system,Attempt:0,}" Jan 17 12:30:14.226718 containerd[1485]: time="2025-01-17T12:30:14.226118999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:30:14.226718 containerd[1485]: time="2025-01-17T12:30:14.226179462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:30:14.226718 containerd[1485]: time="2025-01-17T12:30:14.226192417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:30:14.226718 containerd[1485]: time="2025-01-17T12:30:14.226262147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:30:14.248043 systemd[1]: Started cri-containerd-97b10edf4d721411f22e23d3e7e4b37e7dc5fa079944f956904dfd79a43a58c4.scope - libcontainer container 97b10edf4d721411f22e23d3e7e4b37e7dc5fa079944f956904dfd79a43a58c4. Jan 17 12:30:14.283811 containerd[1485]: time="2025-01-17T12:30:14.283754795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-vjfcb,Uid:9c150322-227c-4e0b-84ec-c7418e0cb6ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"97b10edf4d721411f22e23d3e7e4b37e7dc5fa079944f956904dfd79a43a58c4\"" Jan 17 12:30:14.286946 containerd[1485]: time="2025-01-17T12:30:14.286906902Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 12:30:14.365536 containerd[1485]: time="2025-01-17T12:30:14.365465333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t7692,Uid:f90fe678-0f21-40b3-aecf-9a6a84c9f642,Namespace:kube-system,Attempt:0,}" Jan 17 12:30:14.401942 containerd[1485]: time="2025-01-17T12:30:14.395328837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:30:14.401942 containerd[1485]: time="2025-01-17T12:30:14.397028927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:30:14.401942 containerd[1485]: time="2025-01-17T12:30:14.397046329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:30:14.401942 containerd[1485]: time="2025-01-17T12:30:14.397154191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:30:14.401942 containerd[1485]: time="2025-01-17T12:30:14.401121674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cdhsv,Uid:0cd4201d-6182-45c3-b96d-10ca1338b05b,Namespace:kube-system,Attempt:0,}" Jan 17 12:30:14.423275 systemd[1]: Started cri-containerd-6025449a04974d04756af555a932fa2d1adb369931f92e8ed8f80a610de919e1.scope - libcontainer container 6025449a04974d04756af555a932fa2d1adb369931f92e8ed8f80a610de919e1. Jan 17 12:30:14.435268 containerd[1485]: time="2025-01-17T12:30:14.434930790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:30:14.435268 containerd[1485]: time="2025-01-17T12:30:14.434991543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:30:14.435268 containerd[1485]: time="2025-01-17T12:30:14.435015409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:30:14.435268 containerd[1485]: time="2025-01-17T12:30:14.435101910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:30:14.457041 systemd[1]: Started cri-containerd-3e10f057b6a8daea2694dbec636e47cc52aaa5e92f2fe2f1430526b3ccb62f43.scope - libcontainer container 3e10f057b6a8daea2694dbec636e47cc52aaa5e92f2fe2f1430526b3ccb62f43. 
Jan 17 12:30:14.467049 containerd[1485]: time="2025-01-17T12:30:14.466813102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t7692,Uid:f90fe678-0f21-40b3-aecf-9a6a84c9f642,Namespace:kube-system,Attempt:0,} returns sandbox id \"6025449a04974d04756af555a932fa2d1adb369931f92e8ed8f80a610de919e1\"" Jan 17 12:30:14.474656 containerd[1485]: time="2025-01-17T12:30:14.474587457Z" level=info msg="CreateContainer within sandbox \"6025449a04974d04756af555a932fa2d1adb369931f92e8ed8f80a610de919e1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:30:14.493002 containerd[1485]: time="2025-01-17T12:30:14.492484092Z" level=info msg="CreateContainer within sandbox \"6025449a04974d04756af555a932fa2d1adb369931f92e8ed8f80a610de919e1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bf9a1d3c8933bda709df5799f6ff742dd7739ac98d4b232763a4ba21a5ed5d94\"" Jan 17 12:30:14.494308 containerd[1485]: time="2025-01-17T12:30:14.493215660Z" level=info msg="StartContainer for \"bf9a1d3c8933bda709df5799f6ff742dd7739ac98d4b232763a4ba21a5ed5d94\"" Jan 17 12:30:14.499138 containerd[1485]: time="2025-01-17T12:30:14.499112684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cdhsv,Uid:0cd4201d-6182-45c3-b96d-10ca1338b05b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e10f057b6a8daea2694dbec636e47cc52aaa5e92f2fe2f1430526b3ccb62f43\"" Jan 17 12:30:14.526070 systemd[1]: Started cri-containerd-bf9a1d3c8933bda709df5799f6ff742dd7739ac98d4b232763a4ba21a5ed5d94.scope - libcontainer container bf9a1d3c8933bda709df5799f6ff742dd7739ac98d4b232763a4ba21a5ed5d94. Jan 17 12:30:14.555857 containerd[1485]: time="2025-01-17T12:30:14.555769556Z" level=info msg="StartContainer for \"bf9a1d3c8933bda709df5799f6ff742dd7739ac98d4b232763a4ba21a5ed5d94\" returns successfully" Jan 17 12:30:15.456017 kubelet[2803]: I0117 12:30:15.455467 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t7692" podStartSLOduration=3.455453126 podStartE2EDuration="3.455453126s" podCreationTimestamp="2025-01-17 12:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:30:15.455240197 +0000 UTC m=+18.169274937" watchObservedRunningTime="2025-01-17 12:30:15.455453126 +0000 UTC m=+18.169487856" Jan 17 12:30:17.936689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2534501720.mount: Deactivated successfully. 
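
[Editor's note] The pod_startup_latency_tracker lines are worth decoding once: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts image-pull time, so with the zero-valued firstStartedPulling/lastFinishedPulling stamps of an already-present image the two durations coincide, as they do for kube-proxy-t7692 above. A small sketch of that arithmetic from the logged timestamps (the final digits in the log differ slightly because the tracker samples its own clock):

    package main

    import (
        "fmt"
        "time"
    )

    // Reconstructs the tracker's arithmetic from the two timestamps in
    // the log line above. With no image pull (both pull stamps are the
    // zero time), podStartSLOduration equals podStartE2EDuration.
    func main() {
        layout := "2006-01-02 15:04:05.999999999 -0700 MST"
        created, err := time.Parse(layout, "2025-01-17 12:30:12 +0000 UTC")
        if err != nil {
            panic(err)
        }
        running, err := time.Parse(layout, "2025-01-17 12:30:15.455240197 +0000 UTC")
        if err != nil {
            panic(err)
        }
        fmt.Println(running.Sub(created)) // ~3.455s, the logged E2E duration
    }
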
Jan 17 12:30:18.712898 containerd[1485]: time="2025-01-17T12:30:18.712836930Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:30:18.713924 containerd[1485]: time="2025-01-17T12:30:18.713860094Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907257" Jan 17 12:30:18.714938 containerd[1485]: time="2025-01-17T12:30:18.714865256Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:30:18.716294 containerd[1485]: time="2025-01-17T12:30:18.716163134Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.42912656s" Jan 17 12:30:18.716294 containerd[1485]: time="2025-01-17T12:30:18.716206014Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 17 12:30:18.717466 containerd[1485]: time="2025-01-17T12:30:18.717305261Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 17 12:30:18.718659 containerd[1485]: time="2025-01-17T12:30:18.718584635Z" level=info msg="CreateContainer within sandbox \"97b10edf4d721411f22e23d3e7e4b37e7dc5fa079944f956904dfd79a43a58c4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 17 12:30:18.745387 containerd[1485]: time="2025-01-17T12:30:18.745348031Z" level=info msg="CreateContainer within sandbox \"97b10edf4d721411f22e23d3e7e4b37e7dc5fa079944f956904dfd79a43a58c4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d240af293fc373f6eaa802d2718c7d10a23980aaf83d643bba15619d54380b5c\"" Jan 17 12:30:18.749951 containerd[1485]: time="2025-01-17T12:30:18.749898446Z" level=info msg="StartContainer for \"d240af293fc373f6eaa802d2718c7d10a23980aaf83d643bba15619d54380b5c\"" Jan 17 12:30:18.776054 systemd[1]: Started cri-containerd-d240af293fc373f6eaa802d2718c7d10a23980aaf83d643bba15619d54380b5c.scope - libcontainer container d240af293fc373f6eaa802d2718c7d10a23980aaf83d643bba15619d54380b5c. Jan 17 12:30:18.798801 containerd[1485]: time="2025-01-17T12:30:18.798717331Z" level=info msg="StartContainer for \"d240af293fc373f6eaa802d2718c7d10a23980aaf83d643bba15619d54380b5c\" returns successfully" Jan 17 12:30:24.838372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4043795609.mount: Deactivated successfully. 
Jan 17 12:30:30.311341 containerd[1485]: time="2025-01-17T12:30:30.311284084Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:30:30.312235 containerd[1485]: time="2025-01-17T12:30:30.311867255Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735299" Jan 17 12:30:30.313870 containerd[1485]: time="2025-01-17T12:30:30.313354768Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:30:30.315000 containerd[1485]: time="2025-01-17T12:30:30.314962136Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.597628152s" Jan 17 12:30:30.315000 containerd[1485]: time="2025-01-17T12:30:30.314999437Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 17 12:30:30.318351 containerd[1485]: time="2025-01-17T12:30:30.318311755Z" level=info msg="CreateContainer within sandbox \"3e10f057b6a8daea2694dbec636e47cc52aaa5e92f2fe2f1430526b3ccb62f43\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 12:30:30.414821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4231734743.mount: Deactivated successfully. Jan 17 12:30:30.416797 containerd[1485]: time="2025-01-17T12:30:30.416684838Z" level=info msg="CreateContainer within sandbox \"3e10f057b6a8daea2694dbec636e47cc52aaa5e92f2fe2f1430526b3ccb62f43\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"438f8d73789b071edff384922fa75c6324464dece028138b798fe91b579eddb6\"" Jan 17 12:30:30.417273 containerd[1485]: time="2025-01-17T12:30:30.417251961Z" level=info msg="StartContainer for \"438f8d73789b071edff384922fa75c6324464dece028138b798fe91b579eddb6\"" Jan 17 12:30:30.558060 systemd[1]: Started cri-containerd-438f8d73789b071edff384922fa75c6324464dece028138b798fe91b579eddb6.scope - libcontainer container 438f8d73789b071edff384922fa75c6324464dece028138b798fe91b579eddb6. Jan 17 12:30:30.583075 containerd[1485]: time="2025-01-17T12:30:30.582427678Z" level=info msg="StartContainer for \"438f8d73789b071edff384922fa75c6324464dece028138b798fe91b579eddb6\" returns successfully" Jan 17 12:30:30.593370 systemd[1]: cri-containerd-438f8d73789b071edff384922fa75c6324464dece028138b798fe91b579eddb6.scope: Deactivated successfully. 
Jan 17 12:30:30.674889 containerd[1485]: time="2025-01-17T12:30:30.662839150Z" level=info msg="shim disconnected" id=438f8d73789b071edff384922fa75c6324464dece028138b798fe91b579eddb6 namespace=k8s.io Jan 17 12:30:30.674889 containerd[1485]: time="2025-01-17T12:30:30.674879602Z" level=warning msg="cleaning up after shim disconnected" id=438f8d73789b071edff384922fa75c6324464dece028138b798fe91b579eddb6 namespace=k8s.io Jan 17 12:30:30.674889 containerd[1485]: time="2025-01-17T12:30:30.674893588Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:30:31.411047 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-438f8d73789b071edff384922fa75c6324464dece028138b798fe91b579eddb6-rootfs.mount: Deactivated successfully. Jan 17 12:30:31.482017 containerd[1485]: time="2025-01-17T12:30:31.481936320Z" level=info msg="CreateContainer within sandbox \"3e10f057b6a8daea2694dbec636e47cc52aaa5e92f2fe2f1430526b3ccb62f43\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 12:30:31.498266 containerd[1485]: time="2025-01-17T12:30:31.498157165Z" level=info msg="CreateContainer within sandbox \"3e10f057b6a8daea2694dbec636e47cc52aaa5e92f2fe2f1430526b3ccb62f43\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c28b3948e0bfbdab1de3eef61866a84a5a32a3bfbfb8254781aaf808538cbcd2\"" Jan 17 12:30:31.498809 containerd[1485]: time="2025-01-17T12:30:31.498770884Z" level=info msg="StartContainer for \"c28b3948e0bfbdab1de3eef61866a84a5a32a3bfbfb8254781aaf808538cbcd2\"" Jan 17 12:30:31.508286 kubelet[2803]: I0117 12:30:31.506702 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-vjfcb" podStartSLOduration=15.076035836 podStartE2EDuration="19.506686226s" podCreationTimestamp="2025-01-17 12:30:12 +0000 UTC" firstStartedPulling="2025-01-17 12:30:14.286523474 +0000 UTC m=+17.000558205" lastFinishedPulling="2025-01-17 12:30:18.717173865 +0000 UTC m=+21.431208595" observedRunningTime="2025-01-17 12:30:19.476421465 +0000 UTC m=+22.190456195" watchObservedRunningTime="2025-01-17 12:30:31.506686226 +0000 UTC m=+34.220720956" Jan 17 12:30:31.543048 systemd[1]: Started cri-containerd-c28b3948e0bfbdab1de3eef61866a84a5a32a3bfbfb8254781aaf808538cbcd2.scope - libcontainer container c28b3948e0bfbdab1de3eef61866a84a5a32a3bfbfb8254781aaf808538cbcd2. Jan 17 12:30:31.571299 containerd[1485]: time="2025-01-17T12:30:31.571154152Z" level=info msg="StartContainer for \"c28b3948e0bfbdab1de3eef61866a84a5a32a3bfbfb8254781aaf808538cbcd2\" returns successfully" Jan 17 12:30:31.586642 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:30:31.587656 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:30:31.587742 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:30:31.593548 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:30:31.594162 systemd[1]: cri-containerd-c28b3948e0bfbdab1de3eef61866a84a5a32a3bfbfb8254781aaf808538cbcd2.scope: Deactivated successfully. 
Jan 17 12:30:31.625536 containerd[1485]: time="2025-01-17T12:30:31.625473766Z" level=info msg="shim disconnected" id=c28b3948e0bfbdab1de3eef61866a84a5a32a3bfbfb8254781aaf808538cbcd2 namespace=k8s.io Jan 17 12:30:31.626094 containerd[1485]: time="2025-01-17T12:30:31.626063620Z" level=warning msg="cleaning up after shim disconnected" id=c28b3948e0bfbdab1de3eef61866a84a5a32a3bfbfb8254781aaf808538cbcd2 namespace=k8s.io Jan 17 12:30:31.626094 containerd[1485]: time="2025-01-17T12:30:31.626089708Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:30:31.635873 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:30:32.411139 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c28b3948e0bfbdab1de3eef61866a84a5a32a3bfbfb8254781aaf808538cbcd2-rootfs.mount: Deactivated successfully. Jan 17 12:30:32.486121 containerd[1485]: time="2025-01-17T12:30:32.485987088Z" level=info msg="CreateContainer within sandbox \"3e10f057b6a8daea2694dbec636e47cc52aaa5e92f2fe2f1430526b3ccb62f43\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 12:30:32.513746 containerd[1485]: time="2025-01-17T12:30:32.513692016Z" level=info msg="CreateContainer within sandbox \"3e10f057b6a8daea2694dbec636e47cc52aaa5e92f2fe2f1430526b3ccb62f43\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6bca338a5264a67b88301ce64896967969c053975ef41d93d843cca033ebf4a1\"" Jan 17 12:30:32.515513 containerd[1485]: time="2025-01-17T12:30:32.514475312Z" level=info msg="StartContainer for \"6bca338a5264a67b88301ce64896967969c053975ef41d93d843cca033ebf4a1\"" Jan 17 12:30:32.547078 systemd[1]: Started cri-containerd-6bca338a5264a67b88301ce64896967969c053975ef41d93d843cca033ebf4a1.scope - libcontainer container 6bca338a5264a67b88301ce64896967969c053975ef41d93d843cca033ebf4a1. Jan 17 12:30:32.581281 containerd[1485]: time="2025-01-17T12:30:32.581243264Z" level=info msg="StartContainer for \"6bca338a5264a67b88301ce64896967969c053975ef41d93d843cca033ebf4a1\" returns successfully" Jan 17 12:30:32.589616 systemd[1]: cri-containerd-6bca338a5264a67b88301ce64896967969c053975ef41d93d843cca033ebf4a1.scope: Deactivated successfully. Jan 17 12:30:32.612726 containerd[1485]: time="2025-01-17T12:30:32.612655720Z" level=info msg="shim disconnected" id=6bca338a5264a67b88301ce64896967969c053975ef41d93d843cca033ebf4a1 namespace=k8s.io Jan 17 12:30:32.612726 containerd[1485]: time="2025-01-17T12:30:32.612705272Z" level=warning msg="cleaning up after shim disconnected" id=6bca338a5264a67b88301ce64896967969c053975ef41d93d843cca033ebf4a1 namespace=k8s.io Jan 17 12:30:32.612726 containerd[1485]: time="2025-01-17T12:30:32.612713327Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:30:33.410982 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bca338a5264a67b88301ce64896967969c053975ef41d93d843cca033ebf4a1-rootfs.mount: Deactivated successfully. Jan 17 12:30:33.493204 containerd[1485]: time="2025-01-17T12:30:33.493149109Z" level=info msg="CreateContainer within sandbox \"3e10f057b6a8daea2694dbec636e47cc52aaa5e92f2fe2f1430526b3ccb62f43\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 12:30:33.515052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2907328004.mount: Deactivated successfully. 
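
[Editor's note] The cilium-cdhsv pod is stepping through Cilium's usual init-container chain here (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state), each one following the same create/start/scope-deactivated/shim-cleanup pattern in the log. The mount-bpf-fs step exists so that the BPF filesystem is mounted at /sys/fs/bpf before the agent pins its maps there; reduced to a sketch (the real init container is more careful and first checks for an existing mount), the core operation is:

    package main

    import (
        "fmt"

        "golang.org/x/sys/unix"
    )

    // Sketch of what a "mount-bpf-fs" init step has to achieve: ensure
    // the BPF filesystem is mounted at /sys/fs/bpf. Requires root
    // (CAP_SYS_ADMIN) to succeed.
    func main() {
        if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
            fmt.Println("mount:", err) // e.g. EPERM when not run as root
            return
        }
        fmt.Println("bpffs mounted at /sys/fs/bpf")
    }

After clean-cilium-state completes below, the long-running cilium-agent container starts and the node transitions to ready at 12:30:34.
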
Jan 17 12:30:33.519126 containerd[1485]: time="2025-01-17T12:30:33.516591610Z" level=info msg="CreateContainer within sandbox \"3e10f057b6a8daea2694dbec636e47cc52aaa5e92f2fe2f1430526b3ccb62f43\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9771a26dbb08249ef7f5e8fd83ae5ee5bf34e4520369120555c143422ed0b8cf\"" Jan 17 12:30:33.526144 containerd[1485]: time="2025-01-17T12:30:33.526111636Z" level=info msg="StartContainer for \"9771a26dbb08249ef7f5e8fd83ae5ee5bf34e4520369120555c143422ed0b8cf\"" Jan 17 12:30:33.560046 systemd[1]: Started cri-containerd-9771a26dbb08249ef7f5e8fd83ae5ee5bf34e4520369120555c143422ed0b8cf.scope - libcontainer container 9771a26dbb08249ef7f5e8fd83ae5ee5bf34e4520369120555c143422ed0b8cf. Jan 17 12:30:33.583680 containerd[1485]: time="2025-01-17T12:30:33.583402944Z" level=info msg="StartContainer for \"9771a26dbb08249ef7f5e8fd83ae5ee5bf34e4520369120555c143422ed0b8cf\" returns successfully" Jan 17 12:30:33.583617 systemd[1]: cri-containerd-9771a26dbb08249ef7f5e8fd83ae5ee5bf34e4520369120555c143422ed0b8cf.scope: Deactivated successfully. Jan 17 12:30:33.611071 containerd[1485]: time="2025-01-17T12:30:33.611003357Z" level=info msg="shim disconnected" id=9771a26dbb08249ef7f5e8fd83ae5ee5bf34e4520369120555c143422ed0b8cf namespace=k8s.io Jan 17 12:30:33.611071 containerd[1485]: time="2025-01-17T12:30:33.611063158Z" level=warning msg="cleaning up after shim disconnected" id=9771a26dbb08249ef7f5e8fd83ae5ee5bf34e4520369120555c143422ed0b8cf namespace=k8s.io Jan 17 12:30:33.611285 containerd[1485]: time="2025-01-17T12:30:33.611075801Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:30:34.411068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9771a26dbb08249ef7f5e8fd83ae5ee5bf34e4520369120555c143422ed0b8cf-rootfs.mount: Deactivated successfully. Jan 17 12:30:34.518879 containerd[1485]: time="2025-01-17T12:30:34.518714117Z" level=info msg="CreateContainer within sandbox \"3e10f057b6a8daea2694dbec636e47cc52aaa5e92f2fe2f1430526b3ccb62f43\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 12:30:34.543262 containerd[1485]: time="2025-01-17T12:30:34.543182859Z" level=info msg="CreateContainer within sandbox \"3e10f057b6a8daea2694dbec636e47cc52aaa5e92f2fe2f1430526b3ccb62f43\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6f48188e79477fd421d7d1b5ba531812216f932859b27ef86c3096c19d16c003\"" Jan 17 12:30:34.543979 containerd[1485]: time="2025-01-17T12:30:34.543960525Z" level=info msg="StartContainer for \"6f48188e79477fd421d7d1b5ba531812216f932859b27ef86c3096c19d16c003\"" Jan 17 12:30:34.577100 systemd[1]: Started cri-containerd-6f48188e79477fd421d7d1b5ba531812216f932859b27ef86c3096c19d16c003.scope - libcontainer container 6f48188e79477fd421d7d1b5ba531812216f932859b27ef86c3096c19d16c003. 
Jan 17 12:30:34.611335 containerd[1485]: time="2025-01-17T12:30:34.611179112Z" level=info msg="StartContainer for \"6f48188e79477fd421d7d1b5ba531812216f932859b27ef86c3096c19d16c003\" returns successfully"
Jan 17 12:30:34.760348 kubelet[2803]: I0117 12:30:34.760323 2803 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 17 12:30:34.788484 kubelet[2803]: I0117 12:30:34.787653 2803 topology_manager.go:215] "Topology Admit Handler" podUID="8f56ba96-19af-46a6-9b42-7cf9d5ca9094" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9hvxg"
Jan 17 12:30:34.789148 kubelet[2803]: I0117 12:30:34.788967 2803 topology_manager.go:215] "Topology Admit Handler" podUID="8b5c4c23-a1b8-4274-a9ad-8a84c2e53db3" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fckmk"
Jan 17 12:30:34.798866 systemd[1]: Created slice kubepods-burstable-pod8f56ba96_19af_46a6_9b42_7cf9d5ca9094.slice - libcontainer container kubepods-burstable-pod8f56ba96_19af_46a6_9b42_7cf9d5ca9094.slice.
Jan 17 12:30:34.807092 systemd[1]: Created slice kubepods-burstable-pod8b5c4c23_a1b8_4274_a9ad_8a84c2e53db3.slice - libcontainer container kubepods-burstable-pod8b5c4c23_a1b8_4274_a9ad_8a84c2e53db3.slice.
Jan 17 12:30:34.828757 kubelet[2803]: I0117 12:30:34.828706 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b5c4c23-a1b8-4274-a9ad-8a84c2e53db3-config-volume\") pod \"coredns-7db6d8ff4d-fckmk\" (UID: \"8b5c4c23-a1b8-4274-a9ad-8a84c2e53db3\") " pod="kube-system/coredns-7db6d8ff4d-fckmk"
Jan 17 12:30:34.828757 kubelet[2803]: I0117 12:30:34.828756 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f56ba96-19af-46a6-9b42-7cf9d5ca9094-config-volume\") pod \"coredns-7db6d8ff4d-9hvxg\" (UID: \"8f56ba96-19af-46a6-9b42-7cf9d5ca9094\") " pod="kube-system/coredns-7db6d8ff4d-9hvxg"
Jan 17 12:30:34.828895 kubelet[2803]: I0117 12:30:34.828782 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49796\" (UniqueName: \"kubernetes.io/projected/8f56ba96-19af-46a6-9b42-7cf9d5ca9094-kube-api-access-49796\") pod \"coredns-7db6d8ff4d-9hvxg\" (UID: \"8f56ba96-19af-46a6-9b42-7cf9d5ca9094\") " pod="kube-system/coredns-7db6d8ff4d-9hvxg"
Jan 17 12:30:34.828895 kubelet[2803]: I0117 12:30:34.828808 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sff8q\" (UniqueName: \"kubernetes.io/projected/8b5c4c23-a1b8-4274-a9ad-8a84c2e53db3-kube-api-access-sff8q\") pod \"coredns-7db6d8ff4d-fckmk\" (UID: \"8b5c4c23-a1b8-4274-a9ad-8a84c2e53db3\") " pod="kube-system/coredns-7db6d8ff4d-fckmk"
Jan 17 12:30:35.107485 containerd[1485]: time="2025-01-17T12:30:35.106482633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hvxg,Uid:8f56ba96-19af-46a6-9b42-7cf9d5ca9094,Namespace:kube-system,Attempt:0,}"
Jan 17 12:30:35.110407 containerd[1485]: time="2025-01-17T12:30:35.110366455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fckmk,Uid:8b5c4c23-a1b8-4274-a9ad-8a84c2e53db3,Namespace:kube-system,Attempt:0,}"
Jan 17 12:30:35.517732 kubelet[2803]: I0117 12:30:35.517631 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cdhsv" podStartSLOduration=7.701773134 podStartE2EDuration="23.517560501s" podCreationTimestamp="2025-01-17 12:30:12 +0000 UTC" firstStartedPulling="2025-01-17 12:30:14.500447911 +0000 UTC m=+17.214482642" lastFinishedPulling="2025-01-17 12:30:30.316235279 +0000 UTC m=+33.030270009" observedRunningTime="2025-01-17 12:30:35.516786461 +0000 UTC m=+38.230821201" watchObservedRunningTime="2025-01-17 12:30:35.517560501 +0000 UTC m=+38.231595230"
Jan 17 12:30:37.021485 systemd-networkd[1391]: cilium_host: Link UP
Jan 17 12:30:37.021966 systemd-networkd[1391]: cilium_net: Link UP
Jan 17 12:30:37.022552 systemd-networkd[1391]: cilium_net: Gained carrier
Jan 17 12:30:37.022863 systemd-networkd[1391]: cilium_host: Gained carrier
Jan 17 12:30:37.126825 systemd-networkd[1391]: cilium_vxlan: Link UP
Jan 17 12:30:37.126844 systemd-networkd[1391]: cilium_vxlan: Gained carrier
Jan 17 12:30:37.407805 systemd-networkd[1391]: cilium_host: Gained IPv6LL
Jan 17 12:30:37.493983 kernel: NET: Registered PF_ALG protocol family
Jan 17 12:30:37.750143 systemd-networkd[1391]: cilium_net: Gained IPv6LL
Jan 17 12:30:38.139294 systemd-networkd[1391]: lxc_health: Link UP
Jan 17 12:30:38.153069 systemd-networkd[1391]: lxc_health: Gained carrier
Jan 17 12:30:38.692903 systemd-networkd[1391]: lxc5bb126c66b5b: Link UP
Jan 17 12:30:38.697863 systemd-networkd[1391]: lxce4af728ca6be: Link UP
Jan 17 12:30:38.702952 kernel: eth0: renamed from tmp9e833
Jan 17 12:30:38.710662 kernel: eth0: renamed from tmp5c43b
Jan 17 12:30:38.717823 systemd-networkd[1391]: lxc5bb126c66b5b: Gained carrier
Jan 17 12:30:38.721731 systemd-networkd[1391]: lxce4af728ca6be: Gained carrier
Jan 17 12:30:39.094114 systemd-networkd[1391]: cilium_vxlan: Gained IPv6LL
Jan 17 12:30:39.670166 systemd-networkd[1391]: lxc_health: Gained IPv6LL
Jan 17 12:30:39.862199 systemd-networkd[1391]: lxc5bb126c66b5b: Gained IPv6LL
Jan 17 12:30:40.319187 systemd-networkd[1391]: lxce4af728ca6be: Gained IPv6LL
Jan 17 12:30:41.969005 containerd[1485]: time="2025-01-17T12:30:41.967799926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:30:41.969005 containerd[1485]: time="2025-01-17T12:30:41.967851753Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:30:41.969005 containerd[1485]: time="2025-01-17T12:30:41.967873124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:30:41.969005 containerd[1485]: time="2025-01-17T12:30:41.967987206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:30:41.990363 containerd[1485]: time="2025-01-17T12:30:41.989575724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:30:41.997126 containerd[1485]: time="2025-01-17T12:30:41.994060561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:30:41.997126 containerd[1485]: time="2025-01-17T12:30:41.994081649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:30:41.997126 containerd[1485]: time="2025-01-17T12:30:41.994179383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:30:41.999127 systemd[1]: Started cri-containerd-9e833b5cdae2c905300e283d705d755efc6f04656212ba968f07dbae3bdc6f47.scope - libcontainer container 9e833b5cdae2c905300e283d705d755efc6f04656212ba968f07dbae3bdc6f47.
Jan 17 12:30:42.035023 systemd[1]: run-containerd-runc-k8s.io-5c43b868740ecc97f72c9bda0a8638858b6c842b392b5ac68c3a3997d08baa6e-runc.yeoD7e.mount: Deactivated successfully.
Jan 17 12:30:42.048205 systemd[1]: Started cri-containerd-5c43b868740ecc97f72c9bda0a8638858b6c842b392b5ac68c3a3997d08baa6e.scope - libcontainer container 5c43b868740ecc97f72c9bda0a8638858b6c842b392b5ac68c3a3997d08baa6e.
Jan 17 12:30:42.094736 containerd[1485]: time="2025-01-17T12:30:42.094580622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fckmk,Uid:8b5c4c23-a1b8-4274-a9ad-8a84c2e53db3,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e833b5cdae2c905300e283d705d755efc6f04656212ba968f07dbae3bdc6f47\""
Jan 17 12:30:42.107420 containerd[1485]: time="2025-01-17T12:30:42.105673339Z" level=info msg="CreateContainer within sandbox \"9e833b5cdae2c905300e283d705d755efc6f04656212ba968f07dbae3bdc6f47\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 17 12:30:42.154539 containerd[1485]: time="2025-01-17T12:30:42.154492307Z" level=info msg="CreateContainer within sandbox \"9e833b5cdae2c905300e283d705d755efc6f04656212ba968f07dbae3bdc6f47\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"654d15f94ba67edd1f59c9bbbe8f29eedd0c977078d29d307e7c1c64498529a3\""
Jan 17 12:30:42.156931 containerd[1485]: time="2025-01-17T12:30:42.156862725Z" level=info msg="StartContainer for \"654d15f94ba67edd1f59c9bbbe8f29eedd0c977078d29d307e7c1c64498529a3\""
Jan 17 12:30:42.167018 containerd[1485]: time="2025-01-17T12:30:42.166795140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hvxg,Uid:8f56ba96-19af-46a6-9b42-7cf9d5ca9094,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c43b868740ecc97f72c9bda0a8638858b6c842b392b5ac68c3a3997d08baa6e\""
Jan 17 12:30:42.172335 containerd[1485]: time="2025-01-17T12:30:42.172255253Z" level=info msg="CreateContainer within sandbox \"5c43b868740ecc97f72c9bda0a8638858b6c842b392b5ac68c3a3997d08baa6e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 17 12:30:42.187873 containerd[1485]: time="2025-01-17T12:30:42.187828408Z" level=info msg="CreateContainer within sandbox \"5c43b868740ecc97f72c9bda0a8638858b6c842b392b5ac68c3a3997d08baa6e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"70e4a732d8951293b20310f0a0039af5e31375a66c3bbdf4f485f7a92d3e91dd\""
Jan 17 12:30:42.188728 containerd[1485]: time="2025-01-17T12:30:42.188693748Z" level=info msg="StartContainer for \"70e4a732d8951293b20310f0a0039af5e31375a66c3bbdf4f485f7a92d3e91dd\""
Jan 17 12:30:42.205156 systemd[1]: Started cri-containerd-654d15f94ba67edd1f59c9bbbe8f29eedd0c977078d29d307e7c1c64498529a3.scope - libcontainer container 654d15f94ba67edd1f59c9bbbe8f29eedd0c977078d29d307e7c1c64498529a3.
Jan 17 12:30:42.226025 systemd[1]: Started cri-containerd-70e4a732d8951293b20310f0a0039af5e31375a66c3bbdf4f485f7a92d3e91dd.scope - libcontainer container 70e4a732d8951293b20310f0a0039af5e31375a66c3bbdf4f485f7a92d3e91dd.
Jan 17 12:30:42.257518 containerd[1485]: time="2025-01-17T12:30:42.257472244Z" level=info msg="StartContainer for \"654d15f94ba67edd1f59c9bbbe8f29eedd0c977078d29d307e7c1c64498529a3\" returns successfully"
Jan 17 12:30:42.264826 containerd[1485]: time="2025-01-17T12:30:42.264796677Z" level=info msg="StartContainer for \"70e4a732d8951293b20310f0a0039af5e31375a66c3bbdf4f485f7a92d3e91dd\" returns successfully"
Jan 17 12:30:42.531152 kubelet[2803]: I0117 12:30:42.531007 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fckmk" podStartSLOduration=30.530967475 podStartE2EDuration="30.530967475s" podCreationTimestamp="2025-01-17 12:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:30:42.530415472 +0000 UTC m=+45.244450242" watchObservedRunningTime="2025-01-17 12:30:42.530967475 +0000 UTC m=+45.245002245"
Jan 17 12:30:42.561490 kubelet[2803]: I0117 12:30:42.561168 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9hvxg" podStartSLOduration=30.56113246 podStartE2EDuration="30.56113246s" podCreationTimestamp="2025-01-17 12:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:30:42.561058391 +0000 UTC m=+45.275093120" watchObservedRunningTime="2025-01-17 12:30:42.56113246 +0000 UTC m=+45.275167190"
Jan 17 12:30:42.973144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2597803915.mount: Deactivated successfully.
Jan 17 12:34:47.962155 systemd[1]: Started sshd@7-138.199.154.203:22-139.178.89.65:53970.service - OpenSSH per-connection server daemon (139.178.89.65:53970).
Jan 17 12:34:48.937736 sshd[4207]: Accepted publickey for core from 139.178.89.65 port 53970 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc
Jan 17 12:34:48.940001 sshd[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:34:48.946461 systemd-logind[1467]: New session 8 of user core.
Jan 17 12:34:48.950052 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 17 12:34:50.035768 sshd[4207]: pam_unix(sshd:session): session closed for user core
Jan 17 12:34:50.038721 systemd[1]: sshd@7-138.199.154.203:22-139.178.89.65:53970.service: Deactivated successfully.
Jan 17 12:34:50.040714 systemd[1]: session-8.scope: Deactivated successfully.
Jan 17 12:34:50.042170 systemd-logind[1467]: Session 8 logged out. Waiting for processes to exit.
Jan 17 12:34:50.043899 systemd-logind[1467]: Removed session 8.
Jan 17 12:34:55.211257 systemd[1]: Started sshd@8-138.199.154.203:22-139.178.89.65:43250.service - OpenSSH per-connection server daemon (139.178.89.65:43250).
Jan 17 12:34:56.191565 sshd[4222]: Accepted publickey for core from 139.178.89.65 port 43250 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc
Jan 17 12:34:56.193682 sshd[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:34:56.199716 systemd-logind[1467]: New session 9 of user core.
Jan 17 12:34:56.205148 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 17 12:34:56.939184 sshd[4222]: pam_unix(sshd:session): session closed for user core
Jan 17 12:34:56.943794 systemd[1]: sshd@8-138.199.154.203:22-139.178.89.65:43250.service: Deactivated successfully.
Jan 17 12:34:56.946458 systemd[1]: session-9.scope: Deactivated successfully.
Jan 17 12:34:56.947374 systemd-logind[1467]: Session 9 logged out. Waiting for processes to exit.
Jan 17 12:34:56.948564 systemd-logind[1467]: Removed session 9.
Jan 17 12:35:02.113198 systemd[1]: Started sshd@9-138.199.154.203:22-139.178.89.65:43860.service - OpenSSH per-connection server daemon (139.178.89.65:43860).
Jan 17 12:35:03.096189 sshd[4238]: Accepted publickey for core from 139.178.89.65 port 43860 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc
Jan 17 12:35:03.098086 sshd[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:35:03.102858 systemd-logind[1467]: New session 10 of user core.
Jan 17 12:35:03.108082 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 17 12:35:03.829568 sshd[4238]: pam_unix(sshd:session): session closed for user core
Jan 17 12:35:03.834158 systemd-logind[1467]: Session 10 logged out. Waiting for processes to exit.
Jan 17 12:35:03.835287 systemd[1]: sshd@9-138.199.154.203:22-139.178.89.65:43860.service: Deactivated successfully.
Jan 17 12:35:03.839439 systemd[1]: session-10.scope: Deactivated successfully.
Jan 17 12:35:03.841104 systemd-logind[1467]: Removed session 10.
Jan 17 12:35:09.007199 systemd[1]: Started sshd@10-138.199.154.203:22-139.178.89.65:43874.service - OpenSSH per-connection server daemon (139.178.89.65:43874).
Jan 17 12:35:09.987991 sshd[4252]: Accepted publickey for core from 139.178.89.65 port 43874 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc
Jan 17 12:35:09.989881 sshd[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:35:09.995081 systemd-logind[1467]: New session 11 of user core.
Jan 17 12:35:09.999074 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 17 12:35:10.721770 sshd[4252]: pam_unix(sshd:session): session closed for user core
Jan 17 12:35:10.725828 systemd[1]: sshd@10-138.199.154.203:22-139.178.89.65:43874.service: Deactivated successfully.
Jan 17 12:35:10.727974 systemd[1]: session-11.scope: Deactivated successfully.
Jan 17 12:35:10.729513 systemd-logind[1467]: Session 11 logged out. Waiting for processes to exit.
Jan 17 12:35:10.730571 systemd-logind[1467]: Removed session 11.
Jan 17 12:35:10.895213 systemd[1]: Started sshd@11-138.199.154.203:22-139.178.89.65:43886.service - OpenSSH per-connection server daemon (139.178.89.65:43886).
Jan 17 12:35:11.874065 sshd[4266]: Accepted publickey for core from 139.178.89.65 port 43886 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc
Jan 17 12:35:11.875700 sshd[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:35:11.881664 systemd-logind[1467]: New session 12 of user core.
Jan 17 12:35:11.887115 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 17 12:35:12.646093 sshd[4266]: pam_unix(sshd:session): session closed for user core
Jan 17 12:35:12.653195 systemd[1]: sshd@11-138.199.154.203:22-139.178.89.65:43886.service: Deactivated successfully.
Jan 17 12:35:12.656414 systemd[1]: session-12.scope: Deactivated successfully.
Jan 17 12:35:12.659663 systemd-logind[1467]: Session 12 logged out. Waiting for processes to exit.
Jan 17 12:35:12.661457 systemd-logind[1467]: Removed session 12.
Jan 17 12:35:12.816186 systemd[1]: Started sshd@12-138.199.154.203:22-139.178.89.65:51862.service - OpenSSH per-connection server daemon (139.178.89.65:51862).
Jan 17 12:35:13.789124 sshd[4277]: Accepted publickey for core from 139.178.89.65 port 51862 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc
Jan 17 12:35:13.791302 sshd[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:35:13.797284 systemd-logind[1467]: New session 13 of user core.
Jan 17 12:35:13.799091 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 17 12:35:14.535208 sshd[4277]: pam_unix(sshd:session): session closed for user core
Jan 17 12:35:14.540091 systemd[1]: sshd@12-138.199.154.203:22-139.178.89.65:51862.service: Deactivated successfully.
Jan 17 12:35:14.544152 systemd[1]: session-13.scope: Deactivated successfully.
Jan 17 12:35:14.546198 systemd-logind[1467]: Session 13 logged out. Waiting for processes to exit.
Jan 17 12:35:14.548891 systemd-logind[1467]: Removed session 13.
Jan 17 12:35:19.713132 systemd[1]: Started sshd@13-138.199.154.203:22-139.178.89.65:51868.service - OpenSSH per-connection server daemon (139.178.89.65:51868).
Jan 17 12:35:20.686883 sshd[4292]: Accepted publickey for core from 139.178.89.65 port 51868 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc
Jan 17 12:35:20.688776 sshd[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:35:20.694739 systemd-logind[1467]: New session 14 of user core.
Jan 17 12:35:20.701101 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 17 12:35:21.434133 sshd[4292]: pam_unix(sshd:session): session closed for user core
Jan 17 12:35:21.437033 systemd[1]: sshd@13-138.199.154.203:22-139.178.89.65:51868.service: Deactivated successfully.
Jan 17 12:35:21.439658 systemd[1]: session-14.scope: Deactivated successfully.
Jan 17 12:35:21.441294 systemd-logind[1467]: Session 14 logged out. Waiting for processes to exit.
Jan 17 12:35:21.442534 systemd-logind[1467]: Removed session 14.
Jan 17 12:35:21.608177 systemd[1]: Started sshd@14-138.199.154.203:22-139.178.89.65:59558.service - OpenSSH per-connection server daemon (139.178.89.65:59558).
Jan 17 12:35:22.573381 sshd[4305]: Accepted publickey for core from 139.178.89.65 port 59558 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc
Jan 17 12:35:22.575301 sshd[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:35:22.580500 systemd-logind[1467]: New session 15 of user core.
Jan 17 12:35:22.586068 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 17 12:35:23.524643 sshd[4305]: pam_unix(sshd:session): session closed for user core
Jan 17 12:35:23.531650 systemd[1]: sshd@14-138.199.154.203:22-139.178.89.65:59558.service: Deactivated successfully.
Jan 17 12:35:23.534255 systemd[1]: session-15.scope: Deactivated successfully.
Jan 17 12:35:23.536259 systemd-logind[1467]: Session 15 logged out. Waiting for processes to exit.
Jan 17 12:35:23.538187 systemd-logind[1467]: Removed session 15.
Jan 17 12:35:23.701167 systemd[1]: Started sshd@15-138.199.154.203:22-139.178.89.65:59562.service - OpenSSH per-connection server daemon (139.178.89.65:59562).
Jan 17 12:35:24.692724 sshd[4316]: Accepted publickey for core from 139.178.89.65 port 59562 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc
Jan 17 12:35:24.694828 sshd[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:35:24.700005 systemd-logind[1467]: New session 16 of user core.
Jan 17 12:35:24.705293 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 17 12:35:26.891210 sshd[4316]: pam_unix(sshd:session): session closed for user core
Jan 17 12:35:26.895865 systemd[1]: sshd@15-138.199.154.203:22-139.178.89.65:59562.service: Deactivated successfully.
Jan 17 12:35:26.898166 systemd[1]: session-16.scope: Deactivated successfully.
Jan 17 12:35:26.900293 systemd-logind[1467]: Session 16 logged out. Waiting for processes to exit.
Jan 17 12:35:26.903228 systemd-logind[1467]: Removed session 16.
Jan 17 12:35:27.060227 systemd[1]: Started sshd@16-138.199.154.203:22-139.178.89.65:59566.service - OpenSSH per-connection server daemon (139.178.89.65:59566).
Jan 17 12:35:28.036604 sshd[4334]: Accepted publickey for core from 139.178.89.65 port 59566 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc
Jan 17 12:35:28.038406 sshd[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:35:28.043441 systemd-logind[1467]: New session 17 of user core.
Jan 17 12:35:28.051091 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 17 12:35:28.894898 sshd[4334]: pam_unix(sshd:session): session closed for user core
Jan 17 12:35:28.899582 systemd[1]: sshd@16-138.199.154.203:22-139.178.89.65:59566.service: Deactivated successfully.
Jan 17 12:35:28.902263 systemd[1]: session-17.scope: Deactivated successfully.
Jan 17 12:35:28.903092 systemd-logind[1467]: Session 17 logged out. Waiting for processes to exit.
Jan 17 12:35:28.904446 systemd-logind[1467]: Removed session 17.
Jan 17 12:35:29.077254 systemd[1]: Started sshd@17-138.199.154.203:22-139.178.89.65:59568.service - OpenSSH per-connection server daemon (139.178.89.65:59568).
Jan 17 12:35:30.068232 sshd[4345]: Accepted publickey for core from 139.178.89.65 port 59568 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc
Jan 17 12:35:30.070107 sshd[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:35:30.075612 systemd-logind[1467]: New session 18 of user core.
Jan 17 12:35:30.081102 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 17 12:35:30.809068 sshd[4345]: pam_unix(sshd:session): session closed for user core
Jan 17 12:35:30.812070 systemd[1]: sshd@17-138.199.154.203:22-139.178.89.65:59568.service: Deactivated successfully.
Jan 17 12:35:30.814497 systemd[1]: session-18.scope: Deactivated successfully.
Jan 17 12:35:30.816587 systemd-logind[1467]: Session 18 logged out. Waiting for processes to exit.
Jan 17 12:35:30.818102 systemd-logind[1467]: Removed session 18.
Jan 17 12:35:35.982183 systemd[1]: Started sshd@18-138.199.154.203:22-139.178.89.65:56210.service - OpenSSH per-connection server daemon (139.178.89.65:56210).
Jan 17 12:35:36.961357 sshd[4361]: Accepted publickey for core from 139.178.89.65 port 56210 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc
Jan 17 12:35:36.963303 sshd[4361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:35:36.968826 systemd-logind[1467]: New session 19 of user core.
Jan 17 12:35:36.973079 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 17 12:35:37.706021 sshd[4361]: pam_unix(sshd:session): session closed for user core
Jan 17 12:35:37.710713 systemd[1]: sshd@18-138.199.154.203:22-139.178.89.65:56210.service: Deactivated successfully.
Jan 17 12:35:37.713770 systemd[1]: session-19.scope: Deactivated successfully.
Jan 17 12:35:37.714894 systemd-logind[1467]: Session 19 logged out. Waiting for processes to exit.
Jan 17 12:35:37.716703 systemd-logind[1467]: Removed session 19.
Jan 17 12:35:42.879196 systemd[1]: Started sshd@19-138.199.154.203:22-139.178.89.65:51150.service - OpenSSH per-connection server daemon (139.178.89.65:51150).
Jan 17 12:35:43.849063 sshd[4374]: Accepted publickey for core from 139.178.89.65 port 51150 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc
Jan 17 12:35:43.851231 sshd[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:35:43.856237 systemd-logind[1467]: New session 20 of user core.
Jan 17 12:35:43.863063 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 17 12:35:44.597271 sshd[4374]: pam_unix(sshd:session): session closed for user core
Jan 17 12:35:44.601062 systemd-logind[1467]: Session 20 logged out. Waiting for processes to exit.
Jan 17 12:35:44.603054 systemd[1]: sshd@19-138.199.154.203:22-139.178.89.65:51150.service: Deactivated successfully.
Jan 17 12:35:44.604948 systemd[1]: session-20.scope: Deactivated successfully.
Jan 17 12:35:44.605903 systemd-logind[1467]: Removed session 20.
Jan 17 12:35:44.773353 systemd[1]: Started sshd@20-138.199.154.203:22-139.178.89.65:51162.service - OpenSSH per-connection server daemon (139.178.89.65:51162).
Jan 17 12:35:45.740054 sshd[4387]: Accepted publickey for core from 139.178.89.65 port 51162 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc
Jan 17 12:35:45.741939 sshd[4387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:35:45.747378 systemd-logind[1467]: New session 21 of user core.
Jan 17 12:35:45.759083 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 17 12:35:47.580970 containerd[1485]: time="2025-01-17T12:35:47.580849949Z" level=info msg="StopContainer for \"d240af293fc373f6eaa802d2718c7d10a23980aaf83d643bba15619d54380b5c\" with timeout 30 (s)"
Jan 17 12:35:47.583341 containerd[1485]: time="2025-01-17T12:35:47.583199364Z" level=info msg="Stop container \"d240af293fc373f6eaa802d2718c7d10a23980aaf83d643bba15619d54380b5c\" with signal terminated"
Jan 17 12:35:47.609392 systemd[1]: cri-containerd-d240af293fc373f6eaa802d2718c7d10a23980aaf83d643bba15619d54380b5c.scope: Deactivated successfully.
Jan 17 12:35:47.641639 containerd[1485]: time="2025-01-17T12:35:47.641599995Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 17 12:35:47.650702 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d240af293fc373f6eaa802d2718c7d10a23980aaf83d643bba15619d54380b5c-rootfs.mount: Deactivated successfully.
Jan 17 12:35:47.653860 containerd[1485]: time="2025-01-17T12:35:47.653760582Z" level=info msg="StopContainer for \"6f48188e79477fd421d7d1b5ba531812216f932859b27ef86c3096c19d16c003\" with timeout 2 (s)"
Jan 17 12:35:47.654433 containerd[1485]: time="2025-01-17T12:35:47.654221575Z" level=info msg="Stop container \"6f48188e79477fd421d7d1b5ba531812216f932859b27ef86c3096c19d16c003\" with signal terminated"
Jan 17 12:35:47.654612 containerd[1485]: time="2025-01-17T12:35:47.654557072Z" level=info msg="shim disconnected" id=d240af293fc373f6eaa802d2718c7d10a23980aaf83d643bba15619d54380b5c namespace=k8s.io
Jan 17 12:35:47.654705 containerd[1485]: time="2025-01-17T12:35:47.654689410Z" level=warning msg="cleaning up after shim disconnected" id=d240af293fc373f6eaa802d2718c7d10a23980aaf83d643bba15619d54380b5c namespace=k8s.io
Jan 17 12:35:47.654816 containerd[1485]: time="2025-01-17T12:35:47.654788085Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:35:47.665187 systemd-networkd[1391]: lxc_health: Link DOWN
Jan 17 12:35:47.665198 systemd-networkd[1391]: lxc_health: Lost carrier
Jan 17 12:35:47.686610 containerd[1485]: time="2025-01-17T12:35:47.686472903Z" level=info msg="StopContainer for \"d240af293fc373f6eaa802d2718c7d10a23980aaf83d643bba15619d54380b5c\" returns successfully"
Jan 17 12:35:47.687797 containerd[1485]: time="2025-01-17T12:35:47.687365192Z" level=info msg="StopPodSandbox for \"97b10edf4d721411f22e23d3e7e4b37e7dc5fa079944f956904dfd79a43a58c4\""
Jan 17 12:35:47.687797 containerd[1485]: time="2025-01-17T12:35:47.687439552Z" level=info msg="Container to stop \"d240af293fc373f6eaa802d2718c7d10a23980aaf83d643bba15619d54380b5c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:35:47.691164 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-97b10edf4d721411f22e23d3e7e4b37e7dc5fa079944f956904dfd79a43a58c4-shm.mount: Deactivated successfully.
Jan 17 12:35:47.695936 systemd[1]: cri-containerd-6f48188e79477fd421d7d1b5ba531812216f932859b27ef86c3096c19d16c003.scope: Deactivated successfully.
Jan 17 12:35:47.696240 systemd[1]: cri-containerd-6f48188e79477fd421d7d1b5ba531812216f932859b27ef86c3096c19d16c003.scope: Consumed 7.384s CPU time.
Jan 17 12:35:47.703538 systemd[1]: cri-containerd-97b10edf4d721411f22e23d3e7e4b37e7dc5fa079944f956904dfd79a43a58c4.scope: Deactivated successfully.
Jan 17 12:35:47.735432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97b10edf4d721411f22e23d3e7e4b37e7dc5fa079944f956904dfd79a43a58c4-rootfs.mount: Deactivated successfully.
Jan 17 12:35:47.741942 containerd[1485]: time="2025-01-17T12:35:47.741729364Z" level=info msg="shim disconnected" id=6f48188e79477fd421d7d1b5ba531812216f932859b27ef86c3096c19d16c003 namespace=k8s.io
Jan 17 12:35:47.742255 containerd[1485]: time="2025-01-17T12:35:47.742116888Z" level=warning msg="cleaning up after shim disconnected" id=6f48188e79477fd421d7d1b5ba531812216f932859b27ef86c3096c19d16c003 namespace=k8s.io
Jan 17 12:35:47.742255 containerd[1485]: time="2025-01-17T12:35:47.742132458Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:35:47.744018 containerd[1485]: time="2025-01-17T12:35:47.743003217Z" level=info msg="shim disconnected" id=97b10edf4d721411f22e23d3e7e4b37e7dc5fa079944f956904dfd79a43a58c4 namespace=k8s.io
Jan 17 12:35:47.744018 containerd[1485]: time="2025-01-17T12:35:47.743038422Z" level=warning msg="cleaning up after shim disconnected" id=97b10edf4d721411f22e23d3e7e4b37e7dc5fa079944f956904dfd79a43a58c4 namespace=k8s.io
Jan 17 12:35:47.744018 containerd[1485]: time="2025-01-17T12:35:47.743046517Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:35:47.761673 containerd[1485]: time="2025-01-17T12:35:47.761614460Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:35:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 17 12:35:47.763387 containerd[1485]: time="2025-01-17T12:35:47.763355086Z" level=info msg="StopContainer for \"6f48188e79477fd421d7d1b5ba531812216f932859b27ef86c3096c19d16c003\" returns successfully"
Jan 17 12:35:47.764033 containerd[1485]: time="2025-01-17T12:35:47.763997337Z" level=info msg="StopPodSandbox for \"3e10f057b6a8daea2694dbec636e47cc52aaa5e92f2fe2f1430526b3ccb62f43\""
Jan 17 12:35:47.764147 containerd[1485]: time="2025-01-17T12:35:47.764130075Z" level=info msg="Container to stop \"438f8d73789b071edff384922fa75c6324464dece028138b798fe91b579eddb6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:35:47.764247 containerd[1485]: time="2025-01-17T12:35:47.764232227Z" level=info msg="Container to stop \"6f48188e79477fd421d7d1b5ba531812216f932859b27ef86c3096c19d16c003\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:35:47.764333 containerd[1485]: time="2025-01-17T12:35:47.764318187Z" level=info msg="Container to stop \"c28b3948e0bfbdab1de3eef61866a84a5a32a3bfbfb8254781aaf808538cbcd2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:35:47.764403 containerd[1485]: time="2025-01-17T12:35:47.764372719Z" level=info msg="Container to stop \"6bca338a5264a67b88301ce64896967969c053975ef41d93d843cca033ebf4a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:35:47.764516 containerd[1485]: time="2025-01-17T12:35:47.764450045Z" level=info msg="Container to stop \"9771a26dbb08249ef7f5e8fd83ae5ee5bf34e4520369120555c143422ed0b8cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:35:47.768765 containerd[1485]: time="2025-01-17T12:35:47.768707689Z" level=info msg="TearDown network for sandbox \"97b10edf4d721411f22e23d3e7e4b37e7dc5fa079944f956904dfd79a43a58c4\" successfully"
Jan 17 12:35:47.768765 containerd[1485]: time="2025-01-17T12:35:47.768733397Z" level=info msg="StopPodSandbox for \"97b10edf4d721411f22e23d3e7e4b37e7dc5fa079944f956904dfd79a43a58c4\" returns successfully"
Jan 17 12:35:47.772214 systemd[1]: cri-containerd-3e10f057b6a8daea2694dbec636e47cc52aaa5e92f2fe2f1430526b3ccb62f43.scope: Deactivated successfully.
Jan 17 12:35:47.799414 containerd[1485]: time="2025-01-17T12:35:47.799348484Z" level=info msg="shim disconnected" id=3e10f057b6a8daea2694dbec636e47cc52aaa5e92f2fe2f1430526b3ccb62f43 namespace=k8s.io
Jan 17 12:35:47.799414 containerd[1485]: time="2025-01-17T12:35:47.799406302Z" level=warning msg="cleaning up after shim disconnected" id=3e10f057b6a8daea2694dbec636e47cc52aaa5e92f2fe2f1430526b3ccb62f43 namespace=k8s.io
Jan 17 12:35:47.799639 containerd[1485]: time="2025-01-17T12:35:47.799420919Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:35:47.812268 containerd[1485]: time="2025-01-17T12:35:47.812211895Z" level=info msg="TearDown network for sandbox \"3e10f057b6a8daea2694dbec636e47cc52aaa5e92f2fe2f1430526b3ccb62f43\" successfully"
Jan 17 12:35:47.812268 containerd[1485]: time="2025-01-17T12:35:47.812249275Z" level=info msg="StopPodSandbox for \"3e10f057b6a8daea2694dbec636e47cc52aaa5e92f2fe2f1430526b3ccb62f43\" returns successfully"
Jan 17 12:35:47.956109 kubelet[2803]: I0117 12:35:47.955305 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-lib-modules\") pod \"0cd4201d-6182-45c3-b96d-10ca1338b05b\" (UID: \"0cd4201d-6182-45c3-b96d-10ca1338b05b\") "
Jan 17 12:35:47.956109 kubelet[2803]: I0117 12:35:47.955382 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-host-proc-sys-kernel\") pod \"0cd4201d-6182-45c3-b96d-10ca1338b05b\" (UID: \"0cd4201d-6182-45c3-b96d-10ca1338b05b\") "
Jan 17 12:35:47.956109 kubelet[2803]: I0117 12:35:47.955419 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c150322-227c-4e0b-84ec-c7418e0cb6ec-cilium-config-path\") pod \"9c150322-227c-4e0b-84ec-c7418e0cb6ec\" (UID: \"9c150322-227c-4e0b-84ec-c7418e0cb6ec\") "
Jan 17 12:35:47.956109 kubelet[2803]: I0117 12:35:47.955445 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-cni-path\") pod \"0cd4201d-6182-45c3-b96d-10ca1338b05b\" (UID: \"0cd4201d-6182-45c3-b96d-10ca1338b05b\") "
Jan 17 12:35:47.956109 kubelet[2803]: I0117 12:35:47.955470 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-cilium-cgroup\") pod \"0cd4201d-6182-45c3-b96d-10ca1338b05b\" (UID: \"0cd4201d-6182-45c3-b96d-10ca1338b05b\") "
Jan 17 12:35:47.956109 kubelet[2803]: I0117 12:35:47.955494 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-cilium-run\") pod \"0cd4201d-6182-45c3-b96d-10ca1338b05b\" (UID: \"0cd4201d-6182-45c3-b96d-10ca1338b05b\") "
Jan 17 12:35:47.956884 kubelet[2803]: I0117 12:35:47.955516 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-etc-cni-netd\") pod \"0cd4201d-6182-45c3-b96d-10ca1338b05b\" (UID: \"0cd4201d-6182-45c3-b96d-10ca1338b05b\") "
Jan 17 12:35:47.956884 kubelet[2803]: I0117 12:35:47.955539 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-xtables-lock\") pod \"0cd4201d-6182-45c3-b96d-10ca1338b05b\" (UID: \"0cd4201d-6182-45c3-b96d-10ca1338b05b\") "
Jan 17 12:35:47.956884 kubelet[2803]: I0117 12:35:47.955569 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59s4s\" (UniqueName: \"kubernetes.io/projected/9c150322-227c-4e0b-84ec-c7418e0cb6ec-kube-api-access-59s4s\") pod \"9c150322-227c-4e0b-84ec-c7418e0cb6ec\" (UID: \"9c150322-227c-4e0b-84ec-c7418e0cb6ec\") "
Jan 17 12:35:47.956884 kubelet[2803]: I0117 12:35:47.955594 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljp65\" (UniqueName: \"kubernetes.io/projected/0cd4201d-6182-45c3-b96d-10ca1338b05b-kube-api-access-ljp65\") pod \"0cd4201d-6182-45c3-b96d-10ca1338b05b\" (UID: \"0cd4201d-6182-45c3-b96d-10ca1338b05b\") "
Jan 17 12:35:47.956884 kubelet[2803]: I0117 12:35:47.955617 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-hostproc\") pod \"0cd4201d-6182-45c3-b96d-10ca1338b05b\" (UID: \"0cd4201d-6182-45c3-b96d-10ca1338b05b\") "
Jan 17 12:35:47.956884 kubelet[2803]: I0117 12:35:47.955640 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-host-proc-sys-net\") pod \"0cd4201d-6182-45c3-b96d-10ca1338b05b\" (UID: \"0cd4201d-6182-45c3-b96d-10ca1338b05b\") "
Jan 17 12:35:47.957202 kubelet[2803]: I0117 12:35:47.955664 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-bpf-maps\") pod \"0cd4201d-6182-45c3-b96d-10ca1338b05b\" (UID: \"0cd4201d-6182-45c3-b96d-10ca1338b05b\") "
Jan 17 12:35:47.957202 kubelet[2803]: I0117 12:35:47.955689 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0cd4201d-6182-45c3-b96d-10ca1338b05b-hubble-tls\") pod \"0cd4201d-6182-45c3-b96d-10ca1338b05b\" (UID: \"0cd4201d-6182-45c3-b96d-10ca1338b05b\") "
Jan 17 12:35:47.957202 kubelet[2803]: I0117 12:35:47.955716 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0cd4201d-6182-45c3-b96d-10ca1338b05b-cilium-config-path\") pod \"0cd4201d-6182-45c3-b96d-10ca1338b05b\" (UID: \"0cd4201d-6182-45c3-b96d-10ca1338b05b\") "
Jan 17 12:35:47.957202 kubelet[2803]: I0117 12:35:47.955742 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0cd4201d-6182-45c3-b96d-10ca1338b05b-clustermesh-secrets\") pod \"0cd4201d-6182-45c3-b96d-10ca1338b05b\" (UID: \"0cd4201d-6182-45c3-b96d-10ca1338b05b\") "
Jan 17 12:35:47.961938 kubelet[2803]: I0117 12:35:47.960663 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0cd4201d-6182-45c3-b96d-10ca1338b05b" (UID: "0cd4201d-6182-45c3-b96d-10ca1338b05b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:35:47.961938 kubelet[2803]: I0117 12:35:47.961890 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0cd4201d-6182-45c3-b96d-10ca1338b05b" (UID: "0cd4201d-6182-45c3-b96d-10ca1338b05b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:35:47.962064 kubelet[2803]: I0117 12:35:47.962042 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0cd4201d-6182-45c3-b96d-10ca1338b05b" (UID: "0cd4201d-6182-45c3-b96d-10ca1338b05b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:35:47.974372 kubelet[2803]: I0117 12:35:47.973238 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-cni-path" (OuterVolumeSpecName: "cni-path") pod "0cd4201d-6182-45c3-b96d-10ca1338b05b" (UID: "0cd4201d-6182-45c3-b96d-10ca1338b05b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:35:47.974372 kubelet[2803]: I0117 12:35:47.973316 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0cd4201d-6182-45c3-b96d-10ca1338b05b" (UID: "0cd4201d-6182-45c3-b96d-10ca1338b05b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:35:47.974372 kubelet[2803]: I0117 12:35:47.973342 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0cd4201d-6182-45c3-b96d-10ca1338b05b" (UID: "0cd4201d-6182-45c3-b96d-10ca1338b05b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:35:47.974372 kubelet[2803]: I0117 12:35:47.973365 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0cd4201d-6182-45c3-b96d-10ca1338b05b" (UID: "0cd4201d-6182-45c3-b96d-10ca1338b05b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:35:47.974763 kubelet[2803]: I0117 12:35:47.974731 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-hostproc" (OuterVolumeSpecName: "hostproc") pod "0cd4201d-6182-45c3-b96d-10ca1338b05b" (UID: "0cd4201d-6182-45c3-b96d-10ca1338b05b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:35:47.974846 kubelet[2803]: I0117 12:35:47.974790 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0cd4201d-6182-45c3-b96d-10ca1338b05b" (UID: "0cd4201d-6182-45c3-b96d-10ca1338b05b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:35:47.974846 kubelet[2803]: I0117 12:35:47.974811 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0cd4201d-6182-45c3-b96d-10ca1338b05b" (UID: "0cd4201d-6182-45c3-b96d-10ca1338b05b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:35:47.979033 kubelet[2803]: I0117 12:35:47.978998 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c150322-227c-4e0b-84ec-c7418e0cb6ec-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9c150322-227c-4e0b-84ec-c7418e0cb6ec" (UID: "9c150322-227c-4e0b-84ec-c7418e0cb6ec"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 17 12:35:47.979124 kubelet[2803]: I0117 12:35:47.979110 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c150322-227c-4e0b-84ec-c7418e0cb6ec-kube-api-access-59s4s" (OuterVolumeSpecName: "kube-api-access-59s4s") pod "9c150322-227c-4e0b-84ec-c7418e0cb6ec" (UID: "9c150322-227c-4e0b-84ec-c7418e0cb6ec"). InnerVolumeSpecName "kube-api-access-59s4s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 17 12:35:47.979200 kubelet[2803]: I0117 12:35:47.979177 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cd4201d-6182-45c3-b96d-10ca1338b05b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0cd4201d-6182-45c3-b96d-10ca1338b05b" (UID: "0cd4201d-6182-45c3-b96d-10ca1338b05b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 17 12:35:47.979744 kubelet[2803]: I0117 12:35:47.979710 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cd4201d-6182-45c3-b96d-10ca1338b05b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0cd4201d-6182-45c3-b96d-10ca1338b05b" (UID: "0cd4201d-6182-45c3-b96d-10ca1338b05b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 17 12:35:47.980341 kubelet[2803]: I0117 12:35:47.980314 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cd4201d-6182-45c3-b96d-10ca1338b05b-kube-api-access-ljp65" (OuterVolumeSpecName: "kube-api-access-ljp65") pod "0cd4201d-6182-45c3-b96d-10ca1338b05b" (UID: "0cd4201d-6182-45c3-b96d-10ca1338b05b"). InnerVolumeSpecName "kube-api-access-ljp65". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 17 12:35:47.983344 kubelet[2803]: I0117 12:35:47.983313 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cd4201d-6182-45c3-b96d-10ca1338b05b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0cd4201d-6182-45c3-b96d-10ca1338b05b" (UID: "0cd4201d-6182-45c3-b96d-10ca1338b05b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 17 12:35:48.058732 kubelet[2803]: I0117 12:35:48.058663 2803 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0cd4201d-6182-45c3-b96d-10ca1338b05b-hubble-tls\") on node \"ci-4081-3-0-0-e492bbae02\" DevicePath \"\""
Jan 17 12:35:48.058732 kubelet[2803]: I0117 12:35:48.058724 2803 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0cd4201d-6182-45c3-b96d-10ca1338b05b-cilium-config-path\") on node \"ci-4081-3-0-0-e492bbae02\" DevicePath \"\""
Jan 17 12:35:48.058732 kubelet[2803]: I0117 12:35:48.058743 2803 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0cd4201d-6182-45c3-b96d-10ca1338b05b-clustermesh-secrets\") on node \"ci-4081-3-0-0-e492bbae02\" DevicePath \"\""
Jan 17 12:35:48.058986 kubelet[2803]: I0117 12:35:48.058758 2803 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-lib-modules\") on node \"ci-4081-3-0-0-e492bbae02\" DevicePath \"\""
Jan 17 12:35:48.058986 kubelet[2803]: I0117 12:35:48.058786 2803 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-host-proc-sys-kernel\") on node \"ci-4081-3-0-0-e492bbae02\" DevicePath \"\""
Jan 17 12:35:48.058986 kubelet[2803]: I0117 12:35:48.058800 2803 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9c150322-227c-4e0b-84ec-c7418e0cb6ec-cilium-config-path\") on node \"ci-4081-3-0-0-e492bbae02\" DevicePath \"\""
Jan 17 12:35:48.058986 kubelet[2803]: I0117 12:35:48.058812 2803 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-cni-path\") on node \"ci-4081-3-0-0-e492bbae02\" DevicePath \"\""
Jan 17 12:35:48.058986 kubelet[2803]: I0117 12:35:48.058825 2803 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-cilium-cgroup\") on node \"ci-4081-3-0-0-e492bbae02\" DevicePath \"\""
Jan 17 12:35:48.058986 kubelet[2803]: I0117 12:35:48.058839 2803 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-cilium-run\") on node \"ci-4081-3-0-0-e492bbae02\" DevicePath \"\""
Jan 17 12:35:48.058986 kubelet[2803]: I0117 12:35:48.058853 2803 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-etc-cni-netd\") on node \"ci-4081-3-0-0-e492bbae02\" DevicePath \"\""
Jan 17 12:35:48.058986 kubelet[2803]: I0117 12:35:48.058866 2803 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-xtables-lock\") on node \"ci-4081-3-0-0-e492bbae02\" DevicePath \"\""
Jan 17 12:35:48.059209 kubelet[2803]: I0117 12:35:48.058878 2803 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-59s4s\" (UniqueName: \"kubernetes.io/projected/9c150322-227c-4e0b-84ec-c7418e0cb6ec-kube-api-access-59s4s\") on node \"ci-4081-3-0-0-e492bbae02\" DevicePath \"\""
Jan 17 12:35:48.059209 kubelet[2803]: I0117 12:35:48.058903 2803 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-hostproc\") on node \"ci-4081-3-0-0-e492bbae02\" DevicePath \"\""
Jan 17 12:35:48.059209 kubelet[2803]: I0117 12:35:48.058942 2803 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-host-proc-sys-net\") on node \"ci-4081-3-0-0-e492bbae02\" DevicePath \"\""
Jan 17 12:35:48.059209 kubelet[2803]: I0117 12:35:48.058955 2803 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0cd4201d-6182-45c3-b96d-10ca1338b05b-bpf-maps\") on node \"ci-4081-3-0-0-e492bbae02\" DevicePath \"\""
Jan 17 12:35:48.059209 kubelet[2803]: I0117 12:35:48.058970 2803 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ljp65\" (UniqueName: \"kubernetes.io/projected/0cd4201d-6182-45c3-b96d-10ca1338b05b-kube-api-access-ljp65\") on node \"ci-4081-3-0-0-e492bbae02\" DevicePath \"\""
Jan 17 12:35:48.140927 systemd[1]: Removed slice kubepods-besteffort-pod9c150322_227c_4e0b_84ec_c7418e0cb6ec.slice - libcontainer container kubepods-besteffort-pod9c150322_227c_4e0b_84ec_c7418e0cb6ec.slice.
Jan 17 12:35:48.143423 kubelet[2803]: I0117 12:35:48.143366 2803 scope.go:117] "RemoveContainer" containerID="d240af293fc373f6eaa802d2718c7d10a23980aaf83d643bba15619d54380b5c"
Jan 17 12:35:48.149297 containerd[1485]: time="2025-01-17T12:35:48.149252263Z" level=info msg="RemoveContainer for \"d240af293fc373f6eaa802d2718c7d10a23980aaf83d643bba15619d54380b5c\""
Jan 17 12:35:48.153548 containerd[1485]: time="2025-01-17T12:35:48.153492865Z" level=info msg="RemoveContainer for \"d240af293fc373f6eaa802d2718c7d10a23980aaf83d643bba15619d54380b5c\" returns successfully"
Jan 17 12:35:48.157177 systemd[1]: Removed slice kubepods-burstable-pod0cd4201d_6182_45c3_b96d_10ca1338b05b.slice - libcontainer container kubepods-burstable-pod0cd4201d_6182_45c3_b96d_10ca1338b05b.slice.
Jan 17 12:35:48.157429 systemd[1]: kubepods-burstable-pod0cd4201d_6182_45c3_b96d_10ca1338b05b.slice: Consumed 7.474s CPU time.
Jan 17 12:35:48.160750 kubelet[2803]: I0117 12:35:48.160647 2803 scope.go:117] "RemoveContainer" containerID="d240af293fc373f6eaa802d2718c7d10a23980aaf83d643bba15619d54380b5c"
Jan 17 12:35:48.184020 containerd[1485]: time="2025-01-17T12:35:48.166361306Z" level=error msg="ContainerStatus for \"d240af293fc373f6eaa802d2718c7d10a23980aaf83d643bba15619d54380b5c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d240af293fc373f6eaa802d2718c7d10a23980aaf83d643bba15619d54380b5c\": not found"
Jan 17 12:35:48.186807 kubelet[2803]: E0117 12:35:48.186714 2803 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d240af293fc373f6eaa802d2718c7d10a23980aaf83d643bba15619d54380b5c\": not found" containerID="d240af293fc373f6eaa802d2718c7d10a23980aaf83d643bba15619d54380b5c"
Jan 17 12:35:48.197605 kubelet[2803]: I0117 12:35:48.188155 2803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d240af293fc373f6eaa802d2718c7d10a23980aaf83d643bba15619d54380b5c"} err="failed to get container status \"d240af293fc373f6eaa802d2718c7d10a23980aaf83d643bba15619d54380b5c\": rpc error: code = NotFound desc = an error occurred when try to find container \"d240af293fc373f6eaa802d2718c7d10a23980aaf83d643bba15619d54380b5c\": not found"
Jan 17 12:35:48.197605 kubelet[2803]: I0117 12:35:48.197594 2803 scope.go:117] "RemoveContainer" containerID="6f48188e79477fd421d7d1b5ba531812216f932859b27ef86c3096c19d16c003"
Jan 17 12:35:48.199700 containerd[1485]: time="2025-01-17T12:35:48.199672448Z" level=info msg="RemoveContainer for \"6f48188e79477fd421d7d1b5ba531812216f932859b27ef86c3096c19d16c003\""
Jan 17 12:35:48.203956 containerd[1485]: time="2025-01-17T12:35:48.203927838Z" level=info msg="RemoveContainer for \"6f48188e79477fd421d7d1b5ba531812216f932859b27ef86c3096c19d16c003\" returns successfully"
Jan 17 12:35:48.204083 kubelet[2803]: I0117 12:35:48.204059 2803 scope.go:117] "RemoveContainer" containerID="9771a26dbb08249ef7f5e8fd83ae5ee5bf34e4520369120555c143422ed0b8cf"
Jan 17 12:35:48.205367 containerd[1485]: time="2025-01-17T12:35:48.205332845Z" level=info msg="RemoveContainer for \"9771a26dbb08249ef7f5e8fd83ae5ee5bf34e4520369120555c143422ed0b8cf\""
Jan 17 12:35:48.208136 containerd[1485]: time="2025-01-17T12:35:48.208067511Z" level=info msg="RemoveContainer for \"9771a26dbb08249ef7f5e8fd83ae5ee5bf34e4520369120555c143422ed0b8cf\" returns successfully"
Jan 17 12:35:48.209043 kubelet[2803]: I0117 12:35:48.208238 2803 scope.go:117] "RemoveContainer" containerID="6bca338a5264a67b88301ce64896967969c053975ef41d93d843cca033ebf4a1"
Jan 17 12:35:48.209267 containerd[1485]: time="2025-01-17T12:35:48.209244393Z" level=info msg="RemoveContainer for \"6bca338a5264a67b88301ce64896967969c053975ef41d93d843cca033ebf4a1\""
Jan 17 12:35:48.211947 containerd[1485]: time="2025-01-17T12:35:48.211896543Z" level=info msg="RemoveContainer for \"6bca338a5264a67b88301ce64896967969c053975ef41d93d843cca033ebf4a1\" returns successfully"
Jan 17 12:35:48.212107 kubelet[2803]: I0117 12:35:48.212074 2803 scope.go:117] "RemoveContainer" containerID="c28b3948e0bfbdab1de3eef61866a84a5a32a3bfbfb8254781aaf808538cbcd2"
Jan 17 12:35:48.213406 containerd[1485]: time="2025-01-17T12:35:48.213187919Z" level=info msg="RemoveContainer for \"c28b3948e0bfbdab1de3eef61866a84a5a32a3bfbfb8254781aaf808538cbcd2\""
Jan 17 12:35:48.215991 containerd[1485]: time="2025-01-17T12:35:48.215966858Z" level=info msg="RemoveContainer for \"c28b3948e0bfbdab1de3eef61866a84a5a32a3bfbfb8254781aaf808538cbcd2\" returns successfully"
Jan 17 12:35:48.216249 kubelet[2803]: I0117 12:35:48.216228 2803 scope.go:117] "RemoveContainer" containerID="438f8d73789b071edff384922fa75c6324464dece028138b798fe91b579eddb6"
Jan 17 12:35:48.217147 containerd[1485]: time="2025-01-17T12:35:48.217100197Z" level=info msg="RemoveContainer for \"438f8d73789b071edff384922fa75c6324464dece028138b798fe91b579eddb6\""
Jan 17 12:35:48.219662 containerd[1485]: time="2025-01-17T12:35:48.219635420Z" level=info msg="RemoveContainer for \"438f8d73789b071edff384922fa75c6324464dece028138b798fe91b579eddb6\" returns successfully"
Jan 17 12:35:48.219821 kubelet[2803]: I0117 12:35:48.219791 2803 scope.go:117] "RemoveContainer" containerID="6f48188e79477fd421d7d1b5ba531812216f932859b27ef86c3096c19d16c003"
Jan 17 12:35:48.220183 containerd[1485]: time="2025-01-17T12:35:48.220087415Z" level=error msg="ContainerStatus for \"6f48188e79477fd421d7d1b5ba531812216f932859b27ef86c3096c19d16c003\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6f48188e79477fd421d7d1b5ba531812216f932859b27ef86c3096c19d16c003\": not found"
Jan 17 12:35:48.220268 kubelet[2803]: E0117 12:35:48.220251 2803 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6f48188e79477fd421d7d1b5ba531812216f932859b27ef86c3096c19d16c003\": not found" containerID="6f48188e79477fd421d7d1b5ba531812216f932859b27ef86c3096c19d16c003"
Jan 17 12:35:48.220326 kubelet[2803]: I0117 12:35:48.220270 2803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6f48188e79477fd421d7d1b5ba531812216f932859b27ef86c3096c19d16c003"} err="failed to get container status \"6f48188e79477fd421d7d1b5ba531812216f932859b27ef86c3096c19d16c003\": rpc error: code = NotFound desc = an error occurred when try to find container \"6f48188e79477fd421d7d1b5ba531812216f932859b27ef86c3096c19d16c003\": not found"
Jan 17 12:35:48.220326 kubelet[2803]: I0117 12:35:48.220303 2803 scope.go:117] "RemoveContainer" containerID="9771a26dbb08249ef7f5e8fd83ae5ee5bf34e4520369120555c143422ed0b8cf"
Jan 17 12:35:48.220454 containerd[1485]: time="2025-01-17T12:35:48.220423143Z" level=error msg="ContainerStatus for \"9771a26dbb08249ef7f5e8fd83ae5ee5bf34e4520369120555c143422ed0b8cf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9771a26dbb08249ef7f5e8fd83ae5ee5bf34e4520369120555c143422ed0b8cf\": not found"
Jan 17 12:35:48.220528 kubelet[2803]: E0117 12:35:48.220507 2803 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9771a26dbb08249ef7f5e8fd83ae5ee5bf34e4520369120555c143422ed0b8cf\": not found" containerID="9771a26dbb08249ef7f5e8fd83ae5ee5bf34e4520369120555c143422ed0b8cf"
Jan 17 12:35:48.220528 kubelet[2803]: I0117 12:35:48.220523 2803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9771a26dbb08249ef7f5e8fd83ae5ee5bf34e4520369120555c143422ed0b8cf"} err="failed to get container status \"9771a26dbb08249ef7f5e8fd83ae5ee5bf34e4520369120555c143422ed0b8cf\": rpc error: code = NotFound desc = an error occurred when try to find container \"9771a26dbb08249ef7f5e8fd83ae5ee5bf34e4520369120555c143422ed0b8cf\": not found"
Jan 17 12:35:48.220701 kubelet[2803]: I0117 12:35:48.220536 2803 scope.go:117] "RemoveContainer" containerID="6bca338a5264a67b88301ce64896967969c053975ef41d93d843cca033ebf4a1"
Jan 17 12:35:48.220732 containerd[1485]: time="2025-01-17T12:35:48.220676867Z" level=error msg="ContainerStatus for \"6bca338a5264a67b88301ce64896967969c053975ef41d93d843cca033ebf4a1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6bca338a5264a67b88301ce64896967969c053975ef41d93d843cca033ebf4a1\": not found"
Jan 17 12:35:48.221042 kubelet[2803]: E0117 12:35:48.220846 2803 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6bca338a5264a67b88301ce64896967969c053975ef41d93d843cca033ebf4a1\": not found" containerID="6bca338a5264a67b88301ce64896967969c053975ef41d93d843cca033ebf4a1"
Jan 17 12:35:48.221042 kubelet[2803]: I0117 12:35:48.220965 2803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6bca338a5264a67b88301ce64896967969c053975ef41d93d843cca033ebf4a1"} err="failed to get container status \"6bca338a5264a67b88301ce64896967969c053975ef41d93d843cca033ebf4a1\": rpc error: code = NotFound desc = an error occurred when try to find container \"6bca338a5264a67b88301ce64896967969c053975ef41d93d843cca033ebf4a1\": not found"
Jan 17 12:35:48.221042 kubelet[2803]: I0117 12:35:48.220981 2803 scope.go:117] "RemoveContainer" containerID="c28b3948e0bfbdab1de3eef61866a84a5a32a3bfbfb8254781aaf808538cbcd2"
Jan 17 12:35:48.221139 containerd[1485]: time="2025-01-17T12:35:48.221119486Z" level=error msg="ContainerStatus for \"c28b3948e0bfbdab1de3eef61866a84a5a32a3bfbfb8254781aaf808538cbcd2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c28b3948e0bfbdab1de3eef61866a84a5a32a3bfbfb8254781aaf808538cbcd2\": not found"
Jan 17 12:35:48.221235 kubelet[2803]: E0117 12:35:48.221207 2803 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c28b3948e0bfbdab1de3eef61866a84a5a32a3bfbfb8254781aaf808538cbcd2\": not found" containerID="c28b3948e0bfbdab1de3eef61866a84a5a32a3bfbfb8254781aaf808538cbcd2"
Jan 17 12:35:48.221300 kubelet[2803]: I0117 12:35:48.221230 2803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c28b3948e0bfbdab1de3eef61866a84a5a32a3bfbfb8254781aaf808538cbcd2"} err="failed to get container status \"c28b3948e0bfbdab1de3eef61866a84a5a32a3bfbfb8254781aaf808538cbcd2\": rpc error: code = NotFound desc = an error occurred when try to find container \"c28b3948e0bfbdab1de3eef61866a84a5a32a3bfbfb8254781aaf808538cbcd2\": not found"
Jan 17 12:35:48.221300 kubelet[2803]: I0117 12:35:48.221245 2803 scope.go:117] "RemoveContainer" containerID="438f8d73789b071edff384922fa75c6324464dece028138b798fe91b579eddb6"
Jan 17 12:35:48.221415 containerd[1485]: time="2025-01-17T12:35:48.221385393Z" level=error msg="ContainerStatus for \"438f8d73789b071edff384922fa75c6324464dece028138b798fe91b579eddb6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"438f8d73789b071edff384922fa75c6324464dece028138b798fe91b579eddb6\": not found"
Jan 17 12:35:48.221540 kubelet[2803]: E0117 12:35:48.221490 2803 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"438f8d73789b071edff384922fa75c6324464dece028138b798fe91b579eddb6\": not found" containerID="438f8d73789b071edff384922fa75c6324464dece028138b798fe91b579eddb6"
Jan 17 12:35:48.221575 kubelet[2803]: I0117 12:35:48.221507 2803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"438f8d73789b071edff384922fa75c6324464dece028138b798fe91b579eddb6"} err="failed to get container status \"438f8d73789b071edff384922fa75c6324464dece028138b798fe91b579eddb6\": rpc error: code = NotFound desc = an error occurred when try to find container \"438f8d73789b071edff384922fa75c6324464dece028138b798fe91b579eddb6\": not found"
Jan 17 12:35:48.598598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f48188e79477fd421d7d1b5ba531812216f932859b27ef86c3096c19d16c003-rootfs.mount: Deactivated successfully.
Jan 17 12:35:48.598759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e10f057b6a8daea2694dbec636e47cc52aaa5e92f2fe2f1430526b3ccb62f43-rootfs.mount: Deactivated successfully.
Jan 17 12:35:48.598875 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3e10f057b6a8daea2694dbec636e47cc52aaa5e92f2fe2f1430526b3ccb62f43-shm.mount: Deactivated successfully.
Jan 17 12:35:48.598997 systemd[1]: var-lib-kubelet-pods-9c150322\x2d227c\x2d4e0b\x2d84ec\x2dc7418e0cb6ec-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d59s4s.mount: Deactivated successfully.
Jan 17 12:35:48.599083 systemd[1]: var-lib-kubelet-pods-0cd4201d\x2d6182\x2d45c3\x2db96d\x2d10ca1338b05b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dljp65.mount: Deactivated successfully.
Jan 17 12:35:48.599183 systemd[1]: var-lib-kubelet-pods-0cd4201d\x2d6182\x2d45c3\x2db96d\x2d10ca1338b05b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 17 12:35:48.599266 systemd[1]: var-lib-kubelet-pods-0cd4201d\x2d6182\x2d45c3\x2db96d\x2d10ca1338b05b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 17 12:35:49.384148 kubelet[2803]: I0117 12:35:49.384097 2803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cd4201d-6182-45c3-b96d-10ca1338b05b" path="/var/lib/kubelet/pods/0cd4201d-6182-45c3-b96d-10ca1338b05b/volumes"
Jan 17 12:35:49.385156 kubelet[2803]: I0117 12:35:49.385118 2803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c150322-227c-4e0b-84ec-c7418e0cb6ec" path="/var/lib/kubelet/pods/9c150322-227c-4e0b-84ec-c7418e0cb6ec/volumes"
Jan 17 12:35:49.664270 sshd[4387]: pam_unix(sshd:session): session closed for user core
Jan 17 12:35:49.669456 systemd[1]: sshd@20-138.199.154.203:22-139.178.89.65:51162.service: Deactivated successfully.
Jan 17 12:35:49.671790 systemd[1]: session-21.scope: Deactivated successfully.
Jan 17 12:35:49.672535 systemd-logind[1467]: Session 21 logged out. Waiting for processes to exit.
Jan 17 12:35:49.673699 systemd-logind[1467]: Removed session 21.
Jan 17 12:35:49.840381 systemd[1]: Started sshd@21-138.199.154.203:22-139.178.89.65:51176.service - OpenSSH per-connection server daemon (139.178.89.65:51176).
Jan 17 12:35:50.831940 sshd[4550]: Accepted publickey for core from 139.178.89.65 port 51176 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc
Jan 17 12:35:50.833717 sshd[4550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:35:50.838478 systemd-logind[1467]: New session 22 of user core.
Jan 17 12:35:50.846094 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 17 12:35:51.790661 kubelet[2803]: I0117 12:35:51.790508 2803 topology_manager.go:215] "Topology Admit Handler" podUID="e6c2dba5-3c8e-47ec-8c1f-8b65034ba364" podNamespace="kube-system" podName="cilium-kdlsm"
Jan 17 12:35:51.790661 kubelet[2803]: E0117 12:35:51.790562 2803 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0cd4201d-6182-45c3-b96d-10ca1338b05b" containerName="clean-cilium-state"
Jan 17 12:35:51.790661 kubelet[2803]: E0117 12:35:51.790571 2803 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0cd4201d-6182-45c3-b96d-10ca1338b05b" containerName="apply-sysctl-overwrites"
Jan 17 12:35:51.790661 kubelet[2803]: E0117 12:35:51.790577 2803 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0cd4201d-6182-45c3-b96d-10ca1338b05b" containerName="mount-bpf-fs"
Jan 17 12:35:51.790661 kubelet[2803]: E0117 12:35:51.790582 2803 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0cd4201d-6182-45c3-b96d-10ca1338b05b" containerName="cilium-agent"
Jan 17 12:35:51.790661 kubelet[2803]: E0117 12:35:51.790588 2803 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9c150322-227c-4e0b-84ec-c7418e0cb6ec" containerName="cilium-operator"
Jan 17 12:35:51.790661 kubelet[2803]: E0117 12:35:51.790594 2803 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0cd4201d-6182-45c3-b96d-10ca1338b05b" containerName="mount-cgroup"
Jan 17 12:35:51.793329 kubelet[2803]: I0117 12:35:51.791267 2803 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c150322-227c-4e0b-84ec-c7418e0cb6ec" containerName="cilium-operator"
Jan 17 12:35:51.793329 kubelet[2803]: I0117 12:35:51.791283 2803 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cd4201d-6182-45c3-b96d-10ca1338b05b" containerName="cilium-agent"
Jan 17 12:35:51.838951 systemd[1]: Created slice kubepods-burstable-pode6c2dba5_3c8e_47ec_8c1f_8b65034ba364.slice - libcontainer container kubepods-burstable-pode6c2dba5_3c8e_47ec_8c1f_8b65034ba364.slice.
Jan 17 12:35:51.885238 kubelet[2803]: I0117 12:35:51.885186 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e6c2dba5-3c8e-47ec-8c1f-8b65034ba364-cilium-run\") pod \"cilium-kdlsm\" (UID: \"e6c2dba5-3c8e-47ec-8c1f-8b65034ba364\") " pod="kube-system/cilium-kdlsm"
Jan 17 12:35:51.885238 kubelet[2803]: I0117 12:35:51.885230 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e6c2dba5-3c8e-47ec-8c1f-8b65034ba364-hostproc\") pod \"cilium-kdlsm\" (UID: \"e6c2dba5-3c8e-47ec-8c1f-8b65034ba364\") " pod="kube-system/cilium-kdlsm"
Jan 17 12:35:51.885238 kubelet[2803]: I0117 12:35:51.885250 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e6c2dba5-3c8e-47ec-8c1f-8b65034ba364-cni-path\") pod \"cilium-kdlsm\" (UID: \"e6c2dba5-3c8e-47ec-8c1f-8b65034ba364\") " pod="kube-system/cilium-kdlsm"
Jan 17 12:35:51.885425 kubelet[2803]: I0117 12:35:51.885263 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e6c2dba5-3c8e-47ec-8c1f-8b65034ba364-bpf-maps\") pod \"cilium-kdlsm\" (UID: \"e6c2dba5-3c8e-47ec-8c1f-8b65034ba364\") " pod="kube-system/cilium-kdlsm"
Jan 17 12:35:51.885425 kubelet[2803]: I0117 12:35:51.885277 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6c2dba5-3c8e-47ec-8c1f-8b65034ba364-lib-modules\") pod \"cilium-kdlsm\" (UID: \"e6c2dba5-3c8e-47ec-8c1f-8b65034ba364\") " pod="kube-system/cilium-kdlsm"
Jan 17 12:35:51.885425 kubelet[2803]: I0117 12:35:51.885290 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e6c2dba5-3c8e-47ec-8c1f-8b65034ba364-cilium-ipsec-secrets\") pod \"cilium-kdlsm\" (UID: \"e6c2dba5-3c8e-47ec-8c1f-8b65034ba364\") " pod="kube-system/cilium-kdlsm"
Jan 17 12:35:51.885425 kubelet[2803]: I0117 12:35:51.885302 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e6c2dba5-3c8e-47ec-8c1f-8b65034ba364-hubble-tls\") pod \"cilium-kdlsm\" (UID: \"e6c2dba5-3c8e-47ec-8c1f-8b65034ba364\") " pod="kube-system/cilium-kdlsm"
Jan 17 12:35:51.885425 kubelet[2803]: I0117 12:35:51.885316 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfwcq\" (UniqueName: \"kubernetes.io/projected/e6c2dba5-3c8e-47ec-8c1f-8b65034ba364-kube-api-access-wfwcq\") pod \"cilium-kdlsm\" (UID: \"e6c2dba5-3c8e-47ec-8c1f-8b65034ba364\") " pod="kube-system/cilium-kdlsm"
Jan 17 12:35:51.885425 kubelet[2803]: I0117 12:35:51.885330 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e6c2dba5-3c8e-47ec-8c1f-8b65034ba364-cilium-cgroup\") pod \"cilium-kdlsm\" (UID: \"e6c2dba5-3c8e-47ec-8c1f-8b65034ba364\") " pod="kube-system/cilium-kdlsm"
Jan 17 12:35:51.886381 kubelet[2803]: I0117 12:35:51.885343 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e6c2dba5-3c8e-47ec-8c1f-8b65034ba364-clustermesh-secrets\") pod \"cilium-kdlsm\" (UID: \"e6c2dba5-3c8e-47ec-8c1f-8b65034ba364\") " pod="kube-system/cilium-kdlsm"
Jan 17 12:35:51.886381 kubelet[2803]: I0117 12:35:51.885355 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e6c2dba5-3c8e-47ec-8c1f-8b65034ba364-cilium-config-path\") pod \"cilium-kdlsm\" (UID: \"e6c2dba5-3c8e-47ec-8c1f-8b65034ba364\") " pod="kube-system/cilium-kdlsm"
Jan 17 12:35:51.886381 kubelet[2803]: I0117 12:35:51.885368 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6c2dba5-3c8e-47ec-8c1f-8b65034ba364-etc-cni-netd\") pod \"cilium-kdlsm\" (UID: \"e6c2dba5-3c8e-47ec-8c1f-8b65034ba364\") " pod="kube-system/cilium-kdlsm"
Jan 17 12:35:51.886381 kubelet[2803]: I0117 12:35:51.885380 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6c2dba5-3c8e-47ec-8c1f-8b65034ba364-xtables-lock\") pod \"cilium-kdlsm\" (UID: \"e6c2dba5-3c8e-47ec-8c1f-8b65034ba364\") " pod="kube-system/cilium-kdlsm"
Jan 17 12:35:51.886381 kubelet[2803]: I0117 12:35:51.885393 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e6c2dba5-3c8e-47ec-8c1f-8b65034ba364-host-proc-sys-kernel\") pod \"cilium-kdlsm\" (UID: \"e6c2dba5-3c8e-47ec-8c1f-8b65034ba364\") " pod="kube-system/cilium-kdlsm"
Jan 17 12:35:51.886514 kubelet[2803]: I0117 12:35:51.885405 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e6c2dba5-3c8e-47ec-8c1f-8b65034ba364-host-proc-sys-net\") pod \"cilium-kdlsm\" (UID: \"e6c2dba5-3c8e-47ec-8c1f-8b65034ba364\") " pod="kube-system/cilium-kdlsm"
Jan 17 12:35:52.006245 sshd[4550]: pam_unix(sshd:session): session closed for user core
Jan 17 12:35:52.010203 systemd[1]: sshd@21-138.199.154.203:22-139.178.89.65:51176.service: Deactivated successfully.
Jan 17 12:35:52.021180 systemd[1]: session-22.scope: Deactivated successfully.
Jan 17 12:35:52.023247 systemd-logind[1467]: Session 22 logged out. Waiting for processes to exit.
Jan 17 12:35:52.025842 systemd-logind[1467]: Removed session 22.
Jan 17 12:35:52.145986 containerd[1485]: time="2025-01-17T12:35:52.145812395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kdlsm,Uid:e6c2dba5-3c8e-47ec-8c1f-8b65034ba364,Namespace:kube-system,Attempt:0,}"
Jan 17 12:35:52.172284 containerd[1485]: time="2025-01-17T12:35:52.169066695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:35:52.172284 containerd[1485]: time="2025-01-17T12:35:52.171065455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:35:52.172284 containerd[1485]: time="2025-01-17T12:35:52.171126859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:35:52.174314 containerd[1485]: time="2025-01-17T12:35:52.174244982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:35:52.178288 systemd[1]: Started sshd@22-138.199.154.203:22-139.178.89.65:42686.service - OpenSSH per-connection server daemon (139.178.89.65:42686).
Jan 17 12:35:52.193055 systemd[1]: Started cri-containerd-f2788c48d05c05517fe026c609abd4b1b232e3ec24996386c844147a9c794d35.scope - libcontainer container f2788c48d05c05517fe026c609abd4b1b232e3ec24996386c844147a9c794d35.
Jan 17 12:35:52.219715 containerd[1485]: time="2025-01-17T12:35:52.219688980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kdlsm,Uid:e6c2dba5-3c8e-47ec-8c1f-8b65034ba364,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2788c48d05c05517fe026c609abd4b1b232e3ec24996386c844147a9c794d35\""
Jan 17 12:35:52.223487 containerd[1485]: time="2025-01-17T12:35:52.223410762Z" level=info msg="CreateContainer within sandbox \"f2788c48d05c05517fe026c609abd4b1b232e3ec24996386c844147a9c794d35\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 17 12:35:52.232672 containerd[1485]: time="2025-01-17T12:35:52.232647019Z" level=info msg="CreateContainer within sandbox \"f2788c48d05c05517fe026c609abd4b1b232e3ec24996386c844147a9c794d35\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e2725b4a2b97822ea9a78035ce9383913e95acebd87064d265aa1184646100c9\""
Jan 17 12:35:52.233885 containerd[1485]: time="2025-01-17T12:35:52.233246310Z" level=info msg="StartContainer for \"e2725b4a2b97822ea9a78035ce9383913e95acebd87064d265aa1184646100c9\""
Jan 17 12:35:52.261092 systemd[1]: Started cri-containerd-e2725b4a2b97822ea9a78035ce9383913e95acebd87064d265aa1184646100c9.scope - libcontainer container e2725b4a2b97822ea9a78035ce9383913e95acebd87064d265aa1184646100c9.
Jan 17 12:35:52.291218 containerd[1485]: time="2025-01-17T12:35:52.291177357Z" level=info msg="StartContainer for \"e2725b4a2b97822ea9a78035ce9383913e95acebd87064d265aa1184646100c9\" returns successfully"
Jan 17 12:35:52.311593 systemd[1]: cri-containerd-e2725b4a2b97822ea9a78035ce9383913e95acebd87064d265aa1184646100c9.scope: Deactivated successfully.
Jan 17 12:35:52.344646 containerd[1485]: time="2025-01-17T12:35:52.344525390Z" level=info msg="shim disconnected" id=e2725b4a2b97822ea9a78035ce9383913e95acebd87064d265aa1184646100c9 namespace=k8s.io
Jan 17 12:35:52.344646 containerd[1485]: time="2025-01-17T12:35:52.344624075Z" level=warning msg="cleaning up after shim disconnected" id=e2725b4a2b97822ea9a78035ce9383913e95acebd87064d265aa1184646100c9 namespace=k8s.io
Jan 17 12:35:52.344646 containerd[1485]: time="2025-01-17T12:35:52.344634986Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:35:52.528539 kubelet[2803]: E0117 12:35:52.528471 2803 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 17 12:35:53.165499 containerd[1485]: time="2025-01-17T12:35:53.165434665Z" level=info msg="CreateContainer within sandbox \"f2788c48d05c05517fe026c609abd4b1b232e3ec24996386c844147a9c794d35\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 17 12:35:53.170097 sshd[4584]: Accepted publickey for core from 139.178.89.65 port 42686 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc
Jan 17 12:35:53.170987 sshd[4584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:35:53.182954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1053754773.mount: Deactivated successfully.
Jan 17 12:35:53.189130 containerd[1485]: time="2025-01-17T12:35:53.187299445Z" level=info msg="CreateContainer within sandbox \"f2788c48d05c05517fe026c609abd4b1b232e3ec24996386c844147a9c794d35\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f88eabd9525512bd2216c9ebe7ac4126f832fcbeeaad4c333df8af852017f97f\""
Jan 17 12:35:53.190350 systemd-logind[1467]: New session 23 of user core.
Jan 17 12:35:53.193967 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 17 12:35:53.194312 containerd[1485]: time="2025-01-17T12:35:53.191540078Z" level=info msg="StartContainer for \"f88eabd9525512bd2216c9ebe7ac4126f832fcbeeaad4c333df8af852017f97f\""
Jan 17 12:35:53.232065 systemd[1]: Started cri-containerd-f88eabd9525512bd2216c9ebe7ac4126f832fcbeeaad4c333df8af852017f97f.scope - libcontainer container f88eabd9525512bd2216c9ebe7ac4126f832fcbeeaad4c333df8af852017f97f.
Jan 17 12:35:53.258227 containerd[1485]: time="2025-01-17T12:35:53.258169857Z" level=info msg="StartContainer for \"f88eabd9525512bd2216c9ebe7ac4126f832fcbeeaad4c333df8af852017f97f\" returns successfully"
Jan 17 12:35:53.264870 systemd[1]: cri-containerd-f88eabd9525512bd2216c9ebe7ac4126f832fcbeeaad4c333df8af852017f97f.scope: Deactivated successfully.
Jan 17 12:35:53.301407 containerd[1485]: time="2025-01-17T12:35:53.301202575Z" level=info msg="shim disconnected" id=f88eabd9525512bd2216c9ebe7ac4126f832fcbeeaad4c333df8af852017f97f namespace=k8s.io
Jan 17 12:35:53.301407 containerd[1485]: time="2025-01-17T12:35:53.301256726Z" level=warning msg="cleaning up after shim disconnected" id=f88eabd9525512bd2216c9ebe7ac4126f832fcbeeaad4c333df8af852017f97f namespace=k8s.io
Jan 17 12:35:53.301407 containerd[1485]: time="2025-01-17T12:35:53.301265763Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:35:53.847243 sshd[4584]: pam_unix(sshd:session): session closed for user core
Jan 17 12:35:53.852500 systemd[1]: sshd@22-138.199.154.203:22-139.178.89.65:42686.service: Deactivated successfully.
Jan 17 12:35:53.855296 systemd[1]: session-23.scope: Deactivated successfully.
Jan 17 12:35:53.856459 systemd-logind[1467]: Session 23 logged out. Waiting for processes to exit.
Jan 17 12:35:53.858436 systemd-logind[1467]: Removed session 23.
Jan 17 12:35:53.995833 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f88eabd9525512bd2216c9ebe7ac4126f832fcbeeaad4c333df8af852017f97f-rootfs.mount: Deactivated successfully.
Jan 17 12:35:54.019227 systemd[1]: Started sshd@23-138.199.154.203:22-139.178.89.65:42702.service - OpenSSH per-connection server daemon (139.178.89.65:42702).
Jan 17 12:35:54.170043 containerd[1485]: time="2025-01-17T12:35:54.168705295Z" level=info msg="CreateContainer within sandbox \"f2788c48d05c05517fe026c609abd4b1b232e3ec24996386c844147a9c794d35\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 17 12:35:54.192231 containerd[1485]: time="2025-01-17T12:35:54.190581568Z" level=info msg="CreateContainer within sandbox \"f2788c48d05c05517fe026c609abd4b1b232e3ec24996386c844147a9c794d35\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cd1294f4a0c7093248ec6979b88bb793e78a45a03c01d524ff067f5a453de721\""
Jan 17 12:35:54.193877 containerd[1485]: time="2025-01-17T12:35:54.193361358Z" level=info msg="StartContainer for \"cd1294f4a0c7093248ec6979b88bb793e78a45a03c01d524ff067f5a453de721\""
Jan 17 12:35:54.231094 systemd[1]: Started cri-containerd-cd1294f4a0c7093248ec6979b88bb793e78a45a03c01d524ff067f5a453de721.scope - libcontainer container cd1294f4a0c7093248ec6979b88bb793e78a45a03c01d524ff067f5a453de721.
Jan 17 12:35:54.270172 containerd[1485]: time="2025-01-17T12:35:54.270137868Z" level=info msg="StartContainer for \"cd1294f4a0c7093248ec6979b88bb793e78a45a03c01d524ff067f5a453de721\" returns successfully"
Jan 17 12:35:54.280510 systemd[1]: cri-containerd-cd1294f4a0c7093248ec6979b88bb793e78a45a03c01d524ff067f5a453de721.scope: Deactivated successfully.
Jan 17 12:35:54.311203 containerd[1485]: time="2025-01-17T12:35:54.311131382Z" level=info msg="shim disconnected" id=cd1294f4a0c7093248ec6979b88bb793e78a45a03c01d524ff067f5a453de721 namespace=k8s.io
Jan 17 12:35:54.311203 containerd[1485]: time="2025-01-17T12:35:54.311179732Z" level=warning msg="cleaning up after shim disconnected" id=cd1294f4a0c7093248ec6979b88bb793e78a45a03c01d524ff067f5a453de721 namespace=k8s.io
Jan 17 12:35:54.311203 containerd[1485]: time="2025-01-17T12:35:54.311188659Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:35:54.331311 kubelet[2803]: I0117 12:35:54.331251 2803 setters.go:580] "Node became not ready" node="ci-4081-3-0-0-e492bbae02" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-17T12:35:54Z","lastTransitionTime":"2025-01-17T12:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 17 12:35:54.995662 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd1294f4a0c7093248ec6979b88bb793e78a45a03c01d524ff067f5a453de721-rootfs.mount: Deactivated successfully.
Jan 17 12:35:54.997144 sshd[4736]: Accepted publickey for core from 139.178.89.65 port 42702 ssh2: RSA SHA256:POK76LnfMRTGy0EQVCmwE5zYtxbV7WfkhtMcTwTh3Uc
Jan 17 12:35:54.998391 sshd[4736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:35:55.003425 systemd-logind[1467]: New session 24 of user core.
Jan 17 12:35:55.008115 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 17 12:35:55.174421 containerd[1485]: time="2025-01-17T12:35:55.174096051Z" level=info msg="CreateContainer within sandbox \"f2788c48d05c05517fe026c609abd4b1b232e3ec24996386c844147a9c794d35\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 17 12:35:55.197426 containerd[1485]: time="2025-01-17T12:35:55.197354169Z" level=info msg="CreateContainer within sandbox \"f2788c48d05c05517fe026c609abd4b1b232e3ec24996386c844147a9c794d35\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8df4b3a4d2b3bea65e84d44592cb68ac1226fc68efab197cad9bc0f440d93286\""
Jan 17 12:35:55.198295 containerd[1485]: time="2025-01-17T12:35:55.198226099Z" level=info msg="StartContainer for \"8df4b3a4d2b3bea65e84d44592cb68ac1226fc68efab197cad9bc0f440d93286\""
Jan 17 12:35:55.238072 systemd[1]: Started cri-containerd-8df4b3a4d2b3bea65e84d44592cb68ac1226fc68efab197cad9bc0f440d93286.scope - libcontainer container 8df4b3a4d2b3bea65e84d44592cb68ac1226fc68efab197cad9bc0f440d93286.
Jan 17 12:35:55.268148 systemd[1]: cri-containerd-8df4b3a4d2b3bea65e84d44592cb68ac1226fc68efab197cad9bc0f440d93286.scope: Deactivated successfully.
Jan 17 12:35:55.272037 containerd[1485]: time="2025-01-17T12:35:55.271822232Z" level=info msg="StartContainer for \"8df4b3a4d2b3bea65e84d44592cb68ac1226fc68efab197cad9bc0f440d93286\" returns successfully"
Jan 17 12:35:55.276796 containerd[1485]: time="2025-01-17T12:35:55.275703883Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6c2dba5_3c8e_47ec_8c1f_8b65034ba364.slice/cri-containerd-8df4b3a4d2b3bea65e84d44592cb68ac1226fc68efab197cad9bc0f440d93286.scope/memory.events\": no such file or directory"
Jan 17 12:35:55.302485 containerd[1485]: time="2025-01-17T12:35:55.302418496Z" level=info msg="shim disconnected" id=8df4b3a4d2b3bea65e84d44592cb68ac1226fc68efab197cad9bc0f440d93286 namespace=k8s.io
Jan 17 12:35:55.302485 containerd[1485]: time="2025-01-17T12:35:55.302469131Z" level=warning msg="cleaning up after shim disconnected" id=8df4b3a4d2b3bea65e84d44592cb68ac1226fc68efab197cad9bc0f440d93286 namespace=k8s.io
Jan 17 12:35:55.302485 containerd[1485]: time="2025-01-17T12:35:55.302477377Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:35:55.995270 systemd[1]: run-containerd-runc-k8s.io-8df4b3a4d2b3bea65e84d44592cb68ac1226fc68efab197cad9bc0f440d93286-runc.xKvnsi.mount: Deactivated successfully.
Jan 17 12:35:55.995384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8df4b3a4d2b3bea65e84d44592cb68ac1226fc68efab197cad9bc0f440d93286-rootfs.mount: Deactivated successfully.
Jan 17 12:35:56.178953 containerd[1485]: time="2025-01-17T12:35:56.178869632Z" level=info msg="CreateContainer within sandbox \"f2788c48d05c05517fe026c609abd4b1b232e3ec24996386c844147a9c794d35\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 17 12:35:56.206778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount327623443.mount: Deactivated successfully.
Jan 17 12:35:56.221002 containerd[1485]: time="2025-01-17T12:35:56.220831929Z" level=info msg="CreateContainer within sandbox \"f2788c48d05c05517fe026c609abd4b1b232e3ec24996386c844147a9c794d35\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e3d821dc64ab8ac4ca55931115bc243c906059e719b9a60f3c4d1aff5f996e6c\""
Jan 17 12:35:56.233570 containerd[1485]: time="2025-01-17T12:35:56.232992086Z" level=info msg="StartContainer for \"e3d821dc64ab8ac4ca55931115bc243c906059e719b9a60f3c4d1aff5f996e6c\""
Jan 17 12:35:56.270143 systemd[1]: Started cri-containerd-e3d821dc64ab8ac4ca55931115bc243c906059e719b9a60f3c4d1aff5f996e6c.scope - libcontainer container e3d821dc64ab8ac4ca55931115bc243c906059e719b9a60f3c4d1aff5f996e6c.
Jan 17 12:35:56.308184 containerd[1485]: time="2025-01-17T12:35:56.308136094Z" level=info msg="StartContainer for \"e3d821dc64ab8ac4ca55931115bc243c906059e719b9a60f3c4d1aff5f996e6c\" returns successfully"
Jan 17 12:35:56.908045 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 17 12:35:57.195502 kubelet[2803]: I0117 12:35:57.195156 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kdlsm" podStartSLOduration=6.195136231 podStartE2EDuration="6.195136231s" podCreationTimestamp="2025-01-17 12:35:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:35:57.194858661 +0000 UTC m=+359.908893401" watchObservedRunningTime="2025-01-17 12:35:57.195136231 +0000 UTC m=+359.909170960"
Jan 17 12:35:57.407558 containerd[1485]: time="2025-01-17T12:35:57.407495051Z" level=info msg="StopPodSandbox for \"97b10edf4d721411f22e23d3e7e4b37e7dc5fa079944f956904dfd79a43a58c4\""
Jan 17 12:35:57.408157 containerd[1485]: time="2025-01-17T12:35:57.407620545Z" level=info msg="TearDown network for sandbox \"97b10edf4d721411f22e23d3e7e4b37e7dc5fa079944f956904dfd79a43a58c4\" successfully"
Jan 17 12:35:57.408157 containerd[1485]: time="2025-01-17T12:35:57.407636586Z" level=info msg="StopPodSandbox for \"97b10edf4d721411f22e23d3e7e4b37e7dc5fa079944f956904dfd79a43a58c4\" returns successfully"
Jan 17 12:35:57.408356 containerd[1485]: time="2025-01-17T12:35:57.408315164Z" level=info msg="RemovePodSandbox for \"97b10edf4d721411f22e23d3e7e4b37e7dc5fa079944f956904dfd79a43a58c4\""
Jan 17 12:35:57.408506 containerd[1485]: time="2025-01-17T12:35:57.408349199Z" level=info msg="Forcibly stopping sandbox \"97b10edf4d721411f22e23d3e7e4b37e7dc5fa079944f956904dfd79a43a58c4\""
Jan 17 12:35:57.408506 containerd[1485]: time="2025-01-17T12:35:57.408431733Z" level=info msg="TearDown network for sandbox \"97b10edf4d721411f22e23d3e7e4b37e7dc5fa079944f956904dfd79a43a58c4\" successfully"
Jan 17 12:35:57.413042 containerd[1485]: time="2025-01-17T12:35:57.412876096Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"97b10edf4d721411f22e23d3e7e4b37e7dc5fa079944f956904dfd79a43a58c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 17 12:35:57.413042 containerd[1485]: time="2025-01-17T12:35:57.412993196Z" level=info msg="RemovePodSandbox \"97b10edf4d721411f22e23d3e7e4b37e7dc5fa079944f956904dfd79a43a58c4\" returns successfully"
Jan 17 12:35:57.413573 containerd[1485]: time="2025-01-17T12:35:57.413375923Z" level=info msg="StopPodSandbox for \"3e10f057b6a8daea2694dbec636e47cc52aaa5e92f2fe2f1430526b3ccb62f43\""
Jan 17 12:35:57.413573 containerd[1485]: time="2025-01-17T12:35:57.413441233Z" level=info msg="TearDown network for sandbox \"3e10f057b6a8daea2694dbec636e47cc52aaa5e92f2fe2f1430526b3ccb62f43\" successfully"
Jan 17 12:35:57.413573 containerd[1485]: time="2025-01-17T12:35:57.413450962Z" level=info msg="StopPodSandbox for \"3e10f057b6a8daea2694dbec636e47cc52aaa5e92f2fe2f1430526b3ccb62f43\" returns successfully"
Jan 17 12:35:57.413799 containerd[1485]: time="2025-01-17T12:35:57.413742878Z" level=info msg="RemovePodSandbox for \"3e10f057b6a8daea2694dbec636e47cc52aaa5e92f2fe2f1430526b3ccb62f43\""
Jan 17 12:35:57.413799 containerd[1485]: time="2025-01-17T12:35:57.413773066Z" level=info msg="Forcibly stopping sandbox \"3e10f057b6a8daea2694dbec636e47cc52aaa5e92f2fe2f1430526b3ccb62f43\""
Jan 17 12:35:57.413960 containerd[1485]: time="2025-01-17T12:35:57.413818179Z" level=info msg="TearDown network for sandbox \"3e10f057b6a8daea2694dbec636e47cc52aaa5e92f2fe2f1430526b3ccb62f43\" successfully"
Jan 17 12:35:57.416886 containerd[1485]: time="2025-01-17T12:35:57.416842137Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e10f057b6a8daea2694dbec636e47cc52aaa5e92f2fe2f1430526b3ccb62f43\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 17 12:35:57.416980 containerd[1485]: time="2025-01-17T12:35:57.416908149Z" level=info msg="RemovePodSandbox \"3e10f057b6a8daea2694dbec636e47cc52aaa5e92f2fe2f1430526b3ccb62f43\" returns successfully"
Jan 17 12:35:59.924461 systemd-networkd[1391]: lxc_health: Link UP
Jan 17 12:35:59.924894 systemd-networkd[1391]: lxc_health: Gained carrier
Jan 17 12:36:01.910217 systemd-networkd[1391]: lxc_health: Gained IPv6LL
Jan 17 12:36:02.173735 systemd[1]: run-containerd-runc-k8s.io-e3d821dc64ab8ac4ca55931115bc243c906059e719b9a60f3c4d1aff5f996e6c-runc.O4h2pm.mount: Deactivated successfully.
Jan 17 12:36:06.606783 sshd[4736]: pam_unix(sshd:session): session closed for user core
Jan 17 12:36:06.610568 systemd[1]: sshd@23-138.199.154.203:22-139.178.89.65:42702.service: Deactivated successfully.
Jan 17 12:36:06.613363 systemd[1]: session-24.scope: Deactivated successfully.
Jan 17 12:36:06.615322 systemd-logind[1467]: Session 24 logged out. Waiting for processes to exit.
Jan 17 12:36:06.616560 systemd-logind[1467]: Removed session 24.
Jan 17 12:36:22.079571 systemd[1]: cri-containerd-384bc2b16140e2bbfc412c2922b845c7299683550779ea4f29504b4453544ea3.scope: Deactivated successfully.
Jan 17 12:36:22.081402 systemd[1]: cri-containerd-384bc2b16140e2bbfc412c2922b845c7299683550779ea4f29504b4453544ea3.scope: Consumed 1.564s CPU time, 18.8M memory peak, 0B memory swap peak.
Jan 17 12:36:22.095342 kubelet[2803]: E0117 12:36:22.095052 2803 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:40812->10.0.0.2:2379: read: connection timed out"
Jan 17 12:36:22.108709 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-384bc2b16140e2bbfc412c2922b845c7299683550779ea4f29504b4453544ea3-rootfs.mount: Deactivated successfully.
Jan 17 12:36:22.119554 containerd[1485]: time="2025-01-17T12:36:22.119479327Z" level=info msg="shim disconnected" id=384bc2b16140e2bbfc412c2922b845c7299683550779ea4f29504b4453544ea3 namespace=k8s.io
Jan 17 12:36:22.119554 containerd[1485]: time="2025-01-17T12:36:22.119532686Z" level=warning msg="cleaning up after shim disconnected" id=384bc2b16140e2bbfc412c2922b845c7299683550779ea4f29504b4453544ea3 namespace=k8s.io
Jan 17 12:36:22.119554 containerd[1485]: time="2025-01-17T12:36:22.119541122Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:36:22.234385 kubelet[2803]: I0117 12:36:22.233944 2803 scope.go:117] "RemoveContainer" containerID="384bc2b16140e2bbfc412c2922b845c7299683550779ea4f29504b4453544ea3"
Jan 17 12:36:22.238050 containerd[1485]: time="2025-01-17T12:36:22.237982028Z" level=info msg="CreateContainer within sandbox \"6ad757793bd1c88b335235daf40a68f7e826645cc388fe1588b231f2b5f95b3a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 17 12:36:22.253515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3176162112.mount: Deactivated successfully.
Jan 17 12:36:22.254661 containerd[1485]: time="2025-01-17T12:36:22.254613523Z" level=info msg="CreateContainer within sandbox \"6ad757793bd1c88b335235daf40a68f7e826645cc388fe1588b231f2b5f95b3a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"6307ab173cbb22c3f552bf386b2783c0a1f5ff00f9ac95ab20d856956c08ead1\""
Jan 17 12:36:22.255165 containerd[1485]: time="2025-01-17T12:36:22.255133005Z" level=info msg="StartContainer for \"6307ab173cbb22c3f552bf386b2783c0a1f5ff00f9ac95ab20d856956c08ead1\""
Jan 17 12:36:22.286203 systemd[1]: Started cri-containerd-6307ab173cbb22c3f552bf386b2783c0a1f5ff00f9ac95ab20d856956c08ead1.scope - libcontainer container 6307ab173cbb22c3f552bf386b2783c0a1f5ff00f9ac95ab20d856956c08ead1.
Jan 17 12:36:22.326252 containerd[1485]: time="2025-01-17T12:36:22.326218245Z" level=info msg="StartContainer for \"6307ab173cbb22c3f552bf386b2783c0a1f5ff00f9ac95ab20d856956c08ead1\" returns successfully"
Jan 17 12:36:22.502292 systemd[1]: cri-containerd-9eede87efd255e6aa794e76d8aac2c63460c4af66f79b2c766386af239409c4f.scope: Deactivated successfully.
Jan 17 12:36:22.503007 systemd[1]: cri-containerd-9eede87efd255e6aa794e76d8aac2c63460c4af66f79b2c766386af239409c4f.scope: Consumed 5.564s CPU time, 25.5M memory peak, 0B memory swap peak.
Jan 17 12:36:22.530206 containerd[1485]: time="2025-01-17T12:36:22.530123634Z" level=info msg="shim disconnected" id=9eede87efd255e6aa794e76d8aac2c63460c4af66f79b2c766386af239409c4f namespace=k8s.io
Jan 17 12:36:22.530206 containerd[1485]: time="2025-01-17T12:36:22.530196992Z" level=warning msg="cleaning up after shim disconnected" id=9eede87efd255e6aa794e76d8aac2c63460c4af66f79b2c766386af239409c4f namespace=k8s.io
Jan 17 12:36:22.530206 containerd[1485]: time="2025-01-17T12:36:22.530205837Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:36:23.109079 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9eede87efd255e6aa794e76d8aac2c63460c4af66f79b2c766386af239409c4f-rootfs.mount: Deactivated successfully.
Jan 17 12:36:23.238889 kubelet[2803]: I0117 12:36:23.238854 2803 scope.go:117] "RemoveContainer" containerID="9eede87efd255e6aa794e76d8aac2c63460c4af66f79b2c766386af239409c4f"
Jan 17 12:36:23.241561 containerd[1485]: time="2025-01-17T12:36:23.241507577Z" level=info msg="CreateContainer within sandbox \"a8e8fe258d70582366df9716d281ac261240afbe688a3dea56d5a73d05d7d196\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 17 12:36:23.254590 containerd[1485]: time="2025-01-17T12:36:23.254547123Z" level=info msg="CreateContainer within sandbox \"a8e8fe258d70582366df9716d281ac261240afbe688a3dea56d5a73d05d7d196\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"cfeb14d1b045896a06bfe95c95c82a4238d7108cd24dc88042f479d605bbf5c3\""
Jan 17 12:36:23.256548 containerd[1485]: time="2025-01-17T12:36:23.255061254Z" level=info msg="StartContainer for \"cfeb14d1b045896a06bfe95c95c82a4238d7108cd24dc88042f479d605bbf5c3\""
Jan 17 12:36:23.287081 systemd[1]: Started cri-containerd-cfeb14d1b045896a06bfe95c95c82a4238d7108cd24dc88042f479d605bbf5c3.scope - libcontainer container cfeb14d1b045896a06bfe95c95c82a4238d7108cd24dc88042f479d605bbf5c3.
Jan 17 12:36:23.325619 containerd[1485]: time="2025-01-17T12:36:23.325535181Z" level=info msg="StartContainer for \"cfeb14d1b045896a06bfe95c95c82a4238d7108cd24dc88042f479d605bbf5c3\" returns successfully"
Jan 17 12:36:26.686366 kubelet[2803]: E0117 12:36:26.680904 2803 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:40652->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-0-0-e492bbae02.181b7b0cd0bf4f29 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-0-0-e492bbae02,UID:14b19a8e952fc29107e822864b425fe8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-0-e492bbae02,},FirstTimestamp:2025-01-17 12:36:16.253480745 +0000 UTC m=+378.967515485,LastTimestamp:2025-01-17 12:36:16.253480745 +0000 UTC m=+378.967515485,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-0-e492bbae02,}"