Jan 29 16:54:34.114633 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 14:51:22 -00 2025
Jan 29 16:54:34.114657 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:54:34.114668 kernel: BIOS-provided physical RAM map:
Jan 29 16:54:34.114675 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 16:54:34.114681 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 16:54:34.114688 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 16:54:34.114695 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Jan 29 16:54:34.114702 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Jan 29 16:54:34.114710 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 29 16:54:34.114718 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 29 16:54:34.114727 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 16:54:34.114735 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 16:54:34.114743 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 29 16:54:34.114752 kernel: NX (Execute Disable) protection: active
Jan 29 16:54:34.114765 kernel: APIC: Static calls initialized
Jan 29 16:54:34.114775 kernel: SMBIOS 3.0.0 present.
Jan 29 16:54:34.114784 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Jan 29 16:54:34.114793 kernel: Hypervisor detected: KVM
Jan 29 16:54:34.114802 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 16:54:34.114811 kernel: kvm-clock: using sched offset of 4133976804 cycles
Jan 29 16:54:34.114820 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 16:54:34.114829 kernel: tsc: Detected 2495.312 MHz processor
Jan 29 16:54:34.114838 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 16:54:34.114847 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 16:54:34.114860 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Jan 29 16:54:34.114869 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 16:54:34.115500 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 16:54:34.115510 kernel: Using GB pages for direct mapping
Jan 29 16:54:34.115517 kernel: ACPI: Early table checksum verification disabled
Jan 29 16:54:34.115525 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS )
Jan 29 16:54:34.115532 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:54:34.115539 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:54:34.115550 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:54:34.115558 kernel: ACPI: FACS 0x000000007CFE0000 000040
Jan 29 16:54:34.115565 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:54:34.115573 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:54:34.115580 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:54:34.115588 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:54:34.115595 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Jan 29 16:54:34.115603 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Jan 29 16:54:34.115617 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Jan 29 16:54:34.115627 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Jan 29 16:54:34.115637 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Jan 29 16:54:34.115646 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Jan 29 16:54:34.115655 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Jan 29 16:54:34.115664 kernel: No NUMA configuration found
Jan 29 16:54:34.115677 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Jan 29 16:54:34.115687 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Jan 29 16:54:34.115697 kernel: Zone ranges:
Jan 29 16:54:34.115707 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 16:54:34.115717 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Jan 29 16:54:34.115726 kernel: Normal empty
Jan 29 16:54:34.115736 kernel: Movable zone start for each node
Jan 29 16:54:34.115746 kernel: Early memory node ranges
Jan 29 16:54:34.115756 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 16:54:34.115766 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Jan 29 16:54:34.115780 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Jan 29 16:54:34.115789 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 16:54:34.115798 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 16:54:34.115808 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 29 16:54:34.115817 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 16:54:34.115826 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 16:54:34.115836 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 16:54:34.115843 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 16:54:34.115851 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 16:54:34.115860 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 16:54:34.115868 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 16:54:34.115891 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 16:54:34.115898 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 16:54:34.115906 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 29 16:54:34.115914 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 29 16:54:34.115922 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 16:54:34.115929 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 29 16:54:34.115937 kernel: Booting paravirtualized kernel on KVM
Jan 29 16:54:34.115948 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 16:54:34.115956 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 29 16:54:34.115964 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 29 16:54:34.115972 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 29 16:54:34.115979 kernel: pcpu-alloc: [0] 0 1
Jan 29 16:54:34.115987 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 29 16:54:34.115996 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:54:34.116004 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 16:54:34.116014 kernel: random: crng init done
Jan 29 16:54:34.116022 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 16:54:34.116030 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 29 16:54:34.116038 kernel: Fallback order for Node 0: 0
Jan 29 16:54:34.116045 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Jan 29 16:54:34.116052 kernel: Policy zone: DMA32
Jan 29 16:54:34.116060 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 16:54:34.116068 kernel: Memory: 1920004K/2047464K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43472K init, 1600K bss, 127200K reserved, 0K cma-reserved)
Jan 29 16:54:34.116075 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 16:54:34.116086 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 29 16:54:34.116093 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 16:54:34.116101 kernel: Dynamic Preempt: voluntary
Jan 29 16:54:34.116108 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 16:54:34.116116 kernel: rcu: RCU event tracing is enabled.
Jan 29 16:54:34.116124 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 16:54:34.116132 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 16:54:34.116139 kernel: Rude variant of Tasks RCU enabled.
Jan 29 16:54:34.116147 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 16:54:34.116174 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 16:54:34.116184 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 16:54:34.116194 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 29 16:54:34.116204 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 16:54:34.116214 kernel: Console: colour VGA+ 80x25
Jan 29 16:54:34.116223 kernel: printk: console [tty0] enabled
Jan 29 16:54:34.116232 kernel: printk: console [ttyS0] enabled
Jan 29 16:54:34.116241 kernel: ACPI: Core revision 20230628
Jan 29 16:54:34.116251 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 29 16:54:34.116261 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 16:54:34.116273 kernel: x2apic enabled
Jan 29 16:54:34.116283 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 16:54:34.116293 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 29 16:54:34.116304 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 29 16:54:34.116314 kernel: Calibrating delay loop (skipped) preset value.. 4990.62 BogoMIPS (lpj=2495312)
Jan 29 16:54:34.116324 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 29 16:54:34.116334 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 29 16:54:34.116345 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 29 16:54:34.116370 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 16:54:34.116380 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 16:54:34.116390 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 16:54:34.116403 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 16:54:34.116412 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 29 16:54:34.116422 kernel: RETBleed: Mitigation: untrained return thunk
Jan 29 16:54:34.116432 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 16:54:34.116441 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 16:54:34.116449 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 29 16:54:34.116459 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 29 16:54:34.116467 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 29 16:54:34.116475 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 16:54:34.116483 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 16:54:34.116491 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 16:54:34.116498 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 16:54:34.116506 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 29 16:54:34.116516 kernel: Freeing SMP alternatives memory: 32K
Jan 29 16:54:34.116524 kernel: pid_max: default: 32768 minimum: 301
Jan 29 16:54:34.116532 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 16:54:34.116539 kernel: landlock: Up and running.
Jan 29 16:54:34.116547 kernel: SELinux: Initializing.
Jan 29 16:54:34.116555 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 16:54:34.116563 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 16:54:34.116570 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 29 16:54:34.116578 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:54:34.116588 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:54:34.116596 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:54:34.116604 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 29 16:54:34.116611 kernel: ... version: 0
Jan 29 16:54:34.116619 kernel: ... bit width: 48
Jan 29 16:54:34.116627 kernel: ... generic registers: 6
Jan 29 16:54:34.116634 kernel: ... value mask: 0000ffffffffffff
Jan 29 16:54:34.116642 kernel: ... max period: 00007fffffffffff
Jan 29 16:54:34.116649 kernel: ... fixed-purpose events: 0
Jan 29 16:54:34.116659 kernel: ... event mask: 000000000000003f
Jan 29 16:54:34.116667 kernel: signal: max sigframe size: 1776
Jan 29 16:54:34.116674 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 16:54:34.116682 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 16:54:34.116690 kernel: smp: Bringing up secondary CPUs ...
Jan 29 16:54:34.116698 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 16:54:34.116705 kernel: .... node #0, CPUs: #1
Jan 29 16:54:34.116713 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 16:54:34.116721 kernel: smpboot: Max logical packages: 1
Jan 29 16:54:34.116731 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS)
Jan 29 16:54:34.116739 kernel: devtmpfs: initialized
Jan 29 16:54:34.116746 kernel: x86/mm: Memory block size: 128MB
Jan 29 16:54:34.116754 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 16:54:34.116762 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 16:54:34.116769 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 16:54:34.116777 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 16:54:34.116785 kernel: audit: initializing netlink subsys (disabled)
Jan 29 16:54:34.116793 kernel: audit: type=2000 audit(1738169671.935:1): state=initialized audit_enabled=0 res=1
Jan 29 16:54:34.116803 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 16:54:34.116810 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 16:54:34.116818 kernel: cpuidle: using governor menu
Jan 29 16:54:34.116826 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 16:54:34.116833 kernel: dca service started, version 1.12.1
Jan 29 16:54:34.116841 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 29 16:54:34.116849 kernel: PCI: Using configuration type 1 for base access
Jan 29 16:54:34.116857 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 16:54:34.116865 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 16:54:34.117262 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 16:54:34.117273 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 16:54:34.117281 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 16:54:34.117289 kernel: ACPI: Added _OSI(Module Device)
Jan 29 16:54:34.117296 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 16:54:34.117304 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 16:54:34.117312 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 16:54:34.117320 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 16:54:34.117327 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 16:54:34.117338 kernel: ACPI: Interpreter enabled
Jan 29 16:54:34.117346 kernel: ACPI: PM: (supports S0 S5)
Jan 29 16:54:34.117354 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 16:54:34.117362 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 16:54:34.117370 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 16:54:34.117377 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 29 16:54:34.117385 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 16:54:34.117569 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 16:54:34.117704 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 29 16:54:34.117927 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 29 16:54:34.117940 kernel: PCI host bridge to bus 0000:00
Jan 29 16:54:34.118079 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 16:54:34.118224 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 16:54:34.118341 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 16:54:34.118478 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Jan 29 16:54:34.118613 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 16:54:34.118731 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 29 16:54:34.118871 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 16:54:34.119035 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 29 16:54:34.119202 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Jan 29 16:54:34.119351 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Jan 29 16:54:34.119502 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Jan 29 16:54:34.119636 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Jan 29 16:54:34.119757 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Jan 29 16:54:34.119897 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 16:54:34.120052 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 29 16:54:34.120264 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Jan 29 16:54:34.120400 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 29 16:54:34.120531 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Jan 29 16:54:34.120663 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 29 16:54:34.120787 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Jan 29 16:54:34.120955 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 29 16:54:34.121080 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Jan 29 16:54:34.121234 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 29 16:54:34.121367 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Jan 29 16:54:34.121505 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 29 16:54:34.121628 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Jan 29 16:54:34.121760 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 29 16:54:34.121915 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Jan 29 16:54:34.122068 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 29 16:54:34.122246 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Jan 29 16:54:34.122416 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 29 16:54:34.122570 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Jan 29 16:54:34.122715 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 29 16:54:34.122841 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 29 16:54:34.123007 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 29 16:54:34.123194 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Jan 29 16:54:34.123340 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Jan 29 16:54:34.123475 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 29 16:54:34.123600 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 29 16:54:34.123737 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 29 16:54:34.123867 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Jan 29 16:54:34.124845 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 29 16:54:34.125235 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Jan 29 16:54:34.125382 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 29 16:54:34.125503 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 29 16:54:34.125625 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Jan 29 16:54:34.125769 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 29 16:54:34.126167 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Jan 29 16:54:34.126318 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 29 16:54:34.126447 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 29 16:54:34.126567 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 16:54:34.126706 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 29 16:54:34.126835 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Jan 29 16:54:34.126980 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Jan 29 16:54:34.127104 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 29 16:54:34.127260 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 29 16:54:34.127392 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 16:54:34.127528 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 29 16:54:34.127656 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 29 16:54:34.127779 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 29 16:54:34.131055 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 29 16:54:34.131248 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 16:54:34.131401 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 29 16:54:34.131539 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
Jan 29 16:54:34.131667 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Jan 29 16:54:34.131792 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 29 16:54:34.132692 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 29 16:54:34.132824 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 16:54:34.132992 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 29 16:54:34.133122 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Jan 29 16:54:34.133283 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Jan 29 16:54:34.133409 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 29 16:54:34.133539 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Jan 29 16:54:34.133688 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 16:54:34.133704 kernel: acpiphp: Slot [0] registered
Jan 29 16:54:34.134642 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 29 16:54:34.134789 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Jan 29 16:54:34.134952 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Jan 29 16:54:34.135086 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Jan 29 16:54:34.135232 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 29 16:54:34.135354 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 29 16:54:34.135476 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 16:54:34.135494 kernel: acpiphp: Slot [0-2] registered
Jan 29 16:54:34.135659 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 29 16:54:34.135802 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Jan 29 16:54:34.137974 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 16:54:34.137994 kernel: acpiphp: Slot [0-3] registered
Jan 29 16:54:34.138125 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 29 16:54:34.138268 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 29 16:54:34.138392 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 16:54:34.138403 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 16:54:34.138412 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 16:54:34.138420 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 16:54:34.138428 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 16:54:34.138436 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 29 16:54:34.138447 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 29 16:54:34.138455 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 29 16:54:34.138463 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 29 16:54:34.138471 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 29 16:54:34.138479 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 29 16:54:34.138487 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 29 16:54:34.138495 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 29 16:54:34.138503 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 29 16:54:34.138511 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 29 16:54:34.138521 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 29 16:54:34.138529 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 29 16:54:34.138537 kernel: iommu: Default domain type: Translated
Jan 29 16:54:34.138545 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 16:54:34.138553 kernel: PCI: Using ACPI for IRQ routing
Jan 29 16:54:34.138561 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 16:54:34.138569 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 16:54:34.138577 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Jan 29 16:54:34.138703 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 29 16:54:34.138829 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 29 16:54:34.139986 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 16:54:34.140001 kernel: vgaarb: loaded
Jan 29 16:54:34.140009 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 29 16:54:34.140020 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 29 16:54:34.140031 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 16:54:34.140041 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 16:54:34.140053 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 16:54:34.140068 kernel: pnp: PnP ACPI init
Jan 29 16:54:34.140228 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 29 16:54:34.140243 kernel: pnp: PnP ACPI: found 5 devices
Jan 29 16:54:34.140251 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 16:54:34.140259 kernel: NET: Registered PF_INET protocol family
Jan 29 16:54:34.140267 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 16:54:34.140275 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 29 16:54:34.140284 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 16:54:34.140295 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 16:54:34.140303 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 29 16:54:34.140311 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 29 16:54:34.140320 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 16:54:34.140328 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 16:54:34.140336 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 16:54:34.140344 kernel: NET: Registered PF_XDP protocol family
Jan 29 16:54:34.140468 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 29 16:54:34.140592 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 29 16:54:34.140720 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 29 16:54:34.140843 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Jan 29 16:54:34.142476 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Jan 29 16:54:34.142681 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Jan 29 16:54:34.143265 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 29 16:54:34.143436 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 29 16:54:34.143572 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Jan 29 16:54:34.143720 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 29 16:54:34.143852 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 29 16:54:34.144020 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 16:54:34.144177 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 29 16:54:34.144309 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 29 16:54:34.144461 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 16:54:34.144602 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 29 16:54:34.144753 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 29 16:54:34.144967 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 16:54:34.145113 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 29 16:54:34.145914 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 29 16:54:34.146087 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 16:54:34.146249 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 29 16:54:34.146377 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Jan 29 16:54:34.146501 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 16:54:34.146635 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 29 16:54:34.146843 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Jan 29 16:54:34.147031 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 29 16:54:34.147195 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 16:54:34.147339 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 29 16:54:34.147465 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Jan 29 16:54:34.147587 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Jan 29 16:54:34.147710 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 16:54:34.150021 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 29 16:54:34.150187 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Jan 29 16:54:34.150364 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 29 16:54:34.150517 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 16:54:34.150650 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 16:54:34.150802 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 16:54:34.150965 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 16:54:34.151102 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Jan 29 16:54:34.151259 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 29 16:54:34.151404 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 29 16:54:34.151562 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 29 16:54:34.151683 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Jan 29 16:54:34.151817 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 29 16:54:34.153999 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 16:54:34.154138 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 29 16:54:34.154295 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 16:54:34.154432 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 29 16:54:34.154583 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 16:54:34.154747 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 29 16:54:34.155934 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 16:54:34.156101 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 29 16:54:34.156265 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 16:54:34.156440 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Jan 29 16:54:34.156593 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 29 16:54:34.156743 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 16:54:34.157983 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Jan 29 16:54:34.158137 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Jan 29 16:54:34.158303 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 16:54:34.158445 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Jan 29 16:54:34.158565 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 29 16:54:34.158682 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 16:54:34.158701 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 29 16:54:34.158709 kernel: PCI: CLS 0 bytes, default 64
Jan 29 16:54:34.158718 kernel: Initialise system trusted keyrings
Jan 29 16:54:34.158727 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 29 16:54:34.158735 kernel: Key type asymmetric registered
Jan 29 16:54:34.158743 kernel: Asymmetric key parser 'x509' registered
Jan 29 16:54:34.158751 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 16:54:34.158759 kernel: io scheduler mq-deadline registered
Jan 29 16:54:34.158770 kernel: io scheduler kyber registered
Jan 29 16:54:34.158781 kernel: io scheduler bfq registered
Jan 29 16:54:34.159987 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Jan 29 16:54:34.160120 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Jan 29 16:54:34.160260 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Jan 29 16:54:34.160385 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Jan 29 16:54:34.160509 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jan 29 16:54:34.160644 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jan 29 16:54:34.160790 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jan 29 16:54:34.162007 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jan 29 16:54:34.162204 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jan 29 16:54:34.162361 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jan 29 16:54:34.162515 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jan 29 16:54:34.162642 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jan 29 16:54:34.162766 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jan 29 16:54:34.163935 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jan 29 16:54:34.164079 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jan 29 16:54:34.164243 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jan 29 16:54:34.164261 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 29 16:54:34.164385 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Jan 29 16:54:34.164507 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Jan 29 16:54:34.164518 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 16:54:34.164527 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Jan 29 16:54:34.164535 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 16:54:34.164544 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 16:54:34.164552 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 16:54:34.164564 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 16:54:34.164572 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 16:54:34.164709 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 29 16:54:34.164726 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 16:54:34.164871 kernel: rtc_cmos 00:03: registered as rtc0
Jan 29 16:54:34.166093 kernel: rtc_cmos 00:03: setting system clock to 2025-01-29T16:54:33 UTC (1738169673)
Jan 29 16:54:34.166242 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 29 16:54:34.166258 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 29 16:54:34.166277 kernel: NET: Registered PF_INET6 protocol family
Jan 29 16:54:34.166288 kernel: Segment Routing with IPv6
Jan 29 16:54:34.166299 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 16:54:34.166309 kernel: NET: Registered PF_PACKET protocol family
Jan 29 16:54:34.166318 kernel: Key type dns_resolver registered
Jan 29 16:54:34.166326 kernel: IPI shorthand broadcast: enabled
Jan 29 16:54:34.166334 kernel: sched_clock: Marking stable (1553013743, 164651882)->(1735152104, -17486479)
Jan 29 16:54:34.166342 kernel: registered taskstats version 1
Jan 29 16:54:34.166351 kernel: Loading compiled-in X.509 certificates
Jan 29 16:54:34.166362 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 68134fdf6dac3690da6e3bc9c22b042a5c364340'
Jan 29 16:54:34.166370 kernel: Key type .fscrypt registered
Jan 29 16:54:34.166378 kernel: Key type fscrypt-provisioning registered
Jan 29 16:54:34.166386 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 16:54:34.166394 kernel: ima: Allocated hash algorithm: sha1
Jan 29 16:54:34.166402 kernel: ima: No architecture policies found
Jan 29 16:54:34.166411 kernel: clk: Disabling unused clocks
Jan 29 16:54:34.166419 kernel: Freeing unused kernel image (initmem) memory: 43472K
Jan 29 16:54:34.166427 kernel: Write protecting the kernel read-only data: 38912k
Jan 29 16:54:34.166438 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K
Jan 29 16:54:34.166449 kernel: Run /init as init process
Jan 29 16:54:34.166461 kernel: with arguments:
Jan 29 16:54:34.166473 kernel: /init
Jan 29 16:54:34.166486 kernel: with environment:
Jan 29 16:54:34.166496 kernel: HOME=/
Jan 29 16:54:34.166507 kernel: TERM=linux
Jan 29 16:54:34.166518 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 16:54:34.166529 systemd[1]: Successfully made /usr/ read-only.
Jan 29 16:54:34.166544 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:54:34.166554 systemd[1]: Detected virtualization kvm.
Jan 29 16:54:34.166562 systemd[1]: Detected architecture x86-64.
Jan 29 16:54:34.166570 systemd[1]: Running in initrd.
Jan 29 16:54:34.166579 systemd[1]: No hostname configured, using default hostname.
Jan 29 16:54:34.166588 systemd[1]: Hostname set to .
Jan 29 16:54:34.166596 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 16:54:34.166607 systemd[1]: Queued start job for default target initrd.target.
Jan 29 16:54:34.166616 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:54:34.166625 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:54:34.166634 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 16:54:34.166643 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:54:34.166652 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 16:54:34.166662 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 16:54:34.166674 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 16:54:34.166683 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 16:54:34.166692 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:54:34.166701 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:54:34.166709 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:54:34.166718 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:54:34.166727 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:54:34.166735 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:54:34.166746 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:54:34.166755 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:54:34.166764 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 16:54:34.166773 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 29 16:54:34.166782 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:54:34.166791 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:54:34.166800 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:54:34.166808 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:54:34.166818 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 16:54:34.166829 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:54:34.166838 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 16:54:34.166846 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 16:54:34.166855 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:54:34.166864 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:54:34.166887 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:54:34.168023 systemd-journald[188]: Collecting audit messages is disabled.
Jan 29 16:54:34.168053 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 16:54:34.168062 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:54:34.168074 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 16:54:34.168083 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 16:54:34.168092 systemd-journald[188]: Journal started
Jan 29 16:54:34.168117 systemd-journald[188]: Runtime Journal (/run/log/journal/4dc995dfd16848c89fe8522993d4da25) is 4.8M, max 38.3M, 33.5M free.
Jan 29 16:54:34.145453 systemd-modules-load[189]: Inserted module 'overlay'
Jan 29 16:54:34.200340 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 16:54:34.200372 kernel: Bridge firewalling registered
Jan 29 16:54:34.176064 systemd-modules-load[189]: Inserted module 'br_netfilter'
Jan 29 16:54:34.207932 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:54:34.208640 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:54:34.210201 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:54:34.222192 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:54:34.224266 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:54:34.231122 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:54:34.236794 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:54:34.250140 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:54:34.255714 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:54:34.260067 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:54:34.260717 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:54:34.267713 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 16:54:34.276065 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:54:34.277488 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:54:34.288169 dracut-cmdline[221]: dracut-dracut-053
Jan 29 16:54:34.292239 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:54:34.318895 systemd-resolved[222]: Positive Trust Anchors:
Jan 29 16:54:34.318917 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:54:34.318964 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:54:34.327386 systemd-resolved[222]: Defaulting to hostname 'linux'.
Jan 29 16:54:34.328851 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:54:34.330024 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:54:34.383959 kernel: SCSI subsystem initialized
Jan 29 16:54:34.394010 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 16:54:34.406927 kernel: iscsi: registered transport (tcp)
Jan 29 16:54:34.430222 kernel: iscsi: registered transport (qla4xxx)
Jan 29 16:54:34.430345 kernel: QLogic iSCSI HBA Driver
Jan 29 16:54:34.508836 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:54:34.518243 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 16:54:34.565716 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 16:54:34.565939 kernel: device-mapper: uevent: version 1.0.3
Jan 29 16:54:34.570948 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 16:54:34.625950 kernel: raid6: avx2x4 gen() 29888 MB/s
Jan 29 16:54:34.643933 kernel: raid6: avx2x2 gen() 28545 MB/s
Jan 29 16:54:34.661187 kernel: raid6: avx2x1 gen() 24339 MB/s
Jan 29 16:54:34.661274 kernel: raid6: using algorithm avx2x4 gen() 29888 MB/s
Jan 29 16:54:34.681099 kernel: raid6: .... xor() 7376 MB/s, rmw enabled
Jan 29 16:54:34.681205 kernel: raid6: using avx2x2 recovery algorithm
Jan 29 16:54:34.703962 kernel: xor: automatically using best checksumming function avx
Jan 29 16:54:34.883983 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 16:54:34.909014 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:54:34.918378 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:54:34.938000 systemd-udevd[406]: Using default interface naming scheme 'v255'.
Jan 29 16:54:34.944271 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:54:34.957481 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 16:54:34.985968 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation
Jan 29 16:54:35.048844 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:54:35.056275 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:54:35.172277 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:54:35.193462 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 16:54:35.240841 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:54:35.243590 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:54:35.245609 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:54:35.247003 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:54:35.254134 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 16:54:35.267128 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:54:35.326983 kernel: scsi host0: Virtio SCSI HBA
Jan 29 16:54:35.334920 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 29 16:54:35.340948 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 16:54:35.364599 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:54:35.365680 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:54:35.368686 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:54:35.369611 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:54:35.369692 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:54:35.372009 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:54:35.390295 kernel: ACPI: bus type USB registered
Jan 29 16:54:35.390478 kernel: usbcore: registered new interface driver usbfs
Jan 29 16:54:35.390492 kernel: usbcore: registered new interface driver hub
Jan 29 16:54:35.394972 kernel: usbcore: registered new device driver usb
Jan 29 16:54:35.395909 kernel: libata version 3.00 loaded.
Jan 29 16:54:35.396080 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:54:35.398568 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:54:35.419214 kernel: ahci 0000:00:1f.2: version 3.0
Jan 29 16:54:35.492620 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 29 16:54:35.492650 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 29 16:54:35.492854 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 29 16:54:35.493201 kernel: scsi host1: ahci
Jan 29 16:54:35.493417 kernel: sd 0:0:0:0: Power-on or device reset occurred
Jan 29 16:54:35.493701 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jan 29 16:54:35.493949 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 29 16:54:35.495954 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Jan 29 16:54:35.496215 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 29 16:54:35.496389 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 16:54:35.496412 kernel: GPT:17805311 != 80003071
Jan 29 16:54:35.496423 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 16:54:35.496434 kernel: GPT:17805311 != 80003071
Jan 29 16:54:35.496444 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 16:54:35.496454 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 16:54:35.496466 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 16:54:35.496476 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 29 16:54:35.496674 kernel: AES CTR mode by8 optimization enabled
Jan 29 16:54:35.496687 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 29 16:54:35.503050 kernel: scsi host2: ahci
Jan 29 16:54:35.503309 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 29 16:54:35.503502 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 29 16:54:35.503708 kernel: scsi host3: ahci
Jan 29 16:54:35.504984 kernel: scsi host4: ahci
Jan 29 16:54:35.505521 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 29 16:54:35.505710 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 29 16:54:35.505870 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 29 16:54:35.506360 kernel: scsi host5: ahci
Jan 29 16:54:35.506552 kernel: scsi host6: ahci
Jan 29 16:54:35.506743 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48
Jan 29 16:54:35.506756 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48
Jan 29 16:54:35.506766 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48
Jan 29 16:54:35.506777 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48
Jan 29 16:54:35.506793 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48
Jan 29 16:54:35.506804 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48
Jan 29 16:54:35.506814 kernel: hub 1-0:1.0: USB hub found
Jan 29 16:54:35.507121 kernel: hub 1-0:1.0: 4 ports detected
Jan 29 16:54:35.507363 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 29 16:54:35.507526 kernel: hub 2-0:1.0: USB hub found
Jan 29 16:54:35.507744 kernel: hub 2-0:1.0: 4 ports detected
Jan 29 16:54:35.596979 kernel: BTRFS: device fsid b756ea5d-2d08-456f-8231-a684aa2555c3 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (460)
Jan 29 16:54:35.598058 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 29 16:54:35.603277 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (464)
Jan 29 16:54:35.600833 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:54:35.618418 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 29 16:54:35.630904 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 29 16:54:35.632437 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 29 16:54:35.642921 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 29 16:54:35.653231 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 16:54:35.658038 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:54:35.662388 disk-uuid[552]: Primary Header is updated.
Jan 29 16:54:35.662388 disk-uuid[552]: Secondary Entries is updated.
Jan 29 16:54:35.662388 disk-uuid[552]: Secondary Header is updated.
Jan 29 16:54:35.674903 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 16:54:35.691189 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:54:35.743968 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 29 16:54:35.806350 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 29 16:54:35.806413 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 29 16:54:35.806425 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 29 16:54:35.806435 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 29 16:54:35.806445 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 29 16:54:35.813827 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 29 16:54:35.813914 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 29 16:54:35.813926 kernel: ata1.00: applying bridge limits
Jan 29 16:54:35.817895 kernel: ata1.00: configured for UDMA/100
Jan 29 16:54:35.817922 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 29 16:54:35.885919 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 16:54:35.889184 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 29 16:54:35.903172 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 29 16:54:35.903199 kernel: usbcore: registered new interface driver usbhid
Jan 29 16:54:35.903227 kernel: usbhid: USB HID core driver
Jan 29 16:54:35.903240 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Jan 29 16:54:35.903254 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Jan 29 16:54:35.903449 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Jan 29 16:54:36.690181 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 16:54:36.690266 disk-uuid[553]: The operation has completed successfully.
Jan 29 16:54:36.761917 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 16:54:36.762077 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 16:54:36.819088 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 16:54:36.826356 sh[593]: Success Jan 29 16:54:36.845946 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 29 16:54:36.924706 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 16:54:36.935999 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 16:54:36.937674 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 16:54:36.964510 kernel: BTRFS info (device dm-0): first mount of filesystem b756ea5d-2d08-456f-8231-a684aa2555c3 Jan 29 16:54:36.964574 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:54:36.967285 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 16:54:36.970065 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 16:54:36.972214 kernel: BTRFS info (device dm-0): using free space tree Jan 29 16:54:36.983912 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 16:54:36.986904 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 16:54:36.988504 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 16:54:37.000348 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 16:54:37.004186 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 16:54:37.029489 kernel: BTRFS info (device sda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:54:37.029552 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:54:37.032365 kernel: BTRFS info (device sda6): using free space tree Jan 29 16:54:37.038039 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 16:54:37.038081 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 16:54:37.054941 kernel: BTRFS info (device sda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:54:37.055080 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 16:54:37.063701 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 16:54:37.072199 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 16:54:37.201403 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 16:54:37.210244 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 16:54:37.258527 ignition[692]: Ignition 2.20.0 Jan 29 16:54:37.258546 ignition[692]: Stage: fetch-offline Jan 29 16:54:37.258657 ignition[692]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:54:37.258672 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 16:54:37.258814 ignition[692]: parsed url from cmdline: "" Jan 29 16:54:37.258819 ignition[692]: no config URL provided Jan 29 16:54:37.258825 ignition[692]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 16:54:37.264548 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
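verity-setup.service builds the /dev/mapper/usr device that the read-only /usr is mounted from: dm-verity checks every block of the USR-A partition against a hash tree whose root hash is pinned by the verity.usrhash kernel argument, which is why the log notes the "sha256-ni" implementation being selected. The sketch below is purely illustrative; Flatcar stores the hash tree alongside the partition data and derives the parameters from the verity.usr* arguments, so <hash-dev> and <root-hash> are placeholders, not values from this log:

```
# Illustrative only: open a verity mapping named "usr" over the USR-A
# partition found above, then mount it read-only.
veritysetup open \
    /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132 usr \
    <hash-dev> <root-hash>
mount -o ro /dev/mapper/usr /sysusr/usr
```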
Jan 29 16:54:37.258836 ignition[692]: no config at "/usr/lib/ignition/user.ign" Jan 29 16:54:37.258843 ignition[692]: failed to fetch config: resource requires networking Jan 29 16:54:37.259100 ignition[692]: Ignition finished successfully Jan 29 16:54:37.280000 systemd-networkd[778]: lo: Link UP Jan 29 16:54:37.280017 systemd-networkd[778]: lo: Gained carrier Jan 29 16:54:37.283396 systemd-networkd[778]: Enumeration completed Jan 29 16:54:37.283546 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 16:54:37.284384 systemd[1]: Reached target network.target - Network. Jan 29 16:54:37.287044 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:54:37.287054 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:54:37.288237 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:54:37.288242 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:54:37.292843 systemd-networkd[778]: eth0: Link UP Jan 29 16:54:37.292849 systemd-networkd[778]: eth0: Gained carrier Jan 29 16:54:37.292865 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:54:37.297396 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 29 16:54:37.297407 systemd-networkd[778]: eth1: Link UP Jan 29 16:54:37.297413 systemd-networkd[778]: eth1: Gained carrier Jan 29 16:54:37.297426 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
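Both eth interfaces are matched by the same catch-all zz-default.network, which is why networkd repeats the "potentially unpredictable interface name" note for eth0 and eth1. The shipped file is essentially a match-everything DHCP unit; the snippet below is an assumed minimal reconstruction of its shape, not the verbatim Flatcar file:

```
# /usr/lib/systemd/network/zz-default.network (assumed shape)
[Match]
Name=*

[Network]
DHCP=yes
```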
Jan 29 16:54:37.331859 ignition[784]: Ignition 2.20.0 Jan 29 16:54:37.331909 ignition[784]: Stage: fetch Jan 29 16:54:37.332211 ignition[784]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:54:37.332235 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 16:54:37.332363 ignition[784]: parsed url from cmdline: "" Jan 29 16:54:37.332368 ignition[784]: no config URL provided Jan 29 16:54:37.332376 ignition[784]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 16:54:37.332389 ignition[784]: no config at "/usr/lib/ignition/user.ign" Jan 29 16:54:37.332423 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 29 16:54:37.332645 ignition[784]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 29 16:54:37.353970 systemd-networkd[778]: eth0: DHCPv4 address 116.202.14.223/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 29 16:54:37.370983 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 16:54:37.533016 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 29 16:54:37.541779 ignition[784]: GET result: OK Jan 29 16:54:37.541940 ignition[784]: parsing config with SHA512: eedf1dc2fa4df90ba48d8f64605a79b5e924ec0211e23b4884d95a24bf9b0fd3825e091214efc7c8aa0e6a22b472be8d4a270d5c33b0154b8a92d09bc7710f43 Jan 29 16:54:37.555212 unknown[784]: fetched base config from "system" Jan 29 16:54:37.555234 unknown[784]: fetched base config from "system" Jan 29 16:54:37.555257 unknown[784]: fetched user config from "hetzner" Jan 29 16:54:37.559548 ignition[784]: fetch: fetch complete Jan 29 16:54:37.559561 ignition[784]: fetch: fetch passed Jan 29 16:54:37.559671 ignition[784]: Ignition finished successfully Jan 29 16:54:37.566579 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 16:54:37.575145 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 16:54:37.600333 ignition[791]: Ignition 2.20.0 Jan 29 16:54:37.600372 ignition[791]: Stage: kargs Jan 29 16:54:37.604594 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 16:54:37.600581 ignition[791]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:54:37.600592 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 16:54:37.601549 ignition[791]: kargs: kargs passed Jan 29 16:54:37.601607 ignition[791]: Ignition finished successfully Jan 29 16:54:37.619094 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 16:54:37.636488 ignition[798]: Ignition 2.20.0 Jan 29 16:54:37.636508 ignition[798]: Stage: disks Jan 29 16:54:37.636844 ignition[798]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:54:37.636864 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 16:54:37.640177 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 16:54:37.638297 ignition[798]: disks: disks passed Jan 29 16:54:37.641444 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 16:54:37.638346 ignition[798]: Ignition finished successfully Jan 29 16:54:37.642409 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 16:54:37.643713 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 16:54:37.644961 systemd[1]: Reached target sysinit.target - System Initialization. 
Jan 29 16:54:37.646572 systemd[1]: Reached target basic.target - Basic System. Jan 29 16:54:37.654372 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 16:54:37.678851 systemd-fsck[807]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 29 16:54:37.684097 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 16:54:37.965020 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 16:54:38.115056 kernel: EXT4-fs (sda9): mounted filesystem 93ea9bb6-d6ba-4a18-a828-f0002683a7b4 r/w with ordered data mode. Quota mode: none. Jan 29 16:54:38.117405 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 16:54:38.119581 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 16:54:38.128438 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 16:54:38.133821 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 16:54:38.136577 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 29 16:54:38.140367 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 16:54:38.152586 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (815) Jan 29 16:54:38.152653 kernel: BTRFS info (device sda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:54:38.152665 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:54:38.152676 kernel: BTRFS info (device sda6): using free space tree Jan 29 16:54:38.142005 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 16:54:38.156615 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 16:54:38.156648 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 16:54:38.159268 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 16:54:38.164724 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 16:54:38.167034 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 16:54:38.278756 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 16:54:38.288902 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Jan 29 16:54:38.292702 coreos-metadata[817]: Jan 29 16:54:38.292 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 29 16:54:38.294355 coreos-metadata[817]: Jan 29 16:54:38.294 INFO Fetch successful Jan 29 16:54:38.296445 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 16:54:38.297521 coreos-metadata[817]: Jan 29 16:54:38.297 INFO wrote hostname ci-4230-0-0-b-bb52c92a60 to /sysroot/etc/hostname Jan 29 16:54:38.299915 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 16:54:38.305015 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 16:54:38.462126 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 16:54:38.467037 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 16:54:38.469035 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 16:54:38.483912 kernel: BTRFS info (device sda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:54:38.516720 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
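flatcar-metadata-hostname pulls the hostname from the same link-local metadata service that Ignition's fetch stage used for userdata once eth0 had its DHCP lease. Both endpoints, taken verbatim from this log, can be queried by hand from the running host:

```
curl http://169.254.169.254/hetzner/v1/metadata/hostname
curl http://169.254.169.254/hetzner/v1/userdata
```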
Jan 29 16:54:38.518399 ignition[932]: INFO : Ignition 2.20.0 Jan 29 16:54:38.518399 ignition[932]: INFO : Stage: mount Jan 29 16:54:38.519795 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:54:38.519795 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 16:54:38.521336 ignition[932]: INFO : mount: mount passed Jan 29 16:54:38.521336 ignition[932]: INFO : Ignition finished successfully Jan 29 16:54:38.522425 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 16:54:38.529015 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 16:54:38.907368 systemd-networkd[778]: eth1: Gained IPv6LL Jan 29 16:54:38.962546 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 16:54:38.971437 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 16:54:39.012988 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (944) Jan 29 16:54:39.020058 kernel: BTRFS info (device sda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:54:39.020112 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:54:39.023436 kernel: BTRFS info (device sda6): using free space tree Jan 29 16:54:39.030964 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 16:54:39.031055 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 16:54:39.037310 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 16:54:39.064934 ignition[961]: INFO : Ignition 2.20.0 Jan 29 16:54:39.066241 ignition[961]: INFO : Stage: files Jan 29 16:54:39.066241 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:54:39.066241 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 16:54:39.068798 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Jan 29 16:54:39.069604 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 16:54:39.069604 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 16:54:39.075338 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 16:54:39.076404 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 16:54:39.077987 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 16:54:39.076503 unknown[961]: wrote ssh authorized keys file for user: core Jan 29 16:54:39.079908 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 16:54:39.079908 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 16:54:39.163366 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 16:54:39.275369 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 16:54:39.277658 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 16:54:39.277658 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 29 
16:54:39.291484 systemd-networkd[778]: eth0: Gained IPv6LL Jan 29 16:54:39.836043 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 16:54:39.966957 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 16:54:39.966957 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 16:54:39.970262 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 16:54:39.970262 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 16:54:39.970262 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 16:54:39.970262 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 16:54:39.970262 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 16:54:39.970262 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 16:54:39.970262 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 16:54:39.970262 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:54:39.970262 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:54:39.970262 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 16:54:39.970262 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 16:54:39.970262 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 16:54:39.970262 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 29 16:54:40.584506 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 16:54:40.924305 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 16:54:40.924305 ignition[961]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 29 16:54:40.929321 ignition[961]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 16:54:40.929321 ignition[961]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 16:54:40.929321 ignition[961]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 29 
16:54:40.929321 ignition[961]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 29 16:54:40.929321 ignition[961]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 29 16:54:40.929321 ignition[961]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 29 16:54:40.929321 ignition[961]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 29 16:54:40.929321 ignition[961]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 29 16:54:40.929321 ignition[961]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 16:54:40.929321 ignition[961]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:54:40.929321 ignition[961]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:54:40.929321 ignition[961]: INFO : files: files passed Jan 29 16:54:40.929321 ignition[961]: INFO : Ignition finished successfully Jan 29 16:54:40.930745 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 16:54:40.939195 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 16:54:40.948114 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 16:54:40.952040 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 16:54:40.952215 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 16:54:40.963436 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:54:40.963436 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:54:40.967075 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:54:40.968755 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:54:40.971188 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 16:54:40.978702 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 16:54:41.005939 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 16:54:41.006112 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 16:54:41.007990 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 16:54:41.008681 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 16:54:41.009980 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 16:54:41.015118 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 16:54:41.034415 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:54:41.040179 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 16:54:41.054009 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
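The files stage above is a replay of the user-supplied Ignition config: an SSH key for core, helm and cilium archives, a few YAML manifests, a kubernetes sysext image plus the /etc/extensions symlink that activates it, the prepare-helm.service unit, and a drop-in for coreos-metadata.service. A Butane sketch that would compile into ops of this shape is shown below; it is a hypothetical reconstruction (key, unit body, and drop-in contents are not recoverable from the log), not the config this host actually received:

```yaml
# Hypothetical Butane source; compile with: butane config.yaml > config.ign
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...   # placeholder key
storage:
  files:
    - path: /opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw
      contents:
        source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw
systemd:
  units:
    - name: prepare-helm.service
      enabled: true
      contents: |
        [Unit]
        Description=Unpack helm tarball to /opt/bin
        # ... (unit body not recoverable from the log)
```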
Jan 29 16:54:41.054964 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:54:41.056672 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 16:54:41.057870 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 16:54:41.058025 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:54:41.059218 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 16:54:41.059971 systemd[1]: Stopped target basic.target - Basic System. Jan 29 16:54:41.060978 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 16:54:41.061998 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 16:54:41.062943 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 16:54:41.064031 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 16:54:41.065194 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 16:54:41.066306 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 16:54:41.067356 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 16:54:41.068481 systemd[1]: Stopped target swap.target - Swaps. Jan 29 16:54:41.069541 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 16:54:41.069654 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 16:54:41.070907 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:54:41.071622 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:54:41.072553 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 16:54:41.072666 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:54:41.073710 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 16:54:41.073839 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 16:54:41.075357 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 16:54:41.075473 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:54:41.076770 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 16:54:41.076983 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 16:54:41.078082 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 29 16:54:41.078273 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 16:54:41.090052 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 16:54:41.092109 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 16:54:41.092596 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 16:54:41.092737 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:54:41.093355 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 16:54:41.093459 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 16:54:41.109437 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 16:54:41.110360 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 29 16:54:41.113507 ignition[1013]: INFO : Ignition 2.20.0 Jan 29 16:54:41.113507 ignition[1013]: INFO : Stage: umount Jan 29 16:54:41.117105 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:54:41.117105 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 16:54:41.117105 ignition[1013]: INFO : umount: umount passed Jan 29 16:54:41.117105 ignition[1013]: INFO : Ignition finished successfully Jan 29 16:54:41.118211 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 16:54:41.118341 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 16:54:41.120344 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 16:54:41.120451 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 16:54:41.121449 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 16:54:41.121510 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 16:54:41.122703 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 16:54:41.122756 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 16:54:41.123308 systemd[1]: Stopped target network.target - Network. Jan 29 16:54:41.125197 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 16:54:41.125272 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 16:54:41.126434 systemd[1]: Stopped target paths.target - Path Units. Jan 29 16:54:41.130252 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 16:54:41.134005 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:54:41.134576 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 16:54:41.135043 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 16:54:41.135580 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 16:54:41.135648 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 16:54:41.139457 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 16:54:41.139529 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 16:54:41.140612 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 16:54:41.140685 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 16:54:41.141257 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 16:54:41.141318 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 16:54:41.143912 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 16:54:41.145644 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 16:54:41.150572 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 16:54:41.159785 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 16:54:41.159951 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 16:54:41.165444 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 29 16:54:41.165816 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 16:54:41.165869 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:54:41.184086 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:54:41.184388 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Jan 29 16:54:41.184515 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 16:54:41.187053 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 29 16:54:41.188970 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 16:54:41.189069 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:54:41.198384 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 16:54:41.198864 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 16:54:41.198937 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 16:54:41.204171 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:54:41.204285 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:54:41.205608 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 16:54:41.205676 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 16:54:41.207542 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:54:41.211468 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 29 16:54:41.212117 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 16:54:41.212253 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 16:54:41.221693 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 16:54:41.223203 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 16:54:41.228621 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 16:54:41.228869 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:54:41.231291 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 16:54:41.231453 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 16:54:41.233492 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 16:54:41.233585 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 16:54:41.234473 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 16:54:41.234528 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:54:41.235717 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 16:54:41.235774 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 16:54:41.237759 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 16:54:41.237813 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 16:54:41.239178 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 16:54:41.239251 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:54:41.246173 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 16:54:41.247975 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 16:54:41.248099 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:54:41.248947 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 16:54:41.249018 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jan 29 16:54:41.249714 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 16:54:41.249781 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:54:41.251455 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:54:41.251527 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:54:41.255375 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 16:54:41.255539 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 16:54:41.257097 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 16:54:41.266321 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 16:54:41.275378 systemd[1]: Switching root. Jan 29 16:54:41.329334 systemd-journald[188]: Journal stopped Jan 29 16:54:42.844474 systemd-journald[188]: Received SIGTERM from PID 1 (systemd). Jan 29 16:54:42.844538 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 16:54:42.844552 kernel: SELinux: policy capability open_perms=1 Jan 29 16:54:42.844567 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 16:54:42.844578 kernel: SELinux: policy capability always_check_network=0 Jan 29 16:54:42.844593 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 16:54:42.844604 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 16:54:42.844619 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 16:54:42.844635 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 16:54:42.844655 kernel: audit: type=1403 audit(1738169681.536:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 16:54:42.844667 systemd[1]: Successfully loaded SELinux policy in 79.993ms. Jan 29 16:54:42.844688 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 26.481ms. Jan 29 16:54:42.844701 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 16:54:42.844713 systemd[1]: Detected virtualization kvm. Jan 29 16:54:42.844728 systemd[1]: Detected architecture x86-64. Jan 29 16:54:42.844739 systemd[1]: Detected first boot. Jan 29 16:54:42.844752 systemd[1]: Hostname set to <ci-4230-0-0-b-bb52c92a60>. Jan 29 16:54:42.844764 systemd[1]: Initializing machine ID from VM UUID. Jan 29 16:54:42.844775 zram_generator::config[1057]: No configuration found. Jan 29 16:54:42.844788 kernel: Guest personality initialized and is inactive Jan 29 16:54:42.844799 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jan 29 16:54:42.844812 kernel: Initialized host personality Jan 29 16:54:42.844824 kernel: NET: Registered PF_VSOCK protocol family Jan 29 16:54:42.844835 systemd[1]: Populated /etc with preset unit settings. Jan 29 16:54:42.844848 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 29 16:54:42.844860 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 16:54:42.846500 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 16:54:42.846521 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 16:54:42.846534 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 16:54:42.846547 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 16:54:42.846563 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 16:54:42.846575 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 16:54:42.846587 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 16:54:42.846599 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 16:54:42.846611 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 16:54:42.846623 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 16:54:42.846635 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:54:42.846648 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:54:42.846660 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 16:54:42.846674 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 16:54:42.846687 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 16:54:42.846699 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 16:54:42.846710 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 16:54:42.846722 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:54:42.846734 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 16:54:42.846748 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 16:54:42.846761 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 16:54:42.846778 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 16:54:42.846792 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:54:42.846808 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 16:54:42.846821 systemd[1]: Reached target slices.target - Slice Units. Jan 29 16:54:42.846834 systemd[1]: Reached target swap.target - Swaps. Jan 29 16:54:42.846847 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 16:54:42.846859 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 16:54:42.846870 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 29 16:54:42.846894 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:54:42.846906 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 16:54:42.846918 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:54:42.846930 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 16:54:42.846942 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 16:54:42.846957 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 16:54:42.846969 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 16:54:42.846981 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 29 16:54:42.846993 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 16:54:42.847005 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 16:54:42.847017 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 16:54:42.847029 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 16:54:42.847041 systemd[1]: Reached target machines.target - Containers. Jan 29 16:54:42.847055 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 16:54:42.847070 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:54:42.847086 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 16:54:42.847102 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 16:54:42.847117 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:54:42.847129 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:54:42.847141 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:54:42.847171 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 16:54:42.847183 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:54:42.847199 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 16:54:42.847211 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 16:54:42.847222 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 16:54:42.847234 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 16:54:42.847246 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 16:54:42.847259 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:54:42.847271 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 16:54:42.847283 kernel: loop: module loaded Jan 29 16:54:42.847297 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 16:54:42.847309 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 16:54:42.847321 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 16:54:42.847333 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 29 16:54:42.847344 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 16:54:42.847356 kernel: ACPI: bus type drm_connector registered Jan 29 16:54:42.847367 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 16:54:42.847379 systemd[1]: Stopped verity-setup.service. Jan 29 16:54:42.847393 kernel: fuse: init (API version 7.39) Jan 29 16:54:42.847405 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 29 16:54:42.847417 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 16:54:42.847433 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 16:54:42.847446 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 16:54:42.847458 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 16:54:42.847470 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 16:54:42.847482 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 16:54:42.847495 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:54:42.847507 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 16:54:42.847519 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 16:54:42.847533 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:54:42.847545 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:54:42.847557 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 16:54:42.847591 systemd-journald[1138]: Collecting audit messages is disabled. Jan 29 16:54:42.847614 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:54:42.847627 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:54:42.847641 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:54:42.847656 systemd-journald[1138]: Journal started Jan 29 16:54:42.847678 systemd-journald[1138]: Runtime Journal (/run/log/journal/4dc995dfd16848c89fe8522993d4da25) is 4.8M, max 38.3M, 33.5M free. Jan 29 16:54:42.479257 systemd[1]: Queued start job for default target multi-user.target. Jan 29 16:54:42.851405 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:54:42.851425 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 16:54:42.495925 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 29 16:54:42.496508 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 16:54:42.854127 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 16:54:42.854341 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 16:54:42.855393 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:54:42.855595 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:54:42.856468 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 16:54:42.857550 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 16:54:42.858533 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 16:54:42.859489 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 29 16:54:42.875007 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 16:54:42.883134 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 16:54:42.888942 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 16:54:42.889517 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 16:54:42.889553 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 29 16:54:42.892187 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 29 16:54:42.896007 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 16:54:42.904725 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 16:54:42.906259 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:54:42.909024 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 16:54:42.912079 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 16:54:42.914098 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:54:42.916983 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 16:54:42.917605 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:54:42.926170 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:54:42.928997 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 16:54:42.936082 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 16:54:42.948550 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 16:54:42.954539 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 16:54:42.956789 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 16:54:42.980464 systemd-journald[1138]: Time spent on flushing to /var/log/journal/4dc995dfd16848c89fe8522993d4da25 is 29.467ms for 1153 entries. Jan 29 16:54:42.980464 systemd-journald[1138]: System Journal (/var/log/journal/4dc995dfd16848c89fe8522993d4da25) is 8M, max 584.8M, 576.8M free. Jan 29 16:54:43.047468 systemd-journald[1138]: Received client request to flush runtime journal. Jan 29 16:54:43.047547 kernel: loop0: detected capacity change from 0 to 147912 Jan 29 16:54:43.010320 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:54:43.011478 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 16:54:43.017741 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 16:54:43.025137 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 29 16:54:43.038098 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 16:54:43.041277 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:54:43.058382 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 16:54:43.070111 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 16:54:43.080431 udevadm[1195]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 16:54:43.083348 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 29 16:54:43.087558 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Jan 29 16:54:43.087574 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. 
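systemd-journal-flush is the point where the volatile journal in /run (capped here at 38.3M) is merged into the persistent one under /var/log/journal (capped at 584.8M), after which journald logs the client request and the 29.467ms spent flushing. The same flush can be triggered and inspected by hand:

```
journalctl --flush        # what systemd-journal-flush.service does
journalctl --disk-usage   # footprint of the persistent journal afterwards
```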
Jan 29 16:54:43.095609 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 16:54:43.104925 kernel: loop1: detected capacity change from 0 to 138176 Jan 29 16:54:43.103806 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 16:54:43.150056 kernel: loop2: detected capacity change from 0 to 8 Jan 29 16:54:43.153616 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 16:54:43.162641 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 16:54:43.180082 kernel: loop3: detected capacity change from 0 to 205544 Jan 29 16:54:43.187674 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. Jan 29 16:54:43.188193 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. Jan 29 16:54:43.194577 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:54:43.248089 kernel: loop4: detected capacity change from 0 to 147912 Jan 29 16:54:43.278382 kernel: loop5: detected capacity change from 0 to 138176 Jan 29 16:54:43.304918 kernel: loop6: detected capacity change from 0 to 8 Jan 29 16:54:43.309914 kernel: loop7: detected capacity change from 0 to 205544 Jan 29 16:54:43.349101 (sd-merge)[1212]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 29 16:54:43.349865 (sd-merge)[1212]: Merged extensions into '/usr'. Jan 29 16:54:43.357126 systemd[1]: Reload requested from client PID 1183 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 16:54:43.357305 systemd[1]: Reloading... Jan 29 16:54:43.491914 zram_generator::config[1240]: No configuration found. Jan 29 16:54:43.669999 ldconfig[1178]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 16:54:43.669791 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:54:43.770864 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 16:54:43.771255 systemd[1]: Reloading finished in 410 ms. Jan 29 16:54:43.796796 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 16:54:43.798352 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 16:54:43.813055 systemd[1]: Starting ensure-sysext.service... Jan 29 16:54:43.822218 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 16:54:43.838813 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 16:54:43.856005 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:54:43.857001 systemd[1]: Reload requested from client PID 1283 ('systemctl') (unit ensure-sysext.service)... Jan 29 16:54:43.857023 systemd[1]: Reloading... Jan 29 16:54:43.863298 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 16:54:43.863712 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 16:54:43.866670 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 16:54:43.868110 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. 
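The (sd-merge) entries are systemd-sysext assembling the /usr overlay: the four extension images named in the log (containerd-flatcar, docker-flatcar, kubernetes from the Ignition-written symlink, and the oem-hetzner payload) are merged into the otherwise read-only, verity-protected /usr tree, and the subsequent "Reload requested from client PID 1183 ('systemd-sysext')" is systemd re-reading the units those images added. Merge state is inspectable at runtime:

```
systemd-sysext status    # which extension images are merged, and where from
systemd-sysext refresh   # re-run the merge after adding/removing *.raw images
```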
Jan 29 16:54:43.868366 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. Jan 29 16:54:43.880392 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:54:43.880416 systemd-tmpfiles[1284]: Skipping /boot Jan 29 16:54:43.897569 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:54:43.897587 systemd-tmpfiles[1284]: Skipping /boot Jan 29 16:54:43.931106 systemd-udevd[1287]: Using default interface naming scheme 'v255'. Jan 29 16:54:43.982324 zram_generator::config[1319]: No configuration found. Jan 29 16:54:44.196037 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 16:54:44.228314 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:54:44.247067 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 29 16:54:44.259523 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1317) Jan 29 16:54:44.277898 kernel: ACPI: button: Power Button [PWRF] Jan 29 16:54:44.341915 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 29 16:54:44.374923 kernel: EDAC MC: Ver: 3.0.0 Jan 29 16:54:44.397579 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 16:54:44.398143 systemd[1]: Reloading finished in 540 ms. Jan 29 16:54:44.402001 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jan 29 16:54:44.411603 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 29 16:54:44.415604 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 29 16:54:44.415846 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 29 16:54:44.412456 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:54:44.414675 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:54:44.443425 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jan 29 16:54:44.448893 kernel: Console: switching to colour dummy device 80x25 Jan 29 16:54:44.448948 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 29 16:54:44.448965 kernel: [drm] features: -context_init Jan 29 16:54:44.451938 kernel: [drm] number of scanouts: 1 Jan 29 16:54:44.451989 kernel: [drm] number of cap sets: 0 Jan 29 16:54:44.456918 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 29 16:54:44.470912 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 29 16:54:44.471002 kernel: Console: switching to colour frame buffer device 160x50 Jan 29 16:54:44.475916 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 29 16:54:44.503797 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 29 16:54:44.516976 systemd[1]: Finished ensure-sysext.service. Jan 29 16:54:44.531068 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 29 16:54:44.535887 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:54:44.544144 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jan 29 16:54:44.548062 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 16:54:44.548365 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:54:44.551381 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:54:44.560125 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:54:44.563055 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:54:44.567034 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:54:44.567934 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:54:44.571020 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 16:54:44.571180 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:54:44.575687 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 16:54:44.584315 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 16:54:44.589142 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 16:54:44.594899 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 16:54:44.600303 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 16:54:44.612176 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:54:44.612310 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:54:44.613531 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:54:44.613816 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:54:44.617057 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:54:44.617346 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:54:44.617795 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:54:44.618093 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:54:44.626468 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:54:44.642086 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 16:54:44.643668 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 16:54:44.644084 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:54:44.644319 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:54:44.647795 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 16:54:44.664829 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 16:54:44.668221 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
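Unit names such as systemd-fsck@dev-disk-by\x2dlabel-OEM.service use systemd's path escaping: each '/' becomes '-' and a literal '-' becomes \x2d. The mapping can be reproduced with systemd-escape (illustrative commands, not part of this log):

    systemd-escape -p /dev/disk/by-label/OEM        # -> dev-disk-by\x2dlabel-OEM
    systemd-escape -pu 'dev-disk-by\x2dlabel-OEM'   # -> /dev/disk/by-label/OEM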
Jan 29 16:54:44.686808 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 16:54:44.709273 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:54:44.711531 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 16:54:44.734674 augenrules[1440]: No rules Jan 29 16:54:44.727193 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 16:54:44.734241 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:54:44.736310 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:54:44.756173 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 16:54:44.761210 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 16:54:44.768476 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:54:44.781979 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 16:54:44.791252 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 16:54:44.793481 lvm[1449]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:54:44.817729 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 16:54:44.821055 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 16:54:44.861642 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 16:54:44.873021 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:54:44.928573 systemd-resolved[1410]: Positive Trust Anchors: Jan 29 16:54:44.928599 systemd-resolved[1410]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 16:54:44.928630 systemd-resolved[1410]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 16:54:44.938597 systemd-resolved[1410]: Using system hostname 'ci-4230-0-0-b-bb52c92a60'. Jan 29 16:54:44.942084 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 16:54:44.943103 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:54:44.949952 systemd-networkd[1409]: lo: Link UP Jan 29 16:54:44.950392 systemd-networkd[1409]: lo: Gained carrier Jan 29 16:54:44.954495 systemd-networkd[1409]: Enumeration completed Jan 29 16:54:44.956011 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:54:44.956021 systemd-networkd[1409]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:54:44.956826 systemd[1]: Started systemd-networkd.service - Network Configuration. 
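The "Positive Trust Anchors" entry is systemd-resolved's built-in copy of the IANA root DNSSEC trust anchor (key tag 20326, algorithm 8/RSASHA256, digest type 2/SHA-256); the negative anchors exempt RFC 1918 reverse zones and other local-only names from validation. Resolver state on a live system can be checked with resolvectl (hypothetical follow-up, not part of this log):

    resolvectl status            # per-link DNS servers and DNSSEC setting
    resolvectl query example.com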
Jan 29 16:54:44.958789 systemd-networkd[1409]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:54:44.958814 systemd-networkd[1409]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:54:44.959523 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 16:54:44.961001 systemd-networkd[1409]: eth0: Link UP Jan 29 16:54:44.961014 systemd-networkd[1409]: eth0: Gained carrier Jan 29 16:54:44.961255 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:54:44.962470 systemd[1]: Reached target network.target - Network. Jan 29 16:54:44.964279 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 16:54:44.964326 systemd-networkd[1409]: eth1: Link UP Jan 29 16:54:44.964331 systemd-networkd[1409]: eth1: Gained carrier Jan 29 16:54:44.964349 systemd-networkd[1409]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:54:44.966506 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 16:54:44.968657 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 16:54:44.969642 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 16:54:44.970394 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 16:54:44.970438 systemd[1]: Reached target paths.target - Path Units. Jan 29 16:54:44.972418 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 16:54:44.974365 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 16:54:44.975914 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 16:54:44.977520 systemd[1]: Reached target timers.target - Timer Units. Jan 29 16:54:44.981794 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 16:54:44.986604 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 16:54:44.995007 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 29 16:54:44.999276 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 29 16:54:45.000482 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 29 16:54:45.015494 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 16:54:45.018614 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 29 16:54:45.029172 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 29 16:54:45.034306 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 16:54:45.039246 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 16:54:45.044417 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 16:54:45.044987 systemd-networkd[1409]: eth0: DHCPv4 address 116.202.14.223/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 29 16:54:45.047816 systemd-timesyncd[1411]: Network configuration changed, trying to establish connection. 
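Both interfaces match the catch-all /usr/lib/systemd/network/zz-default.network, hence the note about potentially unpredictable interface names. A minimal sketch of what such a fallback unit might contain (assumed content; the file actually shipped on this image may differ):

    [Match]
    Name=en* eth*

    [Network]
    DHCP=yes

The eth0 lease that follows is a /32 with the gateway outside the prefix, a common cloud setup that networkd handles by installing an on-link host route to 172.31.1.1.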
Jan 29 16:54:45.048200 systemd[1]: Reached target basic.target - Basic System. Jan 29 16:54:45.049600 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:54:45.049653 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:54:45.056821 systemd-networkd[1409]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 16:54:45.057045 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 16:54:45.063926 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 16:54:45.074196 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 16:54:45.080897 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 16:54:45.091077 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 16:54:45.092035 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 16:54:45.097068 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 16:54:45.104675 jq[1474]: false Jan 29 16:54:45.109052 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 16:54:45.116029 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 29 16:54:45.121491 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 16:54:45.126198 systemd-timesyncd[1411]: Contacted time server 17.253.14.251:123 (0.flatcar.pool.ntp.org). Jan 29 16:54:45.126274 systemd-timesyncd[1411]: Initial clock synchronization to Wed 2025-01-29 16:54:44.991220 UTC. Jan 29 16:54:45.127112 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 16:54:45.135491 dbus-daemon[1471]: [system] SELinux support is enabled Jan 29 16:54:45.153469 extend-filesystems[1475]: Found loop4 Jan 29 16:54:45.153469 extend-filesystems[1475]: Found loop5 Jan 29 16:54:45.153469 extend-filesystems[1475]: Found loop6 Jan 29 16:54:45.153469 extend-filesystems[1475]: Found loop7 Jan 29 16:54:45.153469 extend-filesystems[1475]: Found sda Jan 29 16:54:45.153469 extend-filesystems[1475]: Found sda1 Jan 29 16:54:45.153469 extend-filesystems[1475]: Found sda2 Jan 29 16:54:45.153469 extend-filesystems[1475]: Found sda3 Jan 29 16:54:45.153469 extend-filesystems[1475]: Found usr Jan 29 16:54:45.153469 extend-filesystems[1475]: Found sda4 Jan 29 16:54:45.153469 extend-filesystems[1475]: Found sda6 Jan 29 16:54:45.153469 extend-filesystems[1475]: Found sda7 Jan 29 16:54:45.153469 extend-filesystems[1475]: Found sda9 Jan 29 16:54:45.153469 extend-filesystems[1475]: Checking size of /dev/sda9 Jan 29 16:54:45.141160 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 16:54:45.148581 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 16:54:45.152690 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 16:54:45.156115 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 16:54:45.196267 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 29 16:54:45.202892 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 16:54:45.210492 extend-filesystems[1475]: Resized partition /dev/sda9 Jan 29 16:54:45.212439 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 29 16:54:45.228919 extend-filesystems[1499]: resize2fs 1.47.1 (20-May-2024) Jan 29 16:54:45.228832 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 16:54:45.239638 coreos-metadata[1470]: Jan 29 16:54:45.236 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 29 16:54:45.245132 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 29 16:54:45.229168 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 16:54:45.229528 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 16:54:45.230143 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 16:54:45.243779 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 16:54:45.246407 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 16:54:45.250467 coreos-metadata[1470]: Jan 29 16:54:45.250 INFO Fetch successful Jan 29 16:54:45.250467 coreos-metadata[1470]: Jan 29 16:54:45.250 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 29 16:54:45.253941 coreos-metadata[1470]: Jan 29 16:54:45.253 INFO Fetch successful Jan 29 16:54:45.260119 jq[1496]: true Jan 29 16:54:45.315078 (ntainerd)[1503]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 16:54:45.315540 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 16:54:45.315572 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 16:54:45.318234 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 16:54:45.318264 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 16:54:45.347939 update_engine[1490]: I20250129 16:54:45.343708 1490 main.cc:92] Flatcar Update Engine starting Jan 29 16:54:45.350072 jq[1508]: true Jan 29 16:54:45.364659 tar[1501]: linux-amd64/helm Jan 29 16:54:45.365459 systemd[1]: Started update-engine.service - Update Engine. Jan 29 16:54:45.372952 update_engine[1490]: I20250129 16:54:45.372740 1490 update_check_scheduler.cc:74] Next update check in 11m20s Jan 29 16:54:45.375091 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 16:54:45.397378 systemd-logind[1483]: New seat seat0. Jan 29 16:54:45.399626 systemd-logind[1483]: Watching system buttons on /dev/input/event2 (Power Button) Jan 29 16:54:45.399651 systemd-logind[1483]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 16:54:45.399825 systemd[1]: Started systemd-logind.service - User Login Management. 
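coreos-metadata is querying the Hetzner metadata service at the link-local address shown in the fetch lines above. The same endpoints can be fetched by hand from the instance (URLs verbatim from the log; the curl invocation itself is illustrative):

    curl -s http://169.254.169.254/hetzner/v1/metadata
    curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks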
Jan 29 16:54:45.418428 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1317) Jan 29 16:54:45.474334 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 29 16:54:45.507462 extend-filesystems[1499]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 29 16:54:45.507462 extend-filesystems[1499]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 29 16:54:45.507462 extend-filesystems[1499]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 29 16:54:45.526054 extend-filesystems[1475]: Resized filesystem in /dev/sda9 Jan 29 16:54:45.526054 extend-filesystems[1475]: Found sr0 Jan 29 16:54:45.551812 bash[1538]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:54:45.529442 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 16:54:45.529721 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 16:54:45.544566 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 16:54:45.567735 systemd[1]: Starting sshkeys.service... Jan 29 16:54:45.576584 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 16:54:45.579799 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 16:54:45.623869 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 16:54:45.639578 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 16:54:45.698902 locksmithd[1519]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 16:54:45.717542 coreos-metadata[1550]: Jan 29 16:54:45.716 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 29 16:54:45.719143 coreos-metadata[1550]: Jan 29 16:54:45.718 INFO Fetch successful Jan 29 16:54:45.722778 unknown[1550]: wrote ssh authorized keys file for user: core Jan 29 16:54:45.777344 update-ssh-keys[1557]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:54:45.781838 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 16:54:45.786322 systemd[1]: Finished sshkeys.service. Jan 29 16:54:45.834312 sshd_keygen[1505]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 16:54:45.862482 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 16:54:45.877332 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 16:54:45.891120 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 16:54:45.891427 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 16:54:45.907257 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 16:54:45.911699 containerd[1503]: time="2025-01-29T16:54:45.911607230Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 16:54:45.932401 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 16:54:45.947100 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 16:54:45.959248 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 16:54:45.961038 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 16:54:45.966605 containerd[1503]: time="2025-01-29T16:54:45.966558944Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 29 16:54:45.968327 containerd[1503]: time="2025-01-29T16:54:45.968301160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:54:45.968401 containerd[1503]: time="2025-01-29T16:54:45.968387993Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 16:54:45.968450 containerd[1503]: time="2025-01-29T16:54:45.968439619Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 16:54:45.968673 containerd[1503]: time="2025-01-29T16:54:45.968655564Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 16:54:45.968746 containerd[1503]: time="2025-01-29T16:54:45.968731677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 16:54:45.969906 containerd[1503]: time="2025-01-29T16:54:45.968868423Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:54:45.969906 containerd[1503]: time="2025-01-29T16:54:45.968901095Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:54:45.969906 containerd[1503]: time="2025-01-29T16:54:45.969143519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:54:45.969906 containerd[1503]: time="2025-01-29T16:54:45.969170280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 16:54:45.969906 containerd[1503]: time="2025-01-29T16:54:45.969183574Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:54:45.969906 containerd[1503]: time="2025-01-29T16:54:45.969192751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 16:54:45.969906 containerd[1503]: time="2025-01-29T16:54:45.969282330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:54:45.969906 containerd[1503]: time="2025-01-29T16:54:45.969547657Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:54:45.969906 containerd[1503]: time="2025-01-29T16:54:45.969701385Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:54:45.969906 containerd[1503]: time="2025-01-29T16:54:45.969712466Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 29 16:54:45.969906 containerd[1503]: time="2025-01-29T16:54:45.969805791Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 16:54:45.970124 containerd[1503]: time="2025-01-29T16:54:45.969863299Z" level=info msg="metadata content store policy set" policy=shared Jan 29 16:54:45.978572 containerd[1503]: time="2025-01-29T16:54:45.978536729Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 16:54:45.978735 containerd[1503]: time="2025-01-29T16:54:45.978721696Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 16:54:45.978836 containerd[1503]: time="2025-01-29T16:54:45.978822856Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 16:54:45.978937 containerd[1503]: time="2025-01-29T16:54:45.978919808Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 16:54:45.979051 containerd[1503]: time="2025-01-29T16:54:45.979033792Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 16:54:45.979408 containerd[1503]: time="2025-01-29T16:54:45.979387415Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 16:54:45.979830 containerd[1503]: time="2025-01-29T16:54:45.979806060Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 16:54:45.980052 containerd[1503]: time="2025-01-29T16:54:45.980030941Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 16:54:45.980139 containerd[1503]: time="2025-01-29T16:54:45.980121561Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 16:54:45.980242 containerd[1503]: time="2025-01-29T16:54:45.980224464Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 16:54:45.980334 containerd[1503]: time="2025-01-29T16:54:45.980317579Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 16:54:45.980424 containerd[1503]: time="2025-01-29T16:54:45.980407316Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 16:54:45.980523 containerd[1503]: time="2025-01-29T16:54:45.980506923Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 16:54:45.980610 containerd[1503]: time="2025-01-29T16:54:45.980595199Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 16:54:45.980690 containerd[1503]: time="2025-01-29T16:54:45.980677173Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 16:54:45.980758 containerd[1503]: time="2025-01-29T16:54:45.980730433Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 16:54:45.980812 containerd[1503]: time="2025-01-29T16:54:45.980801466Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 29 16:54:45.980888 containerd[1503]: time="2025-01-29T16:54:45.980864284Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 16:54:45.981014 containerd[1503]: time="2025-01-29T16:54:45.981000348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 16:54:45.981083 containerd[1503]: time="2025-01-29T16:54:45.981071572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 16:54:45.981169 containerd[1503]: time="2025-01-29T16:54:45.981128108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 16:54:45.981251 containerd[1503]: time="2025-01-29T16:54:45.981235669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 16:54:45.981338 containerd[1503]: time="2025-01-29T16:54:45.981323985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 16:54:45.981409 containerd[1503]: time="2025-01-29T16:54:45.981397684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 16:54:45.981472 containerd[1503]: time="2025-01-29T16:54:45.981446315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 16:54:45.981586 containerd[1503]: time="2025-01-29T16:54:45.981507449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 16:54:45.981586 containerd[1503]: time="2025-01-29T16:54:45.981526374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 16:54:45.981586 containerd[1503]: time="2025-01-29T16:54:45.981542335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 16:54:45.981586 containerd[1503]: time="2025-01-29T16:54:45.981553846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 16:54:45.981586 containerd[1503]: time="2025-01-29T16:54:45.981566089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 16:54:45.981789 containerd[1503]: time="2025-01-29T16:54:45.981710941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 16:54:45.981789 containerd[1503]: time="2025-01-29T16:54:45.981730277Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 16:54:45.981789 containerd[1503]: time="2025-01-29T16:54:45.981751046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 16:54:45.981789 containerd[1503]: time="2025-01-29T16:54:45.981763890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 16:54:45.981965 containerd[1503]: time="2025-01-29T16:54:45.981885658Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 16:54:45.982079 containerd[1503]: time="2025-01-29T16:54:45.982024408Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 29 16:54:45.982079 containerd[1503]: time="2025-01-29T16:54:45.982047221Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 16:54:45.982079 containerd[1503]: time="2025-01-29T16:54:45.982057881Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 16:54:45.982429 containerd[1503]: time="2025-01-29T16:54:45.982240043Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 16:54:45.982429 containerd[1503]: time="2025-01-29T16:54:45.982282042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 16:54:45.982429 containerd[1503]: time="2025-01-29T16:54:45.982297951Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 16:54:45.982429 containerd[1503]: time="2025-01-29T16:54:45.982310455Z" level=info msg="NRI interface is disabled by configuration." Jan 29 16:54:45.982429 containerd[1503]: time="2025-01-29T16:54:45.982322948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 16:54:45.983078 containerd[1503]: time="2025-01-29T16:54:45.982923454Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 16:54:45.983078 containerd[1503]: time="2025-01-29T16:54:45.983004886Z" level=info msg="Connect containerd service" Jan 29 16:54:45.983078 containerd[1503]: time="2025-01-29T16:54:45.983041856Z" level=info msg="using legacy CRI server" Jan 29 16:54:45.983590 containerd[1503]: time="2025-01-29T16:54:45.983051193Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 16:54:45.983590 containerd[1503]: time="2025-01-29T16:54:45.983519171Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 16:54:45.984896 containerd[1503]: time="2025-01-29T16:54:45.984321225Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:54:45.984896 containerd[1503]: time="2025-01-29T16:54:45.984621838Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 16:54:45.984896 containerd[1503]: time="2025-01-29T16:54:45.984686129Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 16:54:45.984896 containerd[1503]: time="2025-01-29T16:54:45.984749407Z" level=info msg="Start subscribing containerd event" Jan 29 16:54:45.984896 containerd[1503]: time="2025-01-29T16:54:45.984789933Z" level=info msg="Start recovering state" Jan 29 16:54:45.984896 containerd[1503]: time="2025-01-29T16:54:45.984856188Z" level=info msg="Start event monitor" Jan 29 16:54:45.986035 containerd[1503]: time="2025-01-29T16:54:45.986014780Z" level=info msg="Start snapshots syncer" Jan 29 16:54:45.986785 containerd[1503]: time="2025-01-29T16:54:45.986764575Z" level=info msg="Start cni network conf syncer for default" Jan 29 16:54:45.986888 containerd[1503]: time="2025-01-29T16:54:45.986855576Z" level=info msg="Start streaming server" Jan 29 16:54:45.987516 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 16:54:45.990123 containerd[1503]: time="2025-01-29T16:54:45.990086113Z" level=info msg="containerd successfully booted in 0.083887s" Jan 29 16:54:46.075058 systemd-networkd[1409]: eth0: Gained IPv6LL Jan 29 16:54:46.082798 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 16:54:46.088560 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 16:54:46.101516 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:54:46.112820 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 16:54:46.151846 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 16:54:46.212801 tar[1501]: linux-amd64/LICENSE Jan 29 16:54:46.212801 tar[1501]: linux-amd64/README.md Jan 29 16:54:46.226547 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 16:54:46.843071 systemd-networkd[1409]: eth1: Gained IPv6LL Jan 29 16:54:47.276194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
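The long CRI config dump above reduces to a few settings that matter for the kubelet later in this log: runc via io.containerd.runc.v2 with SystemdCgroup=true, and registry.k8s.io/pause:3.8 as the sandbox image. The "failed to load cni during init" error is expected at this stage, since /etc/cni/net.d is still empty. A sketch of the equivalent containerd config.toml fragment (TOML paths per containerd 1.7 conventions; values read from the dump):

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true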
Jan 29 16:54:47.276308 (kubelet)[1601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:54:47.278690 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 16:54:47.282175 systemd[1]: Startup finished in 1.768s (kernel) + 7.697s (initrd) + 5.823s (userspace) = 15.289s. Jan 29 16:54:48.054035 kubelet[1601]: E0129 16:54:48.053935 1601 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:54:48.062706 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:54:48.063304 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:54:48.064362 systemd[1]: kubelet.service: Consumed 1.320s CPU time, 235.9M memory peak. Jan 29 16:54:58.313464 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 16:54:58.324557 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:54:58.507226 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:54:58.508679 (kubelet)[1619]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:54:58.552265 kubelet[1619]: E0129 16:54:58.552205 1619 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:54:58.566170 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:54:58.566378 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:54:58.566775 systemd[1]: kubelet.service: Consumed 215ms CPU time, 98M memory peak. Jan 29 16:55:08.817479 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 16:55:08.824161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:55:09.013830 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:55:09.020098 (kubelet)[1635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:55:09.071009 kubelet[1635]: E0129 16:55:09.070192 1635 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:55:09.076461 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:55:09.076734 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:55:09.077310 systemd[1]: kubelet.service: Consumed 213ms CPU time, 95.8M memory peak. Jan 29 16:55:19.328258 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 16:55:19.338373 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
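The kubelet exits because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is written by kubeadm init or kubeadm join, which has not run at this point in the boot. A hypothetical minimal KubeletConfiguration, shown only to illustrate the file format the error refers to:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd    # matches SystemdCgroup=true in the containerd config above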
Jan 29 16:55:19.533773 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:55:19.538044 (kubelet)[1650]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:55:19.602620 kubelet[1650]: E0129 16:55:19.602428 1650 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:55:19.607578 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:55:19.608082 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:55:19.608727 systemd[1]: kubelet.service: Consumed 239ms CPU time, 96M memory peak. Jan 29 16:55:29.807798 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 29 16:55:29.814234 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:55:29.984741 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:55:29.990111 (kubelet)[1665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:55:30.038791 kubelet[1665]: E0129 16:55:30.038730 1665 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:55:30.041789 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:55:30.041990 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:55:30.042314 systemd[1]: kubelet.service: Consumed 204ms CPU time, 95.8M memory peak. Jan 29 16:55:30.995166 update_engine[1490]: I20250129 16:55:30.995040 1490 update_attempter.cc:509] Updating boot flags... Jan 29 16:55:31.093379 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1681) Jan 29 16:55:31.182948 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1680) Jan 29 16:55:31.241910 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1680) Jan 29 16:55:40.057913 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 29 16:55:40.065387 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:55:40.248147 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:55:40.251264 (kubelet)[1701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:55:40.285470 kubelet[1701]: E0129 16:55:40.285333 1701 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:55:40.291740 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:55:40.291954 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:55:40.292540 systemd[1]: kubelet.service: Consumed 206ms CPU time, 95.5M memory peak. Jan 29 16:55:50.308302 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 29 16:55:50.315838 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:55:50.517181 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:55:50.521455 (kubelet)[1716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:55:50.555910 kubelet[1716]: E0129 16:55:50.555495 1716 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:55:50.564001 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:55:50.564202 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:55:50.564645 systemd[1]: kubelet.service: Consumed 223ms CPU time, 97.6M memory peak. Jan 29 16:56:00.808391 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 29 16:56:00.821216 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:56:01.032140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:56:01.036767 (kubelet)[1732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:56:01.072303 kubelet[1732]: E0129 16:56:01.072121 1732 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:56:01.079386 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:56:01.079629 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:56:01.080175 systemd[1]: kubelet.service: Consumed 221ms CPU time, 97.3M memory peak. Jan 29 16:56:11.307362 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 29 16:56:11.315093 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:56:11.501300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
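The "Scheduled restart job" entries arrive on a steady ten-second cadence, consistent with a kubelet unit configured along these lines (assumed; the drop-in actually installed on this image may differ):

    [Service]
    Restart=always
    RestartSec=10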
Jan 29 16:56:11.502802 (kubelet)[1747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:56:11.551645 kubelet[1747]: E0129 16:56:11.551592 1747 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:56:11.555643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:56:11.556043 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:56:11.556623 systemd[1]: kubelet.service: Consumed 218ms CPU time, 97M memory peak. Jan 29 16:56:21.559167 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 29 16:56:21.568504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:56:21.837286 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:56:21.837967 (kubelet)[1763]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:56:21.914116 kubelet[1763]: E0129 16:56:21.914016 1763 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:56:21.919124 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:56:21.919483 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:56:21.920145 systemd[1]: kubelet.service: Consumed 303ms CPU time, 95.3M memory peak. Jan 29 16:56:32.058067 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 29 16:56:32.071538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:56:32.250337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:56:32.263510 (kubelet)[1778]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:56:32.303680 kubelet[1778]: E0129 16:56:32.303529 1778 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:56:32.310705 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:56:32.310954 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:56:32.311496 systemd[1]: kubelet.service: Consumed 219ms CPU time, 96.3M memory peak. Jan 29 16:56:36.863753 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 16:56:36.875480 systemd[1]: Started sshd@0-116.202.14.223:22-147.75.109.163:55770.service - OpenSSH per-connection server daemon (147.75.109.163:55770). 
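Unit names of the form sshd@0-<local>:22-<remote>:<port>.service are per-connection instances spawned by socket activation. The pattern corresponds to an Accept=yes socket unit, sketched here (assumed shape, not the exact Flatcar file):

    [Socket]
    ListenStream=22
    Accept=yes    # fork one sshd@... instance per incoming connection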
Jan 29 16:56:37.896363 sshd[1786]: Accepted publickey for core from 147.75.109.163 port 55770 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 16:56:37.900126 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:56:37.914619 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 16:56:37.924442 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 16:56:37.946466 systemd-logind[1483]: New session 1 of user core. Jan 29 16:56:37.958111 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 16:56:37.968540 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 16:56:37.985993 (systemd)[1790]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 16:56:37.990267 systemd-logind[1483]: New session c1 of user core. Jan 29 16:56:38.189567 systemd[1790]: Queued start job for default target default.target. Jan 29 16:56:38.196327 systemd[1790]: Created slice app.slice - User Application Slice. Jan 29 16:56:38.196357 systemd[1790]: Reached target paths.target - Paths. Jan 29 16:56:38.196401 systemd[1790]: Reached target timers.target - Timers. Jan 29 16:56:38.198319 systemd[1790]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 16:56:38.212748 systemd[1790]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 16:56:38.212957 systemd[1790]: Reached target sockets.target - Sockets. Jan 29 16:56:38.213006 systemd[1790]: Reached target basic.target - Basic System. Jan 29 16:56:38.213050 systemd[1790]: Reached target default.target - Main User Target. Jan 29 16:56:38.213084 systemd[1790]: Startup finished in 214ms. Jan 29 16:56:38.213716 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 16:56:38.222334 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 16:56:38.924386 systemd[1]: Started sshd@1-116.202.14.223:22-147.75.109.163:56354.service - OpenSSH per-connection server daemon (147.75.109.163:56354). Jan 29 16:56:39.942632 sshd[1802]: Accepted publickey for core from 147.75.109.163 port 56354 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 16:56:39.945711 sshd-session[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:56:39.956164 systemd-logind[1483]: New session 2 of user core. Jan 29 16:56:39.966305 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 16:56:40.621580 sshd[1804]: Connection closed by 147.75.109.163 port 56354 Jan 29 16:56:40.622311 sshd-session[1802]: pam_unix(sshd:session): session closed for user core Jan 29 16:56:40.627607 systemd-logind[1483]: Session 2 logged out. Waiting for processes to exit. Jan 29 16:56:40.628318 systemd[1]: sshd@1-116.202.14.223:22-147.75.109.163:56354.service: Deactivated successfully. Jan 29 16:56:40.630537 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 16:56:40.631505 systemd-logind[1483]: Removed session 2. Jan 29 16:56:40.801473 systemd[1]: Started sshd@2-116.202.14.223:22-147.75.109.163:56356.service - OpenSSH per-connection server daemon (147.75.109.163:56356). 
Jan 29 16:56:41.818539 sshd[1810]: Accepted publickey for core from 147.75.109.163 port 56356 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 16:56:41.821304 sshd-session[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:56:41.826238 systemd-logind[1483]: New session 3 of user core. Jan 29 16:56:41.833074 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 16:56:42.493108 sshd[1812]: Connection closed by 147.75.109.163 port 56356 Jan 29 16:56:42.494723 sshd-session[1810]: pam_unix(sshd:session): session closed for user core Jan 29 16:56:42.500323 systemd[1]: sshd@2-116.202.14.223:22-147.75.109.163:56356.service: Deactivated successfully. Jan 29 16:56:42.503115 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 16:56:42.504594 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 29 16:56:42.506599 systemd-logind[1483]: Session 3 logged out. Waiting for processes to exit. Jan 29 16:56:42.507819 systemd-logind[1483]: Removed session 3. Jan 29 16:56:42.515567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:56:42.669338 systemd[1]: Started sshd@3-116.202.14.223:22-147.75.109.163:56366.service - OpenSSH per-connection server daemon (147.75.109.163:56366). Jan 29 16:56:42.672025 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:56:42.683471 (kubelet)[1826]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:56:42.725461 kubelet[1826]: E0129 16:56:42.725380 1826 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:56:42.729990 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:56:42.730195 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:56:42.730567 systemd[1]: kubelet.service: Consumed 184ms CPU time, 95.6M memory peak. Jan 29 16:56:43.683234 sshd[1825]: Accepted publickey for core from 147.75.109.163 port 56366 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 16:56:43.686105 sshd-session[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:56:43.693065 systemd-logind[1483]: New session 4 of user core. Jan 29 16:56:43.705303 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 16:56:44.379860 sshd[1835]: Connection closed by 147.75.109.163 port 56366 Jan 29 16:56:44.381415 sshd-session[1825]: pam_unix(sshd:session): session closed for user core Jan 29 16:56:44.389198 systemd[1]: sshd@3-116.202.14.223:22-147.75.109.163:56366.service: Deactivated successfully. Jan 29 16:56:44.395149 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 16:56:44.399326 systemd-logind[1483]: Session 4 logged out. Waiting for processes to exit. Jan 29 16:56:44.402839 systemd-logind[1483]: Removed session 4. Jan 29 16:56:44.567435 systemd[1]: Started sshd@4-116.202.14.223:22-147.75.109.163:56380.service - OpenSSH per-connection server daemon (147.75.109.163:56380). 
Jan 29 16:56:45.588519 sshd[1841]: Accepted publickey for core from 147.75.109.163 port 56380 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 16:56:45.590826 sshd-session[1841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:56:45.597701 systemd-logind[1483]: New session 5 of user core. Jan 29 16:56:45.612234 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 16:56:46.126434 sudo[1844]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 16:56:46.126853 sudo[1844]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:56:46.146915 sudo[1844]: pam_unix(sudo:session): session closed for user root Jan 29 16:56:46.307457 sshd[1843]: Connection closed by 147.75.109.163 port 56380 Jan 29 16:56:46.308861 sshd-session[1841]: pam_unix(sshd:session): session closed for user core Jan 29 16:56:46.314232 systemd[1]: sshd@4-116.202.14.223:22-147.75.109.163:56380.service: Deactivated successfully. Jan 29 16:56:46.316693 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 16:56:46.317860 systemd-logind[1483]: Session 5 logged out. Waiting for processes to exit. Jan 29 16:56:46.319506 systemd-logind[1483]: Removed session 5. Jan 29 16:56:46.488975 systemd[1]: Started sshd@5-116.202.14.223:22-147.75.109.163:56382.service - OpenSSH per-connection server daemon (147.75.109.163:56382). Jan 29 16:56:47.500852 sshd[1850]: Accepted publickey for core from 147.75.109.163 port 56382 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 16:56:47.504292 sshd-session[1850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:56:47.513648 systemd-logind[1483]: New session 6 of user core. Jan 29 16:56:47.525201 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 16:56:48.028513 sudo[1854]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 16:56:48.029145 sudo[1854]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:56:48.035389 sudo[1854]: pam_unix(sudo:session): session closed for user root Jan 29 16:56:48.043761 sudo[1853]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 16:56:48.044268 sudo[1853]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:56:48.065581 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:56:48.106278 augenrules[1876]: No rules Jan 29 16:56:48.107278 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:56:48.107654 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:56:48.110511 sudo[1853]: pam_unix(sudo:session): session closed for user root Jan 29 16:56:48.268576 sshd[1852]: Connection closed by 147.75.109.163 port 56382 Jan 29 16:56:48.269655 sshd-session[1850]: pam_unix(sshd:session): session closed for user core Jan 29 16:56:48.275624 systemd[1]: sshd@5-116.202.14.223:22-147.75.109.163:56382.service: Deactivated successfully. Jan 29 16:56:48.280062 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 16:56:48.282930 systemd-logind[1483]: Session 6 logged out. Waiting for processes to exit. Jan 29 16:56:48.284609 systemd-logind[1483]: Removed session 6. Jan 29 16:56:48.454456 systemd[1]: Started sshd@6-116.202.14.223:22-147.75.109.163:53970.service - OpenSSH per-connection server daemon (147.75.109.163:53970). 
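Sessions 5 and 6 run the provisioning steps recorded above: setenforce 1, removal of the default audit rule files, and a restart of audit-rules.service, after which augenrules reports "No rules". To check the resulting (empty) audit ruleset on a comparable host:

    # Verify the audit state after the rules files were removed
    sudo augenrules --check   # reports whether on-disk rules match the loaded set
    sudo auditctl -l          # prints "No rules" when the ruleset is empty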
Jan 29 16:56:49.453282 sshd[1885]: Accepted publickey for core from 147.75.109.163 port 53970 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 16:56:49.456314 sshd-session[1885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:56:49.467229 systemd-logind[1483]: New session 7 of user core. Jan 29 16:56:49.479211 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 16:56:49.983769 sudo[1888]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 16:56:49.984781 sudo[1888]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:56:50.486638 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 16:56:50.487229 (dockerd)[1906]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 16:56:50.986090 dockerd[1906]: time="2025-01-29T16:56:50.985949117Z" level=info msg="Starting up" Jan 29 16:56:51.098863 systemd[1]: var-lib-docker-metacopy\x2dcheck911245656-merged.mount: Deactivated successfully. Jan 29 16:56:51.137437 dockerd[1906]: time="2025-01-29T16:56:51.137344808Z" level=info msg="Loading containers: start." Jan 29 16:56:51.386994 kernel: Initializing XFRM netlink socket Jan 29 16:56:51.525419 systemd-networkd[1409]: docker0: Link UP Jan 29 16:56:51.563735 dockerd[1906]: time="2025-01-29T16:56:51.563675662Z" level=info msg="Loading containers: done." Jan 29 16:56:51.593197 dockerd[1906]: time="2025-01-29T16:56:51.593045516Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 16:56:51.593519 dockerd[1906]: time="2025-01-29T16:56:51.593250745Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 29 16:56:51.593519 dockerd[1906]: time="2025-01-29T16:56:51.593461012Z" level=info msg="Daemon has completed initialization" Jan 29 16:56:51.665969 dockerd[1906]: time="2025-01-29T16:56:51.664590960Z" level=info msg="API listen on /run/docker.sock" Jan 29 16:56:51.664993 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 16:56:52.807135 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 29 16:56:52.816345 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:56:53.020029 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
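dockerd comes up on overlay2 and warns that native diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled; per the message this only concerns image-build performance. A sketch for confirming the driver and the kernel option (whether /proc/config.gz is available varies by kernel build, hence the hedge):

    # Confirm the storage driver and the kernel option behind the warning
    docker info --format '{{.Driver}}'
    zcat /proc/config.gz 2>/dev/null | grep OVERLAY_FS_REDIRECT_DIR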
Jan 29 16:56:53.025766 (kubelet)[2101]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:56:53.045836 containerd[1503]: time="2025-01-29T16:56:53.045770730Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 29 16:56:53.095052 kubelet[2101]: E0129 16:56:53.093724 2101 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:56:53.098239 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:56:53.098577 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:56:53.099688 systemd[1]: kubelet.service: Consumed 246ms CPU time, 95.6M memory peak. Jan 29 16:56:53.751799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount659452300.mount: Deactivated successfully. Jan 29 16:56:54.736462 containerd[1503]: time="2025-01-29T16:56:54.736389164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:56:54.738103 containerd[1503]: time="2025-01-29T16:56:54.738056154Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976813" Jan 29 16:56:54.739772 containerd[1503]: time="2025-01-29T16:56:54.739685092Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:56:54.743175 containerd[1503]: time="2025-01-29T16:56:54.743119746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:56:54.744363 containerd[1503]: time="2025-01-29T16:56:54.744168894Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 1.698334116s" Jan 29 16:56:54.744363 containerd[1503]: time="2025-01-29T16:56:54.744225089Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 29 16:56:54.746368 containerd[1503]: time="2025-01-29T16:56:54.746334013Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 29 16:56:56.052751 containerd[1503]: time="2025-01-29T16:56:56.052657675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:56:56.054077 containerd[1503]: time="2025-01-29T16:56:56.054020274Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701163" Jan 29 16:56:56.055549 containerd[1503]: time="2025-01-29T16:56:56.055480154Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:56:56.059749 containerd[1503]: time="2025-01-29T16:56:56.059660332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:56:56.061222 containerd[1503]: time="2025-01-29T16:56:56.061176385Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 1.314792881s" Jan 29 16:56:56.061274 containerd[1503]: time="2025-01-29T16:56:56.061226088Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 29 16:56:56.061833 containerd[1503]: time="2025-01-29T16:56:56.061720912Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 29 16:56:57.209551 containerd[1503]: time="2025-01-29T16:56:57.209451579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:56:57.211186 containerd[1503]: time="2025-01-29T16:56:57.211113624Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652073" Jan 29 16:56:57.212855 containerd[1503]: time="2025-01-29T16:56:57.212807276Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:56:57.216503 containerd[1503]: time="2025-01-29T16:56:57.216440958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:56:57.217975 containerd[1503]: time="2025-01-29T16:56:57.217426701Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.155674061s" Jan 29 16:56:57.217975 containerd[1503]: time="2025-01-29T16:56:57.217463339Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 29 16:56:57.218592 containerd[1503]: time="2025-01-29T16:56:57.218305006Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 16:56:58.263014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2014483243.mount: Deactivated successfully. 
Jan 29 16:56:58.644206 containerd[1503]: time="2025-01-29T16:56:58.644132924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:56:58.645371 containerd[1503]: time="2025-01-29T16:56:58.645323637Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231154" Jan 29 16:56:58.646527 containerd[1503]: time="2025-01-29T16:56:58.646476561Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:56:58.648906 containerd[1503]: time="2025-01-29T16:56:58.648843691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:56:58.649405 containerd[1503]: time="2025-01-29T16:56:58.649370998Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 1.431034283s" Jan 29 16:56:58.649405 containerd[1503]: time="2025-01-29T16:56:58.649402957Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 29 16:56:58.650172 containerd[1503]: time="2025-01-29T16:56:58.649936464Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 16:56:59.284149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount687430741.mount: Deactivated successfully. 
Jan 29 16:57:00.239458 containerd[1503]: time="2025-01-29T16:57:00.239369925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:57:00.241103 containerd[1503]: time="2025-01-29T16:57:00.241024570Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185841" Jan 29 16:57:00.242871 containerd[1503]: time="2025-01-29T16:57:00.242790160Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:57:00.246490 containerd[1503]: time="2025-01-29T16:57:00.246420293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:57:00.247975 containerd[1503]: time="2025-01-29T16:57:00.247344284Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.597386641s" Jan 29 16:57:00.247975 containerd[1503]: time="2025-01-29T16:57:00.247373969Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 16:57:00.247975 containerd[1503]: time="2025-01-29T16:57:00.247779721Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 16:57:00.787577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2184351202.mount: Deactivated successfully. 
Jan 29 16:57:00.798387 containerd[1503]: time="2025-01-29T16:57:00.798299661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:57:00.799986 containerd[1503]: time="2025-01-29T16:57:00.799852227Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321158" Jan 29 16:57:00.801070 containerd[1503]: time="2025-01-29T16:57:00.800965690Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:57:00.805982 containerd[1503]: time="2025-01-29T16:57:00.805830039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:57:00.807624 containerd[1503]: time="2025-01-29T16:57:00.807325007Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 559.519899ms" Jan 29 16:57:00.807624 containerd[1503]: time="2025-01-29T16:57:00.807389006Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 29 16:57:00.808606 containerd[1503]: time="2025-01-29T16:57:00.808194819Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 29 16:57:01.409118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2334817680.mount: Deactivated successfully. Jan 29 16:57:02.929655 containerd[1503]: time="2025-01-29T16:57:02.929574104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:57:02.931633 containerd[1503]: time="2025-01-29T16:57:02.931575103Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780035" Jan 29 16:57:02.933228 containerd[1503]: time="2025-01-29T16:57:02.933171974Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:57:02.936710 containerd[1503]: time="2025-01-29T16:57:02.936672401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:57:02.938019 containerd[1503]: time="2025-01-29T16:57:02.937534910Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.129259702s" Jan 29 16:57:02.938019 containerd[1503]: time="2025-01-29T16:57:02.937563013Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 29 16:57:03.308361 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. 
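The pulls above fetch the complete v1.31.5 control-plane image set (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy) plus coredns v1.11.1, pause 3.10, and etcd 3.5.15-0. On a kubeadm-managed node the same set can be pre-pulled and listed; a sketch using the versions from the log:

    # Pre-pull and inspect the control-plane images (kubeadm and crictl assumed present)
    kubeadm config images pull --kubernetes-version v1.31.5
    crictl images | grep registry.k8s.io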
Jan 29 16:57:03.316245 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:57:03.506321 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:57:03.506740 (kubelet)[2298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:57:03.618934 kubelet[2298]: E0129 16:57:03.617751 2298 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:57:03.622163 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:57:03.622373 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:57:03.622781 systemd[1]: kubelet.service: Consumed 254ms CPU time, 96.4M memory peak. Jan 29 16:57:06.119792 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:57:06.120412 systemd[1]: kubelet.service: Consumed 254ms CPU time, 96.4M memory peak. Jan 29 16:57:06.135618 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:57:06.189175 systemd[1]: Reload requested from client PID 2325 ('systemctl') (unit session-7.scope)... Jan 29 16:57:06.189196 systemd[1]: Reloading... Jan 29 16:57:06.354952 zram_generator::config[2376]: No configuration found. Jan 29 16:57:06.478106 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:57:06.614962 systemd[1]: Reloading finished in 425 ms. Jan 29 16:57:06.683862 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 16:57:06.684000 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 16:57:06.684295 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:57:06.684340 systemd[1]: kubelet.service: Consumed 149ms CPU time, 83.3M memory peak. Jan 29 16:57:06.686454 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:57:06.881176 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:57:06.881415 (kubelet)[2424]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:57:06.933937 kubelet[2424]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:57:06.935182 kubelet[2424]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 16:57:06.935290 kubelet[2424]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
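After the reload the kubelet finally starts with a real configuration, but warns that --container-runtime-endpoint, --pod-infra-container-image, and --volume-plugin-dir are deprecated flags that belong in the KubeletConfiguration file. A purely illustrative sketch of the equivalent config fields follows; this is not the node's actual file, and the endpoint value is assumed from the containerd runtime seen later in the log (the volume plugin path appears verbatim in a later probe.go line):

    # Equivalent KubeletConfiguration fields for the deprecated flags (illustrative only)
    cat <<'EOF' > /tmp/kubelet-config-example.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF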
Jan 29 16:57:06.941990 kubelet[2424]: I0129 16:57:06.941871 2424 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:57:07.164111 kubelet[2424]: I0129 16:57:07.163980 2424 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 16:57:07.164355 kubelet[2424]: I0129 16:57:07.164341 2424 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:57:07.165171 kubelet[2424]: I0129 16:57:07.165146 2424 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 16:57:07.200186 kubelet[2424]: I0129 16:57:07.200154 2424 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:57:07.202256 kubelet[2424]: E0129 16:57:07.201943 2424 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://116.202.14.223:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 116.202.14.223:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:57:07.210189 kubelet[2424]: E0129 16:57:07.210152 2424 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 16:57:07.210189 kubelet[2424]: I0129 16:57:07.210178 2424 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 16:57:07.215504 kubelet[2424]: I0129 16:57:07.215477 2424 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 16:57:07.221248 kubelet[2424]: I0129 16:57:07.221221 2424 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 16:57:07.221421 kubelet[2424]: I0129 16:57:07.221380 2424 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:57:07.221615 kubelet[2424]: I0129 16:57:07.221410 2424 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-0-b-bb52c92a60","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 16:57:07.221615 kubelet[2424]: I0129 16:57:07.221609 2424 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:57:07.221768 kubelet[2424]: I0129 16:57:07.221623 2424 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 16:57:07.221768 kubelet[2424]: I0129 16:57:07.221758 2424 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:57:07.224287 kubelet[2424]: I0129 16:57:07.224093 2424 kubelet.go:408] "Attempting to sync node with API server" Jan 29 16:57:07.224287 kubelet[2424]: I0129 16:57:07.224114 2424 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:57:07.224287 kubelet[2424]: I0129 16:57:07.224145 2424 kubelet.go:314] "Adding apiserver pod source" Jan 29 16:57:07.224287 kubelet[2424]: I0129 16:57:07.224159 2424 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:57:07.235196 kubelet[2424]: W0129 16:57:07.235141 2424 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://116.202.14.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-b-bb52c92a60&limit=500&resourceVersion=0": dial tcp 116.202.14.223:6443: connect: connection refused Jan 29 16:57:07.235196 kubelet[2424]: E0129 16:57:07.235194 2424 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://116.202.14.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-b-bb52c92a60&limit=500&resourceVersion=0\": dial tcp 116.202.14.223:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:57:07.236008 kubelet[2424]: W0129 16:57:07.235463 2424 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://116.202.14.223:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 116.202.14.223:6443: connect: connection refused Jan 29 16:57:07.236008 kubelet[2424]: E0129 16:57:07.235494 2424 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://116.202.14.223:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 116.202.14.223:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:57:07.236008 kubelet[2424]: I0129 16:57:07.235851 2424 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:57:07.239474 kubelet[2424]: I0129 16:57:07.239278 2424 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:57:07.240747 kubelet[2424]: W0129 16:57:07.240670 2424 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 16:57:07.245946 kubelet[2424]: I0129 16:57:07.244347 2424 server.go:1269] "Started kubelet" Jan 29 16:57:07.249036 kubelet[2424]: I0129 16:57:07.248066 2424 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:57:07.257055 kubelet[2424]: I0129 16:57:07.256992 2424 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 16:57:07.262344 kubelet[2424]: I0129 16:57:07.262312 2424 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 16:57:07.262522 kubelet[2424]: E0129 16:57:07.260160 2424 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-0-0-b-bb52c92a60\" not found" Jan 29 16:57:07.262621 kubelet[2424]: I0129 16:57:07.260205 2424 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:57:07.263098 kubelet[2424]: I0129 16:57:07.263068 2424 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:57:07.264764 kubelet[2424]: I0129 16:57:07.264742 2424 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:57:07.264931 kubelet[2424]: I0129 16:57:07.260143 2424 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:57:07.266457 kubelet[2424]: I0129 16:57:07.266432 2424 server.go:460] "Adding debug handlers to kubelet server" Jan 29 16:57:07.267612 kubelet[2424]: I0129 16:57:07.266539 2424 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 16:57:07.268195 kubelet[2424]: W0129 16:57:07.268145 2424 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://116.202.14.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 116.202.14.223:6443: connect: connection refused Jan 29 16:57:07.268335 kubelet[2424]: E0129 16:57:07.268312 2424 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://116.202.14.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 116.202.14.223:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:57:07.268495 kubelet[2424]: E0129 16:57:07.268466 2424 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://116.202.14.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-b-bb52c92a60?timeout=10s\": dial tcp 116.202.14.223:6443: connect: connection refused" interval="200ms" Jan 29 16:57:07.271794 kubelet[2424]: I0129 16:57:07.270983 2424 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:57:07.271794 kubelet[2424]: I0129 16:57:07.271069 2424 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:57:07.274691 kubelet[2424]: E0129 16:57:07.269622 2424 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://116.202.14.223:6443/api/v1/namespaces/default/events\": dial tcp 116.202.14.223:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-0-0-b-bb52c92a60.181f383fa832d22c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-0-b-bb52c92a60,UID:ci-4230-0-0-b-bb52c92a60,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-0-0-b-bb52c92a60,},FirstTimestamp:2025-01-29 16:57:07.244298796 +0000 UTC m=+0.357498265,LastTimestamp:2025-01-29 16:57:07.244298796 +0000 UTC m=+0.357498265,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-0-b-bb52c92a60,}" Jan 29 16:57:07.276237 kubelet[2424]: I0129 16:57:07.276215 2424 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:57:07.291618 kubelet[2424]: I0129 16:57:07.291471 2424 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:57:07.292990 kubelet[2424]: I0129 16:57:07.292975 2424 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 16:57:07.293371 kubelet[2424]: I0129 16:57:07.293059 2424 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:57:07.293371 kubelet[2424]: I0129 16:57:07.293082 2424 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 16:57:07.293371 kubelet[2424]: E0129 16:57:07.293130 2424 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:57:07.307079 kubelet[2424]: W0129 16:57:07.305645 2424 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://116.202.14.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 116.202.14.223:6443: connect: connection refused Jan 29 16:57:07.307079 kubelet[2424]: E0129 16:57:07.305715 2424 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://116.202.14.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 116.202.14.223:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:57:07.307984 kubelet[2424]: E0129 16:57:07.307933 2424 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:57:07.312816 kubelet[2424]: I0129 16:57:07.312798 2424 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 16:57:07.312967 kubelet[2424]: I0129 16:57:07.312955 2424 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 16:57:07.313075 kubelet[2424]: I0129 16:57:07.313065 2424 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:57:07.317580 kubelet[2424]: I0129 16:57:07.317562 2424 policy_none.go:49] "None policy: Start" Jan 29 16:57:07.318267 kubelet[2424]: I0129 16:57:07.318254 2424 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:57:07.318426 kubelet[2424]: I0129 16:57:07.318412 2424 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:57:07.327777 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 16:57:07.348631 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 16:57:07.354149 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 29 16:57:07.362733 kubelet[2424]: E0129 16:57:07.362670 2424 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-0-0-b-bb52c92a60\" not found" Jan 29 16:57:07.363706 kubelet[2424]: I0129 16:57:07.363214 2424 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:57:07.363706 kubelet[2424]: I0129 16:57:07.363482 2424 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 16:57:07.363706 kubelet[2424]: I0129 16:57:07.363494 2424 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:57:07.364519 kubelet[2424]: I0129 16:57:07.364497 2424 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:57:07.365730 kubelet[2424]: E0129 16:57:07.365703 2424 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-0-0-b-bb52c92a60\" not found" Jan 29 16:57:07.411391 systemd[1]: Created slice kubepods-burstable-podcfb23053a384317632dc58b95a1ac53a.slice - libcontainer container kubepods-burstable-podcfb23053a384317632dc58b95a1ac53a.slice. Jan 29 16:57:07.430804 systemd[1]: Created slice kubepods-burstable-podae4fa30a29d0ae8b6fbed4bcb97b56c0.slice - libcontainer container kubepods-burstable-podae4fa30a29d0ae8b6fbed4bcb97b56c0.slice. Jan 29 16:57:07.452099 systemd[1]: Created slice kubepods-burstable-pod0edb73e790d8e6e04834fe3d03df0776.slice - libcontainer container kubepods-burstable-pod0edb73e790d8e6e04834fe3d03df0776.slice. Jan 29 16:57:07.465728 kubelet[2424]: I0129 16:57:07.465684 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ae4fa30a29d0ae8b6fbed4bcb97b56c0-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-0-b-bb52c92a60\" (UID: \"ae4fa30a29d0ae8b6fbed4bcb97b56c0\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:07.467238 kubelet[2424]: I0129 16:57:07.466099 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae4fa30a29d0ae8b6fbed4bcb97b56c0-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-0-b-bb52c92a60\" (UID: \"ae4fa30a29d0ae8b6fbed4bcb97b56c0\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:07.467238 kubelet[2424]: I0129 16:57:07.466164 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ae4fa30a29d0ae8b6fbed4bcb97b56c0-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-0-b-bb52c92a60\" (UID: \"ae4fa30a29d0ae8b6fbed4bcb97b56c0\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:07.467238 kubelet[2424]: I0129 16:57:07.466196 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae4fa30a29d0ae8b6fbed4bcb97b56c0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-0-b-bb52c92a60\" (UID: \"ae4fa30a29d0ae8b6fbed4bcb97b56c0\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:07.467238 kubelet[2424]: I0129 16:57:07.466227 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/0edb73e790d8e6e04834fe3d03df0776-kubeconfig\") pod \"kube-scheduler-ci-4230-0-0-b-bb52c92a60\" (UID: \"0edb73e790d8e6e04834fe3d03df0776\") " pod="kube-system/kube-scheduler-ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:07.467238 kubelet[2424]: I0129 16:57:07.466256 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cfb23053a384317632dc58b95a1ac53a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-0-b-bb52c92a60\" (UID: \"cfb23053a384317632dc58b95a1ac53a\") " pod="kube-system/kube-apiserver-ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:07.467583 kubelet[2424]: I0129 16:57:07.466285 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae4fa30a29d0ae8b6fbed4bcb97b56c0-ca-certs\") pod \"kube-controller-manager-ci-4230-0-0-b-bb52c92a60\" (UID: \"ae4fa30a29d0ae8b6fbed4bcb97b56c0\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:07.467583 kubelet[2424]: I0129 16:57:07.466311 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cfb23053a384317632dc58b95a1ac53a-ca-certs\") pod \"kube-apiserver-ci-4230-0-0-b-bb52c92a60\" (UID: \"cfb23053a384317632dc58b95a1ac53a\") " pod="kube-system/kube-apiserver-ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:07.467583 kubelet[2424]: I0129 16:57:07.467054 2424 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:07.467583 kubelet[2424]: I0129 16:57:07.467157 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cfb23053a384317632dc58b95a1ac53a-k8s-certs\") pod \"kube-apiserver-ci-4230-0-0-b-bb52c92a60\" (UID: \"cfb23053a384317632dc58b95a1ac53a\") " pod="kube-system/kube-apiserver-ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:07.467583 kubelet[2424]: E0129 16:57:07.467460 2424 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://116.202.14.223:6443/api/v1/nodes\": dial tcp 116.202.14.223:6443: connect: connection refused" node="ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:07.469146 kubelet[2424]: E0129 16:57:07.469089 2424 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://116.202.14.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-b-bb52c92a60?timeout=10s\": dial tcp 116.202.14.223:6443: connect: connection refused" interval="400ms" Jan 29 16:57:07.484549 systemd[1]: Started sshd@7-116.202.14.223:22-114.218.158.114:41966.service - OpenSSH per-connection server daemon (114.218.158.114:41966). 
Jan 29 16:57:07.671073 kubelet[2424]: I0129 16:57:07.670956 2424 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:07.671769 kubelet[2424]: E0129 16:57:07.671616 2424 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://116.202.14.223:6443/api/v1/nodes\": dial tcp 116.202.14.223:6443: connect: connection refused" node="ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:07.728052 containerd[1503]: time="2025-01-29T16:57:07.727785738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-0-b-bb52c92a60,Uid:cfb23053a384317632dc58b95a1ac53a,Namespace:kube-system,Attempt:0,}" Jan 29 16:57:07.748602 containerd[1503]: time="2025-01-29T16:57:07.748529297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-0-b-bb52c92a60,Uid:ae4fa30a29d0ae8b6fbed4bcb97b56c0,Namespace:kube-system,Attempt:0,}" Jan 29 16:57:07.757743 containerd[1503]: time="2025-01-29T16:57:07.757540810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-0-b-bb52c92a60,Uid:0edb73e790d8e6e04834fe3d03df0776,Namespace:kube-system,Attempt:0,}" Jan 29 16:57:07.870240 kubelet[2424]: E0129 16:57:07.870145 2424 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://116.202.14.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-b-bb52c92a60?timeout=10s\": dial tcp 116.202.14.223:6443: connect: connection refused" interval="800ms" Jan 29 16:57:08.075243 kubelet[2424]: I0129 16:57:08.075197 2424 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:08.075736 kubelet[2424]: E0129 16:57:08.075621 2424 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://116.202.14.223:6443/api/v1/nodes\": dial tcp 116.202.14.223:6443: connect: connection refused" node="ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:08.133754 kubelet[2424]: W0129 16:57:08.133663 2424 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://116.202.14.223:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 116.202.14.223:6443: connect: connection refused Jan 29 16:57:08.133939 kubelet[2424]: E0129 16:57:08.133765 2424 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://116.202.14.223:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 116.202.14.223:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:57:08.187304 kubelet[2424]: W0129 16:57:08.187161 2424 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://116.202.14.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-b-bb52c92a60&limit=500&resourceVersion=0": dial tcp 116.202.14.223:6443: connect: connection refused Jan 29 16:57:08.187304 kubelet[2424]: E0129 16:57:08.187291 2424 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://116.202.14.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-b-bb52c92a60&limit=500&resourceVersion=0\": dial tcp 116.202.14.223:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:57:08.288116 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2631639449.mount: Deactivated successfully. Jan 29 16:57:08.299311 containerd[1503]: time="2025-01-29T16:57:08.299234709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:57:08.305707 containerd[1503]: time="2025-01-29T16:57:08.305619428Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076" Jan 29 16:57:08.310150 containerd[1503]: time="2025-01-29T16:57:08.310093209Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:57:08.313256 containerd[1503]: time="2025-01-29T16:57:08.313205029Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:57:08.314736 containerd[1503]: time="2025-01-29T16:57:08.314678566Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:57:08.317926 containerd[1503]: time="2025-01-29T16:57:08.317835109Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:57:08.318838 containerd[1503]: time="2025-01-29T16:57:08.318741853Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:57:08.320110 containerd[1503]: time="2025-01-29T16:57:08.320081471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:57:08.323923 containerd[1503]: time="2025-01-29T16:57:08.323621207Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 565.95289ms" Jan 29 16:57:08.327223 containerd[1503]: time="2025-01-29T16:57:08.326756189Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 595.69006ms" Jan 29 16:57:08.328237 containerd[1503]: time="2025-01-29T16:57:08.328153625Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 579.475361ms" Jan 29 16:57:08.466325 kubelet[2424]: W0129 16:57:08.465293 2424 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://116.202.14.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 116.202.14.223:6443: connect: connection refused Jan 29 16:57:08.466325 kubelet[2424]: E0129 16:57:08.465347 2424 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://116.202.14.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 116.202.14.223:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:57:08.511730 containerd[1503]: time="2025-01-29T16:57:08.511639148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:57:08.511995 containerd[1503]: time="2025-01-29T16:57:08.511720439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:57:08.511995 containerd[1503]: time="2025-01-29T16:57:08.511732170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:08.512151 containerd[1503]: time="2025-01-29T16:57:08.512096577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:08.514270 containerd[1503]: time="2025-01-29T16:57:08.514035438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:57:08.514270 containerd[1503]: time="2025-01-29T16:57:08.514078289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:57:08.514270 containerd[1503]: time="2025-01-29T16:57:08.514092264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:08.514270 containerd[1503]: time="2025-01-29T16:57:08.514160201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:08.520739 containerd[1503]: time="2025-01-29T16:57:08.520498043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:57:08.520739 containerd[1503]: time="2025-01-29T16:57:08.520545971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:57:08.520739 containerd[1503]: time="2025-01-29T16:57:08.520560178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:08.520739 containerd[1503]: time="2025-01-29T16:57:08.520628665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:08.530655 kubelet[2424]: W0129 16:57:08.530612 2424 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://116.202.14.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 116.202.14.223:6443: connect: connection refused Jan 29 16:57:08.530655 kubelet[2424]: E0129 16:57:08.530655 2424 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://116.202.14.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 116.202.14.223:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:57:08.542379 systemd[1]: Started cri-containerd-9e2e0fdb1ed81f797241f6dfaecb411104fc9b6f08b0e1cf663cc10114c250c7.scope - libcontainer container 9e2e0fdb1ed81f797241f6dfaecb411104fc9b6f08b0e1cf663cc10114c250c7. Jan 29 16:57:08.554116 systemd[1]: Started cri-containerd-b255604e20606c96fe4e3bacfc09eaa4f903f2fd3a887ed161ea0c6297fb6303.scope - libcontainer container b255604e20606c96fe4e3bacfc09eaa4f903f2fd3a887ed161ea0c6297fb6303. Jan 29 16:57:08.561013 systemd[1]: Started cri-containerd-6045566ba5cf1f1e181346fd013e68e2266eb68a1c4d84035a8f9d8f89c539bd.scope - libcontainer container 6045566ba5cf1f1e181346fd013e68e2266eb68a1c4d84035a8f9d8f89c539bd. Jan 29 16:57:08.613386 containerd[1503]: time="2025-01-29T16:57:08.613050498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-0-b-bb52c92a60,Uid:ae4fa30a29d0ae8b6fbed4bcb97b56c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e2e0fdb1ed81f797241f6dfaecb411104fc9b6f08b0e1cf663cc10114c250c7\"" Jan 29 16:57:08.621767 containerd[1503]: time="2025-01-29T16:57:08.621722136Z" level=info msg="CreateContainer within sandbox \"9e2e0fdb1ed81f797241f6dfaecb411104fc9b6f08b0e1cf663cc10114c250c7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 16:57:08.634249 containerd[1503]: time="2025-01-29T16:57:08.634216434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-0-b-bb52c92a60,Uid:cfb23053a384317632dc58b95a1ac53a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6045566ba5cf1f1e181346fd013e68e2266eb68a1c4d84035a8f9d8f89c539bd\"" Jan 29 16:57:08.639595 containerd[1503]: time="2025-01-29T16:57:08.639528631Z" level=info msg="CreateContainer within sandbox \"6045566ba5cf1f1e181346fd013e68e2266eb68a1c4d84035a8f9d8f89c539bd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 16:57:08.646032 containerd[1503]: time="2025-01-29T16:57:08.645999480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-0-b-bb52c92a60,Uid:0edb73e790d8e6e04834fe3d03df0776,Namespace:kube-system,Attempt:0,} returns sandbox id \"b255604e20606c96fe4e3bacfc09eaa4f903f2fd3a887ed161ea0c6297fb6303\"" Jan 29 16:57:08.649398 containerd[1503]: time="2025-01-29T16:57:08.649374289Z" level=info msg="CreateContainer within sandbox \"b255604e20606c96fe4e3bacfc09eaa4f903f2fd3a887ed161ea0c6297fb6303\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 16:57:08.649861 containerd[1503]: time="2025-01-29T16:57:08.649742634Z" level=info msg="CreateContainer within sandbox \"9e2e0fdb1ed81f797241f6dfaecb411104fc9b6f08b0e1cf663cc10114c250c7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"d7a1fe073b50b6c14c0656a30a0fef2be73a996f9551e1178007368e8f5f9771\"" Jan 29 16:57:08.650669 containerd[1503]: time="2025-01-29T16:57:08.650152173Z" level=info msg="StartContainer for \"d7a1fe073b50b6c14c0656a30a0fef2be73a996f9551e1178007368e8f5f9771\"" Jan 29 16:57:08.659032 containerd[1503]: time="2025-01-29T16:57:08.658759412Z" level=info msg="CreateContainer within sandbox \"6045566ba5cf1f1e181346fd013e68e2266eb68a1c4d84035a8f9d8f89c539bd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3a7b087f5eb2e2cc6f3a057b2f28c4dd3f8cf5b7381a1ded081bb6848f4a94e8\"" Jan 29 16:57:08.659773 containerd[1503]: time="2025-01-29T16:57:08.659732960Z" level=info msg="StartContainer for \"3a7b087f5eb2e2cc6f3a057b2f28c4dd3f8cf5b7381a1ded081bb6848f4a94e8\"" Jan 29 16:57:08.669211 containerd[1503]: time="2025-01-29T16:57:08.669089781Z" level=info msg="CreateContainer within sandbox \"b255604e20606c96fe4e3bacfc09eaa4f903f2fd3a887ed161ea0c6297fb6303\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9c6ace63ef8c3e9b9b8940e7e42cd7a85b7d53f7ee0228a1dedfc6f3bace713f\"" Jan 29 16:57:08.669675 containerd[1503]: time="2025-01-29T16:57:08.669655832Z" level=info msg="StartContainer for \"9c6ace63ef8c3e9b9b8940e7e42cd7a85b7d53f7ee0228a1dedfc6f3bace713f\"" Jan 29 16:57:08.671835 kubelet[2424]: E0129 16:57:08.671799 2424 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://116.202.14.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-b-bb52c92a60?timeout=10s\": dial tcp 116.202.14.223:6443: connect: connection refused" interval="1.6s" Jan 29 16:57:08.696629 systemd[1]: Started cri-containerd-d7a1fe073b50b6c14c0656a30a0fef2be73a996f9551e1178007368e8f5f9771.scope - libcontainer container d7a1fe073b50b6c14c0656a30a0fef2be73a996f9551e1178007368e8f5f9771. Jan 29 16:57:08.703064 systemd[1]: Started cri-containerd-3a7b087f5eb2e2cc6f3a057b2f28c4dd3f8cf5b7381a1ded081bb6848f4a94e8.scope - libcontainer container 3a7b087f5eb2e2cc6f3a057b2f28c4dd3f8cf5b7381a1ded081bb6848f4a94e8. Jan 29 16:57:08.708336 systemd[1]: Started cri-containerd-9c6ace63ef8c3e9b9b8940e7e42cd7a85b7d53f7ee0228a1dedfc6f3bace713f.scope - libcontainer container 9c6ace63ef8c3e9b9b8940e7e42cd7a85b7d53f7ee0228a1dedfc6f3bace713f. 
Jan 29 16:57:08.763747 containerd[1503]: time="2025-01-29T16:57:08.763638924Z" level=info msg="StartContainer for \"3a7b087f5eb2e2cc6f3a057b2f28c4dd3f8cf5b7381a1ded081bb6848f4a94e8\" returns successfully" Jan 29 16:57:08.781622 containerd[1503]: time="2025-01-29T16:57:08.780865472Z" level=info msg="StartContainer for \"d7a1fe073b50b6c14c0656a30a0fef2be73a996f9551e1178007368e8f5f9771\" returns successfully" Jan 29 16:57:08.794584 containerd[1503]: time="2025-01-29T16:57:08.794508454Z" level=info msg="StartContainer for \"9c6ace63ef8c3e9b9b8940e7e42cd7a85b7d53f7ee0228a1dedfc6f3bace713f\" returns successfully" Jan 29 16:57:08.879845 kubelet[2424]: I0129 16:57:08.879478 2424 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:08.879845 kubelet[2424]: E0129 16:57:08.879752 2424 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://116.202.14.223:6443/api/v1/nodes\": dial tcp 116.202.14.223:6443: connect: connection refused" node="ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:10.320431 kubelet[2424]: E0129 16:57:10.320381 2424 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-0-0-b-bb52c92a60\" not found" node="ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:10.482762 kubelet[2424]: I0129 16:57:10.482449 2424 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:10.494837 kubelet[2424]: I0129 16:57:10.494765 2424 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:10.494837 kubelet[2424]: E0129 16:57:10.494817 2424 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4230-0-0-b-bb52c92a60\": node \"ci-4230-0-0-b-bb52c92a60\" not found" Jan 29 16:57:10.508170 kubelet[2424]: E0129 16:57:10.508117 2424 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-0-0-b-bb52c92a60\" not found" Jan 29 16:57:10.608996 kubelet[2424]: E0129 16:57:10.608795 2424 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-0-0-b-bb52c92a60\" not found" Jan 29 16:57:11.239120 kubelet[2424]: I0129 16:57:11.237691 2424 apiserver.go:52] "Watching apiserver" Jan 29 16:57:11.263324 kubelet[2424]: I0129 16:57:11.263258 2424 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 16:57:12.413021 systemd[1]: Reload requested from client PID 2702 ('systemctl') (unit session-7.scope)... Jan 29 16:57:12.413055 systemd[1]: Reloading... Jan 29 16:57:12.579922 zram_generator::config[2752]: No configuration found. Jan 29 16:57:12.718300 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:57:12.864505 systemd[1]: Reloading finished in 450 ms. Jan 29 16:57:12.900744 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:57:12.919487 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:57:12.920159 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:57:12.920262 systemd[1]: kubelet.service: Consumed 856ms CPU time, 115.4M memory peak. Jan 29 16:57:12.927374 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 16:57:13.148109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:57:13.158379 (kubelet)[2800]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:57:13.219786 kubelet[2800]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:57:13.219786 kubelet[2800]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 16:57:13.219786 kubelet[2800]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:57:13.221507 kubelet[2800]: I0129 16:57:13.219846 2800 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:57:13.233650 kubelet[2800]: I0129 16:57:13.233539 2800 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 16:57:13.233650 kubelet[2800]: I0129 16:57:13.233566 2800 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:57:13.233888 kubelet[2800]: I0129 16:57:13.233780 2800 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 16:57:13.234938 kubelet[2800]: I0129 16:57:13.234916 2800 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 16:57:13.237927 kubelet[2800]: I0129 16:57:13.237765 2800 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:57:13.240520 kubelet[2800]: E0129 16:57:13.240468 2800 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 16:57:13.240520 kubelet[2800]: I0129 16:57:13.240512 2800 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 16:57:13.244400 kubelet[2800]: I0129 16:57:13.244375 2800 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 16:57:13.244494 kubelet[2800]: I0129 16:57:13.244482 2800 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 16:57:13.244637 kubelet[2800]: I0129 16:57:13.244586 2800 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:57:13.246908 kubelet[2800]: I0129 16:57:13.244615 2800 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-0-b-bb52c92a60","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 16:57:13.246908 kubelet[2800]: I0129 16:57:13.244757 2800 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:57:13.246908 kubelet[2800]: I0129 16:57:13.244766 2800 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 16:57:13.246908 kubelet[2800]: I0129 16:57:13.244792 2800 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:57:13.246908 kubelet[2800]: I0129 16:57:13.244935 2800 kubelet.go:408] "Attempting to sync node with API server" Jan 29 16:57:13.247148 kubelet[2800]: I0129 16:57:13.244946 2800 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:57:13.247148 kubelet[2800]: I0129 16:57:13.244975 2800 kubelet.go:314] "Adding apiserver pod source" Jan 29 16:57:13.247148 kubelet[2800]: I0129 16:57:13.244984 2800 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:57:13.257302 kubelet[2800]: I0129 16:57:13.257268 2800 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:57:13.258959 kubelet[2800]: I0129 16:57:13.258715 2800 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:57:13.261500 kubelet[2800]: I0129 16:57:13.259352 2800 server.go:1269] "Started kubelet" Jan 29 16:57:13.263105 kubelet[2800]: I0129 16:57:13.263076 2800 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:57:13.264557 
kubelet[2800]: I0129 16:57:13.264527 2800 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:57:13.269913 kubelet[2800]: I0129 16:57:13.265792 2800 server.go:460] "Adding debug handlers to kubelet server" Jan 29 16:57:13.269913 kubelet[2800]: I0129 16:57:13.266744 2800 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:57:13.270289 kubelet[2800]: I0129 16:57:13.270270 2800 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:57:13.270666 kubelet[2800]: I0129 16:57:13.270650 2800 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 16:57:13.275355 kubelet[2800]: I0129 16:57:13.274981 2800 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 16:57:13.275355 kubelet[2800]: I0129 16:57:13.275219 2800 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 16:57:13.275515 kubelet[2800]: I0129 16:57:13.275495 2800 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:57:13.276418 kubelet[2800]: I0129 16:57:13.276386 2800 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:57:13.281029 kubelet[2800]: E0129 16:57:13.280992 2800 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-0-0-b-bb52c92a60\" not found" Jan 29 16:57:13.290607 kubelet[2800]: E0129 16:57:13.290526 2800 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:57:13.291929 kubelet[2800]: I0129 16:57:13.291086 2800 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:57:13.291929 kubelet[2800]: I0129 16:57:13.291104 2800 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:57:13.302179 kubelet[2800]: I0129 16:57:13.302132 2800 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:57:13.304142 kubelet[2800]: I0129 16:57:13.304123 2800 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 16:57:13.304310 kubelet[2800]: I0129 16:57:13.304297 2800 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:57:13.304391 kubelet[2800]: I0129 16:57:13.304379 2800 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 16:57:13.304674 kubelet[2800]: E0129 16:57:13.304652 2800 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:57:13.362298 kubelet[2800]: I0129 16:57:13.362270 2800 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 16:57:13.362648 kubelet[2800]: I0129 16:57:13.362500 2800 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 16:57:13.362754 kubelet[2800]: I0129 16:57:13.362742 2800 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:57:13.363737 kubelet[2800]: I0129 16:57:13.363719 2800 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 16:57:13.363854 kubelet[2800]: I0129 16:57:13.363806 2800 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 16:57:13.363935 kubelet[2800]: I0129 16:57:13.363925 2800 policy_none.go:49] "None policy: Start" Jan 29 16:57:13.365180 kubelet[2800]: I0129 16:57:13.365164 2800 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:57:13.365634 kubelet[2800]: I0129 16:57:13.365285 2800 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:57:13.365634 kubelet[2800]: I0129 16:57:13.365533 2800 state_mem.go:75] "Updated machine memory state" Jan 29 16:57:13.373246 kubelet[2800]: I0129 16:57:13.373220 2800 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:57:13.374528 kubelet[2800]: I0129 16:57:13.373724 2800 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 16:57:13.374528 kubelet[2800]: I0129 16:57:13.373741 2800 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:57:13.374528 kubelet[2800]: I0129 16:57:13.374100 2800 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:57:13.424581 sudo[2833]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 16:57:13.425468 sudo[2833]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 16:57:13.476690 kubelet[2800]: I0129 16:57:13.476409 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cfb23053a384317632dc58b95a1ac53a-ca-certs\") pod \"kube-apiserver-ci-4230-0-0-b-bb52c92a60\" (UID: \"cfb23053a384317632dc58b95a1ac53a\") " pod="kube-system/kube-apiserver-ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:13.477454 kubelet[2800]: I0129 16:57:13.477057 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cfb23053a384317632dc58b95a1ac53a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-0-b-bb52c92a60\" (UID: \"cfb23053a384317632dc58b95a1ac53a\") " pod="kube-system/kube-apiserver-ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:13.477454 kubelet[2800]: I0129 16:57:13.477085 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae4fa30a29d0ae8b6fbed4bcb97b56c0-k8s-certs\") pod 
\"kube-controller-manager-ci-4230-0-0-b-bb52c92a60\" (UID: \"ae4fa30a29d0ae8b6fbed4bcb97b56c0\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:13.477454 kubelet[2800]: I0129 16:57:13.477100 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ae4fa30a29d0ae8b6fbed4bcb97b56c0-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-0-b-bb52c92a60\" (UID: \"ae4fa30a29d0ae8b6fbed4bcb97b56c0\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:13.477454 kubelet[2800]: I0129 16:57:13.477116 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0edb73e790d8e6e04834fe3d03df0776-kubeconfig\") pod \"kube-scheduler-ci-4230-0-0-b-bb52c92a60\" (UID: \"0edb73e790d8e6e04834fe3d03df0776\") " pod="kube-system/kube-scheduler-ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:13.477454 kubelet[2800]: I0129 16:57:13.477129 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cfb23053a384317632dc58b95a1ac53a-k8s-certs\") pod \"kube-apiserver-ci-4230-0-0-b-bb52c92a60\" (UID: \"cfb23053a384317632dc58b95a1ac53a\") " pod="kube-system/kube-apiserver-ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:13.477604 kubelet[2800]: I0129 16:57:13.477144 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae4fa30a29d0ae8b6fbed4bcb97b56c0-ca-certs\") pod \"kube-controller-manager-ci-4230-0-0-b-bb52c92a60\" (UID: \"ae4fa30a29d0ae8b6fbed4bcb97b56c0\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:13.477604 kubelet[2800]: I0129 16:57:13.477158 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ae4fa30a29d0ae8b6fbed4bcb97b56c0-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-0-b-bb52c92a60\" (UID: \"ae4fa30a29d0ae8b6fbed4bcb97b56c0\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:13.477604 kubelet[2800]: I0129 16:57:13.477172 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae4fa30a29d0ae8b6fbed4bcb97b56c0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-0-b-bb52c92a60\" (UID: \"ae4fa30a29d0ae8b6fbed4bcb97b56c0\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:13.477604 kubelet[2800]: I0129 16:57:13.476728 2800 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:13.484057 kubelet[2800]: I0129 16:57:13.483966 2800 kubelet_node_status.go:111] "Node was previously registered" node="ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:13.484057 kubelet[2800]: I0129 16:57:13.484155 2800 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230-0-0-b-bb52c92a60" Jan 29 16:57:14.007396 sudo[2833]: pam_unix(sudo:session): session closed for user root Jan 29 16:57:14.258463 kubelet[2800]: I0129 16:57:14.258296 2800 apiserver.go:52] "Watching apiserver" Jan 29 16:57:14.276591 kubelet[2800]: I0129 16:57:14.276394 2800 desired_state_of_world_populator.go:154] "Finished populating initial desired state 
of world" Jan 29 16:57:14.403292 kubelet[2800]: I0129 16:57:14.403053 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-0-0-b-bb52c92a60" podStartSLOduration=1.4030290029999999 podStartE2EDuration="1.403029003s" podCreationTimestamp="2025-01-29 16:57:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:57:14.401526367 +0000 UTC m=+1.229145401" watchObservedRunningTime="2025-01-29 16:57:14.403029003 +0000 UTC m=+1.230648046" Jan 29 16:57:14.403292 kubelet[2800]: I0129 16:57:14.403199 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-0-0-b-bb52c92a60" podStartSLOduration=1.403194461 podStartE2EDuration="1.403194461s" podCreationTimestamp="2025-01-29 16:57:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:57:14.382593248 +0000 UTC m=+1.210212282" watchObservedRunningTime="2025-01-29 16:57:14.403194461 +0000 UTC m=+1.230813495" Jan 29 16:57:14.414763 kubelet[2800]: I0129 16:57:14.414460 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-0-0-b-bb52c92a60" podStartSLOduration=1.414442981 podStartE2EDuration="1.414442981s" podCreationTimestamp="2025-01-29 16:57:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:57:14.413575106 +0000 UTC m=+1.241194130" watchObservedRunningTime="2025-01-29 16:57:14.414442981 +0000 UTC m=+1.242062005" Jan 29 16:57:15.771745 sudo[1888]: pam_unix(sudo:session): session closed for user root Jan 29 16:57:15.931180 sshd[1887]: Connection closed by 147.75.109.163 port 53970 Jan 29 16:57:15.933658 sshd-session[1885]: pam_unix(sshd:session): session closed for user core Jan 29 16:57:15.943182 systemd[1]: sshd@6-116.202.14.223:22-147.75.109.163:53970.service: Deactivated successfully. Jan 29 16:57:15.949830 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 16:57:15.950494 systemd[1]: session-7.scope: Consumed 5.621s CPU time, 215.8M memory peak. Jan 29 16:57:15.954363 systemd-logind[1483]: Session 7 logged out. Waiting for processes to exit. Jan 29 16:57:15.956417 systemd-logind[1483]: Removed session 7. Jan 29 16:57:18.226583 kubelet[2800]: I0129 16:57:18.226518 2800 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 16:57:18.228388 kubelet[2800]: I0129 16:57:18.227708 2800 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 16:57:18.228487 containerd[1503]: time="2025-01-29T16:57:18.227353160Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 16:57:18.973817 systemd[1]: Created slice kubepods-besteffort-pod8dcc80af_fac1_481b_aef9_1f1e6cb26b21.slice - libcontainer container kubepods-besteffort-pod8dcc80af_fac1_481b_aef9_1f1e6cb26b21.slice. Jan 29 16:57:18.998041 systemd[1]: Created slice kubepods-burstable-pod462e791b_f35b_4d7b_aece_37cd6b6c792c.slice - libcontainer container kubepods-burstable-pod462e791b_f35b_4d7b_aece_37cd6b6c792c.slice. 
Jan 29 16:57:19.019554 kubelet[2800]: I0129 16:57:19.019510 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zns67\" (UniqueName: \"kubernetes.io/projected/462e791b-f35b-4d7b-aece-37cd6b6c792c-kube-api-access-zns67\") pod \"cilium-vpxdm\" (UID: \"462e791b-f35b-4d7b-aece-37cd6b6c792c\") " pod="kube-system/cilium-vpxdm" Jan 29 16:57:19.019792 kubelet[2800]: I0129 16:57:19.019776 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8dcc80af-fac1-481b-aef9-1f1e6cb26b21-xtables-lock\") pod \"kube-proxy-7jhnc\" (UID: \"8dcc80af-fac1-481b-aef9-1f1e6cb26b21\") " pod="kube-system/kube-proxy-7jhnc" Jan 29 16:57:19.019858 kubelet[2800]: I0129 16:57:19.019847 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-hostproc\") pod \"cilium-vpxdm\" (UID: \"462e791b-f35b-4d7b-aece-37cd6b6c792c\") " pod="kube-system/cilium-vpxdm" Jan 29 16:57:19.019944 kubelet[2800]: I0129 16:57:19.019933 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-etc-cni-netd\") pod \"cilium-vpxdm\" (UID: \"462e791b-f35b-4d7b-aece-37cd6b6c792c\") " pod="kube-system/cilium-vpxdm" Jan 29 16:57:19.020062 kubelet[2800]: I0129 16:57:19.020020 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8dcc80af-fac1-481b-aef9-1f1e6cb26b21-kube-proxy\") pod \"kube-proxy-7jhnc\" (UID: \"8dcc80af-fac1-481b-aef9-1f1e6cb26b21\") " pod="kube-system/kube-proxy-7jhnc" Jan 29 16:57:19.020141 kubelet[2800]: I0129 16:57:19.020130 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-xtables-lock\") pod \"cilium-vpxdm\" (UID: \"462e791b-f35b-4d7b-aece-37cd6b6c792c\") " pod="kube-system/cilium-vpxdm" Jan 29 16:57:19.020210 kubelet[2800]: I0129 16:57:19.020199 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/462e791b-f35b-4d7b-aece-37cd6b6c792c-clustermesh-secrets\") pod \"cilium-vpxdm\" (UID: \"462e791b-f35b-4d7b-aece-37cd6b6c792c\") " pod="kube-system/cilium-vpxdm" Jan 29 16:57:19.020281 kubelet[2800]: I0129 16:57:19.020271 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-lib-modules\") pod \"cilium-vpxdm\" (UID: \"462e791b-f35b-4d7b-aece-37cd6b6c792c\") " pod="kube-system/cilium-vpxdm" Jan 29 16:57:19.020352 kubelet[2800]: I0129 16:57:19.020340 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-cilium-cgroup\") pod \"cilium-vpxdm\" (UID: \"462e791b-f35b-4d7b-aece-37cd6b6c792c\") " pod="kube-system/cilium-vpxdm" Jan 29 16:57:19.020426 kubelet[2800]: I0129 16:57:19.020415 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" 
(UniqueName: \"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-cni-path\") pod \"cilium-vpxdm\" (UID: \"462e791b-f35b-4d7b-aece-37cd6b6c792c\") " pod="kube-system/cilium-vpxdm" Jan 29 16:57:19.020672 kubelet[2800]: I0129 16:57:19.020494 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8dcc80af-fac1-481b-aef9-1f1e6cb26b21-lib-modules\") pod \"kube-proxy-7jhnc\" (UID: \"8dcc80af-fac1-481b-aef9-1f1e6cb26b21\") " pod="kube-system/kube-proxy-7jhnc" Jan 29 16:57:19.020672 kubelet[2800]: I0129 16:57:19.020511 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/462e791b-f35b-4d7b-aece-37cd6b6c792c-cilium-config-path\") pod \"cilium-vpxdm\" (UID: \"462e791b-f35b-4d7b-aece-37cd6b6c792c\") " pod="kube-system/cilium-vpxdm" Jan 29 16:57:19.020672 kubelet[2800]: I0129 16:57:19.020525 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-host-proc-sys-net\") pod \"cilium-vpxdm\" (UID: \"462e791b-f35b-4d7b-aece-37cd6b6c792c\") " pod="kube-system/cilium-vpxdm" Jan 29 16:57:19.020672 kubelet[2800]: I0129 16:57:19.020550 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-bpf-maps\") pod \"cilium-vpxdm\" (UID: \"462e791b-f35b-4d7b-aece-37cd6b6c792c\") " pod="kube-system/cilium-vpxdm" Jan 29 16:57:19.020672 kubelet[2800]: I0129 16:57:19.020574 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-host-proc-sys-kernel\") pod \"cilium-vpxdm\" (UID: \"462e791b-f35b-4d7b-aece-37cd6b6c792c\") " pod="kube-system/cilium-vpxdm" Jan 29 16:57:19.020672 kubelet[2800]: I0129 16:57:19.020588 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/462e791b-f35b-4d7b-aece-37cd6b6c792c-hubble-tls\") pod \"cilium-vpxdm\" (UID: \"462e791b-f35b-4d7b-aece-37cd6b6c792c\") " pod="kube-system/cilium-vpxdm" Jan 29 16:57:19.020845 kubelet[2800]: I0129 16:57:19.020601 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5n24\" (UniqueName: \"kubernetes.io/projected/8dcc80af-fac1-481b-aef9-1f1e6cb26b21-kube-api-access-b5n24\") pod \"kube-proxy-7jhnc\" (UID: \"8dcc80af-fac1-481b-aef9-1f1e6cb26b21\") " pod="kube-system/kube-proxy-7jhnc" Jan 29 16:57:19.020845 kubelet[2800]: I0129 16:57:19.020620 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-cilium-run\") pod \"cilium-vpxdm\" (UID: \"462e791b-f35b-4d7b-aece-37cd6b6c792c\") " pod="kube-system/cilium-vpxdm" Jan 29 16:57:19.070550 systemd[1]: Created slice kubepods-besteffort-podac6bb1a3_f57f_44a2_8fe0_c6befd571fa6.slice - libcontainer container kubepods-besteffort-podac6bb1a3_f57f_44a2_8fe0_c6befd571fa6.slice. 
Jan 29 16:57:19.121478 kubelet[2800]: I0129 16:57:19.121405 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ac6bb1a3-f57f-44a2-8fe0-c6befd571fa6-cilium-config-path\") pod \"cilium-operator-5d85765b45-24d45\" (UID: \"ac6bb1a3-f57f-44a2-8fe0-c6befd571fa6\") " pod="kube-system/cilium-operator-5d85765b45-24d45" Jan 29 16:57:19.121648 kubelet[2800]: I0129 16:57:19.121493 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8p6m\" (UniqueName: \"kubernetes.io/projected/ac6bb1a3-f57f-44a2-8fe0-c6befd571fa6-kube-api-access-k8p6m\") pod \"cilium-operator-5d85765b45-24d45\" (UID: \"ac6bb1a3-f57f-44a2-8fe0-c6befd571fa6\") " pod="kube-system/cilium-operator-5d85765b45-24d45" Jan 29 16:57:19.286138 containerd[1503]: time="2025-01-29T16:57:19.285225204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7jhnc,Uid:8dcc80af-fac1-481b-aef9-1f1e6cb26b21,Namespace:kube-system,Attempt:0,}" Jan 29 16:57:19.306218 containerd[1503]: time="2025-01-29T16:57:19.305564215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vpxdm,Uid:462e791b-f35b-4d7b-aece-37cd6b6c792c,Namespace:kube-system,Attempt:0,}" Jan 29 16:57:19.323926 containerd[1503]: time="2025-01-29T16:57:19.323359367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:57:19.323926 containerd[1503]: time="2025-01-29T16:57:19.323436330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:57:19.323926 containerd[1503]: time="2025-01-29T16:57:19.323457189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:19.323926 containerd[1503]: time="2025-01-29T16:57:19.323590257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:19.364181 systemd[1]: Started cri-containerd-daf1675292489a2245f3ece552b43b65327f67d81032a00e063bbb857b7eab14.scope - libcontainer container daf1675292489a2245f3ece552b43b65327f67d81032a00e063bbb857b7eab14. Jan 29 16:57:19.367634 containerd[1503]: time="2025-01-29T16:57:19.367229324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:57:19.367634 containerd[1503]: time="2025-01-29T16:57:19.367399501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:57:19.367634 containerd[1503]: time="2025-01-29T16:57:19.367423105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:19.368905 containerd[1503]: time="2025-01-29T16:57:19.368624674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:19.381126 containerd[1503]: time="2025-01-29T16:57:19.381032680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-24d45,Uid:ac6bb1a3-f57f-44a2-8fe0-c6befd571fa6,Namespace:kube-system,Attempt:0,}" Jan 29 16:57:19.412085 systemd[1]: Started cri-containerd-89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51.scope - libcontainer container 89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51. Jan 29 16:57:19.416549 containerd[1503]: time="2025-01-29T16:57:19.416407892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7jhnc,Uid:8dcc80af-fac1-481b-aef9-1f1e6cb26b21,Namespace:kube-system,Attempt:0,} returns sandbox id \"daf1675292489a2245f3ece552b43b65327f67d81032a00e063bbb857b7eab14\"" Jan 29 16:57:19.421838 containerd[1503]: time="2025-01-29T16:57:19.421694541Z" level=info msg="CreateContainer within sandbox \"daf1675292489a2245f3ece552b43b65327f67d81032a00e063bbb857b7eab14\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 16:57:19.451785 containerd[1503]: time="2025-01-29T16:57:19.451104250Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:57:19.452089 containerd[1503]: time="2025-01-29T16:57:19.451689360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:57:19.452089 containerd[1503]: time="2025-01-29T16:57:19.451710970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:19.452089 containerd[1503]: time="2025-01-29T16:57:19.451859767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:19.460520 containerd[1503]: time="2025-01-29T16:57:19.460368679Z" level=info msg="CreateContainer within sandbox \"daf1675292489a2245f3ece552b43b65327f67d81032a00e063bbb857b7eab14\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f908f55584231ee9a32f8eda259663decd2dc9cd72543eb3c7961fde7e7921ed\"" Jan 29 16:57:19.462483 containerd[1503]: time="2025-01-29T16:57:19.462428777Z" level=info msg="StartContainer for \"f908f55584231ee9a32f8eda259663decd2dc9cd72543eb3c7961fde7e7921ed\"" Jan 29 16:57:19.478966 containerd[1503]: time="2025-01-29T16:57:19.478767345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vpxdm,Uid:462e791b-f35b-4d7b-aece-37cd6b6c792c,Namespace:kube-system,Attempt:0,} returns sandbox id \"89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51\"" Jan 29 16:57:19.484807 containerd[1503]: time="2025-01-29T16:57:19.484756242Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 16:57:19.505517 systemd[1]: Started cri-containerd-d5158af7e701d5efd3d5a9e5f09866569dc3c4d791e2102469f69c98e38bdabb.scope - libcontainer container d5158af7e701d5efd3d5a9e5f09866569dc3c4d791e2102469f69c98e38bdabb. Jan 29 16:57:19.533211 systemd[1]: Started cri-containerd-f908f55584231ee9a32f8eda259663decd2dc9cd72543eb3c7961fde7e7921ed.scope - libcontainer container f908f55584231ee9a32f8eda259663decd2dc9cd72543eb3c7961fde7e7921ed. 
Jan 29 16:57:19.583684 containerd[1503]: time="2025-01-29T16:57:19.583635691Z" level=info msg="StartContainer for \"f908f55584231ee9a32f8eda259663decd2dc9cd72543eb3c7961fde7e7921ed\" returns successfully" Jan 29 16:57:19.586301 containerd[1503]: time="2025-01-29T16:57:19.585971311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-24d45,Uid:ac6bb1a3-f57f-44a2-8fe0-c6befd571fa6,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5158af7e701d5efd3d5a9e5f09866569dc3c4d791e2102469f69c98e38bdabb\"" Jan 29 16:57:20.377106 kubelet[2800]: I0129 16:57:20.376962 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7jhnc" podStartSLOduration=2.376943512 podStartE2EDuration="2.376943512s" podCreationTimestamp="2025-01-29 16:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:57:20.376037534 +0000 UTC m=+7.203656558" watchObservedRunningTime="2025-01-29 16:57:20.376943512 +0000 UTC m=+7.204562537" Jan 29 16:57:27.172517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1993313170.mount: Deactivated successfully. Jan 29 16:57:29.110486 containerd[1503]: time="2025-01-29T16:57:29.110421972Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:57:29.114074 containerd[1503]: time="2025-01-29T16:57:29.113664925Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 29 16:57:29.115142 containerd[1503]: time="2025-01-29T16:57:29.115072321Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:57:29.117989 containerd[1503]: time="2025-01-29T16:57:29.117747515Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.632928516s" Jan 29 16:57:29.117989 containerd[1503]: time="2025-01-29T16:57:29.117804220Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 29 16:57:29.123333 containerd[1503]: time="2025-01-29T16:57:29.123261136Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 16:57:29.134226 containerd[1503]: time="2025-01-29T16:57:29.134131647Z" level=info msg="CreateContainer within sandbox \"89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 16:57:29.295277 containerd[1503]: time="2025-01-29T16:57:29.295208469Z" level=info msg="CreateContainer within sandbox \"89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"bb1ee914111f8b5181a0e4982488af17288749a3009fc34db097ec85d5deaf01\"" Jan 29 16:57:29.296832 containerd[1503]: time="2025-01-29T16:57:29.296334751Z" level=info msg="StartContainer for \"bb1ee914111f8b5181a0e4982488af17288749a3009fc34db097ec85d5deaf01\"" Jan 29 16:57:29.297733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1270153341.mount: Deactivated successfully. Jan 29 16:57:29.399134 systemd[1]: Started cri-containerd-bb1ee914111f8b5181a0e4982488af17288749a3009fc34db097ec85d5deaf01.scope - libcontainer container bb1ee914111f8b5181a0e4982488af17288749a3009fc34db097ec85d5deaf01. Jan 29 16:57:29.446441 containerd[1503]: time="2025-01-29T16:57:29.446227519Z" level=info msg="StartContainer for \"bb1ee914111f8b5181a0e4982488af17288749a3009fc34db097ec85d5deaf01\" returns successfully" Jan 29 16:57:29.469437 systemd[1]: cri-containerd-bb1ee914111f8b5181a0e4982488af17288749a3009fc34db097ec85d5deaf01.scope: Deactivated successfully. Jan 29 16:57:29.601682 containerd[1503]: time="2025-01-29T16:57:29.563945953Z" level=info msg="shim disconnected" id=bb1ee914111f8b5181a0e4982488af17288749a3009fc34db097ec85d5deaf01 namespace=k8s.io Jan 29 16:57:29.601990 containerd[1503]: time="2025-01-29T16:57:29.601682338Z" level=warning msg="cleaning up after shim disconnected" id=bb1ee914111f8b5181a0e4982488af17288749a3009fc34db097ec85d5deaf01 namespace=k8s.io Jan 29 16:57:29.601990 containerd[1503]: time="2025-01-29T16:57:29.601707314Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:57:30.284074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb1ee914111f8b5181a0e4982488af17288749a3009fc34db097ec85d5deaf01-rootfs.mount: Deactivated successfully. Jan 29 16:57:30.469406 containerd[1503]: time="2025-01-29T16:57:30.468864349Z" level=info msg="CreateContainer within sandbox \"89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 16:57:30.501447 containerd[1503]: time="2025-01-29T16:57:30.499224085Z" level=info msg="CreateContainer within sandbox \"89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b292f252f37a7c7541236cc0ca9135b14de09335210eaeb700177e2c2bfe7f9b\"" Jan 29 16:57:30.503216 containerd[1503]: time="2025-01-29T16:57:30.501935135Z" level=info msg="StartContainer for \"b292f252f37a7c7541236cc0ca9135b14de09335210eaeb700177e2c2bfe7f9b\"" Jan 29 16:57:30.564041 systemd[1]: Started cri-containerd-b292f252f37a7c7541236cc0ca9135b14de09335210eaeb700177e2c2bfe7f9b.scope - libcontainer container b292f252f37a7c7541236cc0ca9135b14de09335210eaeb700177e2c2bfe7f9b. Jan 29 16:57:30.618962 containerd[1503]: time="2025-01-29T16:57:30.618891512Z" level=info msg="StartContainer for \"b292f252f37a7c7541236cc0ca9135b14de09335210eaeb700177e2c2bfe7f9b\" returns successfully" Jan 29 16:57:30.631504 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:57:30.632269 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:57:30.632583 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:57:30.641345 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:57:30.644262 systemd[1]: cri-containerd-b292f252f37a7c7541236cc0ca9135b14de09335210eaeb700177e2c2bfe7f9b.scope: Deactivated successfully. Jan 29 16:57:30.675320 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 29 16:57:30.688936 containerd[1503]: time="2025-01-29T16:57:30.688859929Z" level=info msg="shim disconnected" id=b292f252f37a7c7541236cc0ca9135b14de09335210eaeb700177e2c2bfe7f9b namespace=k8s.io Jan 29 16:57:30.688936 containerd[1503]: time="2025-01-29T16:57:30.688928258Z" level=warning msg="cleaning up after shim disconnected" id=b292f252f37a7c7541236cc0ca9135b14de09335210eaeb700177e2c2bfe7f9b namespace=k8s.io Jan 29 16:57:30.688936 containerd[1503]: time="2025-01-29T16:57:30.688937465Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:57:31.285979 systemd[1]: run-containerd-runc-k8s.io-b292f252f37a7c7541236cc0ca9135b14de09335210eaeb700177e2c2bfe7f9b-runc.jwKgkm.mount: Deactivated successfully. Jan 29 16:57:31.286563 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b292f252f37a7c7541236cc0ca9135b14de09335210eaeb700177e2c2bfe7f9b-rootfs.mount: Deactivated successfully. Jan 29 16:57:31.421967 containerd[1503]: time="2025-01-29T16:57:31.421009359Z" level=info msg="CreateContainer within sandbox \"89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 16:57:31.502439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3646911749.mount: Deactivated successfully. Jan 29 16:57:31.509200 containerd[1503]: time="2025-01-29T16:57:31.509063095Z" level=info msg="CreateContainer within sandbox \"89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"202ff66262d62786ed72d20f6263246634d69f06073702e3d9e90408e22a1a92\"" Jan 29 16:57:31.509908 containerd[1503]: time="2025-01-29T16:57:31.509854803Z" level=info msg="StartContainer for \"202ff66262d62786ed72d20f6263246634d69f06073702e3d9e90408e22a1a92\"" Jan 29 16:57:31.551057 systemd[1]: Started cri-containerd-202ff66262d62786ed72d20f6263246634d69f06073702e3d9e90408e22a1a92.scope - libcontainer container 202ff66262d62786ed72d20f6263246634d69f06073702e3d9e90408e22a1a92. Jan 29 16:57:31.588161 containerd[1503]: time="2025-01-29T16:57:31.588121675Z" level=info msg="StartContainer for \"202ff66262d62786ed72d20f6263246634d69f06073702e3d9e90408e22a1a92\" returns successfully" Jan 29 16:57:31.591279 systemd[1]: cri-containerd-202ff66262d62786ed72d20f6263246634d69f06073702e3d9e90408e22a1a92.scope: Deactivated successfully. Jan 29 16:57:31.621410 containerd[1503]: time="2025-01-29T16:57:31.621323577Z" level=info msg="shim disconnected" id=202ff66262d62786ed72d20f6263246634d69f06073702e3d9e90408e22a1a92 namespace=k8s.io Jan 29 16:57:31.621410 containerd[1503]: time="2025-01-29T16:57:31.621396814Z" level=warning msg="cleaning up after shim disconnected" id=202ff66262d62786ed72d20f6263246634d69f06073702e3d9e90408e22a1a92 namespace=k8s.io Jan 29 16:57:31.621410 containerd[1503]: time="2025-01-29T16:57:31.621404668Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:57:32.281197 systemd[1]: run-containerd-runc-k8s.io-202ff66262d62786ed72d20f6263246634d69f06073702e3d9e90408e22a1a92-runc.owtk5N.mount: Deactivated successfully. Jan 29 16:57:32.281313 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-202ff66262d62786ed72d20f6263246634d69f06073702e3d9e90408e22a1a92-rootfs.mount: Deactivated successfully. 
Jan 29 16:57:32.421296 containerd[1503]: time="2025-01-29T16:57:32.421146615Z" level=info msg="CreateContainer within sandbox \"89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 16:57:32.451958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3166668019.mount: Deactivated successfully. Jan 29 16:57:32.454548 containerd[1503]: time="2025-01-29T16:57:32.454409430Z" level=info msg="CreateContainer within sandbox \"89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e3e9b435b6f85d4d22c629891f208229fb7a1b41a3e1e2c1ac4bbc925e6ac87a\"" Jan 29 16:57:32.455102 containerd[1503]: time="2025-01-29T16:57:32.455081836Z" level=info msg="StartContainer for \"e3e9b435b6f85d4d22c629891f208229fb7a1b41a3e1e2c1ac4bbc925e6ac87a\"" Jan 29 16:57:32.492057 systemd[1]: Started cri-containerd-e3e9b435b6f85d4d22c629891f208229fb7a1b41a3e1e2c1ac4bbc925e6ac87a.scope - libcontainer container e3e9b435b6f85d4d22c629891f208229fb7a1b41a3e1e2c1ac4bbc925e6ac87a. Jan 29 16:57:32.533996 systemd[1]: cri-containerd-e3e9b435b6f85d4d22c629891f208229fb7a1b41a3e1e2c1ac4bbc925e6ac87a.scope: Deactivated successfully. Jan 29 16:57:32.535784 containerd[1503]: time="2025-01-29T16:57:32.535746899Z" level=info msg="StartContainer for \"e3e9b435b6f85d4d22c629891f208229fb7a1b41a3e1e2c1ac4bbc925e6ac87a\" returns successfully" Jan 29 16:57:32.576531 containerd[1503]: time="2025-01-29T16:57:32.576395976Z" level=info msg="shim disconnected" id=e3e9b435b6f85d4d22c629891f208229fb7a1b41a3e1e2c1ac4bbc925e6ac87a namespace=k8s.io Jan 29 16:57:32.576531 containerd[1503]: time="2025-01-29T16:57:32.576469413Z" level=warning msg="cleaning up after shim disconnected" id=e3e9b435b6f85d4d22c629891f208229fb7a1b41a3e1e2c1ac4bbc925e6ac87a namespace=k8s.io Jan 29 16:57:32.576531 containerd[1503]: time="2025-01-29T16:57:32.576481856Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:57:33.282723 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3e9b435b6f85d4d22c629891f208229fb7a1b41a3e1e2c1ac4bbc925e6ac87a-rootfs.mount: Deactivated successfully. Jan 29 16:57:33.429630 containerd[1503]: time="2025-01-29T16:57:33.429340627Z" level=info msg="CreateContainer within sandbox \"89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 16:57:33.468446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3047989054.mount: Deactivated successfully. Jan 29 16:57:33.479591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1549033704.mount: Deactivated successfully. Jan 29 16:57:33.489546 containerd[1503]: time="2025-01-29T16:57:33.489507397Z" level=info msg="CreateContainer within sandbox \"89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f48f964f6a528ee9212bd299ca17920e3e6e8249cb65d766fb9a820451e799a5\"" Jan 29 16:57:33.526926 containerd[1503]: time="2025-01-29T16:57:33.526853276Z" level=info msg="StartContainer for \"f48f964f6a528ee9212bd299ca17920e3e6e8249cb65d766fb9a820451e799a5\"" Jan 29 16:57:33.588121 systemd[1]: Started cri-containerd-f48f964f6a528ee9212bd299ca17920e3e6e8249cb65d766fb9a820451e799a5.scope - libcontainer container f48f964f6a528ee9212bd299ca17920e3e6e8249cb65d766fb9a820451e799a5. 
Jan 29 16:57:33.640258 containerd[1503]: time="2025-01-29T16:57:33.640016677Z" level=info msg="StartContainer for \"f48f964f6a528ee9212bd299ca17920e3e6e8249cb65d766fb9a820451e799a5\" returns successfully" Jan 29 16:57:33.830027 containerd[1503]: time="2025-01-29T16:57:33.829929929Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:57:33.846173 containerd[1503]: time="2025-01-29T16:57:33.846026582Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 29 16:57:33.854708 containerd[1503]: time="2025-01-29T16:57:33.854636853Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:57:33.857690 containerd[1503]: time="2025-01-29T16:57:33.857418517Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.734116396s" Jan 29 16:57:33.857690 containerd[1503]: time="2025-01-29T16:57:33.857445859Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 29 16:57:33.862750 containerd[1503]: time="2025-01-29T16:57:33.860977866Z" level=info msg="CreateContainer within sandbox \"d5158af7e701d5efd3d5a9e5f09866569dc3c4d791e2102469f69c98e38bdabb\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 16:57:33.913421 containerd[1503]: time="2025-01-29T16:57:33.913293412Z" level=info msg="CreateContainer within sandbox \"d5158af7e701d5efd3d5a9e5f09866569dc3c4d791e2102469f69c98e38bdabb\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6e7d168c2b80a59f6493f185f99f67edee6867b048b16d82c77e2e635656880c\"" Jan 29 16:57:33.916813 containerd[1503]: time="2025-01-29T16:57:33.915016040Z" level=info msg="StartContainer for \"6e7d168c2b80a59f6493f185f99f67edee6867b048b16d82c77e2e635656880c\"" Jan 29 16:57:33.950615 systemd[1]: Started cri-containerd-6e7d168c2b80a59f6493f185f99f67edee6867b048b16d82c77e2e635656880c.scope - libcontainer container 6e7d168c2b80a59f6493f185f99f67edee6867b048b16d82c77e2e635656880c. Jan 29 16:57:33.989706 kubelet[2800]: I0129 16:57:33.987509 2800 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 16:57:33.998009 containerd[1503]: time="2025-01-29T16:57:33.997955522Z" level=info msg="StartContainer for \"6e7d168c2b80a59f6493f185f99f67edee6867b048b16d82c77e2e635656880c\" returns successfully" Jan 29 16:57:34.051029 systemd[1]: Created slice kubepods-burstable-pod776529c7_1a76_4240_b3dc_5ea2901a5701.slice - libcontainer container kubepods-burstable-pod776529c7_1a76_4240_b3dc_5ea2901a5701.slice. 
Jan 29 16:57:34.064935 systemd[1]: Created slice kubepods-burstable-podbaefa3c4_1f8d_4515_ba63_a74db22f551c.slice - libcontainer container kubepods-burstable-podbaefa3c4_1f8d_4515_ba63_a74db22f551c.slice. Jan 29 16:57:34.124047 kubelet[2800]: I0129 16:57:34.123846 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/776529c7-1a76-4240-b3dc-5ea2901a5701-config-volume\") pod \"coredns-6f6b679f8f-mlwbn\" (UID: \"776529c7-1a76-4240-b3dc-5ea2901a5701\") " pod="kube-system/coredns-6f6b679f8f-mlwbn" Jan 29 16:57:34.124047 kubelet[2800]: I0129 16:57:34.123903 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/baefa3c4-1f8d-4515-ba63-a74db22f551c-config-volume\") pod \"coredns-6f6b679f8f-wrqpb\" (UID: \"baefa3c4-1f8d-4515-ba63-a74db22f551c\") " pod="kube-system/coredns-6f6b679f8f-wrqpb" Jan 29 16:57:34.124047 kubelet[2800]: I0129 16:57:34.123949 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6shbv\" (UniqueName: \"kubernetes.io/projected/baefa3c4-1f8d-4515-ba63-a74db22f551c-kube-api-access-6shbv\") pod \"coredns-6f6b679f8f-wrqpb\" (UID: \"baefa3c4-1f8d-4515-ba63-a74db22f551c\") " pod="kube-system/coredns-6f6b679f8f-wrqpb" Jan 29 16:57:34.124047 kubelet[2800]: I0129 16:57:34.123979 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg59j\" (UniqueName: \"kubernetes.io/projected/776529c7-1a76-4240-b3dc-5ea2901a5701-kube-api-access-kg59j\") pod \"coredns-6f6b679f8f-mlwbn\" (UID: \"776529c7-1a76-4240-b3dc-5ea2901a5701\") " pod="kube-system/coredns-6f6b679f8f-mlwbn" Jan 29 16:57:34.361108 containerd[1503]: time="2025-01-29T16:57:34.360397213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mlwbn,Uid:776529c7-1a76-4240-b3dc-5ea2901a5701,Namespace:kube-system,Attempt:0,}" Jan 29 16:57:34.369971 containerd[1503]: time="2025-01-29T16:57:34.369676366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wrqpb,Uid:baefa3c4-1f8d-4515-ba63-a74db22f551c,Namespace:kube-system,Attempt:0,}" Jan 29 16:57:34.526627 kubelet[2800]: I0129 16:57:34.525205 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vpxdm" podStartSLOduration=6.882848622 podStartE2EDuration="16.52518349s" podCreationTimestamp="2025-01-29 16:57:18 +0000 UTC" firstStartedPulling="2025-01-29 16:57:19.480713591 +0000 UTC m=+6.308332615" lastFinishedPulling="2025-01-29 16:57:29.123048429 +0000 UTC m=+15.950667483" observedRunningTime="2025-01-29 16:57:34.500123019 +0000 UTC m=+21.327742044" watchObservedRunningTime="2025-01-29 16:57:34.52518349 +0000 UTC m=+21.352802504" Jan 29 16:57:34.526627 kubelet[2800]: I0129 16:57:34.525783 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-24d45" podStartSLOduration=1.254767583 podStartE2EDuration="15.525776417s" podCreationTimestamp="2025-01-29 16:57:19 +0000 UTC" firstStartedPulling="2025-01-29 16:57:19.587373183 +0000 UTC m=+6.414992208" lastFinishedPulling="2025-01-29 16:57:33.858382018 +0000 UTC m=+20.686001042" observedRunningTime="2025-01-29 16:57:34.522695092 +0000 UTC m=+21.350314116" watchObservedRunningTime="2025-01-29 16:57:34.525776417 +0000 UTC m=+21.353395452" Jan 29 16:57:36.530670 
systemd-networkd[1409]: cilium_host: Link UP Jan 29 16:57:36.531193 systemd-networkd[1409]: cilium_net: Link UP Jan 29 16:57:36.531563 systemd-networkd[1409]: cilium_net: Gained carrier Jan 29 16:57:36.533436 systemd-networkd[1409]: cilium_host: Gained carrier Jan 29 16:57:36.692959 systemd-networkd[1409]: cilium_vxlan: Link UP Jan 29 16:57:36.692968 systemd-networkd[1409]: cilium_vxlan: Gained carrier Jan 29 16:57:36.955157 systemd-networkd[1409]: cilium_host: Gained IPv6LL Jan 29 16:57:37.220231 kernel: NET: Registered PF_ALG protocol family Jan 29 16:57:37.405177 systemd-networkd[1409]: cilium_net: Gained IPv6LL Jan 29 16:57:38.190537 systemd-networkd[1409]: lxc_health: Link UP Jan 29 16:57:38.195109 systemd-networkd[1409]: lxc_health: Gained carrier Jan 29 16:57:38.235105 systemd-networkd[1409]: cilium_vxlan: Gained IPv6LL Jan 29 16:57:38.494282 systemd-networkd[1409]: lxc1f6ee9aa75ae: Link UP Jan 29 16:57:38.499206 kernel: eth0: renamed from tmp8e249 Jan 29 16:57:38.508674 systemd-networkd[1409]: lxc1f6ee9aa75ae: Gained carrier Jan 29 16:57:38.524684 systemd-networkd[1409]: lxc2ae67fd2c575: Link UP Jan 29 16:57:38.536920 kernel: eth0: renamed from tmp1da7b Jan 29 16:57:38.562356 systemd-networkd[1409]: lxc2ae67fd2c575: Gained carrier Jan 29 16:57:39.835457 systemd-networkd[1409]: lxc_health: Gained IPv6LL Jan 29 16:57:39.835896 systemd-networkd[1409]: lxc2ae67fd2c575: Gained IPv6LL Jan 29 16:57:40.155210 systemd-networkd[1409]: lxc1f6ee9aa75ae: Gained IPv6LL Jan 29 16:57:42.395471 containerd[1503]: time="2025-01-29T16:57:42.395177996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:57:42.395471 containerd[1503]: time="2025-01-29T16:57:42.395245251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:57:42.395471 containerd[1503]: time="2025-01-29T16:57:42.395259328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:42.395471 containerd[1503]: time="2025-01-29T16:57:42.395346331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:42.412851 containerd[1503]: time="2025-01-29T16:57:42.412648491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:57:42.413434 containerd[1503]: time="2025-01-29T16:57:42.413037859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:57:42.413434 containerd[1503]: time="2025-01-29T16:57:42.413097381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:42.415086 containerd[1503]: time="2025-01-29T16:57:42.414116266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:42.453981 systemd[1]: Started cri-containerd-8e2493838d4d3cd03b67b505e742c13074abddf3a79156d4f37b3174a5ca2566.scope - libcontainer container 8e2493838d4d3cd03b67b505e742c13074abddf3a79156d4f37b3174a5ca2566. 
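For orientation: cilium_host and cilium_net are the veth pair anchoring Cilium's host-side datapath, cilium_vxlan is the overlay tunnel device, lxc_health backs the agent's health-check endpoint, and each lxc<hash> device is the host end of a pod's veth pair — the kernel's "eth0: renamed from tmpXXXX" lines are the container ends being renamed inside the new pod network namespaces. A small sketch that groups the systemd-networkd records by interface so the bring-up order is visible (event strings abbreviated from the journal above):

```python
import re
from collections import defaultdict

# Abbreviated systemd-networkd records copied from the journal above.
events = [
    "systemd-networkd[1409]: cilium_host: Link UP",
    "systemd-networkd[1409]: cilium_net: Link UP",
    "systemd-networkd[1409]: cilium_net: Gained carrier",
    "systemd-networkd[1409]: cilium_host: Gained carrier",
    "systemd-networkd[1409]: cilium_vxlan: Link UP",
    "systemd-networkd[1409]: cilium_vxlan: Gained carrier",
    "systemd-networkd[1409]: cilium_host: Gained IPv6LL",
    "systemd-networkd[1409]: lxc_health: Link UP",
    "systemd-networkd[1409]: lxc1f6ee9aa75ae: Link UP",
]

history = defaultdict(list)
for ev in events:
    m = re.match(r"systemd-networkd\[\d+\]: ([\w-]+): (.+)", ev)
    if m:
        history[m.group(1)].append(m.group(2))

# Print each interface's bring-up sequence in arrival order.
for iface, steps in history.items():
    print(f"{iface}: {' -> '.join(steps)}")
```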
Jan 29 16:57:42.476292 systemd[1]: Started cri-containerd-1da7b5583ef9dd865ca7d82d4a2d07e993d4f8ff59c38b7cae182784b0b458a8.scope - libcontainer container 1da7b5583ef9dd865ca7d82d4a2d07e993d4f8ff59c38b7cae182784b0b458a8. Jan 29 16:57:42.548091 containerd[1503]: time="2025-01-29T16:57:42.548055337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mlwbn,Uid:776529c7-1a76-4240-b3dc-5ea2901a5701,Namespace:kube-system,Attempt:0,} returns sandbox id \"1da7b5583ef9dd865ca7d82d4a2d07e993d4f8ff59c38b7cae182784b0b458a8\"" Jan 29 16:57:42.557820 containerd[1503]: time="2025-01-29T16:57:42.557648643Z" level=info msg="CreateContainer within sandbox \"1da7b5583ef9dd865ca7d82d4a2d07e993d4f8ff59c38b7cae182784b0b458a8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:57:42.561564 containerd[1503]: time="2025-01-29T16:57:42.561418402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wrqpb,Uid:baefa3c4-1f8d-4515-ba63-a74db22f551c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e2493838d4d3cd03b67b505e742c13074abddf3a79156d4f37b3174a5ca2566\"" Jan 29 16:57:42.568128 containerd[1503]: time="2025-01-29T16:57:42.567961396Z" level=info msg="CreateContainer within sandbox \"8e2493838d4d3cd03b67b505e742c13074abddf3a79156d4f37b3174a5ca2566\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:57:42.591155 containerd[1503]: time="2025-01-29T16:57:42.591064774Z" level=info msg="CreateContainer within sandbox \"1da7b5583ef9dd865ca7d82d4a2d07e993d4f8ff59c38b7cae182784b0b458a8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dda4d97759473dad36e5d82ce95f838e1cf4723544be24730d3f1b8585c20a3e\"" Jan 29 16:57:42.593966 containerd[1503]: time="2025-01-29T16:57:42.592834202Z" level=info msg="StartContainer for \"dda4d97759473dad36e5d82ce95f838e1cf4723544be24730d3f1b8585c20a3e\"" Jan 29 16:57:42.607183 containerd[1503]: time="2025-01-29T16:57:42.606717229Z" level=info msg="CreateContainer within sandbox \"8e2493838d4d3cd03b67b505e742c13074abddf3a79156d4f37b3174a5ca2566\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2b7453eb556e72994b6763925ac6a16c4e0cb57138185bac855fe8a6a3a4051d\"" Jan 29 16:57:42.608907 containerd[1503]: time="2025-01-29T16:57:42.608833076Z" level=info msg="StartContainer for \"2b7453eb556e72994b6763925ac6a16c4e0cb57138185bac855fe8a6a3a4051d\"" Jan 29 16:57:42.645080 systemd[1]: Started cri-containerd-2b7453eb556e72994b6763925ac6a16c4e0cb57138185bac855fe8a6a3a4051d.scope - libcontainer container 2b7453eb556e72994b6763925ac6a16c4e0cb57138185bac855fe8a6a3a4051d. Jan 29 16:57:42.669074 systemd[1]: Started cri-containerd-dda4d97759473dad36e5d82ce95f838e1cf4723544be24730d3f1b8585c20a3e.scope - libcontainer container dda4d97759473dad36e5d82ce95f838e1cf4723544be24730d3f1b8585c20a3e. Jan 29 16:57:42.717442 containerd[1503]: time="2025-01-29T16:57:42.717386752Z" level=info msg="StartContainer for \"2b7453eb556e72994b6763925ac6a16c4e0cb57138185bac855fe8a6a3a4051d\" returns successfully" Jan 29 16:57:42.717746 containerd[1503]: time="2025-01-29T16:57:42.717472132Z" level=info msg="StartContainer for \"dda4d97759473dad36e5d82ce95f838e1cf4723544be24730d3f1b8585c20a3e\" returns successfully" Jan 29 16:57:43.413948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1164049836.mount: Deactivated successfully. 
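The CoreDNS bring-up above follows the standard CRI sequence: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox and returns a container id, and StartContainer confirms. A minimal sketch correlating the two ids from the log text (ids abbreviated here for readability):

```python
import re

# Abbreviated copies of the three containerd records above (ids shortened).
log = """\
RunPodSandbox for ...coredns-6f6b679f8f-mlwbn... returns sandbox id "1da7b5583ef9"
CreateContainer within sandbox "1da7b5583ef9" ... returns container id "dda4d9775947"
StartContainer for "dda4d9775947" returns successfully
"""

sandbox = re.search(r'returns sandbox id "([0-9a-f]+)"', log).group(1)
container = re.search(r'returns container id "([0-9a-f]+)"', log).group(1)
assert f'within sandbox "{sandbox}"' in log  # CreateContainer targets that sandbox
print(f"sandbox {sandbox} -> container {container}")
```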
Jan 29 16:57:43.518865 kubelet[2800]: I0129 16:57:43.518101 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-wrqpb" podStartSLOduration=24.518074141 podStartE2EDuration="24.518074141s" podCreationTimestamp="2025-01-29 16:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:57:43.514612007 +0000 UTC m=+30.342231031" watchObservedRunningTime="2025-01-29 16:57:43.518074141 +0000 UTC m=+30.345693205" Jan 29 16:57:43.565339 kubelet[2800]: I0129 16:57:43.565144 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-mlwbn" podStartSLOduration=24.565125164 podStartE2EDuration="24.565125164s" podCreationTimestamp="2025-01-29 16:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:57:43.535467595 +0000 UTC m=+30.363086609" watchObservedRunningTime="2025-01-29 16:57:43.565125164 +0000 UTC m=+30.392744188" Jan 29 16:59:07.549594 systemd[1]: sshd@7-116.202.14.223:22-114.218.158.114:41966.service: Deactivated successfully. Jan 29 16:59:55.073256 systemd[1]: Started sshd@8-116.202.14.223:22-147.75.109.163:54752.service - OpenSSH per-connection server daemon (147.75.109.163:54752). Jan 29 16:59:56.094199 sshd[4195]: Accepted publickey for core from 147.75.109.163 port 54752 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 16:59:56.098203 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:59:56.110924 systemd-logind[1483]: New session 8 of user core. Jan 29 16:59:56.119302 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 16:59:57.505379 sshd[4198]: Connection closed by 147.75.109.163 port 54752 Jan 29 16:59:57.506542 sshd-session[4195]: pam_unix(sshd:session): session closed for user core Jan 29 16:59:57.515475 systemd[1]: sshd@8-116.202.14.223:22-147.75.109.163:54752.service: Deactivated successfully. Jan 29 16:59:57.522019 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 16:59:57.526751 systemd-logind[1483]: Session 8 logged out. Waiting for processes to exit. Jan 29 16:59:57.530637 systemd-logind[1483]: Removed session 8. Jan 29 17:00:02.684183 systemd[1]: Started sshd@9-116.202.14.223:22-147.75.109.163:58384.service - OpenSSH per-connection server daemon (147.75.109.163:58384). Jan 29 17:00:03.699066 sshd[4211]: Accepted publickey for core from 147.75.109.163 port 58384 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:03.701870 sshd-session[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:03.707903 systemd-logind[1483]: New session 9 of user core. Jan 29 17:00:03.711104 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 17:00:04.533593 sshd[4213]: Connection closed by 147.75.109.163 port 58384 Jan 29 17:00:04.534749 sshd-session[4211]: pam_unix(sshd:session): session closed for user core Jan 29 17:00:04.542638 systemd[1]: sshd@9-116.202.14.223:22-147.75.109.163:58384.service: Deactivated successfully. Jan 29 17:00:04.547542 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 17:00:04.550142 systemd-logind[1483]: Session 9 logged out. Waiting for processes to exit. Jan 29 17:00:04.552169 systemd-logind[1483]: Removed session 9. 
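The pod_startup_latency_tracker records are worth decoding: podStartSLOduration is podStartE2EDuration minus the image-pull window (lastFinishedPulling − firstStartedPulling). That is why the two coredns pods above — whose pull timestamps are the zero value because their image was already present — report identical SLO and E2E durations, while cilium-vpxdm earlier reports 6.88s against 16.53s. A quick check against the cilium-vpxdm numbers (timestamps truncated to microseconds, so the result matches the logged SLO only to within clock rounding):

```python
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f"
# Timestamps from the cilium-vpxdm record above, truncated to microseconds.
created  = datetime.strptime("2025-01-29 16:57:18.000000", fmt)
pull_beg = datetime.strptime("2025-01-29 16:57:19.480713", fmt)
pull_end = datetime.strptime("2025-01-29 16:57:29.123048", fmt)
observed = datetime.strptime("2025-01-29 16:57:34.525183", fmt)

e2e = (observed - created).total_seconds()
slo = e2e - (pull_end - pull_beg).total_seconds()
# Reproduces the logged values: E2E ~16.525s, SLO ~6.883s.
print(f"E2E {e2e:.6f}s  SLO {slo:.6f}s")
```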
Jan 29 17:00:09.715467 systemd[1]: Started sshd@10-116.202.14.223:22-147.75.109.163:41502.service - OpenSSH per-connection server daemon (147.75.109.163:41502). Jan 29 17:00:10.735814 sshd[4226]: Accepted publickey for core from 147.75.109.163 port 41502 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:10.739194 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:10.748568 systemd-logind[1483]: New session 10 of user core. Jan 29 17:00:10.757275 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 17:00:11.550994 sshd[4228]: Connection closed by 147.75.109.163 port 41502 Jan 29 17:00:11.552120 sshd-session[4226]: pam_unix(sshd:session): session closed for user core Jan 29 17:00:11.557993 systemd[1]: sshd@10-116.202.14.223:22-147.75.109.163:41502.service: Deactivated successfully. Jan 29 17:00:11.563044 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 17:00:11.566586 systemd-logind[1483]: Session 10 logged out. Waiting for processes to exit. Jan 29 17:00:11.569438 systemd-logind[1483]: Removed session 10. Jan 29 17:00:11.737524 systemd[1]: Started sshd@11-116.202.14.223:22-147.75.109.163:41510.service - OpenSSH per-connection server daemon (147.75.109.163:41510). Jan 29 17:00:12.743427 sshd[4241]: Accepted publickey for core from 147.75.109.163 port 41510 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:12.747157 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:12.757985 systemd-logind[1483]: New session 11 of user core. Jan 29 17:00:12.766153 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 17:00:13.568043 sshd[4243]: Connection closed by 147.75.109.163 port 41510 Jan 29 17:00:13.569053 sshd-session[4241]: pam_unix(sshd:session): session closed for user core Jan 29 17:00:13.575991 systemd[1]: sshd@11-116.202.14.223:22-147.75.109.163:41510.service: Deactivated successfully. Jan 29 17:00:13.581536 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 17:00:13.583316 systemd-logind[1483]: Session 11 logged out. Waiting for processes to exit. Jan 29 17:00:13.585372 systemd-logind[1483]: Removed session 11. Jan 29 17:00:13.747029 systemd[1]: Started sshd@12-116.202.14.223:22-147.75.109.163:41516.service - OpenSSH per-connection server daemon (147.75.109.163:41516). Jan 29 17:00:14.755149 sshd[4255]: Accepted publickey for core from 147.75.109.163 port 41516 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:14.757020 sshd-session[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:14.762067 systemd-logind[1483]: New session 12 of user core. Jan 29 17:00:14.773177 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 17:00:15.551101 sshd[4257]: Connection closed by 147.75.109.163 port 41516 Jan 29 17:00:15.552063 sshd-session[4255]: pam_unix(sshd:session): session closed for user core Jan 29 17:00:15.559940 systemd[1]: sshd@12-116.202.14.223:22-147.75.109.163:41516.service: Deactivated successfully. Jan 29 17:00:15.565283 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 17:00:15.567546 systemd-logind[1483]: Session 12 logged out. Waiting for processes to exit. Jan 29 17:00:15.569689 systemd-logind[1483]: Removed session 12. 
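Each of these SSH connections is a socket-activated, per-connection sshd instance. Judging from the lines above, the unit name encodes a connection counter plus the local and remote endpoints: sshd@<n>-<local-ip>:<port>-<remote-ip>:<port>.service. A sketch splitting one observed unit name back into its fields (the layout is inferred from these logs, not from sshd documentation):

```python
import re

# A per-connection unit name as it appears in the journal above.
unit = "sshd@10-116.202.14.223:22-147.75.109.163:41502.service"

m = re.match(r"sshd@(\d+)-([\d.]+):(\d+)-([\d.]+):(\d+)\.service", unit)
n, local_ip, local_port, remote_ip, remote_port = m.groups()
print(f"instance {n}: client {remote_ip}:{remote_port} -> server {local_ip}:{local_port}")
```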
Jan 29 17:00:20.735384 systemd[1]: Started sshd@13-116.202.14.223:22-147.75.109.163:40130.service - OpenSSH per-connection server daemon (147.75.109.163:40130). Jan 29 17:00:21.735467 sshd[4271]: Accepted publickey for core from 147.75.109.163 port 40130 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:21.738408 sshd-session[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:21.746177 systemd-logind[1483]: New session 13 of user core. Jan 29 17:00:21.754040 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 17:00:22.545321 sshd[4273]: Connection closed by 147.75.109.163 port 40130 Jan 29 17:00:22.546587 sshd-session[4271]: pam_unix(sshd:session): session closed for user core Jan 29 17:00:22.552097 systemd[1]: sshd@13-116.202.14.223:22-147.75.109.163:40130.service: Deactivated successfully. Jan 29 17:00:22.555410 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 17:00:22.559127 systemd-logind[1483]: Session 13 logged out. Waiting for processes to exit. Jan 29 17:00:22.562196 systemd-logind[1483]: Removed session 13. Jan 29 17:00:22.736227 systemd[1]: Started sshd@14-116.202.14.223:22-147.75.109.163:40138.service - OpenSSH per-connection server daemon (147.75.109.163:40138). Jan 29 17:00:23.713701 sshd[4285]: Accepted publickey for core from 147.75.109.163 port 40138 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:23.715971 sshd-session[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:23.727156 systemd-logind[1483]: New session 14 of user core. Jan 29 17:00:23.730137 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 17:00:24.791786 sshd[4287]: Connection closed by 147.75.109.163 port 40138 Jan 29 17:00:24.794080 sshd-session[4285]: pam_unix(sshd:session): session closed for user core Jan 29 17:00:24.805030 systemd[1]: sshd@14-116.202.14.223:22-147.75.109.163:40138.service: Deactivated successfully. Jan 29 17:00:24.812108 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 17:00:24.813836 systemd-logind[1483]: Session 14 logged out. Waiting for processes to exit. Jan 29 17:00:24.817006 systemd-logind[1483]: Removed session 14. Jan 29 17:00:24.973034 systemd[1]: Started sshd@15-116.202.14.223:22-147.75.109.163:40140.service - OpenSSH per-connection server daemon (147.75.109.163:40140). Jan 29 17:00:25.973779 sshd[4297]: Accepted publickey for core from 147.75.109.163 port 40140 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:25.976635 sshd-session[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:25.985440 systemd-logind[1483]: New session 15 of user core. Jan 29 17:00:25.990094 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 17:00:28.612344 sshd[4299]: Connection closed by 147.75.109.163 port 40140 Jan 29 17:00:28.613288 sshd-session[4297]: pam_unix(sshd:session): session closed for user core Jan 29 17:00:28.621976 systemd-logind[1483]: Session 15 logged out. Waiting for processes to exit. Jan 29 17:00:28.623474 systemd[1]: sshd@15-116.202.14.223:22-147.75.109.163:40140.service: Deactivated successfully. Jan 29 17:00:28.626123 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 17:00:28.627797 systemd-logind[1483]: Removed session 15. 
Jan 29 17:00:28.799303 systemd[1]: Started sshd@16-116.202.14.223:22-147.75.109.163:44596.service - OpenSSH per-connection server daemon (147.75.109.163:44596). Jan 29 17:00:29.799687 sshd[4316]: Accepted publickey for core from 147.75.109.163 port 44596 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:29.802000 sshd-session[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:29.810149 systemd-logind[1483]: New session 16 of user core. Jan 29 17:00:29.817238 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 17:00:30.815760 sshd[4318]: Connection closed by 147.75.109.163 port 44596 Jan 29 17:00:30.817583 sshd-session[4316]: pam_unix(sshd:session): session closed for user core Jan 29 17:00:30.822843 systemd[1]: sshd@16-116.202.14.223:22-147.75.109.163:44596.service: Deactivated successfully. Jan 29 17:00:30.826501 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 17:00:30.828365 systemd-logind[1483]: Session 16 logged out. Waiting for processes to exit. Jan 29 17:00:30.831435 systemd-logind[1483]: Removed session 16. Jan 29 17:00:31.003406 systemd[1]: Started sshd@17-116.202.14.223:22-147.75.109.163:44598.service - OpenSSH per-connection server daemon (147.75.109.163:44598). Jan 29 17:00:32.015273 sshd[4328]: Accepted publickey for core from 147.75.109.163 port 44598 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:32.019181 sshd-session[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:32.028414 systemd-logind[1483]: New session 17 of user core. Jan 29 17:00:32.037144 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 17:00:32.799580 sshd[4330]: Connection closed by 147.75.109.163 port 44598 Jan 29 17:00:32.801301 sshd-session[4328]: pam_unix(sshd:session): session closed for user core Jan 29 17:00:32.810237 systemd[1]: sshd@17-116.202.14.223:22-147.75.109.163:44598.service: Deactivated successfully. Jan 29 17:00:32.814724 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 17:00:32.816257 systemd-logind[1483]: Session 17 logged out. Waiting for processes to exit. Jan 29 17:00:32.818037 systemd-logind[1483]: Removed session 17. Jan 29 17:00:37.982319 systemd[1]: Started sshd@18-116.202.14.223:22-147.75.109.163:50714.service - OpenSSH per-connection server daemon (147.75.109.163:50714). Jan 29 17:00:38.999972 sshd[4345]: Accepted publickey for core from 147.75.109.163 port 50714 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:39.002575 sshd-session[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:39.013175 systemd-logind[1483]: New session 18 of user core. Jan 29 17:00:39.021287 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 17:00:39.750033 sshd[4347]: Connection closed by 147.75.109.163 port 50714 Jan 29 17:00:39.750816 sshd-session[4345]: pam_unix(sshd:session): session closed for user core Jan 29 17:00:39.755510 systemd[1]: sshd@18-116.202.14.223:22-147.75.109.163:50714.service: Deactivated successfully. Jan 29 17:00:39.758092 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 17:00:39.758988 systemd-logind[1483]: Session 18 logged out. Waiting for processes to exit. Jan 29 17:00:39.760633 systemd-logind[1483]: Removed session 18. 
Jan 29 17:00:44.930481 systemd[1]: Started sshd@19-116.202.14.223:22-147.75.109.163:50720.service - OpenSSH per-connection server daemon (147.75.109.163:50720). Jan 29 17:00:45.927005 sshd[4359]: Accepted publickey for core from 147.75.109.163 port 50720 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:45.929740 sshd-session[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:45.940687 systemd-logind[1483]: New session 19 of user core. Jan 29 17:00:45.949614 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 17:00:46.720268 sshd[4361]: Connection closed by 147.75.109.163 port 50720 Jan 29 17:00:46.721412 sshd-session[4359]: pam_unix(sshd:session): session closed for user core Jan 29 17:00:46.730277 systemd[1]: sshd@19-116.202.14.223:22-147.75.109.163:50720.service: Deactivated successfully. Jan 29 17:00:46.736367 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 17:00:46.738788 systemd-logind[1483]: Session 19 logged out. Waiting for processes to exit. Jan 29 17:00:46.742005 systemd-logind[1483]: Removed session 19. Jan 29 17:00:46.901456 systemd[1]: Started sshd@20-116.202.14.223:22-147.75.109.163:50730.service - OpenSSH per-connection server daemon (147.75.109.163:50730). Jan 29 17:00:47.885548 sshd[4373]: Accepted publickey for core from 147.75.109.163 port 50730 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:47.889016 sshd-session[4373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:47.900029 systemd-logind[1483]: New session 20 of user core. Jan 29 17:00:47.907208 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 17:00:49.901075 systemd[1]: run-containerd-runc-k8s.io-f48f964f6a528ee9212bd299ca17920e3e6e8249cb65d766fb9a820451e799a5-runc.VIV1ti.mount: Deactivated successfully. Jan 29 17:00:49.916015 containerd[1503]: time="2025-01-29T17:00:49.915494254Z" level=info msg="StopContainer for \"6e7d168c2b80a59f6493f185f99f67edee6867b048b16d82c77e2e635656880c\" with timeout 30 (s)" Jan 29 17:00:49.925205 containerd[1503]: time="2025-01-29T17:00:49.925110249Z" level=info msg="Stop container \"6e7d168c2b80a59f6493f185f99f67edee6867b048b16d82c77e2e635656880c\" with signal terminated" Jan 29 17:00:49.928950 containerd[1503]: time="2025-01-29T17:00:49.928726847Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 17:00:49.941784 containerd[1503]: time="2025-01-29T17:00:49.941724246Z" level=info msg="StopContainer for \"f48f964f6a528ee9212bd299ca17920e3e6e8249cb65d766fb9a820451e799a5\" with timeout 2 (s)" Jan 29 17:00:49.942198 containerd[1503]: time="2025-01-29T17:00:49.942140611Z" level=info msg="Stop container \"f48f964f6a528ee9212bd299ca17920e3e6e8249cb65d766fb9a820451e799a5\" with signal terminated" Jan 29 17:00:49.956799 systemd-networkd[1409]: lxc_health: Link DOWN Jan 29 17:00:49.956810 systemd-networkd[1409]: lxc_health: Lost carrier Jan 29 17:00:49.959379 systemd[1]: cri-containerd-6e7d168c2b80a59f6493f185f99f67edee6867b048b16d82c77e2e635656880c.scope: Deactivated successfully. Jan 29 17:00:49.986795 systemd[1]: cri-containerd-f48f964f6a528ee9212bd299ca17920e3e6e8249cb65d766fb9a820451e799a5.scope: Deactivated successfully. 
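Teardown starts here: kubelet asks containerd to stop the operator container with a 30 s grace period and the agent with only 2 s; in both cases the runtime delivers SIGTERM and escalates to SIGKILL if the deadline passes. The "failed to reload cni configuration" error is expected during this shutdown — the agent removes /etc/cni/net.d/05-cilium.conf on exit, leaving containerd with no CNI config (the same condition later surfaces as the NetworkReady=false complaint near the end of this log). A minimal sketch of the term-then-kill escalation (an illustration of the pattern, not containerd's actual implementation):

```python
import signal
import subprocess

def stop_with_grace(proc: subprocess.Popen, grace_s: float) -> None:
    """SIGTERM first; SIGKILL only if the process outlives the grace period."""
    proc.send_signal(signal.SIGTERM)
    try:
        proc.wait(timeout=grace_s)
    except subprocess.TimeoutExpired:
        proc.kill()   # hard stop once the deadline passes
        proc.wait()

# Example: stopping a process with the agent's 2 s budget.
p = subprocess.Popen(["sleep", "60"])
stop_with_grace(p, grace_s=2.0)
print("exit code:", p.returncode)
```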
Jan 29 17:00:49.987467 systemd[1]: cri-containerd-f48f964f6a528ee9212bd299ca17920e3e6e8249cb65d766fb9a820451e799a5.scope: Consumed 8.884s CPU time, 188.5M memory peak, 68.6M read from disk, 13.3M written to disk. Jan 29 17:00:50.014441 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f48f964f6a528ee9212bd299ca17920e3e6e8249cb65d766fb9a820451e799a5-rootfs.mount: Deactivated successfully. Jan 29 17:00:50.018686 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e7d168c2b80a59f6493f185f99f67edee6867b048b16d82c77e2e635656880c-rootfs.mount: Deactivated successfully. Jan 29 17:00:50.035590 containerd[1503]: time="2025-01-29T17:00:50.035472892Z" level=info msg="shim disconnected" id=f48f964f6a528ee9212bd299ca17920e3e6e8249cb65d766fb9a820451e799a5 namespace=k8s.io Jan 29 17:00:50.035590 containerd[1503]: time="2025-01-29T17:00:50.035537344Z" level=warning msg="cleaning up after shim disconnected" id=f48f964f6a528ee9212bd299ca17920e3e6e8249cb65d766fb9a820451e799a5 namespace=k8s.io Jan 29 17:00:50.035590 containerd[1503]: time="2025-01-29T17:00:50.035549156Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 17:00:50.037219 containerd[1503]: time="2025-01-29T17:00:50.036071238Z" level=info msg="shim disconnected" id=6e7d168c2b80a59f6493f185f99f67edee6867b048b16d82c77e2e635656880c namespace=k8s.io Jan 29 17:00:50.037219 containerd[1503]: time="2025-01-29T17:00:50.036120050Z" level=warning msg="cleaning up after shim disconnected" id=6e7d168c2b80a59f6493f185f99f67edee6867b048b16d82c77e2e635656880c namespace=k8s.io Jan 29 17:00:50.037219 containerd[1503]: time="2025-01-29T17:00:50.036130129Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 17:00:50.067310 containerd[1503]: time="2025-01-29T17:00:50.067187498Z" level=info msg="StopContainer for \"6e7d168c2b80a59f6493f185f99f67edee6867b048b16d82c77e2e635656880c\" returns successfully" Jan 29 17:00:50.067928 containerd[1503]: time="2025-01-29T17:00:50.067907264Z" level=info msg="StopPodSandbox for \"d5158af7e701d5efd3d5a9e5f09866569dc3c4d791e2102469f69c98e38bdabb\"" Jan 29 17:00:50.071573 containerd[1503]: time="2025-01-29T17:00:50.070752961Z" level=warning msg="cleanup warnings time=\"2025-01-29T17:00:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 17:00:50.074694 containerd[1503]: time="2025-01-29T17:00:50.070541772Z" level=info msg="Container to stop \"6e7d168c2b80a59f6493f185f99f67edee6867b048b16d82c77e2e635656880c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 17:00:50.074694 containerd[1503]: time="2025-01-29T17:00:50.074551041Z" level=info msg="StopContainer for \"f48f964f6a528ee9212bd299ca17920e3e6e8249cb65d766fb9a820451e799a5\" returns successfully" Jan 29 17:00:50.077707 containerd[1503]: time="2025-01-29T17:00:50.075056553Z" level=info msg="StopPodSandbox for \"89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51\"" Jan 29 17:00:50.077707 containerd[1503]: time="2025-01-29T17:00:50.075086900Z" level=info msg="Container to stop \"bb1ee914111f8b5181a0e4982488af17288749a3009fc34db097ec85d5deaf01\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 17:00:50.077707 containerd[1503]: time="2025-01-29T17:00:50.075132485Z" level=info msg="Container to stop \"b292f252f37a7c7541236cc0ca9135b14de09335210eaeb700177e2c2bfe7f9b\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" Jan 29 17:00:50.077707 containerd[1503]: time="2025-01-29T17:00:50.075142575Z" level=info msg="Container to stop \"e3e9b435b6f85d4d22c629891f208229fb7a1b41a3e1e2c1ac4bbc925e6ac87a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 17:00:50.077707 containerd[1503]: time="2025-01-29T17:00:50.075152974Z" level=info msg="Container to stop \"f48f964f6a528ee9212bd299ca17920e3e6e8249cb65d766fb9a820451e799a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 17:00:50.077707 containerd[1503]: time="2025-01-29T17:00:50.075163444Z" level=info msg="Container to stop \"202ff66262d62786ed72d20f6263246634d69f06073702e3d9e90408e22a1a92\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 17:00:50.077808 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d5158af7e701d5efd3d5a9e5f09866569dc3c4d791e2102469f69c98e38bdabb-shm.mount: Deactivated successfully. Jan 29 17:00:50.084229 systemd[1]: cri-containerd-89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51.scope: Deactivated successfully. Jan 29 17:00:50.087244 systemd[1]: cri-containerd-d5158af7e701d5efd3d5a9e5f09866569dc3c4d791e2102469f69c98e38bdabb.scope: Deactivated successfully. Jan 29 17:00:50.125421 containerd[1503]: time="2025-01-29T17:00:50.125315230Z" level=info msg="shim disconnected" id=d5158af7e701d5efd3d5a9e5f09866569dc3c4d791e2102469f69c98e38bdabb namespace=k8s.io Jan 29 17:00:50.125421 containerd[1503]: time="2025-01-29T17:00:50.125377538Z" level=warning msg="cleaning up after shim disconnected" id=d5158af7e701d5efd3d5a9e5f09866569dc3c4d791e2102469f69c98e38bdabb namespace=k8s.io Jan 29 17:00:50.125421 containerd[1503]: time="2025-01-29T17:00:50.125386985Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 17:00:50.127911 containerd[1503]: time="2025-01-29T17:00:50.126445558Z" level=info msg="shim disconnected" id=89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51 namespace=k8s.io Jan 29 17:00:50.127911 containerd[1503]: time="2025-01-29T17:00:50.126499109Z" level=warning msg="cleaning up after shim disconnected" id=89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51 namespace=k8s.io Jan 29 17:00:50.127911 containerd[1503]: time="2025-01-29T17:00:50.126507986Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 17:00:50.143099 containerd[1503]: time="2025-01-29T17:00:50.143030151Z" level=warning msg="cleanup warnings time=\"2025-01-29T17:00:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 17:00:50.146398 containerd[1503]: time="2025-01-29T17:00:50.146261454Z" level=info msg="TearDown network for sandbox \"d5158af7e701d5efd3d5a9e5f09866569dc3c4d791e2102469f69c98e38bdabb\" successfully" Jan 29 17:00:50.146398 containerd[1503]: time="2025-01-29T17:00:50.146290778Z" level=info msg="StopPodSandbox for \"d5158af7e701d5efd3d5a9e5f09866569dc3c4d791e2102469f69c98e38bdabb\" returns successfully" Jan 29 17:00:50.153262 containerd[1503]: time="2025-01-29T17:00:50.153164389Z" level=info msg="TearDown network for sandbox \"89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51\" successfully" Jan 29 17:00:50.153262 containerd[1503]: time="2025-01-29T17:00:50.153192832Z" level=info msg="StopPodSandbox for \"89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51\" returns successfully" Jan 29 17:00:50.298467 kubelet[2800]: 
I0129 17:00:50.298351 2800 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac6bb1a3-f57f-44a2-8fe0-c6befd571fa6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ac6bb1a3-f57f-44a2-8fe0-c6befd571fa6" (UID: "ac6bb1a3-f57f-44a2-8fe0-c6befd571fa6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:00:50.299632 kubelet[2800]: I0129 17:00:50.299593 2800 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ac6bb1a3-f57f-44a2-8fe0-c6befd571fa6-cilium-config-path\") pod \"ac6bb1a3-f57f-44a2-8fe0-c6befd571fa6\" (UID: \"ac6bb1a3-f57f-44a2-8fe0-c6befd571fa6\") " Jan 29 17:00:50.299744 kubelet[2800]: I0129 17:00:50.299656 2800 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8p6m\" (UniqueName: \"kubernetes.io/projected/ac6bb1a3-f57f-44a2-8fe0-c6befd571fa6-kube-api-access-k8p6m\") pod \"ac6bb1a3-f57f-44a2-8fe0-c6befd571fa6\" (UID: \"ac6bb1a3-f57f-44a2-8fe0-c6befd571fa6\") " Jan 29 17:00:50.299744 kubelet[2800]: I0129 17:00:50.299697 2800 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-cni-path\") pod \"462e791b-f35b-4d7b-aece-37cd6b6c792c\" (UID: \"462e791b-f35b-4d7b-aece-37cd6b6c792c\") " Jan 29 17:00:50.299744 kubelet[2800]: I0129 17:00:50.299719 2800 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-host-proc-sys-kernel\") pod \"462e791b-f35b-4d7b-aece-37cd6b6c792c\" (UID: \"462e791b-f35b-4d7b-aece-37cd6b6c792c\") " Jan 29 17:00:50.299744 kubelet[2800]: I0129 17:00:50.299742 2800 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zns67\" (UniqueName: \"kubernetes.io/projected/462e791b-f35b-4d7b-aece-37cd6b6c792c-kube-api-access-zns67\") pod \"462e791b-f35b-4d7b-aece-37cd6b6c792c\" (UID: \"462e791b-f35b-4d7b-aece-37cd6b6c792c\") " Jan 29 17:00:50.300010 kubelet[2800]: I0129 17:00:50.299760 2800 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-etc-cni-netd\") pod \"462e791b-f35b-4d7b-aece-37cd6b6c792c\" (UID: \"462e791b-f35b-4d7b-aece-37cd6b6c792c\") " Jan 29 17:00:50.300010 kubelet[2800]: I0129 17:00:50.299781 2800 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/462e791b-f35b-4d7b-aece-37cd6b6c792c-clustermesh-secrets\") pod \"462e791b-f35b-4d7b-aece-37cd6b6c792c\" (UID: \"462e791b-f35b-4d7b-aece-37cd6b6c792c\") " Jan 29 17:00:50.300010 kubelet[2800]: I0129 17:00:50.299799 2800 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-host-proc-sys-net\") pod \"462e791b-f35b-4d7b-aece-37cd6b6c792c\" (UID: \"462e791b-f35b-4d7b-aece-37cd6b6c792c\") " Jan 29 17:00:50.300201 kubelet[2800]: I0129 17:00:50.300042 2800 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod 
"462e791b-f35b-4d7b-aece-37cd6b6c792c" (UID: "462e791b-f35b-4d7b-aece-37cd6b6c792c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 17:00:50.303135 kubelet[2800]: I0129 17:00:50.302462 2800 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ac6bb1a3-f57f-44a2-8fe0-c6befd571fa6-cilium-config-path\") on node \"ci-4230-0-0-b-bb52c92a60\" DevicePath \"\"" Jan 29 17:00:50.303135 kubelet[2800]: I0129 17:00:50.302573 2800 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "462e791b-f35b-4d7b-aece-37cd6b6c792c" (UID: "462e791b-f35b-4d7b-aece-37cd6b6c792c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 17:00:50.305096 kubelet[2800]: I0129 17:00:50.304926 2800 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac6bb1a3-f57f-44a2-8fe0-c6befd571fa6-kube-api-access-k8p6m" (OuterVolumeSpecName: "kube-api-access-k8p6m") pod "ac6bb1a3-f57f-44a2-8fe0-c6befd571fa6" (UID: "ac6bb1a3-f57f-44a2-8fe0-c6befd571fa6"). InnerVolumeSpecName "kube-api-access-k8p6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:00:50.305096 kubelet[2800]: I0129 17:00:50.305010 2800 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-cni-path" (OuterVolumeSpecName: "cni-path") pod "462e791b-f35b-4d7b-aece-37cd6b6c792c" (UID: "462e791b-f35b-4d7b-aece-37cd6b6c792c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 17:00:50.305096 kubelet[2800]: I0129 17:00:50.305039 2800 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "462e791b-f35b-4d7b-aece-37cd6b6c792c" (UID: "462e791b-f35b-4d7b-aece-37cd6b6c792c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 17:00:50.309999 kubelet[2800]: I0129 17:00:50.309961 2800 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462e791b-f35b-4d7b-aece-37cd6b6c792c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "462e791b-f35b-4d7b-aece-37cd6b6c792c" (UID: "462e791b-f35b-4d7b-aece-37cd6b6c792c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:50.312516 kubelet[2800]: I0129 17:00:50.312461 2800 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/462e791b-f35b-4d7b-aece-37cd6b6c792c-kube-api-access-zns67" (OuterVolumeSpecName: "kube-api-access-zns67") pod "462e791b-f35b-4d7b-aece-37cd6b6c792c" (UID: "462e791b-f35b-4d7b-aece-37cd6b6c792c"). InnerVolumeSpecName "kube-api-access-zns67". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:00:50.403097 kubelet[2800]: I0129 17:00:50.402991 2800 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/462e791b-f35b-4d7b-aece-37cd6b6c792c-cilium-config-path\") pod \"462e791b-f35b-4d7b-aece-37cd6b6c792c\" (UID: \"462e791b-f35b-4d7b-aece-37cd6b6c792c\") " Jan 29 17:00:50.403097 kubelet[2800]: I0129 17:00:50.403065 2800 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-cilium-cgroup\") pod \"462e791b-f35b-4d7b-aece-37cd6b6c792c\" (UID: \"462e791b-f35b-4d7b-aece-37cd6b6c792c\") " Jan 29 17:00:50.403097 kubelet[2800]: I0129 17:00:50.403107 2800 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/462e791b-f35b-4d7b-aece-37cd6b6c792c-hubble-tls\") pod \"462e791b-f35b-4d7b-aece-37cd6b6c792c\" (UID: \"462e791b-f35b-4d7b-aece-37cd6b6c792c\") " Jan 29 17:00:50.403619 kubelet[2800]: I0129 17:00:50.403136 2800 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-xtables-lock\") pod \"462e791b-f35b-4d7b-aece-37cd6b6c792c\" (UID: \"462e791b-f35b-4d7b-aece-37cd6b6c792c\") " Jan 29 17:00:50.403619 kubelet[2800]: I0129 17:00:50.403163 2800 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-hostproc\") pod \"462e791b-f35b-4d7b-aece-37cd6b6c792c\" (UID: \"462e791b-f35b-4d7b-aece-37cd6b6c792c\") " Jan 29 17:00:50.403619 kubelet[2800]: I0129 17:00:50.403191 2800 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-lib-modules\") pod \"462e791b-f35b-4d7b-aece-37cd6b6c792c\" (UID: \"462e791b-f35b-4d7b-aece-37cd6b6c792c\") " Jan 29 17:00:50.403619 kubelet[2800]: I0129 17:00:50.403225 2800 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-cilium-run\") pod \"462e791b-f35b-4d7b-aece-37cd6b6c792c\" (UID: \"462e791b-f35b-4d7b-aece-37cd6b6c792c\") " Jan 29 17:00:50.403619 kubelet[2800]: I0129 17:00:50.403254 2800 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-bpf-maps\") pod \"462e791b-f35b-4d7b-aece-37cd6b6c792c\" (UID: \"462e791b-f35b-4d7b-aece-37cd6b6c792c\") " Jan 29 17:00:50.403619 kubelet[2800]: I0129 17:00:50.403311 2800 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-k8p6m\" (UniqueName: \"kubernetes.io/projected/ac6bb1a3-f57f-44a2-8fe0-c6befd571fa6-kube-api-access-k8p6m\") on node \"ci-4230-0-0-b-bb52c92a60\" DevicePath \"\"" Jan 29 17:00:50.403619 kubelet[2800]: I0129 17:00:50.403329 2800 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-cni-path\") on node \"ci-4230-0-0-b-bb52c92a60\" DevicePath \"\"" Jan 29 17:00:50.404215 kubelet[2800]: I0129 17:00:50.403348 2800 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-host-proc-sys-kernel\") on node \"ci-4230-0-0-b-bb52c92a60\" DevicePath \"\"" Jan 29 17:00:50.404215 kubelet[2800]: I0129 17:00:50.403366 2800 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zns67\" (UniqueName: \"kubernetes.io/projected/462e791b-f35b-4d7b-aece-37cd6b6c792c-kube-api-access-zns67\") on node \"ci-4230-0-0-b-bb52c92a60\" DevicePath \"\"" Jan 29 17:00:50.404215 kubelet[2800]: I0129 17:00:50.403385 2800 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-etc-cni-netd\") on node \"ci-4230-0-0-b-bb52c92a60\" DevicePath \"\"" Jan 29 17:00:50.404215 kubelet[2800]: I0129 17:00:50.403402 2800 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/462e791b-f35b-4d7b-aece-37cd6b6c792c-clustermesh-secrets\") on node \"ci-4230-0-0-b-bb52c92a60\" DevicePath \"\"" Jan 29 17:00:50.404215 kubelet[2800]: I0129 17:00:50.403421 2800 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-host-proc-sys-net\") on node \"ci-4230-0-0-b-bb52c92a60\" DevicePath \"\"" Jan 29 17:00:50.404215 kubelet[2800]: I0129 17:00:50.403480 2800 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "462e791b-f35b-4d7b-aece-37cd6b6c792c" (UID: "462e791b-f35b-4d7b-aece-37cd6b6c792c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 17:00:50.407966 kubelet[2800]: I0129 17:00:50.406088 2800 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "462e791b-f35b-4d7b-aece-37cd6b6c792c" (UID: "462e791b-f35b-4d7b-aece-37cd6b6c792c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 17:00:50.407966 kubelet[2800]: I0129 17:00:50.406164 2800 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "462e791b-f35b-4d7b-aece-37cd6b6c792c" (UID: "462e791b-f35b-4d7b-aece-37cd6b6c792c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 17:00:50.412184 kubelet[2800]: I0129 17:00:50.411943 2800 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/462e791b-f35b-4d7b-aece-37cd6b6c792c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "462e791b-f35b-4d7b-aece-37cd6b6c792c" (UID: "462e791b-f35b-4d7b-aece-37cd6b6c792c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:00:50.412184 kubelet[2800]: I0129 17:00:50.412034 2800 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-hostproc" (OuterVolumeSpecName: "hostproc") pod "462e791b-f35b-4d7b-aece-37cd6b6c792c" (UID: "462e791b-f35b-4d7b-aece-37cd6b6c792c"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 17:00:50.412184 kubelet[2800]: I0129 17:00:50.412071 2800 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "462e791b-f35b-4d7b-aece-37cd6b6c792c" (UID: "462e791b-f35b-4d7b-aece-37cd6b6c792c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 17:00:50.412184 kubelet[2800]: I0129 17:00:50.412106 2800 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "462e791b-f35b-4d7b-aece-37cd6b6c792c" (UID: "462e791b-f35b-4d7b-aece-37cd6b6c792c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 17:00:50.413585 kubelet[2800]: I0129 17:00:50.413530 2800 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/462e791b-f35b-4d7b-aece-37cd6b6c792c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "462e791b-f35b-4d7b-aece-37cd6b6c792c" (UID: "462e791b-f35b-4d7b-aece-37cd6b6c792c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:00:50.504209 kubelet[2800]: I0129 17:00:50.504112 2800 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-xtables-lock\") on node \"ci-4230-0-0-b-bb52c92a60\" DevicePath \"\"" Jan 29 17:00:50.504209 kubelet[2800]: I0129 17:00:50.504173 2800 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-hostproc\") on node \"ci-4230-0-0-b-bb52c92a60\" DevicePath \"\"" Jan 29 17:00:50.504209 kubelet[2800]: I0129 17:00:50.504208 2800 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-lib-modules\") on node \"ci-4230-0-0-b-bb52c92a60\" DevicePath \"\"" Jan 29 17:00:50.504209 kubelet[2800]: I0129 17:00:50.504231 2800 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-cilium-run\") on node \"ci-4230-0-0-b-bb52c92a60\" DevicePath \"\"" Jan 29 17:00:50.504604 kubelet[2800]: I0129 17:00:50.504255 2800 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-bpf-maps\") on node \"ci-4230-0-0-b-bb52c92a60\" DevicePath \"\"" Jan 29 17:00:50.504604 kubelet[2800]: I0129 17:00:50.504278 2800 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/462e791b-f35b-4d7b-aece-37cd6b6c792c-cilium-config-path\") on node \"ci-4230-0-0-b-bb52c92a60\" DevicePath \"\"" Jan 29 17:00:50.504604 kubelet[2800]: I0129 17:00:50.504300 2800 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/462e791b-f35b-4d7b-aece-37cd6b6c792c-cilium-cgroup\") on node \"ci-4230-0-0-b-bb52c92a60\" DevicePath \"\"" Jan 29 17:00:50.504604 kubelet[2800]: I0129 17:00:50.504318 2800 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/462e791b-f35b-4d7b-aece-37cd6b6c792c-hubble-tls\") on node 
\"ci-4230-0-0-b-bb52c92a60\" DevicePath \"\"" Jan 29 17:00:50.896183 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5158af7e701d5efd3d5a9e5f09866569dc3c4d791e2102469f69c98e38bdabb-rootfs.mount: Deactivated successfully. Jan 29 17:00:50.896823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51-rootfs.mount: Deactivated successfully. Jan 29 17:00:50.897260 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51-shm.mount: Deactivated successfully. Jan 29 17:00:50.898136 systemd[1]: var-lib-kubelet-pods-ac6bb1a3\x2df57f\x2d44a2\x2d8fe0\x2dc6befd571fa6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk8p6m.mount: Deactivated successfully. Jan 29 17:00:50.898328 systemd[1]: var-lib-kubelet-pods-462e791b\x2df35b\x2d4d7b\x2daece\x2d37cd6b6c792c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzns67.mount: Deactivated successfully. Jan 29 17:00:50.898552 systemd[1]: var-lib-kubelet-pods-462e791b\x2df35b\x2d4d7b\x2daece\x2d37cd6b6c792c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 17:00:50.898817 systemd[1]: var-lib-kubelet-pods-462e791b\x2df35b\x2d4d7b\x2daece\x2d37cd6b6c792c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 17:00:50.989295 kubelet[2800]: I0129 17:00:50.986938 2800 scope.go:117] "RemoveContainer" containerID="6e7d168c2b80a59f6493f185f99f67edee6867b048b16d82c77e2e635656880c" Jan 29 17:00:50.996241 systemd[1]: Removed slice kubepods-besteffort-podac6bb1a3_f57f_44a2_8fe0_c6befd571fa6.slice - libcontainer container kubepods-besteffort-podac6bb1a3_f57f_44a2_8fe0_c6befd571fa6.slice. Jan 29 17:00:51.010079 containerd[1503]: time="2025-01-29T17:00:51.010005107Z" level=info msg="RemoveContainer for \"6e7d168c2b80a59f6493f185f99f67edee6867b048b16d82c77e2e635656880c\"" Jan 29 17:00:51.027206 containerd[1503]: time="2025-01-29T17:00:51.026226674Z" level=info msg="RemoveContainer for \"6e7d168c2b80a59f6493f185f99f67edee6867b048b16d82c77e2e635656880c\" returns successfully" Jan 29 17:00:51.044338 kubelet[2800]: I0129 17:00:51.042099 2800 scope.go:117] "RemoveContainer" containerID="6e7d168c2b80a59f6493f185f99f67edee6867b048b16d82c77e2e635656880c" Jan 29 17:00:51.043152 systemd[1]: Removed slice kubepods-burstable-pod462e791b_f35b_4d7b_aece_37cd6b6c792c.slice - libcontainer container kubepods-burstable-pod462e791b_f35b_4d7b_aece_37cd6b6c792c.slice. Jan 29 17:00:51.044671 containerd[1503]: time="2025-01-29T17:00:51.042434517Z" level=error msg="ContainerStatus for \"6e7d168c2b80a59f6493f185f99f67edee6867b048b16d82c77e2e635656880c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e7d168c2b80a59f6493f185f99f67edee6867b048b16d82c77e2e635656880c\": not found" Jan 29 17:00:51.043392 systemd[1]: kubepods-burstable-pod462e791b_f35b_4d7b_aece_37cd6b6c792c.slice: Consumed 9.015s CPU time, 188.8M memory peak, 68.6M read from disk, 13.3M written to disk. 
Jan 29 17:00:51.052842 kubelet[2800]: E0129 17:00:51.052639 2800 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e7d168c2b80a59f6493f185f99f67edee6867b048b16d82c77e2e635656880c\": not found" containerID="6e7d168c2b80a59f6493f185f99f67edee6867b048b16d82c77e2e635656880c" Jan 29 17:00:51.053278 kubelet[2800]: I0129 17:00:51.052779 2800 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6e7d168c2b80a59f6493f185f99f67edee6867b048b16d82c77e2e635656880c"} err="failed to get container status \"6e7d168c2b80a59f6493f185f99f67edee6867b048b16d82c77e2e635656880c\": rpc error: code = NotFound desc = an error occurred when try to find container \"6e7d168c2b80a59f6493f185f99f67edee6867b048b16d82c77e2e635656880c\": not found" Jan 29 17:00:51.053278 kubelet[2800]: I0129 17:00:51.053198 2800 scope.go:117] "RemoveContainer" containerID="f48f964f6a528ee9212bd299ca17920e3e6e8249cb65d766fb9a820451e799a5" Jan 29 17:00:51.056858 containerd[1503]: time="2025-01-29T17:00:51.056191455Z" level=info msg="RemoveContainer for \"f48f964f6a528ee9212bd299ca17920e3e6e8249cb65d766fb9a820451e799a5\"" Jan 29 17:00:51.063866 containerd[1503]: time="2025-01-29T17:00:51.063647512Z" level=info msg="RemoveContainer for \"f48f964f6a528ee9212bd299ca17920e3e6e8249cb65d766fb9a820451e799a5\" returns successfully" Jan 29 17:00:51.064623 kubelet[2800]: I0129 17:00:51.064230 2800 scope.go:117] "RemoveContainer" containerID="e3e9b435b6f85d4d22c629891f208229fb7a1b41a3e1e2c1ac4bbc925e6ac87a" Jan 29 17:00:51.066929 containerd[1503]: time="2025-01-29T17:00:51.066481286Z" level=info msg="RemoveContainer for \"e3e9b435b6f85d4d22c629891f208229fb7a1b41a3e1e2c1ac4bbc925e6ac87a\"" Jan 29 17:00:51.073845 containerd[1503]: time="2025-01-29T17:00:51.073786028Z" level=info msg="RemoveContainer for \"e3e9b435b6f85d4d22c629891f208229fb7a1b41a3e1e2c1ac4bbc925e6ac87a\" returns successfully" Jan 29 17:00:51.074701 kubelet[2800]: I0129 17:00:51.074233 2800 scope.go:117] "RemoveContainer" containerID="202ff66262d62786ed72d20f6263246634d69f06073702e3d9e90408e22a1a92" Jan 29 17:00:51.076175 containerd[1503]: time="2025-01-29T17:00:51.076030444Z" level=info msg="RemoveContainer for \"202ff66262d62786ed72d20f6263246634d69f06073702e3d9e90408e22a1a92\"" Jan 29 17:00:51.083172 containerd[1503]: time="2025-01-29T17:00:51.082312450Z" level=info msg="RemoveContainer for \"202ff66262d62786ed72d20f6263246634d69f06073702e3d9e90408e22a1a92\" returns successfully" Jan 29 17:00:51.085036 kubelet[2800]: I0129 17:00:51.084532 2800 scope.go:117] "RemoveContainer" containerID="b292f252f37a7c7541236cc0ca9135b14de09335210eaeb700177e2c2bfe7f9b" Jan 29 17:00:51.087009 containerd[1503]: time="2025-01-29T17:00:51.086942156Z" level=info msg="RemoveContainer for \"b292f252f37a7c7541236cc0ca9135b14de09335210eaeb700177e2c2bfe7f9b\"" Jan 29 17:00:51.091795 containerd[1503]: time="2025-01-29T17:00:51.091756870Z" level=info msg="RemoveContainer for \"b292f252f37a7c7541236cc0ca9135b14de09335210eaeb700177e2c2bfe7f9b\" returns successfully" Jan 29 17:00:51.092108 kubelet[2800]: I0129 17:00:51.091909 2800 scope.go:117] "RemoveContainer" containerID="bb1ee914111f8b5181a0e4982488af17288749a3009fc34db097ec85d5deaf01" Jan 29 17:00:51.092869 containerd[1503]: time="2025-01-29T17:00:51.092839247Z" level=info msg="RemoveContainer for \"bb1ee914111f8b5181a0e4982488af17288749a3009fc34db097ec85d5deaf01\"" Jan 29 17:00:51.097142 containerd[1503]: 
time="2025-01-29T17:00:51.097108685Z" level=info msg="RemoveContainer for \"bb1ee914111f8b5181a0e4982488af17288749a3009fc34db097ec85d5deaf01\" returns successfully" Jan 29 17:00:51.097394 kubelet[2800]: I0129 17:00:51.097311 2800 scope.go:117] "RemoveContainer" containerID="f48f964f6a528ee9212bd299ca17920e3e6e8249cb65d766fb9a820451e799a5" Jan 29 17:00:51.097565 containerd[1503]: time="2025-01-29T17:00:51.097497567Z" level=error msg="ContainerStatus for \"f48f964f6a528ee9212bd299ca17920e3e6e8249cb65d766fb9a820451e799a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f48f964f6a528ee9212bd299ca17920e3e6e8249cb65d766fb9a820451e799a5\": not found" Jan 29 17:00:51.097668 kubelet[2800]: E0129 17:00:51.097637 2800 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f48f964f6a528ee9212bd299ca17920e3e6e8249cb65d766fb9a820451e799a5\": not found" containerID="f48f964f6a528ee9212bd299ca17920e3e6e8249cb65d766fb9a820451e799a5" Jan 29 17:00:51.097798 kubelet[2800]: I0129 17:00:51.097667 2800 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f48f964f6a528ee9212bd299ca17920e3e6e8249cb65d766fb9a820451e799a5"} err="failed to get container status \"f48f964f6a528ee9212bd299ca17920e3e6e8249cb65d766fb9a820451e799a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"f48f964f6a528ee9212bd299ca17920e3e6e8249cb65d766fb9a820451e799a5\": not found" Jan 29 17:00:51.097798 kubelet[2800]: I0129 17:00:51.097711 2800 scope.go:117] "RemoveContainer" containerID="e3e9b435b6f85d4d22c629891f208229fb7a1b41a3e1e2c1ac4bbc925e6ac87a" Jan 29 17:00:51.098000 containerd[1503]: time="2025-01-29T17:00:51.097847936Z" level=error msg="ContainerStatus for \"e3e9b435b6f85d4d22c629891f208229fb7a1b41a3e1e2c1ac4bbc925e6ac87a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e3e9b435b6f85d4d22c629891f208229fb7a1b41a3e1e2c1ac4bbc925e6ac87a\": not found" Jan 29 17:00:51.098065 kubelet[2800]: E0129 17:00:51.098003 2800 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e3e9b435b6f85d4d22c629891f208229fb7a1b41a3e1e2c1ac4bbc925e6ac87a\": not found" containerID="e3e9b435b6f85d4d22c629891f208229fb7a1b41a3e1e2c1ac4bbc925e6ac87a" Jan 29 17:00:51.098065 kubelet[2800]: I0129 17:00:51.098025 2800 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e3e9b435b6f85d4d22c629891f208229fb7a1b41a3e1e2c1ac4bbc925e6ac87a"} err="failed to get container status \"e3e9b435b6f85d4d22c629891f208229fb7a1b41a3e1e2c1ac4bbc925e6ac87a\": rpc error: code = NotFound desc = an error occurred when try to find container \"e3e9b435b6f85d4d22c629891f208229fb7a1b41a3e1e2c1ac4bbc925e6ac87a\": not found" Jan 29 17:00:51.098065 kubelet[2800]: I0129 17:00:51.098045 2800 scope.go:117] "RemoveContainer" containerID="202ff66262d62786ed72d20f6263246634d69f06073702e3d9e90408e22a1a92" Jan 29 17:00:51.098265 containerd[1503]: time="2025-01-29T17:00:51.098193877Z" level=error msg="ContainerStatus for \"202ff66262d62786ed72d20f6263246634d69f06073702e3d9e90408e22a1a92\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"202ff66262d62786ed72d20f6263246634d69f06073702e3d9e90408e22a1a92\": not found" Jan 29 17:00:51.098458 kubelet[2800]: E0129 
17:00:51.098373 2800 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"202ff66262d62786ed72d20f6263246634d69f06073702e3d9e90408e22a1a92\": not found" containerID="202ff66262d62786ed72d20f6263246634d69f06073702e3d9e90408e22a1a92" Jan 29 17:00:51.098458 kubelet[2800]: I0129 17:00:51.098391 2800 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"202ff66262d62786ed72d20f6263246634d69f06073702e3d9e90408e22a1a92"} err="failed to get container status \"202ff66262d62786ed72d20f6263246634d69f06073702e3d9e90408e22a1a92\": rpc error: code = NotFound desc = an error occurred when try to find container \"202ff66262d62786ed72d20f6263246634d69f06073702e3d9e90408e22a1a92\": not found" Jan 29 17:00:51.098458 kubelet[2800]: I0129 17:00:51.098404 2800 scope.go:117] "RemoveContainer" containerID="b292f252f37a7c7541236cc0ca9135b14de09335210eaeb700177e2c2bfe7f9b" Jan 29 17:00:51.098849 containerd[1503]: time="2025-01-29T17:00:51.098637543Z" level=error msg="ContainerStatus for \"b292f252f37a7c7541236cc0ca9135b14de09335210eaeb700177e2c2bfe7f9b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b292f252f37a7c7541236cc0ca9135b14de09335210eaeb700177e2c2bfe7f9b\": not found" Jan 29 17:00:51.098927 kubelet[2800]: E0129 17:00:51.098814 2800 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b292f252f37a7c7541236cc0ca9135b14de09335210eaeb700177e2c2bfe7f9b\": not found" containerID="b292f252f37a7c7541236cc0ca9135b14de09335210eaeb700177e2c2bfe7f9b" Jan 29 17:00:51.099072 kubelet[2800]: I0129 17:00:51.099005 2800 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b292f252f37a7c7541236cc0ca9135b14de09335210eaeb700177e2c2bfe7f9b"} err="failed to get container status \"b292f252f37a7c7541236cc0ca9135b14de09335210eaeb700177e2c2bfe7f9b\": rpc error: code = NotFound desc = an error occurred when try to find container \"b292f252f37a7c7541236cc0ca9135b14de09335210eaeb700177e2c2bfe7f9b\": not found" Jan 29 17:00:51.099072 kubelet[2800]: I0129 17:00:51.099029 2800 scope.go:117] "RemoveContainer" containerID="bb1ee914111f8b5181a0e4982488af17288749a3009fc34db097ec85d5deaf01" Jan 29 17:00:51.099342 containerd[1503]: time="2025-01-29T17:00:51.099296604Z" level=error msg="ContainerStatus for \"bb1ee914111f8b5181a0e4982488af17288749a3009fc34db097ec85d5deaf01\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb1ee914111f8b5181a0e4982488af17288749a3009fc34db097ec85d5deaf01\": not found" Jan 29 17:00:51.099576 kubelet[2800]: E0129 17:00:51.099544 2800 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb1ee914111f8b5181a0e4982488af17288749a3009fc34db097ec85d5deaf01\": not found" containerID="bb1ee914111f8b5181a0e4982488af17288749a3009fc34db097ec85d5deaf01" Jan 29 17:00:51.099645 kubelet[2800]: I0129 17:00:51.099582 2800 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bb1ee914111f8b5181a0e4982488af17288749a3009fc34db097ec85d5deaf01"} err="failed to get container status \"bb1ee914111f8b5181a0e4982488af17288749a3009fc34db097ec85d5deaf01\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"bb1ee914111f8b5181a0e4982488af17288749a3009fc34db097ec85d5deaf01\": not found" Jan 29 17:00:51.314051 kubelet[2800]: I0129 17:00:51.313964 2800 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="462e791b-f35b-4d7b-aece-37cd6b6c792c" path="/var/lib/kubelet/pods/462e791b-f35b-4d7b-aece-37cd6b6c792c/volumes" Jan 29 17:00:51.315513 kubelet[2800]: I0129 17:00:51.315475 2800 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac6bb1a3-f57f-44a2-8fe0-c6befd571fa6" path="/var/lib/kubelet/pods/ac6bb1a3-f57f-44a2-8fe0-c6befd571fa6/volumes" Jan 29 17:00:51.876481 sshd[4375]: Connection closed by 147.75.109.163 port 50730 Jan 29 17:00:51.877291 sshd-session[4373]: pam_unix(sshd:session): session closed for user core Jan 29 17:00:51.883405 systemd[1]: sshd@20-116.202.14.223:22-147.75.109.163:50730.service: Deactivated successfully. Jan 29 17:00:51.887053 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 17:00:51.888296 systemd-logind[1483]: Session 20 logged out. Waiting for processes to exit. Jan 29 17:00:51.890180 systemd-logind[1483]: Removed session 20. Jan 29 17:00:52.056957 systemd[1]: Started sshd@21-116.202.14.223:22-147.75.109.163:50802.service - OpenSSH per-connection server daemon (147.75.109.163:50802). Jan 29 17:00:53.033995 sshd[4540]: Accepted publickey for core from 147.75.109.163 port 50802 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:53.037156 sshd-session[4540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:53.046853 systemd-logind[1483]: New session 21 of user core. Jan 29 17:00:53.057165 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 17:00:53.457953 kubelet[2800]: E0129 17:00:53.457832 2800 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 17:00:54.332388 kubelet[2800]: E0129 17:00:54.330859 2800 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="462e791b-f35b-4d7b-aece-37cd6b6c792c" containerName="mount-cgroup" Jan 29 17:00:54.332632 kubelet[2800]: E0129 17:00:54.332608 2800 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="462e791b-f35b-4d7b-aece-37cd6b6c792c" containerName="cilium-agent" Jan 29 17:00:54.332783 kubelet[2800]: E0129 17:00:54.332764 2800 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ac6bb1a3-f57f-44a2-8fe0-c6befd571fa6" containerName="cilium-operator" Jan 29 17:00:54.332914 kubelet[2800]: E0129 17:00:54.332870 2800 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="462e791b-f35b-4d7b-aece-37cd6b6c792c" containerName="mount-bpf-fs" Jan 29 17:00:54.333118 kubelet[2800]: E0129 17:00:54.333101 2800 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="462e791b-f35b-4d7b-aece-37cd6b6c792c" containerName="clean-cilium-state" Jan 29 17:00:54.333220 kubelet[2800]: E0129 17:00:54.333205 2800 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="462e791b-f35b-4d7b-aece-37cd6b6c792c" containerName="apply-sysctl-overwrites" Jan 29 17:00:54.333356 kubelet[2800]: I0129 17:00:54.333340 2800 memory_manager.go:354] "RemoveStaleState removing state" podUID="462e791b-f35b-4d7b-aece-37cd6b6c792c" containerName="cilium-agent" Jan 29 17:00:54.333459 kubelet[2800]: I0129 17:00:54.333444 2800 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac6bb1a3-f57f-44a2-8fe0-c6befd571fa6" 
containerName="cilium-operator" Jan 29 17:00:54.366074 systemd[1]: Created slice kubepods-burstable-pod21d2224c_a188_4359_a1d7_ee59875b925f.slice - libcontainer container kubepods-burstable-pod21d2224c_a188_4359_a1d7_ee59875b925f.slice. Jan 29 17:00:54.444410 kubelet[2800]: I0129 17:00:54.444312 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/21d2224c-a188-4359-a1d7-ee59875b925f-bpf-maps\") pod \"cilium-rb69b\" (UID: \"21d2224c-a188-4359-a1d7-ee59875b925f\") " pod="kube-system/cilium-rb69b" Jan 29 17:00:54.444410 kubelet[2800]: I0129 17:00:54.444376 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/21d2224c-a188-4359-a1d7-ee59875b925f-hostproc\") pod \"cilium-rb69b\" (UID: \"21d2224c-a188-4359-a1d7-ee59875b925f\") " pod="kube-system/cilium-rb69b" Jan 29 17:00:54.444410 kubelet[2800]: I0129 17:00:54.444415 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6dwg\" (UniqueName: \"kubernetes.io/projected/21d2224c-a188-4359-a1d7-ee59875b925f-kube-api-access-c6dwg\") pod \"cilium-rb69b\" (UID: \"21d2224c-a188-4359-a1d7-ee59875b925f\") " pod="kube-system/cilium-rb69b" Jan 29 17:00:54.444782 kubelet[2800]: I0129 17:00:54.444451 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/21d2224c-a188-4359-a1d7-ee59875b925f-host-proc-sys-net\") pod \"cilium-rb69b\" (UID: \"21d2224c-a188-4359-a1d7-ee59875b925f\") " pod="kube-system/cilium-rb69b" Jan 29 17:00:54.444782 kubelet[2800]: I0129 17:00:54.444484 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/21d2224c-a188-4359-a1d7-ee59875b925f-cilium-cgroup\") pod \"cilium-rb69b\" (UID: \"21d2224c-a188-4359-a1d7-ee59875b925f\") " pod="kube-system/cilium-rb69b" Jan 29 17:00:54.444782 kubelet[2800]: I0129 17:00:54.444515 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/21d2224c-a188-4359-a1d7-ee59875b925f-clustermesh-secrets\") pod \"cilium-rb69b\" (UID: \"21d2224c-a188-4359-a1d7-ee59875b925f\") " pod="kube-system/cilium-rb69b" Jan 29 17:00:54.444782 kubelet[2800]: I0129 17:00:54.444544 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21d2224c-a188-4359-a1d7-ee59875b925f-etc-cni-netd\") pod \"cilium-rb69b\" (UID: \"21d2224c-a188-4359-a1d7-ee59875b925f\") " pod="kube-system/cilium-rb69b" Jan 29 17:00:54.444782 kubelet[2800]: I0129 17:00:54.444572 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21d2224c-a188-4359-a1d7-ee59875b925f-xtables-lock\") pod \"cilium-rb69b\" (UID: \"21d2224c-a188-4359-a1d7-ee59875b925f\") " pod="kube-system/cilium-rb69b" Jan 29 17:00:54.444782 kubelet[2800]: I0129 17:00:54.444601 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/21d2224c-a188-4359-a1d7-ee59875b925f-cni-path\") pod \"cilium-rb69b\" (UID: 
\"21d2224c-a188-4359-a1d7-ee59875b925f\") " pod="kube-system/cilium-rb69b" Jan 29 17:00:54.445109 kubelet[2800]: I0129 17:00:54.444639 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/21d2224c-a188-4359-a1d7-ee59875b925f-hubble-tls\") pod \"cilium-rb69b\" (UID: \"21d2224c-a188-4359-a1d7-ee59875b925f\") " pod="kube-system/cilium-rb69b" Jan 29 17:00:54.445109 kubelet[2800]: I0129 17:00:54.444670 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/21d2224c-a188-4359-a1d7-ee59875b925f-cilium-ipsec-secrets\") pod \"cilium-rb69b\" (UID: \"21d2224c-a188-4359-a1d7-ee59875b925f\") " pod="kube-system/cilium-rb69b" Jan 29 17:00:54.445109 kubelet[2800]: I0129 17:00:54.444726 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/21d2224c-a188-4359-a1d7-ee59875b925f-host-proc-sys-kernel\") pod \"cilium-rb69b\" (UID: \"21d2224c-a188-4359-a1d7-ee59875b925f\") " pod="kube-system/cilium-rb69b" Jan 29 17:00:54.445109 kubelet[2800]: I0129 17:00:54.444757 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/21d2224c-a188-4359-a1d7-ee59875b925f-cilium-run\") pod \"cilium-rb69b\" (UID: \"21d2224c-a188-4359-a1d7-ee59875b925f\") " pod="kube-system/cilium-rb69b" Jan 29 17:00:54.445109 kubelet[2800]: I0129 17:00:54.444789 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21d2224c-a188-4359-a1d7-ee59875b925f-lib-modules\") pod \"cilium-rb69b\" (UID: \"21d2224c-a188-4359-a1d7-ee59875b925f\") " pod="kube-system/cilium-rb69b" Jan 29 17:00:54.445109 kubelet[2800]: I0129 17:00:54.444816 2800 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/21d2224c-a188-4359-a1d7-ee59875b925f-cilium-config-path\") pod \"cilium-rb69b\" (UID: \"21d2224c-a188-4359-a1d7-ee59875b925f\") " pod="kube-system/cilium-rb69b" Jan 29 17:00:54.519860 sshd[4542]: Connection closed by 147.75.109.163 port 50802 Jan 29 17:00:54.521359 sshd-session[4540]: pam_unix(sshd:session): session closed for user core Jan 29 17:00:54.529426 systemd[1]: sshd@21-116.202.14.223:22-147.75.109.163:50802.service: Deactivated successfully. Jan 29 17:00:54.535273 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 17:00:54.536843 systemd-logind[1483]: Session 21 logged out. Waiting for processes to exit. Jan 29 17:00:54.539098 systemd-logind[1483]: Removed session 21. Jan 29 17:00:54.670049 containerd[1503]: time="2025-01-29T17:00:54.669769753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rb69b,Uid:21d2224c-a188-4359-a1d7-ee59875b925f,Namespace:kube-system,Attempt:0,}" Jan 29 17:00:54.725005 containerd[1503]: time="2025-01-29T17:00:54.722662319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 17:00:54.725005 containerd[1503]: time="2025-01-29T17:00:54.722859880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 17:00:54.725005 containerd[1503]: time="2025-01-29T17:00:54.723011125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 17:00:54.725005 containerd[1503]: time="2025-01-29T17:00:54.723174193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 17:00:54.724012 systemd[1]: Started sshd@22-116.202.14.223:22-147.75.109.163:50814.service - OpenSSH per-connection server daemon (147.75.109.163:50814). Jan 29 17:00:54.757399 systemd[1]: Started cri-containerd-6f5efe233c67581c9b165c2e8319f6d95534298c2fa4cdad401f226876a1b4d0.scope - libcontainer container 6f5efe233c67581c9b165c2e8319f6d95534298c2fa4cdad401f226876a1b4d0. Jan 29 17:00:54.804106 containerd[1503]: time="2025-01-29T17:00:54.804059492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rb69b,Uid:21d2224c-a188-4359-a1d7-ee59875b925f,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f5efe233c67581c9b165c2e8319f6d95534298c2fa4cdad401f226876a1b4d0\"" Jan 29 17:00:54.813483 containerd[1503]: time="2025-01-29T17:00:54.813384577Z" level=info msg="CreateContainer within sandbox \"6f5efe233c67581c9b165c2e8319f6d95534298c2fa4cdad401f226876a1b4d0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 17:00:54.829402 containerd[1503]: time="2025-01-29T17:00:54.829288324Z" level=info msg="CreateContainer within sandbox \"6f5efe233c67581c9b165c2e8319f6d95534298c2fa4cdad401f226876a1b4d0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"98b7cc7f948adb730dbfabf10f226910f55e4df0c35f4722a9635782b50d0477\"" Jan 29 17:00:54.832498 containerd[1503]: time="2025-01-29T17:00:54.830217824Z" level=info msg="StartContainer for \"98b7cc7f948adb730dbfabf10f226910f55e4df0c35f4722a9635782b50d0477\"" Jan 29 17:00:54.862050 systemd[1]: Started cri-containerd-98b7cc7f948adb730dbfabf10f226910f55e4df0c35f4722a9635782b50d0477.scope - libcontainer container 98b7cc7f948adb730dbfabf10f226910f55e4df0c35f4722a9635782b50d0477. Jan 29 17:00:54.894755 containerd[1503]: time="2025-01-29T17:00:54.894688575Z" level=info msg="StartContainer for \"98b7cc7f948adb730dbfabf10f226910f55e4df0c35f4722a9635782b50d0477\" returns successfully" Jan 29 17:00:54.914143 systemd[1]: cri-containerd-98b7cc7f948adb730dbfabf10f226910f55e4df0c35f4722a9635782b50d0477.scope: Deactivated successfully. Jan 29 17:00:54.914621 systemd[1]: cri-containerd-98b7cc7f948adb730dbfabf10f226910f55e4df0c35f4722a9635782b50d0477.scope: Consumed 27ms CPU time, 9.6M memory peak, 3M read from disk. 
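The entries above trace the CRI call sequence the kubelet drives for the new cilium-rb69b pod: RunPodSandbox returns a sandbox ID (6f5efe23…), CreateContainer places the mount-cgroup init container inside that sandbox, and StartContainer launches it. Below is a minimal sketch of the same three calls against the containerd CRI socket, using the k8s.io/cri-api request types; the reduced configs and the image reference are illustrative, not what the kubelet actually sends.

```go
// Sketch of the RunPodSandbox -> CreateContainer -> StartContainer flow
// seen in the log. Assumes the default containerd CRI socket path.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Pod-level sandbox, matching the metadata visible in the log.
	sbConfig := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-rb69b",
			Namespace: "kube-system",
			Uid:       "21d2224c-a188-4359-a1d7-ee59875b925f",
		},
	}
	sandbox, err := rt.RunPodSandbox(ctx,
		&runtimeapi.RunPodSandboxRequest{Config: sbConfig})
	if err != nil {
		log.Fatal(err)
	}

	// First init container inside the returned sandbox.
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandbox.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup"},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:v1.x"}, // hypothetical tag
		},
		SandboxConfig: sbConfig,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
}
```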
Jan 29 17:00:54.970066 containerd[1503]: time="2025-01-29T17:00:54.969819073Z" level=info msg="shim disconnected" id=98b7cc7f948adb730dbfabf10f226910f55e4df0c35f4722a9635782b50d0477 namespace=k8s.io Jan 29 17:00:54.970655 containerd[1503]: time="2025-01-29T17:00:54.970360733Z" level=warning msg="cleaning up after shim disconnected" id=98b7cc7f948adb730dbfabf10f226910f55e4df0c35f4722a9635782b50d0477 namespace=k8s.io Jan 29 17:00:54.970655 containerd[1503]: time="2025-01-29T17:00:54.970387182Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 17:00:55.042908 containerd[1503]: time="2025-01-29T17:00:55.042792778Z" level=info msg="CreateContainer within sandbox \"6f5efe233c67581c9b165c2e8319f6d95534298c2fa4cdad401f226876a1b4d0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 17:00:55.066155 containerd[1503]: time="2025-01-29T17:00:55.066071327Z" level=info msg="CreateContainer within sandbox \"6f5efe233c67581c9b165c2e8319f6d95534298c2fa4cdad401f226876a1b4d0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"44157d5169b344fefea7048cf84ff196f68abb54e21176c665650d59edc8390b\"" Jan 29 17:00:55.069130 containerd[1503]: time="2025-01-29T17:00:55.068972298Z" level=info msg="StartContainer for \"44157d5169b344fefea7048cf84ff196f68abb54e21176c665650d59edc8390b\"" Jan 29 17:00:55.112173 systemd[1]: Started cri-containerd-44157d5169b344fefea7048cf84ff196f68abb54e21176c665650d59edc8390b.scope - libcontainer container 44157d5169b344fefea7048cf84ff196f68abb54e21176c665650d59edc8390b. Jan 29 17:00:55.158947 containerd[1503]: time="2025-01-29T17:00:55.158224494Z" level=info msg="StartContainer for \"44157d5169b344fefea7048cf84ff196f68abb54e21176c665650d59edc8390b\" returns successfully" Jan 29 17:00:55.173250 systemd[1]: cri-containerd-44157d5169b344fefea7048cf84ff196f68abb54e21176c665650d59edc8390b.scope: Deactivated successfully. Jan 29 17:00:55.173828 systemd[1]: cri-containerd-44157d5169b344fefea7048cf84ff196f68abb54e21176c665650d59edc8390b.scope: Consumed 32ms CPU time, 7.6M memory peak, 2.1M read from disk. Jan 29 17:00:55.214179 containerd[1503]: time="2025-01-29T17:00:55.214049528Z" level=info msg="shim disconnected" id=44157d5169b344fefea7048cf84ff196f68abb54e21176c665650d59edc8390b namespace=k8s.io Jan 29 17:00:55.214179 containerd[1503]: time="2025-01-29T17:00:55.214136673Z" level=warning msg="cleaning up after shim disconnected" id=44157d5169b344fefea7048cf84ff196f68abb54e21176c665650d59edc8390b namespace=k8s.io Jan 29 17:00:55.214179 containerd[1503]: time="2025-01-29T17:00:55.214177118Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 17:00:55.753821 sshd[4571]: Accepted publickey for core from 147.75.109.163 port 50814 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:55.757558 sshd-session[4571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:55.769062 systemd-logind[1483]: New session 22 of user core. Jan 29 17:00:55.776208 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 17:00:56.049241 containerd[1503]: time="2025-01-29T17:00:56.048771819Z" level=info msg="CreateContainer within sandbox \"6f5efe233c67581c9b165c2e8319f6d95534298c2fa4cdad401f226876a1b4d0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 17:00:56.079826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2174072637.mount: Deactivated successfully. 
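The "Consumed … CPU time, … memory peak, … read from disk" lines come from systemd's per-unit cgroup accounting, reported as each short-lived cri-containerd-….scope deactivates. The same figures can be read over D-Bus while a unit is alive; here is a sketch assuming the go-systemd bindings (github.com/coreos/go-systemd/v22), where availability of properties such as MemoryPeak depends on the systemd version.

```go
// Sketch: reading a scope's resource accounting over D-Bus, the same
// counters systemd prints on deactivation. Unit name taken from the log.
package main

import (
	"context"
	"fmt"
	"log"

	sd "github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()
	conn, err := sd.NewWithContext(ctx) // connects to the system bus
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	unit := "cri-containerd-98b7cc7f948adb730dbfabf10f226910f55e4df0c35f4722a9635782b50d0477.scope"
	props, err := conn.GetUnitTypePropertiesContext(ctx, unit, "Scope")
	if err != nil {
		log.Fatal(err) // the unit is gone once the scope has been collected
	}
	// CPUUsageNSec is cumulative CPU time in nanoseconds; MemoryPeak may be
	// absent on older systemd releases.
	fmt.Println("CPUUsageNSec:", props["CPUUsageNSec"])
	fmt.Println("MemoryPeak:", props["MemoryPeak"])
}
```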
Jan 29 17:00:56.086379 containerd[1503]: time="2025-01-29T17:00:56.086305921Z" level=info msg="CreateContainer within sandbox \"6f5efe233c67581c9b165c2e8319f6d95534298c2fa4cdad401f226876a1b4d0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dcf44dd85c728b157a3aff6d4b753e95ded7cefc8761eb52667169bd010c3ccf\"" Jan 29 17:00:56.087455 containerd[1503]: time="2025-01-29T17:00:56.087088715Z" level=info msg="StartContainer for \"dcf44dd85c728b157a3aff6d4b753e95ded7cefc8761eb52667169bd010c3ccf\"" Jan 29 17:00:56.130157 systemd[1]: Started cri-containerd-dcf44dd85c728b157a3aff6d4b753e95ded7cefc8761eb52667169bd010c3ccf.scope - libcontainer container dcf44dd85c728b157a3aff6d4b753e95ded7cefc8761eb52667169bd010c3ccf. Jan 29 17:00:56.191080 containerd[1503]: time="2025-01-29T17:00:56.191015991Z" level=info msg="StartContainer for \"dcf44dd85c728b157a3aff6d4b753e95ded7cefc8761eb52667169bd010c3ccf\" returns successfully" Jan 29 17:00:56.199249 systemd[1]: cri-containerd-dcf44dd85c728b157a3aff6d4b753e95ded7cefc8761eb52667169bd010c3ccf.scope: Deactivated successfully. Jan 29 17:00:56.234012 containerd[1503]: time="2025-01-29T17:00:56.233922265Z" level=info msg="shim disconnected" id=dcf44dd85c728b157a3aff6d4b753e95ded7cefc8761eb52667169bd010c3ccf namespace=k8s.io Jan 29 17:00:56.234012 containerd[1503]: time="2025-01-29T17:00:56.233974363Z" level=warning msg="cleaning up after shim disconnected" id=dcf44dd85c728b157a3aff6d4b753e95ded7cefc8761eb52667169bd010c3ccf namespace=k8s.io Jan 29 17:00:56.234012 containerd[1503]: time="2025-01-29T17:00:56.233984001Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 17:00:56.440231 sshd[4724]: Connection closed by 147.75.109.163 port 50814 Jan 29 17:00:56.441512 sshd-session[4571]: pam_unix(sshd:session): session closed for user core Jan 29 17:00:56.448413 systemd[1]: sshd@22-116.202.14.223:22-147.75.109.163:50814.service: Deactivated successfully. Jan 29 17:00:56.454025 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 17:00:56.456096 systemd-logind[1483]: Session 22 logged out. Waiting for processes to exit. Jan 29 17:00:56.460312 systemd-logind[1483]: Removed session 22. Jan 29 17:00:56.570738 systemd[1]: run-containerd-runc-k8s.io-dcf44dd85c728b157a3aff6d4b753e95ded7cefc8761eb52667169bd010c3ccf-runc.AnUSC0.mount: Deactivated successfully. Jan 29 17:00:56.571025 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dcf44dd85c728b157a3aff6d4b753e95ded7cefc8761eb52667169bd010c3ccf-rootfs.mount: Deactivated successfully. Jan 29 17:00:56.627429 systemd[1]: Started sshd@23-116.202.14.223:22-147.75.109.163:50824.service - OpenSSH per-connection server daemon (147.75.109.163:50824). 
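The mount-bpf-fs init container that just ran and exited exists to make sure the BPF filesystem is mounted at /sys/fs/bpf before the agent starts. At its core that is a single mount(2) call; a minimal sketch with golang.org/x/sys/unix follows (real code first checks /proc/self/mounts, and the call needs CAP_SYS_ADMIN).

```go
// Sketch: the essential operation of a mount-bpf-fs style init container.
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Equivalent of: mount -t bpf bpffs /sys/fs/bpf
	// Production code checks whether /sys/fs/bpf is already a bpffs
	// mountpoint before calling this, to avoid stacking mounts.
	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		log.Fatalf("mounting bpffs: %v", err)
	}
	log.Println("bpffs mounted at /sys/fs/bpf")
}
```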
Jan 29 17:00:57.056174 containerd[1503]: time="2025-01-29T17:00:57.056053268Z" level=info msg="CreateContainer within sandbox \"6f5efe233c67581c9b165c2e8319f6d95534298c2fa4cdad401f226876a1b4d0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 17:00:57.099085 containerd[1503]: time="2025-01-29T17:00:57.095293299Z" level=info msg="CreateContainer within sandbox \"6f5efe233c67581c9b165c2e8319f6d95534298c2fa4cdad401f226876a1b4d0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a5ac9c91a7d92eac1a42ab4d95d20650e3662d918b877a80eb2cb731a4ce7d34\"" Jan 29 17:00:57.099085 containerd[1503]: time="2025-01-29T17:00:57.096748929Z" level=info msg="StartContainer for \"a5ac9c91a7d92eac1a42ab4d95d20650e3662d918b877a80eb2cb731a4ce7d34\"" Jan 29 17:00:57.176205 systemd[1]: Started cri-containerd-a5ac9c91a7d92eac1a42ab4d95d20650e3662d918b877a80eb2cb731a4ce7d34.scope - libcontainer container a5ac9c91a7d92eac1a42ab4d95d20650e3662d918b877a80eb2cb731a4ce7d34. Jan 29 17:00:57.212567 systemd[1]: cri-containerd-a5ac9c91a7d92eac1a42ab4d95d20650e3662d918b877a80eb2cb731a4ce7d34.scope: Deactivated successfully. Jan 29 17:00:57.214715 containerd[1503]: time="2025-01-29T17:00:57.214542821Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21d2224c_a188_4359_a1d7_ee59875b925f.slice/cri-containerd-a5ac9c91a7d92eac1a42ab4d95d20650e3662d918b877a80eb2cb731a4ce7d34.scope/memory.events\": no such file or directory" Jan 29 17:00:57.218140 containerd[1503]: time="2025-01-29T17:00:57.217980130Z" level=info msg="StartContainer for \"a5ac9c91a7d92eac1a42ab4d95d20650e3662d918b877a80eb2cb731a4ce7d34\" returns successfully" Jan 29 17:00:57.254292 containerd[1503]: time="2025-01-29T17:00:57.254192824Z" level=info msg="shim disconnected" id=a5ac9c91a7d92eac1a42ab4d95d20650e3662d918b877a80eb2cb731a4ce7d34 namespace=k8s.io Jan 29 17:00:57.254292 containerd[1503]: time="2025-01-29T17:00:57.254277744Z" level=warning msg="cleaning up after shim disconnected" id=a5ac9c91a7d92eac1a42ab4d95d20650e3662d918b877a80eb2cb731a4ce7d34 namespace=k8s.io Jan 29 17:00:57.254292 containerd[1503]: time="2025-01-29T17:00:57.254291640Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 17:00:57.571554 systemd[1]: run-containerd-runc-k8s.io-a5ac9c91a7d92eac1a42ab4d95d20650e3662d918b877a80eb2cb731a4ce7d34-runc.CX0bCF.mount: Deactivated successfully. Jan 29 17:00:57.571810 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5ac9c91a7d92eac1a42ab4d95d20650e3662d918b877a80eb2cb731a4ce7d34-rootfs.mount: Deactivated successfully. Jan 29 17:00:57.636618 sshd[4788]: Accepted publickey for core from 147.75.109.163 port 50824 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:57.639378 sshd-session[4788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:57.644985 systemd-logind[1483]: New session 23 of user core. Jan 29 17:00:57.650107 systemd[1]: Started session-23.scope - Session 23 of User core. 
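The "failed to add inotify watch for …/memory.events" warning above is a benign race: clean-cilium-state exited so quickly that its cgroup directory was removed before containerd could attach the watch. For a cgroup that still exists, memory.events is a flat key/value file; a short stdlib sketch of reading one (the path below is illustrative):

```go
// Sketch: reading a cgroup v2 memory.events file, the file containerd
// tried to watch in the warning above.
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// For a live unit the path looks like
	// /sys/fs/cgroup/<slice>/<scope>/memory.events.
	data, err := os.ReadFile("/sys/fs/cgroup/system.slice/memory.events")
	if err != nil {
		log.Fatal(err) // ENOENT here is exactly the race containerd warned about
	}
	// Keys include low, high, max, oom, oom_kill.
	for _, line := range strings.Split(strings.TrimSpace(string(data)), "\n") {
		if f := strings.Fields(line); len(f) == 2 {
			fmt.Printf("%-8s %s\n", f[0], f[1])
		}
	}
}
```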
Jan 29 17:00:58.069692 containerd[1503]: time="2025-01-29T17:00:58.069289030Z" level=info msg="CreateContainer within sandbox \"6f5efe233c67581c9b165c2e8319f6d95534298c2fa4cdad401f226876a1b4d0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 17:00:58.109194 containerd[1503]: time="2025-01-29T17:00:58.109113910Z" level=info msg="CreateContainer within sandbox \"6f5efe233c67581c9b165c2e8319f6d95534298c2fa4cdad401f226876a1b4d0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c9d7ae6d0e6da73dbb434440fad425141807a66ebe298b68738d8fbf90f80b11\"" Jan 29 17:00:58.114076 containerd[1503]: time="2025-01-29T17:00:58.112174070Z" level=info msg="StartContainer for \"c9d7ae6d0e6da73dbb434440fad425141807a66ebe298b68738d8fbf90f80b11\"" Jan 29 17:00:58.175200 systemd[1]: Started cri-containerd-c9d7ae6d0e6da73dbb434440fad425141807a66ebe298b68738d8fbf90f80b11.scope - libcontainer container c9d7ae6d0e6da73dbb434440fad425141807a66ebe298b68738d8fbf90f80b11. Jan 29 17:00:58.238783 containerd[1503]: time="2025-01-29T17:00:58.238696076Z" level=info msg="StartContainer for \"c9d7ae6d0e6da73dbb434440fad425141807a66ebe298b68738d8fbf90f80b11\" returns successfully" Jan 29 17:00:58.348267 kubelet[2800]: I0129 17:00:58.348018 2800 setters.go:600] "Node became not ready" node="ci-4230-0-0-b-bb52c92a60" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T17:00:58Z","lastTransitionTime":"2025-01-29T17:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 29 17:00:58.979915 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 29 17:00:59.090686 kubelet[2800]: I0129 17:00:59.090362 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rb69b" podStartSLOduration=5.090333785 podStartE2EDuration="5.090333785s" podCreationTimestamp="2025-01-29 17:00:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 17:00:59.084209317 +0000 UTC m=+225.911828341" watchObservedRunningTime="2025-01-29 17:00:59.090333785 +0000 UTC m=+225.917952809" Jan 29 17:01:00.668095 systemd[1]: run-containerd-runc-k8s.io-c9d7ae6d0e6da73dbb434440fad425141807a66ebe298b68738d8fbf90f80b11-runc.HAycFX.mount: Deactivated successfully. Jan 29 17:01:02.464622 systemd-networkd[1409]: lxc_health: Link UP Jan 29 17:01:02.480095 systemd-networkd[1409]: lxc_health: Gained carrier Jan 29 17:01:02.888458 systemd[1]: run-containerd-runc-k8s.io-c9d7ae6d0e6da73dbb434440fad425141807a66ebe298b68738d8fbf90f80b11-runc.QRhCkm.mount: Deactivated successfully. Jan 29 17:01:03.611095 systemd-networkd[1409]: lxc_health: Gained IPv6LL Jan 29 17:01:09.675640 sshd[4846]: Connection closed by 147.75.109.163 port 50824 Jan 29 17:01:09.677617 sshd-session[4788]: pam_unix(sshd:session): session closed for user core Jan 29 17:01:09.684978 systemd[1]: sshd@23-116.202.14.223:22-147.75.109.163:50824.service: Deactivated successfully. Jan 29 17:01:09.691555 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 17:01:09.695356 systemd-logind[1483]: Session 23 logged out. Waiting for processes to exit. Jan 29 17:01:09.697688 systemd-logind[1483]: Removed session 23. 
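At 17:00:58 the kubelet records its own NodeReady condition as False (reason KubeletNotReady, "cni plugin not initialized"), and the condition clears once the freshly started cilium-agent installs the CNI configuration, which is also when the lxc_health link comes up. The same condition is visible from the API server; a sketch with client-go, where the kubeconfig path is illustrative:

```go
// Sketch: inspecting the NodeReady condition the kubelet set in the log.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(),
		"ci-4230-0-0-b-bb52c92a60", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
		}
	}
}
```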
Jan 29 17:01:13.291402 containerd[1503]: time="2025-01-29T17:01:13.291310039Z" level=info msg="StopPodSandbox for \"89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51\"" Jan 29 17:01:13.292390 containerd[1503]: time="2025-01-29T17:01:13.291469920Z" level=info msg="TearDown network for sandbox \"89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51\" successfully" Jan 29 17:01:13.292390 containerd[1503]: time="2025-01-29T17:01:13.291491612Z" level=info msg="StopPodSandbox for \"89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51\" returns successfully" Jan 29 17:01:13.292390 containerd[1503]: time="2025-01-29T17:01:13.291963961Z" level=info msg="RemovePodSandbox for \"89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51\"" Jan 29 17:01:13.292390 containerd[1503]: time="2025-01-29T17:01:13.292013062Z" level=info msg="Forcibly stopping sandbox \"89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51\"" Jan 29 17:01:13.292390 containerd[1503]: time="2025-01-29T17:01:13.292095498Z" level=info msg="TearDown network for sandbox \"89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51\" successfully" Jan 29 17:01:13.301234 containerd[1503]: time="2025-01-29T17:01:13.300955109Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 17:01:13.301427 containerd[1503]: time="2025-01-29T17:01:13.301255544Z" level=info msg="RemovePodSandbox \"89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51\" returns successfully" Jan 29 17:01:13.302035 containerd[1503]: time="2025-01-29T17:01:13.301978645Z" level=info msg="StopPodSandbox for \"d5158af7e701d5efd3d5a9e5f09866569dc3c4d791e2102469f69c98e38bdabb\"" Jan 29 17:01:13.302139 containerd[1503]: time="2025-01-29T17:01:13.302099713Z" level=info msg="TearDown network for sandbox \"d5158af7e701d5efd3d5a9e5f09866569dc3c4d791e2102469f69c98e38bdabb\" successfully" Jan 29 17:01:13.302139 containerd[1503]: time="2025-01-29T17:01:13.302118669Z" level=info msg="StopPodSandbox for \"d5158af7e701d5efd3d5a9e5f09866569dc3c4d791e2102469f69c98e38bdabb\" returns successfully" Jan 29 17:01:13.307034 containerd[1503]: time="2025-01-29T17:01:13.302542306Z" level=info msg="RemovePodSandbox for \"d5158af7e701d5efd3d5a9e5f09866569dc3c4d791e2102469f69c98e38bdabb\"" Jan 29 17:01:13.307034 containerd[1503]: time="2025-01-29T17:01:13.302583553Z" level=info msg="Forcibly stopping sandbox \"d5158af7e701d5efd3d5a9e5f09866569dc3c4d791e2102469f69c98e38bdabb\"" Jan 29 17:01:13.307034 containerd[1503]: time="2025-01-29T17:01:13.302679885Z" level=info msg="TearDown network for sandbox \"d5158af7e701d5efd3d5a9e5f09866569dc3c4d791e2102469f69c98e38bdabb\" successfully" Jan 29 17:01:13.316348 containerd[1503]: time="2025-01-29T17:01:13.316258695Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d5158af7e701d5efd3d5a9e5f09866569dc3c4d791e2102469f69c98e38bdabb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
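This stretch is the kubelet's periodic sandbox garbage collection: StopPodSandbox followed by RemovePodSandbox for each sandbox whose pod is gone, with "not found" tolerated as success, the same pattern behind the ContainerStatus NotFound errors at 17:00:51. A self-contained sketch of that call pair against the containerd CRI socket, with error handling reduced for brevity:

```go
// Sketch: idempotent sandbox teardown as the kubelet's GC performs it.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	id := "89191dd1a0f698f0a1495bc13a788c72aa9b72ad9c2fc684fabf94ce4f6e0a51"
	if _, err := rt.StopPodSandbox(ctx,
		&runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
		log.Fatal(err)
	}
	_, err = rt.RemovePodSandbox(ctx,
		&runtimeapi.RemovePodSandboxRequest{PodSandboxId: id})
	if err != nil && status.Code(err) != codes.NotFound {
		log.Fatal(err) // NotFound means it is already gone: the desired state
	}
}
```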
Jan 29 17:01:13.316761 containerd[1503]: time="2025-01-29T17:01:13.316627730Z" level=info msg="RemovePodSandbox \"d5158af7e701d5efd3d5a9e5f09866569dc3c4d791e2102469f69c98e38bdabb\" returns successfully" Jan 29 17:01:25.825955 kubelet[2800]: E0129 17:01:25.824835 2800 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:45498->10.0.0.2:2379: read: connection timed out" Jan 29 17:01:25.842964 systemd[1]: cri-containerd-9c6ace63ef8c3e9b9b8940e7e42cd7a85b7d53f7ee0228a1dedfc6f3bace713f.scope: Deactivated successfully. Jan 29 17:01:25.843545 systemd[1]: cri-containerd-9c6ace63ef8c3e9b9b8940e7e42cd7a85b7d53f7ee0228a1dedfc6f3bace713f.scope: Consumed 1.661s CPU time, 27.4M memory peak, 9M read from disk. Jan 29 17:01:25.894324 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c6ace63ef8c3e9b9b8940e7e42cd7a85b7d53f7ee0228a1dedfc6f3bace713f-rootfs.mount: Deactivated successfully. Jan 29 17:01:25.912772 containerd[1503]: time="2025-01-29T17:01:25.912683910Z" level=info msg="shim disconnected" id=9c6ace63ef8c3e9b9b8940e7e42cd7a85b7d53f7ee0228a1dedfc6f3bace713f namespace=k8s.io Jan 29 17:01:25.912772 containerd[1503]: time="2025-01-29T17:01:25.912760755Z" level=warning msg="cleaning up after shim disconnected" id=9c6ace63ef8c3e9b9b8940e7e42cd7a85b7d53f7ee0228a1dedfc6f3bace713f namespace=k8s.io Jan 29 17:01:25.912772 containerd[1503]: time="2025-01-29T17:01:25.912773458Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 17:01:26.069140 systemd[1]: cri-containerd-d7a1fe073b50b6c14c0656a30a0fef2be73a996f9551e1178007368e8f5f9771.scope: Deactivated successfully. Jan 29 17:01:26.070457 systemd[1]: cri-containerd-d7a1fe073b50b6c14c0656a30a0fef2be73a996f9551e1178007368e8f5f9771.scope: Consumed 6.087s CPU time, 70.1M memory peak, 19.3M read from disk. Jan 29 17:01:26.138039 kubelet[2800]: I0129 17:01:26.137541 2800 scope.go:117] "RemoveContainer" containerID="9c6ace63ef8c3e9b9b8940e7e42cd7a85b7d53f7ee0228a1dedfc6f3bace713f" Jan 29 17:01:26.148951 containerd[1503]: time="2025-01-29T17:01:26.145698117Z" level=info msg="CreateContainer within sandbox \"b255604e20606c96fe4e3bacfc09eaa4f903f2fd3a887ed161ea0c6297fb6303\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 29 17:01:26.150087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7a1fe073b50b6c14c0656a30a0fef2be73a996f9551e1178007368e8f5f9771-rootfs.mount: Deactivated successfully. Jan 29 17:01:26.159395 containerd[1503]: time="2025-01-29T17:01:26.159325665Z" level=info msg="shim disconnected" id=d7a1fe073b50b6c14c0656a30a0fef2be73a996f9551e1178007368e8f5f9771 namespace=k8s.io Jan 29 17:01:26.159395 containerd[1503]: time="2025-01-29T17:01:26.159383844Z" level=warning msg="cleaning up after shim disconnected" id=d7a1fe073b50b6c14c0656a30a0fef2be73a996f9551e1178007368e8f5f9771 namespace=k8s.io Jan 29 17:01:26.159395 containerd[1503]: time="2025-01-29T17:01:26.159392150Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 17:01:26.183154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2633033990.mount: Deactivated successfully. 
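The "Failed to update lease" error that opens this stretch is the kubelet's node heartbeat: an Update of its Lease object in the kube-node-lease namespace, which here timed out reading from etcd. Apparently under the same etcd pressure, the kube-scheduler and then kube-controller-manager containers exit and the kubelet recreates them (note Attempt:1 in the CreateContainer calls). One heartbeat renewal looks roughly like the following client-go sketch; the kubeconfig path is illustrative.

```go
// Sketch: a single node-lease renewal, the call that timed out in the log.
package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()

	// The lease is named after the node and lives in kube-node-lease.
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").
		Get(ctx, "ci-4230-0-0-b-bb52c92a60", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	now := metav1.NewMicroTime(time.Now())
	lease.Spec.RenewTime = &now
	if _, err := cs.CoordinationV1().Leases("kube-node-lease").
		Update(ctx, lease, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err) // this Update is what failed against etcd in the log
	}
}
```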
Jan 29 17:01:26.190404 containerd[1503]: time="2025-01-29T17:01:26.190341143Z" level=info msg="CreateContainer within sandbox \"b255604e20606c96fe4e3bacfc09eaa4f903f2fd3a887ed161ea0c6297fb6303\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"1a7eb84381caa202a92a19e488e21d868099c786f0acd89d0a6ca6d7e93c723e\"" Jan 29 17:01:26.191010 containerd[1503]: time="2025-01-29T17:01:26.190986748Z" level=info msg="StartContainer for \"1a7eb84381caa202a92a19e488e21d868099c786f0acd89d0a6ca6d7e93c723e\"" Jan 29 17:01:26.227141 systemd[1]: Started cri-containerd-1a7eb84381caa202a92a19e488e21d868099c786f0acd89d0a6ca6d7e93c723e.scope - libcontainer container 1a7eb84381caa202a92a19e488e21d868099c786f0acd89d0a6ca6d7e93c723e. Jan 29 17:01:26.286803 containerd[1503]: time="2025-01-29T17:01:26.286705331Z" level=info msg="StartContainer for \"1a7eb84381caa202a92a19e488e21d868099c786f0acd89d0a6ca6d7e93c723e\" returns successfully" Jan 29 17:01:27.146321 kubelet[2800]: I0129 17:01:27.146255 2800 scope.go:117] "RemoveContainer" containerID="d7a1fe073b50b6c14c0656a30a0fef2be73a996f9551e1178007368e8f5f9771" Jan 29 17:01:27.150347 containerd[1503]: time="2025-01-29T17:01:27.150244516Z" level=info msg="CreateContainer within sandbox \"9e2e0fdb1ed81f797241f6dfaecb411104fc9b6f08b0e1cf663cc10114c250c7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 29 17:01:27.179334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount967804507.mount: Deactivated successfully. Jan 29 17:01:27.180463 containerd[1503]: time="2025-01-29T17:01:27.180292794Z" level=info msg="CreateContainer within sandbox \"9e2e0fdb1ed81f797241f6dfaecb411104fc9b6f08b0e1cf663cc10114c250c7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"71a7f01b3e52d543d5b40d06741ffe933719b36d7ceea4075d8feccc63fe0143\"" Jan 29 17:01:27.181984 containerd[1503]: time="2025-01-29T17:01:27.181090124Z" level=info msg="StartContainer for \"71a7f01b3e52d543d5b40d06741ffe933719b36d7ceea4075d8feccc63fe0143\"" Jan 29 17:01:27.231142 systemd[1]: Started cri-containerd-71a7f01b3e52d543d5b40d06741ffe933719b36d7ceea4075d8feccc63fe0143.scope - libcontainer container 71a7f01b3e52d543d5b40d06741ffe933719b36d7ceea4075d8feccc63fe0143. Jan 29 17:01:27.297021 containerd[1503]: time="2025-01-29T17:01:27.296829058Z" level=info msg="StartContainer for \"71a7f01b3e52d543d5b40d06741ffe933719b36d7ceea4075d8feccc63fe0143\" returns successfully" Jan 29 17:01:28.870479 systemd[1]: Started sshd@24-116.202.14.223:22-47.108.95.236:59580.service - OpenSSH per-connection server daemon (47.108.95.236:59580). Jan 29 17:01:28.910614 sshd[5674]: Connection closed by 47.108.95.236 port 59580 Jan 29 17:01:28.912582 systemd[1]: sshd@24-116.202.14.223:22-47.108.95.236:59580.service: Deactivated successfully. 
Jan 29 17:01:30.115751 kubelet[2800]: E0129 17:01:30.109652 2800 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:45292->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4230-0-0-b-bb52c92a60.181f387a6c23de32 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4230-0-0-b-bb52c92a60,UID:cfb23053a384317632dc58b95a1ac53a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4230-0-0-b-bb52c92a60,},FirstTimestamp:2025-01-29 17:01:19.639756338 +0000 UTC m=+246.467375402,LastTimestamp:2025-01-29 17:01:19.639756338 +0000 UTC m=+246.467375402,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-0-b-bb52c92a60,}"
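This final entry is the kubelet failing to post a Warning event (the kube-apiserver readiness probe returning HTTP 500) because the backing etcd read also timed out; note the "will not retry!". Structurally the kubelet is attempting an event create along the lines of the client-go sketch below, with field values taken from the logged Event object and an illustrative kubeconfig path.

```go
// Sketch: creating the Warning event the server rejected in the log.
package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	now := metav1.NewTime(time.Now())
	evt := &corev1.Event{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "kube-apiserver-readiness-", // hypothetical name prefix
			Namespace:    "kube-system",
		},
		InvolvedObject: corev1.ObjectReference{
			Kind:      "Pod",
			Namespace: "kube-system",
			Name:      "kube-apiserver-ci-4230-0-0-b-bb52c92a60",
		},
		Reason:         "Unhealthy",
		Message:        "Readiness probe failed: HTTP probe failed with statuscode: 500",
		Type:           corev1.EventTypeWarning,
		Source:         corev1.EventSource{Component: "kubelet"},
		FirstTimestamp: now,
		LastTimestamp:  now,
		Count:          1,
	}
	if _, err := cs.CoreV1().Events("kube-system").
		Create(context.Background(), evt, metav1.CreateOptions{}); err != nil {
		log.Fatal(err) // the rejection in the log happened at this step
	}
}
```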