Jan 30 05:27:42.106833 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025
Jan 30 05:27:42.106857 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 05:27:42.106868 kernel: BIOS-provided physical RAM map:
Jan 30 05:27:42.106875 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 05:27:42.106882 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 05:27:42.106888 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 05:27:42.106896 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Jan 30 05:27:42.106902 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Jan 30 05:27:42.106911 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 30 05:27:42.106918 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 30 05:27:42.106924 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 05:27:42.106931 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 05:27:42.106937 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 30 05:27:42.106944 kernel: NX (Execute Disable) protection: active
Jan 30 05:27:42.106954 kernel: APIC: Static calls initialized
Jan 30 05:27:42.106962 kernel: SMBIOS 3.0.0 present.
Jan 30 05:27:42.106969 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Jan 30 05:27:42.106976 kernel: Hypervisor detected: KVM
Jan 30 05:27:42.106983 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 05:27:42.106990 kernel: kvm-clock: using sched offset of 4070281919 cycles
Jan 30 05:27:42.106997 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 05:27:42.107004 kernel: tsc: Detected 2495.310 MHz processor
Jan 30 05:27:42.107012 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 05:27:42.107020 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 05:27:42.107029 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Jan 30 05:27:42.107037 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 05:27:42.107056 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 05:27:42.107064 kernel: Using GB pages for direct mapping
Jan 30 05:27:42.107071 kernel: ACPI: Early table checksum verification disabled
Jan 30 05:27:42.107078 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS )
Jan 30 05:27:42.107085 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:27:42.107093 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:27:42.107100 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:27:42.107109 kernel: ACPI: FACS 0x000000007CFE0000 000040
Jan 30 05:27:42.107117 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:27:42.107124 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:27:42.107131 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:27:42.107138 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:27:42.107146 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Jan 30 05:27:42.107153 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Jan 30 05:27:42.107165 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Jan 30 05:27:42.107173 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Jan 30 05:27:42.107180 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Jan 30 05:27:42.107187 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Jan 30 05:27:42.107195 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Jan 30 05:27:42.107202 kernel: No NUMA configuration found
Jan 30 05:27:42.107209 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Jan 30 05:27:42.107219 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Jan 30 05:27:42.107226 kernel: Zone ranges:
Jan 30 05:27:42.107234 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 05:27:42.107241 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Jan 30 05:27:42.107248 kernel: Normal empty
Jan 30 05:27:42.107256 kernel: Movable zone start for each node
Jan 30 05:27:42.107263 kernel: Early memory node ranges
Jan 30 05:27:42.107270 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 05:27:42.107278 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Jan 30 05:27:42.107287 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Jan 30 05:27:42.107294 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 05:27:42.107302 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 05:27:42.107309 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 30 05:27:42.107316 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 05:27:42.107324 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 05:27:42.107331 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 05:27:42.107339 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 05:27:42.107346 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 05:27:42.107356 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 05:27:42.107363 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 05:27:42.107371 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 05:27:42.107378 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 05:27:42.107386 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 05:27:42.107393 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 30 05:27:42.107400 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 05:27:42.107408 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 30 05:27:42.107415 kernel: Booting paravirtualized kernel on KVM
Jan 30 05:27:42.107423 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 05:27:42.107432 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 30 05:27:42.107440 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 30 05:27:42.107448 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 30 05:27:42.107455 kernel: pcpu-alloc: [0] 0 1
Jan 30 05:27:42.107462 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 30 05:27:42.107471 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 05:27:42.107479 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 05:27:42.107489 kernel: random: crng init done
Jan 30 05:27:42.107496 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 05:27:42.107504 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 05:27:42.107511 kernel: Fallback order for Node 0: 0
Jan 30 05:27:42.107519 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Jan 30 05:27:42.107526 kernel: Policy zone: DMA32
Jan 30 05:27:42.107533 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 05:27:42.107553 kernel: Memory: 1920004K/2047464K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 127200K reserved, 0K cma-reserved)
Jan 30 05:27:42.107561 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 05:27:42.107571 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 30 05:27:42.107578 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 05:27:42.107585 kernel: Dynamic Preempt: voluntary
Jan 30 05:27:42.107593 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 05:27:42.107601 kernel: rcu: RCU event tracing is enabled.
Jan 30 05:27:42.107609 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 05:27:42.107616 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 05:27:42.107624 kernel: Rude variant of Tasks RCU enabled.
Jan 30 05:27:42.107631 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 05:27:42.107639 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 05:27:42.107649 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 05:27:42.107675 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 30 05:27:42.107685 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 05:27:42.107695 kernel: Console: colour VGA+ 80x25
Jan 30 05:27:42.107703 kernel: printk: console [tty0] enabled
Jan 30 05:27:42.107711 kernel: printk: console [ttyS0] enabled
Jan 30 05:27:42.107719 kernel: ACPI: Core revision 20230628
Jan 30 05:27:42.107729 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 30 05:27:42.107739 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 05:27:42.107750 kernel: x2apic enabled
Jan 30 05:27:42.107758 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 05:27:42.107765 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 05:27:42.107773 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 30 05:27:42.107781 kernel: Calibrating delay loop (skipped) preset value.. 4990.62 BogoMIPS (lpj=2495310)
Jan 30 05:27:42.107789 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 30 05:27:42.107796 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 30 05:27:42.107804 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 30 05:27:42.107820 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 05:27:42.107828 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 05:27:42.107835 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 05:27:42.107845 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 05:27:42.107853 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 30 05:27:42.107861 kernel: RETBleed: Mitigation: untrained return thunk
Jan 30 05:27:42.107868 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 05:27:42.107876 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 05:27:42.107884 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 30 05:27:42.107895 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 30 05:27:42.107903 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 30 05:27:42.107911 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 05:27:42.107918 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 05:27:42.107926 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 05:27:42.107934 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 05:27:42.107942 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 30 05:27:42.107951 kernel: Freeing SMP alternatives memory: 32K
Jan 30 05:27:42.107959 kernel: pid_max: default: 32768 minimum: 301
Jan 30 05:27:42.107967 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 05:27:42.107974 kernel: landlock: Up and running.
Jan 30 05:27:42.107982 kernel: SELinux: Initializing.
Jan 30 05:27:42.107990 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 05:27:42.107997 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 05:27:42.108005 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 30 05:27:42.108013 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 05:27:42.108023 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 05:27:42.108031 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 05:27:42.108076 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 30 05:27:42.108085 kernel: ... version: 0
Jan 30 05:27:42.108093 kernel: ... bit width: 48
Jan 30 05:27:42.108100 kernel: ... generic registers: 6
Jan 30 05:27:42.108108 kernel: ... value mask: 0000ffffffffffff
Jan 30 05:27:42.108115 kernel: ... max period: 00007fffffffffff
Jan 30 05:27:42.108123 kernel: ... fixed-purpose events: 0
Jan 30 05:27:42.108134 kernel: ... event mask: 000000000000003f
Jan 30 05:27:42.108142 kernel: signal: max sigframe size: 1776
Jan 30 05:27:42.108149 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 05:27:42.108157 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 05:27:42.108165 kernel: smp: Bringing up secondary CPUs ...
Jan 30 05:27:42.108173 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 05:27:42.108181 kernel: .... node #0, CPUs: #1
Jan 30 05:27:42.108188 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 05:27:42.108196 kernel: smpboot: Max logical packages: 1
Jan 30 05:27:42.108204 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS)
Jan 30 05:27:42.108213 kernel: devtmpfs: initialized
Jan 30 05:27:42.108221 kernel: x86/mm: Memory block size: 128MB
Jan 30 05:27:42.108229 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 05:27:42.108237 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 05:27:42.108244 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 05:27:42.108252 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 05:27:42.108260 kernel: audit: initializing netlink subsys (disabled)
Jan 30 05:27:42.108267 kernel: audit: type=2000 audit(1738214860.765:1): state=initialized audit_enabled=0 res=1
Jan 30 05:27:42.108275 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 05:27:42.108285 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 05:27:42.108293 kernel: cpuidle: using governor menu
Jan 30 05:27:42.108301 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 05:27:42.108308 kernel: dca service started, version 1.12.1
Jan 30 05:27:42.108317 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 30 05:27:42.108326 kernel: PCI: Using configuration type 1 for base access
Jan 30 05:27:42.108335 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 05:27:42.108344 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 05:27:42.108352 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 05:27:42.108362 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 05:27:42.108370 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 05:27:42.108378 kernel: ACPI: Added _OSI(Module Device)
Jan 30 05:27:42.108385 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 05:27:42.108393 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 05:27:42.108401 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 05:27:42.108409 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 05:27:42.108416 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 05:27:42.108424 kernel: ACPI: Interpreter enabled
Jan 30 05:27:42.108435 kernel: ACPI: PM: (supports S0 S5)
Jan 30 05:27:42.108445 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 05:27:42.108456 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 05:27:42.108466 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 05:27:42.108476 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 30 05:27:42.108483 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 05:27:42.108690 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 05:27:42.108817 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 30 05:27:42.108940 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 30 05:27:42.108951 kernel: PCI host bridge to bus 0000:00
Jan 30 05:27:42.109110 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 05:27:42.109223 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 05:27:42.109333 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 05:27:42.109446 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Jan 30 05:27:42.109570 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 30 05:27:42.109689 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 30 05:27:42.109802 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 05:27:42.109955 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 30 05:27:42.110131 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Jan 30 05:27:42.110290 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Jan 30 05:27:42.110439 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Jan 30 05:27:42.110588 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Jan 30 05:27:42.110713 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Jan 30 05:27:42.110836 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 05:27:42.110972 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 30 05:27:42.111124 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Jan 30 05:27:42.111257 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 30 05:27:42.111384 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Jan 30 05:27:42.111518 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 30 05:27:42.111657 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Jan 30 05:27:42.111822 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 30 05:27:42.111949 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Jan 30 05:27:42.112115 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 30 05:27:42.112246 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Jan 30 05:27:42.112385 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 30 05:27:42.112511 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Jan 30 05:27:42.112678 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 30 05:27:42.112807 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Jan 30 05:27:42.112943 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 30 05:27:42.114274 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Jan 30 05:27:42.114593 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 30 05:27:42.114797 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Jan 30 05:27:42.115204 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 30 05:27:42.115405 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 30 05:27:42.115644 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 30 05:27:42.115835 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Jan 30 05:27:42.116035 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Jan 30 05:27:42.116298 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 30 05:27:42.116508 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 30 05:27:42.116743 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 30 05:27:42.116939 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Jan 30 05:27:42.117173 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 30 05:27:42.117389 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Jan 30 05:27:42.117644 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 30 05:27:42.117840 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 30 05:27:42.118883 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Jan 30 05:27:42.119082 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 30 05:27:42.119218 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Jan 30 05:27:42.119341 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 30 05:27:42.119471 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 30 05:27:42.119614 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 30 05:27:42.119787 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 30 05:27:42.119924 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Jan 30 05:27:42.120075 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Jan 30 05:27:42.120203 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 30 05:27:42.120331 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 30 05:27:42.120465 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 30 05:27:42.120621 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 30 05:27:42.120753 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 30 05:27:42.123118 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 30 05:27:42.123253 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 30 05:27:42.123410 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 30 05:27:42.123619 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 30 05:27:42.123792 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
Jan 30 05:27:42.123929 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Jan 30 05:27:42.124071 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 30 05:27:42.124203 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 30 05:27:42.124327 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 30 05:27:42.124474 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 30 05:27:42.124671 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Jan 30 05:27:42.124808 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Jan 30 05:27:42.124939 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 30 05:27:42.127116 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Jan 30 05:27:42.127252 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 30 05:27:42.127264 kernel: acpiphp: Slot [0] registered
Jan 30 05:27:42.127407 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 30 05:27:42.127573 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Jan 30 05:27:42.127737 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Jan 30 05:27:42.127880 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Jan 30 05:27:42.128002 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 30 05:27:42.128754 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 30 05:27:42.128910 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 30 05:27:42.128924 kernel: acpiphp: Slot [0-2] registered
Jan 30 05:27:42.129064 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 30 05:27:42.129189 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Jan 30 05:27:42.129309 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 30 05:27:42.129325 kernel: acpiphp: Slot [0-3] registered
Jan 30 05:27:42.132071 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 30 05:27:42.132215 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 30 05:27:42.132340 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 30 05:27:42.132350 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 05:27:42.132359 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 05:27:42.132367 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 05:27:42.132375 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 05:27:42.132383 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 30 05:27:42.132396 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 30 05:27:42.132404 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 30 05:27:42.132412 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 30 05:27:42.132420 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 30 05:27:42.132428 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 30 05:27:42.132436 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 30 05:27:42.132444 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 30 05:27:42.132452 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 30 05:27:42.132461 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 30 05:27:42.132471 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 30 05:27:42.132480 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 30 05:27:42.132488 kernel: iommu: Default domain type: Translated
Jan 30 05:27:42.132496 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 05:27:42.132504 kernel: PCI: Using ACPI for IRQ routing
Jan 30 05:27:42.132512 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 05:27:42.132521 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 30 05:27:42.132529 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Jan 30 05:27:42.132670 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 30 05:27:42.132797 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 30 05:27:42.132916 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 05:27:42.132927 kernel: vgaarb: loaded
Jan 30 05:27:42.132935 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 30 05:27:42.132943 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 30 05:27:42.132952 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 05:27:42.132960 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 05:27:42.132968 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 05:27:42.132976 kernel: pnp: PnP ACPI init
Jan 30 05:27:42.133146 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 30 05:27:42.133159 kernel: pnp: PnP ACPI: found 5 devices
Jan 30 05:27:42.133168 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 05:27:42.133176 kernel: NET: Registered PF_INET protocol family
Jan 30 05:27:42.133184 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 05:27:42.133192 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 30 05:27:42.133201 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 05:27:42.133209 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 05:27:42.133221 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 05:27:42.133229 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 30 05:27:42.133238 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 05:27:42.133246 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 05:27:42.133254 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 05:27:42.133262 kernel: NET: Registered PF_XDP protocol family
Jan 30 05:27:42.133380 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 30 05:27:42.133499 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 30 05:27:42.133637 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 30 05:27:42.133757 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Jan 30 05:27:42.133875 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Jan 30 05:27:42.133994 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Jan 30 05:27:42.136065 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 30 05:27:42.136202 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 30 05:27:42.136327 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Jan 30 05:27:42.136467 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 30 05:27:42.136617 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 30 05:27:42.136744 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 30 05:27:42.136892 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 30 05:27:42.137025 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 30 05:27:42.137185 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 30 05:27:42.137341 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 30 05:27:42.137497 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 30 05:27:42.137687 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 30 05:27:42.137839 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 30 05:27:42.138014 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 30 05:27:42.138941 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 30 05:27:42.140189 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 30 05:27:42.140327 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Jan 30 05:27:42.140465 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 30 05:27:42.140803 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 30 05:27:42.140939 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Jan 30 05:27:42.142119 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 30 05:27:42.142263 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 30 05:27:42.142402 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 30 05:27:42.142559 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Jan 30 05:27:42.142723 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Jan 30 05:27:42.142869 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 30 05:27:42.143015 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 30 05:27:42.145194 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Jan 30 05:27:42.145322 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 30 05:27:42.145443 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 30 05:27:42.145584 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 05:27:42.145705 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 05:27:42.145815 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 05:27:42.145926 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Jan 30 05:27:42.146035 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 30 05:27:42.146186 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 30 05:27:42.146319 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 30 05:27:42.146437 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Jan 30 05:27:42.146586 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 30 05:27:42.146706 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 30 05:27:42.146836 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 30 05:27:42.146956 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 30 05:27:42.148606 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 30 05:27:42.148734 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 30 05:27:42.148868 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 30 05:27:42.148984 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 30 05:27:42.149164 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 30 05:27:42.149281 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 30 05:27:42.149415 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Jan 30 05:27:42.149566 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 30 05:27:42.149710 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 30 05:27:42.149838 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Jan 30 05:27:42.149952 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Jan 30 05:27:42.151088 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 30 05:27:42.151217 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Jan 30 05:27:42.151331 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 30 05:27:42.151444 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 30 05:27:42.151461 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 30 05:27:42.151472 kernel: PCI: CLS 0 bytes, default 64
Jan 30 05:27:42.151481 kernel: Initialise system trusted keyrings
Jan 30 05:27:42.151492 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 30 05:27:42.151501 kernel: Key type asymmetric registered
Jan 30 05:27:42.151509 kernel: Asymmetric key parser 'x509' registered
Jan 30 05:27:42.151518 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 05:27:42.151526 kernel: io scheduler mq-deadline registered
Jan 30 05:27:42.151535 kernel: io scheduler kyber registered
Jan 30 05:27:42.151558 kernel: io scheduler bfq registered
Jan 30 05:27:42.151680 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Jan 30 05:27:42.151800 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Jan 30 05:27:42.151921 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Jan 30 05:27:42.152053 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Jan 30 05:27:42.153227 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jan 30 05:27:42.153387 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jan 30 05:27:42.153560 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jan 30 05:27:42.153682 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jan 30 05:27:42.153809 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jan 30 05:27:42.153929 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jan 30 05:27:42.155074 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jan 30 05:27:42.155210 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jan 30 05:27:42.155331 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jan 30 05:27:42.155572 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jan 30 05:27:42.155771 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jan 30 05:27:42.157131 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jan 30 05:27:42.157155 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 30 05:27:42.157385 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Jan 30 05:27:42.157513 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Jan 30 05:27:42.157525 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 05:27:42.157534 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Jan 30 05:27:42.157556 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 05:27:42.157564 kernel: 00:00: ttyS0 at I/O
0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 05:27:42.157573 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 30 05:27:42.157582 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 05:27:42.157595 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 05:27:42.157604 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 05:27:42.157741 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 30 05:27:42.157858 kernel: rtc_cmos 00:03: registered as rtc0 Jan 30 05:27:42.158005 kernel: rtc_cmos 00:03: setting system clock to 2025-01-30T05:27:41 UTC (1738214861) Jan 30 05:27:42.158166 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 30 05:27:42.158185 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 30 05:27:42.158197 kernel: NET: Registered PF_INET6 protocol family Jan 30 05:27:42.158205 kernel: Segment Routing with IPv6 Jan 30 05:27:42.158214 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 05:27:42.158222 kernel: NET: Registered PF_PACKET protocol family Jan 30 05:27:42.158231 kernel: Key type dns_resolver registered Jan 30 05:27:42.158239 kernel: IPI shorthand broadcast: enabled Jan 30 05:27:42.158247 kernel: sched_clock: Marking stable (1505008148, 145330871)->(1663874803, -13535784) Jan 30 05:27:42.158256 kernel: registered taskstats version 1 Jan 30 05:27:42.158265 kernel: Loading compiled-in X.509 certificates Jan 30 05:27:42.158273 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4' Jan 30 05:27:42.158286 kernel: Key type .fscrypt registered Jan 30 05:27:42.158294 kernel: Key type fscrypt-provisioning registered Jan 30 05:27:42.158302 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 30 05:27:42.158311 kernel: ima: Allocated hash algorithm: sha1 Jan 30 05:27:42.158320 kernel: ima: No architecture policies found Jan 30 05:27:42.158328 kernel: clk: Disabling unused clocks Jan 30 05:27:42.158337 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 30 05:27:42.158346 kernel: Write protecting the kernel read-only data: 38912k Jan 30 05:27:42.158357 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Jan 30 05:27:42.158365 kernel: Run /init as init process Jan 30 05:27:42.158374 kernel: with arguments: Jan 30 05:27:42.158383 kernel: /init Jan 30 05:27:42.158392 kernel: with environment: Jan 30 05:27:42.158400 kernel: HOME=/ Jan 30 05:27:42.158408 kernel: TERM=linux Jan 30 05:27:42.158417 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 05:27:42.158429 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 05:27:42.158444 systemd[1]: Detected virtualization kvm. Jan 30 05:27:42.158454 systemd[1]: Detected architecture x86-64. Jan 30 05:27:42.158462 systemd[1]: Running in initrd. Jan 30 05:27:42.158471 systemd[1]: No hostname configured, using default hostname. Jan 30 05:27:42.158480 systemd[1]: Hostname set to . Jan 30 05:27:42.158489 systemd[1]: Initializing machine ID from VM UUID. Jan 30 05:27:42.158498 systemd[1]: Queued start job for default target initrd.target. Jan 30 05:27:42.158510 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 05:27:42.158520 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 30 05:27:42.158530 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 05:27:42.158550 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 05:27:42.158561 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 05:27:42.158570 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 05:27:42.158582 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 05:27:42.158595 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 05:27:42.158605 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 05:27:42.158614 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 05:27:42.158622 systemd[1]: Reached target paths.target - Path Units. Jan 30 05:27:42.158632 systemd[1]: Reached target slices.target - Slice Units. Jan 30 05:27:42.158641 systemd[1]: Reached target swap.target - Swaps. Jan 30 05:27:42.158650 systemd[1]: Reached target timers.target - Timer Units. Jan 30 05:27:42.158659 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 05:27:42.158669 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 05:27:42.158681 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 05:27:42.158690 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 05:27:42.158699 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 05:27:42.158708 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 05:27:42.158717 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 30 05:27:42.158726 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 05:27:42.158735 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 05:27:42.158744 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 05:27:42.158757 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 05:27:42.158766 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 05:27:42.158775 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 05:27:42.158784 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 05:27:42.158823 systemd-journald[188]: Collecting audit messages is disabled. Jan 30 05:27:42.158852 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 05:27:42.158862 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 05:27:42.158871 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 05:27:42.158881 systemd-journald[188]: Journal started Jan 30 05:27:42.158906 systemd-journald[188]: Runtime Journal (/run/log/journal/d2b19c639ff142058d385298a063efa9) is 4.8M, max 38.3M, 33.5M free. Jan 30 05:27:42.157425 systemd-modules-load[190]: Inserted module 'overlay' Jan 30 05:27:42.194956 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 05:27:42.194986 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 05:27:42.195002 kernel: Bridge firewalling registered Jan 30 05:27:42.193431 systemd-modules-load[190]: Inserted module 'br_netfilter' Jan 30 05:27:42.195162 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 05:27:42.196369 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 05:27:42.197351 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 30 05:27:42.206210 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 05:27:42.207955 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 05:27:42.216245 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 05:27:42.217455 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 05:27:42.236374 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 05:27:42.238355 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 05:27:42.239022 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 05:27:42.247282 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 05:27:42.260615 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 05:27:42.261624 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 05:27:42.266179 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 05:27:42.275116 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 30 05:27:42.283406 dracut-cmdline[218]: dracut-dracut-053 Jan 30 05:27:42.286873 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 05:27:42.313963 systemd-resolved[222]: Positive Trust Anchors: Jan 30 05:27:42.313982 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 05:27:42.314013 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 05:27:42.320339 systemd-resolved[222]: Defaulting to hostname 'linux'. Jan 30 05:27:42.322462 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 05:27:42.323021 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 05:27:42.384123 kernel: SCSI subsystem initialized Jan 30 05:27:42.394096 kernel: Loading iSCSI transport class v2.0-870. Jan 30 05:27:42.413110 kernel: iscsi: registered transport (tcp) Jan 30 05:27:42.445482 kernel: iscsi: registered transport (qla4xxx) Jan 30 05:27:42.446819 kernel: QLogic iSCSI HBA Driver Jan 30 05:27:42.528141 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 30 05:27:42.541242 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 05:27:42.577909 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 05:27:42.578004 kernel: device-mapper: uevent: version 1.0.3 Jan 30 05:27:42.581096 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 05:27:42.630127 kernel: raid6: avx2x4 gen() 25346 MB/s Jan 30 05:27:42.647098 kernel: raid6: avx2x2 gen() 25816 MB/s Jan 30 05:27:42.664380 kernel: raid6: avx2x1 gen() 22400 MB/s Jan 30 05:27:42.664491 kernel: raid6: using algorithm avx2x2 gen() 25816 MB/s Jan 30 05:27:42.684143 kernel: raid6: .... xor() 18947 MB/s, rmw enabled Jan 30 05:27:42.684302 kernel: raid6: using avx2x2 recovery algorithm Jan 30 05:27:42.707118 kernel: xor: automatically using best checksumming function avx Jan 30 05:27:42.943114 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 05:27:42.964481 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 05:27:42.972416 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 05:27:42.986491 systemd-udevd[406]: Using default interface naming scheme 'v255'. Jan 30 05:27:42.991398 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 05:27:43.004265 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 05:27:43.035434 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation Jan 30 05:27:43.103504 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 05:27:43.112287 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 05:27:43.213991 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 05:27:43.224720 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jan 30 05:27:43.269400 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 05:27:43.273920 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 05:27:43.274842 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 05:27:43.277817 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 05:27:43.284731 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 05:27:43.316102 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 05:27:43.331097 kernel: scsi host0: Virtio SCSI HBA Jan 30 05:27:43.337077 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 05:27:43.343225 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 30 05:27:43.392821 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 05:27:43.392953 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 05:27:43.395215 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 05:27:43.395747 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 05:27:43.396645 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 05:27:43.399177 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 05:27:43.411833 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 05:27:43.422075 kernel: ACPI: bus type USB registered Jan 30 05:27:43.424083 kernel: usbcore: registered new interface driver usbfs Jan 30 05:27:43.428348 kernel: usbcore: registered new interface driver hub Jan 30 05:27:43.428388 kernel: usbcore: registered new device driver usb Jan 30 05:27:43.448077 kernel: libata version 3.00 loaded. 
Jan 30 05:27:43.456075 kernel: ahci 0000:00:1f.2: version 3.0 Jan 30 05:27:43.538366 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 30 05:27:43.538389 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 30 05:27:43.538586 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 30 05:27:43.538729 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 05:27:43.538740 kernel: AES CTR mode by8 optimization enabled Jan 30 05:27:43.538751 kernel: scsi host1: ahci Jan 30 05:27:43.538916 kernel: scsi host2: ahci Jan 30 05:27:43.539409 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 30 05:27:43.547286 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 30 05:27:43.547477 kernel: scsi host3: ahci Jan 30 05:27:43.547708 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 30 05:27:43.547908 kernel: scsi host4: ahci Jan 30 05:27:43.548144 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 30 05:27:43.548297 kernel: scsi host5: ahci Jan 30 05:27:43.548449 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 30 05:27:43.548618 kernel: scsi host6: ahci Jan 30 05:27:43.548790 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 30 05:27:43.548938 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48 Jan 30 05:27:43.548950 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48 Jan 30 05:27:43.548961 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48 Jan 30 05:27:43.548972 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48 Jan 30 05:27:43.548983 kernel: hub 1-0:1.0: USB hub found Jan 30 05:27:43.549306 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48 Jan 30 05:27:43.549324 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48 Jan 30 05:27:43.549335 kernel: sd 0:0:0:0: 
Power-on or device reset occurred Jan 30 05:27:43.565619 kernel: hub 1-0:1.0: 4 ports detected Jan 30 05:27:43.565866 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 30 05:27:43.566069 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 30 05:27:43.566255 kernel: hub 2-0:1.0: USB hub found Jan 30 05:27:43.566427 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 30 05:27:43.566624 kernel: hub 2-0:1.0: 4 ports detected Jan 30 05:27:43.566772 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Jan 30 05:27:43.566928 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 05:27:43.567117 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 05:27:43.567129 kernel: GPT:17805311 != 80003071 Jan 30 05:27:43.567139 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 05:27:43.567150 kernel: GPT:17805311 != 80003071 Jan 30 05:27:43.567159 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 05:27:43.567174 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 05:27:43.567184 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 30 05:27:43.491567 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 05:27:43.503198 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 05:27:43.557537 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 30 05:27:43.778142 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 30 05:27:43.851123 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 30 05:27:43.851270 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 30 05:27:43.851296 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 30 05:27:43.853087 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 30 05:27:43.868061 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 30 05:27:43.868107 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 30 05:27:43.868131 kernel: ata1.00: applying bridge limits Jan 30 05:27:43.872748 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 30 05:27:43.873082 kernel: ata1.00: configured for UDMA/100 Jan 30 05:27:43.880084 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 30 05:27:43.957120 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 05:27:43.978098 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 30 05:27:43.996819 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 05:27:43.996847 kernel: usbcore: registered new interface driver usbhid Jan 30 05:27:43.996864 kernel: usbhid: USB HID core driver Jan 30 05:27:43.996892 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jan 30 05:27:44.008147 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 30 05:27:44.019112 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 30 05:27:44.028541 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (467) Jan 30 05:27:44.035442 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. 
Jan 30 05:27:44.036246 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (457) Jan 30 05:27:44.051166 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 30 05:27:44.057655 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 30 05:27:44.063004 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 30 05:27:44.064400 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 30 05:27:44.073221 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 05:27:44.080124 disk-uuid[578]: Primary Header is updated. Jan 30 05:27:44.080124 disk-uuid[578]: Secondary Entries is updated. Jan 30 05:27:44.080124 disk-uuid[578]: Secondary Header is updated. Jan 30 05:27:44.092071 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 05:27:45.112102 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 05:27:45.112556 disk-uuid[579]: The operation has completed successfully. Jan 30 05:27:45.211463 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 05:27:45.211679 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 05:27:45.237256 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 05:27:45.258453 sh[596]: Success Jan 30 05:27:45.287090 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 30 05:27:45.381003 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 05:27:45.394222 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 05:27:45.398761 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 30 05:27:45.446343 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58 Jan 30 05:27:45.446439 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 05:27:45.449945 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 05:27:45.453545 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 05:27:45.456355 kernel: BTRFS info (device dm-0): using free space tree Jan 30 05:27:45.473119 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 05:27:45.477033 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 05:27:45.479804 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 05:27:45.487354 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 05:27:45.495391 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 05:27:45.529459 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 05:27:45.529539 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 05:27:45.534008 kernel: BTRFS info (device sda6): using free space tree Jan 30 05:27:45.540844 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 05:27:45.540952 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 05:27:45.557169 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 05:27:45.557833 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 05:27:45.567906 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 05:27:45.577499 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 30 05:27:45.634394 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 05:27:45.660926 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 05:27:45.702270 systemd-networkd[777]: lo: Link UP Jan 30 05:27:45.702280 systemd-networkd[777]: lo: Gained carrier Jan 30 05:27:45.707137 systemd-networkd[777]: Enumeration completed Jan 30 05:27:45.707393 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 05:27:45.708874 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 05:27:45.708878 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 05:27:45.711003 systemd[1]: Reached target network.target - Network. Jan 30 05:27:45.713074 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 05:27:45.713078 systemd-networkd[777]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 05:27:45.720738 systemd-networkd[777]: eth0: Link UP Jan 30 05:27:45.720750 systemd-networkd[777]: eth0: Gained carrier Jan 30 05:27:45.720765 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 05:27:45.727997 systemd-networkd[777]: eth1: Link UP Jan 30 05:27:45.728005 systemd-networkd[777]: eth1: Gained carrier Jan 30 05:27:45.728021 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 30 05:27:45.763958 ignition[726]: Ignition 2.20.0 Jan 30 05:27:45.763973 ignition[726]: Stage: fetch-offline Jan 30 05:27:45.764032 ignition[726]: no configs at "/usr/lib/ignition/base.d" Jan 30 05:27:45.764056 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 05:27:45.764176 ignition[726]: parsed url from cmdline: "" Jan 30 05:27:45.764180 ignition[726]: no config URL provided Jan 30 05:27:45.764185 ignition[726]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 05:27:45.766890 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 05:27:45.764195 ignition[726]: no config at "/usr/lib/ignition/user.ign" Jan 30 05:27:45.764201 ignition[726]: failed to fetch config: resource requires networking Jan 30 05:27:45.765150 ignition[726]: Ignition finished successfully Jan 30 05:27:45.791485 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 30 05:27:45.793114 systemd-networkd[777]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 05:27:45.800130 systemd-networkd[777]: eth0: DHCPv4 address 91.107.218.70/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 30 05:27:45.821954 ignition[785]: Ignition 2.20.0 Jan 30 05:27:45.822923 ignition[785]: Stage: fetch Jan 30 05:27:45.823526 ignition[785]: no configs at "/usr/lib/ignition/base.d" Jan 30 05:27:45.824065 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 05:27:45.824793 ignition[785]: parsed url from cmdline: "" Jan 30 05:27:45.824840 ignition[785]: no config URL provided Jan 30 05:27:45.825306 ignition[785]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 05:27:45.825934 ignition[785]: no config at "/usr/lib/ignition/user.ign" Jan 30 05:27:45.826425 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 30 05:27:45.841802 ignition[785]: GET result: OK Jan 30 05:27:45.842084 ignition[785]: parsing config with SHA512: 
0192a28d198e0c2c657867afd042eda5704f7b1e787670e67f9389285071287a71676c23a110136dabebd57cfeb19612028b8d1e35c4c701614da8e9c0b76f48 Jan 30 05:27:45.851790 unknown[785]: fetched base config from "system" Jan 30 05:27:45.851815 unknown[785]: fetched base config from "system" Jan 30 05:27:45.852557 ignition[785]: fetch: fetch complete Jan 30 05:27:45.851829 unknown[785]: fetched user config from "hetzner" Jan 30 05:27:45.852582 ignition[785]: fetch: fetch passed Jan 30 05:27:45.852663 ignition[785]: Ignition finished successfully Jan 30 05:27:45.859942 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 05:27:45.870363 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 05:27:45.927867 ignition[793]: Ignition 2.20.0 Jan 30 05:27:45.927894 ignition[793]: Stage: kargs Jan 30 05:27:45.928303 ignition[793]: no configs at "/usr/lib/ignition/base.d" Jan 30 05:27:45.928334 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 05:27:45.930312 ignition[793]: kargs: kargs passed Jan 30 05:27:45.930410 ignition[793]: Ignition finished successfully Jan 30 05:27:45.935626 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 05:27:45.942392 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 05:27:45.985337 ignition[799]: Ignition 2.20.0 Jan 30 05:27:45.985362 ignition[799]: Stage: disks Jan 30 05:27:45.985716 ignition[799]: no configs at "/usr/lib/ignition/base.d" Jan 30 05:27:45.985738 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 05:27:45.991300 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 05:27:45.987483 ignition[799]: disks: disks passed Jan 30 05:27:45.992679 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 05:27:45.987598 ignition[799]: Ignition finished successfully Jan 30 05:27:45.994542 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Jan 30 05:27:45.996480 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 05:27:45.998639 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 05:27:46.000325 systemd[1]: Reached target basic.target - Basic System.
Jan 30 05:27:46.011647 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 05:27:46.041482 systemd-fsck[807]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 30 05:27:46.048359 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 05:27:46.065888 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 05:27:46.208092 kernel: EXT4-fs (sda9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none.
Jan 30 05:27:46.209916 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 05:27:46.212410 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 05:27:46.220206 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 05:27:46.224428 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 05:27:46.236142 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 30 05:27:46.239217 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 05:27:46.240794 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (815)
Jan 30 05:27:46.240260 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 05:27:46.246255 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 05:27:46.246282 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 05:27:46.246293 kernel: BTRFS info (device sda6): using free space tree
Jan 30 05:27:46.248036 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 05:27:46.253834 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 05:27:46.262227 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 30 05:27:46.262305 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 05:27:46.269833 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 05:27:46.351116 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 05:27:46.359473 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory
Jan 30 05:27:46.360727 coreos-metadata[817]: Jan 30 05:27:46.360 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jan 30 05:27:46.363441 coreos-metadata[817]: Jan 30 05:27:46.361 INFO Fetch successful
Jan 30 05:27:46.363441 coreos-metadata[817]: Jan 30 05:27:46.361 INFO wrote hostname ci-4186-1-0-3-26ada394c1 to /sysroot/etc/hostname
Jan 30 05:27:46.366915 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 05:27:46.368995 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 05:27:46.373211 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 05:27:46.514465 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 05:27:46.522166 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 05:27:46.531348 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 05:27:46.545714 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 05:27:46.549923 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 05:27:46.579562 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 05:27:46.584087 ignition[933]: INFO : Ignition 2.20.0
Jan 30 05:27:46.584087 ignition[933]: INFO : Stage: mount
Jan 30 05:27:46.584087 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 05:27:46.584087 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 05:27:46.589469 ignition[933]: INFO : mount: mount passed
Jan 30 05:27:46.589469 ignition[933]: INFO : Ignition finished successfully
Jan 30 05:27:46.588926 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 05:27:46.596191 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 05:27:46.619251 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 05:27:46.634117 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (945)
Jan 30 05:27:46.637091 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 05:27:46.637128 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 05:27:46.639266 kernel: BTRFS info (device sda6): using free space tree
Jan 30 05:27:46.644378 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 30 05:27:46.644423 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 05:27:46.649847 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 05:27:46.679954 ignition[962]: INFO : Ignition 2.20.0
Jan 30 05:27:46.679954 ignition[962]: INFO : Stage: files
Jan 30 05:27:46.681833 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 05:27:46.681833 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 05:27:46.683975 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 05:27:46.683975 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 05:27:46.685941 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 05:27:46.690193 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 05:27:46.691390 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 05:27:46.691390 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 05:27:46.691340 unknown[962]: wrote ssh authorized keys file for user: core
Jan 30 05:27:46.694777 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 05:27:46.694777 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 30 05:27:46.806687 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 05:27:47.029914 systemd-networkd[777]: eth0: Gained IPv6LL
Jan 30 05:27:47.248502 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 05:27:47.248502 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 05:27:47.253116 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 30 05:27:47.349669 systemd-networkd[777]: eth1: Gained IPv6LL
Jan 30 05:27:47.820923 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 30 05:27:47.990456 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 05:27:47.991939 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 05:27:47.991939 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 05:27:47.991939 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 05:27:47.991939 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 05:27:47.991939 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 05:27:47.991939 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 05:27:47.991939 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 05:27:47.991939 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 05:27:48.001689 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 05:27:48.001689 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 05:27:48.001689 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 05:27:48.001689 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 05:27:48.001689 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 05:27:48.001689 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 30 05:27:48.545737 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 30 05:27:49.192268 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 05:27:49.192268 ignition[962]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 30 05:27:49.197342 ignition[962]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 05:27:49.197342 ignition[962]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 05:27:49.197342 ignition[962]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 30 05:27:49.197342 ignition[962]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 30 05:27:49.197342 ignition[962]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 30 05:27:49.197342 ignition[962]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 30 05:27:49.197342 ignition[962]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 30 05:27:49.197342 ignition[962]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 05:27:49.197342 ignition[962]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 05:27:49.197342 ignition[962]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 05:27:49.197342 ignition[962]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 05:27:49.197342 ignition[962]: INFO : files: files passed
Jan 30 05:27:49.197342 ignition[962]: INFO : Ignition finished successfully
Jan 30 05:27:49.198746 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 05:27:49.212360 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 05:27:49.225407 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 05:27:49.230088 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 05:27:49.231316 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 05:27:49.250469 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 05:27:49.250469 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 05:27:49.254364 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 05:27:49.258155 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 05:27:49.260126 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 05:27:49.270321 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 05:27:49.334118 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 05:27:49.334389 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 05:27:49.337752 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 05:27:49.338946 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 05:27:49.341164 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 05:27:49.348319 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 05:27:49.387911 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 05:27:49.396283 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 05:27:49.443819 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 05:27:49.445440 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 05:27:49.448125 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 05:27:49.450416 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 05:27:49.450674 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 05:27:49.453348 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 05:27:49.455103 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 05:27:49.457608 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 05:27:49.459840 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 05:27:49.462242 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 05:27:49.464821 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 05:27:49.467355 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 05:27:49.470210 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 05:27:49.472769 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 05:27:49.475652 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 05:27:49.478095 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 05:27:49.478380 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 05:27:49.481434 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 05:27:49.483956 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 05:27:49.486118 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 05:27:49.487279 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 05:27:49.490093 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 05:27:49.490321 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 05:27:49.493534 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 05:27:49.493899 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 05:27:49.496541 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 05:27:49.496854 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 05:27:49.498987 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 30 05:27:49.499253 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 05:27:49.509637 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 05:27:49.511894 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 05:27:49.512235 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 05:27:49.528417 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 05:27:49.530526 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 05:27:49.530826 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 05:27:49.536421 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 05:27:49.536677 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 05:27:49.552570 ignition[1015]: INFO : Ignition 2.20.0
Jan 30 05:27:49.552570 ignition[1015]: INFO : Stage: umount
Jan 30 05:27:49.552570 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 05:27:49.552570 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 05:27:49.552570 ignition[1015]: INFO : umount: umount passed
Jan 30 05:27:49.552570 ignition[1015]: INFO : Ignition finished successfully
Jan 30 05:27:49.562510 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 05:27:49.562730 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 05:27:49.569796 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 05:27:49.570423 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 05:27:49.572624 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 05:27:49.572791 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 05:27:49.576208 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 05:27:49.576282 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 05:27:49.577071 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 05:27:49.577138 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 05:27:49.577827 systemd[1]: Stopped target network.target - Network.
Jan 30 05:27:49.579109 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 05:27:49.579187 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 05:27:49.583902 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 05:27:49.584694 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 05:27:49.590290 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 05:27:49.591771 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 05:27:49.592496 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 05:27:49.595715 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 05:27:49.595825 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 05:27:49.599230 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 05:27:49.599303 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 05:27:49.600289 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 05:27:49.600405 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 05:27:49.602385 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 05:27:49.602494 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 05:27:49.605034 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 05:27:49.606986 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 05:27:49.613701 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 05:27:49.614100 systemd-networkd[777]: eth0: DHCPv6 lease lost
Jan 30 05:27:49.617318 systemd-networkd[777]: eth1: DHCPv6 lease lost
Jan 30 05:27:49.617526 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 05:27:49.617698 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 05:27:49.628871 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 05:27:49.629214 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 05:27:49.637262 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 05:27:49.637342 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 05:27:49.646195 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 05:27:49.649575 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 05:27:49.650836 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 05:27:49.653695 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 05:27:49.653788 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 05:27:49.655346 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 05:27:49.655431 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 05:27:49.659166 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 05:27:49.659248 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 05:27:49.661376 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 05:27:49.688268 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 05:27:49.690015 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 05:27:49.691176 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 05:27:49.691304 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 05:27:49.696864 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 05:27:49.696948 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 05:27:49.697451 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 05:27:49.697497 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 05:27:49.701116 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 05:27:49.701175 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 05:27:49.702454 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 05:27:49.702612 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 05:27:49.704339 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 05:27:49.704445 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 05:27:49.716351 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 05:27:49.717762 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 05:27:49.717892 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 05:27:49.720345 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 05:27:49.720438 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 05:27:49.725925 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 05:27:49.726149 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 05:27:49.728842 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 05:27:49.728975 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 05:27:49.730870 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 05:27:49.732032 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 05:27:49.732112 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 05:27:49.739242 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 05:27:49.750386 systemd[1]: Switching root.
Jan 30 05:27:49.797180 systemd-journald[188]: Journal stopped
Jan 30 05:27:51.223869 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
Jan 30 05:27:51.223962 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 05:27:51.223977 kernel: SELinux: policy capability open_perms=1
Jan 30 05:27:51.223988 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 05:27:51.223999 kernel: SELinux: policy capability always_check_network=0
Jan 30 05:27:51.224010 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 05:27:51.224025 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 05:27:51.226486 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 05:27:51.226502 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 05:27:51.226514 kernel: audit: type=1403 audit(1738214870.032:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 05:27:51.226534 systemd[1]: Successfully loaded SELinux policy in 69.863ms.
Jan 30 05:27:51.226569 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.419ms.
Jan 30 05:27:51.226583 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 05:27:51.226610 systemd[1]: Detected virtualization kvm.
Jan 30 05:27:51.226622 systemd[1]: Detected architecture x86-64.
Jan 30 05:27:51.226637 systemd[1]: Detected first boot.
Jan 30 05:27:51.226650 systemd[1]: Hostname set to .
Jan 30 05:27:51.226662 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 05:27:51.226674 zram_generator::config[1058]: No configuration found.
Jan 30 05:27:51.226687 systemd[1]: Populated /etc with preset unit settings.
Jan 30 05:27:51.226700 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 05:27:51.226712 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 05:27:51.226725 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 05:27:51.226740 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 05:27:51.226760 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 05:27:51.226772 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 05:27:51.226784 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 05:27:51.226797 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 05:27:51.226810 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 05:27:51.226822 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 05:27:51.226834 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 05:27:51.226847 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 05:27:51.226863 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 05:27:51.226876 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 05:27:51.226888 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 05:27:51.226900 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 05:27:51.226920 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 05:27:51.226933 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 30 05:27:51.226945 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 05:27:51.226958 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 05:27:51.226973 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 05:27:51.226986 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 05:27:51.227006 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 05:27:51.227028 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 05:27:51.228979 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 05:27:51.228996 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 05:27:51.229009 systemd[1]: Reached target swap.target - Swaps.
Jan 30 05:27:51.229025 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 05:27:51.229037 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 05:27:51.229065 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 05:27:51.229077 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 05:27:51.229089 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 05:27:51.229101 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 05:27:51.229113 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 05:27:51.229133 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 05:27:51.229152 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 05:27:51.229167 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 05:27:51.229180 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 05:27:51.229192 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 05:27:51.229205 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 05:27:51.229218 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 05:27:51.229232 systemd[1]: Reached target machines.target - Containers.
Jan 30 05:27:51.229244 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 05:27:51.229256 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 05:27:51.229268 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 05:27:51.229283 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 05:27:51.229295 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 05:27:51.229307 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 05:27:51.229325 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 05:27:51.229337 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 05:27:51.229352 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 05:27:51.229365 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 05:27:51.229377 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 05:27:51.229389 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 05:27:51.229401 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 05:27:51.229413 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 05:27:51.229425 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 05:27:51.229438 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 05:27:51.229451 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 05:27:51.229466 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 05:27:51.229478 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 05:27:51.229490 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 05:27:51.229502 systemd[1]: Stopped verity-setup.service.
Jan 30 05:27:51.229516 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 05:27:51.229528 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 05:27:51.229540 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 05:27:51.229553 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 05:27:51.229568 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 05:27:51.229579 kernel: loop: module loaded
Jan 30 05:27:51.229591 kernel: fuse: init (API version 7.39)
Jan 30 05:27:51.229616 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 05:27:51.229628 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 05:27:51.229644 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 05:27:51.229657 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 05:27:51.229669 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 05:27:51.229681 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 05:27:51.229694 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 05:27:51.229705 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 05:27:51.229717 kernel: ACPI: bus type drm_connector registered
Jan 30 05:27:51.229732 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 05:27:51.229743 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 05:27:51.229755 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 05:27:51.229768 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 05:27:51.229780 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 05:27:51.229792 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 05:27:51.229805 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 05:27:51.229820 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 05:27:51.229832 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 05:27:51.229845 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 05:27:51.229857 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 05:27:51.229869 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 05:27:51.229902 systemd-journald[1134]: Collecting audit messages is disabled.
Jan 30 05:27:51.229925 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 05:27:51.229939 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 05:27:51.229955 systemd-journald[1134]: Journal started
Jan 30 05:27:51.229978 systemd-journald[1134]: Runtime Journal (/run/log/journal/d2b19c639ff142058d385298a063efa9) is 4.8M, max 38.3M, 33.5M free.
Jan 30 05:27:51.234094 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 05:27:51.234121 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 05:27:50.724927 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 05:27:50.751026 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 30 05:27:50.752498 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 05:27:51.239053 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 05:27:51.248061 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 05:27:51.257360 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 05:27:51.257402 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 05:27:51.267449 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 05:27:51.267488 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 05:27:51.275074 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 05:27:51.279971 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 05:27:51.294612 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 05:27:51.294820 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 05:27:51.326083 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 05:27:51.326184 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 05:27:51.324364 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 05:27:51.331832 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 05:27:51.332785 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 05:27:51.333831 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 05:27:51.409352 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 05:27:51.420081 kernel: loop0: detected capacity change from 0 to 138184
Jan 30 05:27:51.422309 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 05:27:51.425209 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 05:27:51.444130 systemd-journald[1134]: Time spent on flushing to /var/log/journal/d2b19c639ff142058d385298a063efa9 is 65.596ms for 1140 entries.
Jan 30 05:27:51.444130 systemd-journald[1134]: System Journal (/var/log/journal/d2b19c639ff142058d385298a063efa9) is 8.0M, max 584.8M, 576.8M free.
Jan 30 05:27:51.549251 systemd-journald[1134]: Received client request to flush runtime journal.
Jan 30 05:27:51.549300 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 05:27:51.549327 kernel: loop1: detected capacity change from 0 to 141000
Jan 30 05:27:51.470100 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 05:27:51.516253 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 05:27:51.526692 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 05:27:51.539633 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 05:27:51.570519 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 05:27:51.575798 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 05:27:51.577140 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 05:27:51.578965 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 05:27:51.598169 udevadm[1195]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 30 05:27:51.615301 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Jan 30 05:27:51.615322 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Jan 30 05:27:51.623885 kernel: loop2: detected capacity change from 0 to 8
Jan 30 05:27:51.622341 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 05:27:51.648145 kernel: loop3: detected capacity change from 0 to 210664
Jan 30 05:27:51.699153 kernel: loop4: detected capacity change from 0 to 138184
Jan 30 05:27:51.739081 kernel: loop5: detected capacity change from 0 to 141000
Jan 30 05:27:51.759080 kernel: loop6: detected capacity change from 0 to 8
Jan 30 05:27:51.763485 kernel: loop7: detected capacity change from 0 to 210664
Jan 30 05:27:51.797140 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jan 30 05:27:51.798312 (sd-merge)[1203]: Merged extensions into '/usr'.
Jan 30 05:27:51.805122 systemd[1]: Reloading requested from client PID 1160 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 05:27:51.805271 systemd[1]: Reloading...
Jan 30 05:27:51.916077 zram_generator::config[1229]: No configuration found.
Jan 30 05:27:52.003549 ldconfig[1156]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 05:27:52.085658 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 05:27:52.140497 systemd[1]: Reloading finished in 334 ms.
Jan 30 05:27:52.164832 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 05:27:52.165953 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 05:27:52.178315 systemd[1]: Starting ensure-sysext.service...
Jan 30 05:27:52.185212 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 05:27:52.206278 systemd[1]: Reloading requested from client PID 1272 ('systemctl') (unit ensure-sysext.service)...
Jan 30 05:27:52.206304 systemd[1]: Reloading...
Jan 30 05:27:52.227527 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 05:27:52.227852 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 05:27:52.228873 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 05:27:52.229246 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Jan 30 05:27:52.229433 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Jan 30 05:27:52.234958 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 05:27:52.234971 systemd-tmpfiles[1273]: Skipping /boot
Jan 30 05:27:52.250061 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 05:27:52.252234 systemd-tmpfiles[1273]: Skipping /boot
Jan 30 05:27:52.286075 zram_generator::config[1299]: No configuration found.
Jan 30 05:27:52.415488 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 05:27:52.470473 systemd[1]: Reloading finished in 263 ms.
Jan 30 05:27:52.492249 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 05:27:52.493273 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 05:27:52.510563 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 30 05:27:52.515935 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 05:27:52.522232 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 05:27:52.525016 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 05:27:52.527442 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 05:27:52.531203 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 05:27:52.537470 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 05:27:52.537700 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 05:27:52.541784 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 05:27:52.553718 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 05:27:52.561250 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 05:27:52.562969 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 05:27:52.563653 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 05:27:52.567623 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 05:27:52.573265 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 05:27:52.589354 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 05:27:52.601311 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 05:27:52.603699 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 05:27:52.603942 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 05:27:52.612353 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 05:27:52.613326 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 05:27:52.613435 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 05:27:52.614382 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 05:27:52.615624 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 05:27:52.616446 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 05:27:52.617951 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 05:27:52.618727 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 05:27:52.639564 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 05:27:52.645923 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 05:27:52.646617 systemd-udevd[1349]: Using default interface naming scheme 'v255'.
Jan 30 05:27:52.647159 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 05:27:52.653491 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 05:27:52.661277 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 05:27:52.664170 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 05:27:52.665341 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 05:27:52.665494 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 05:27:52.667182 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 05:27:52.667555 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 05:27:52.668913 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 05:27:52.674975 systemd[1]: Finished ensure-sysext.service.
Jan 30 05:27:52.688277 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 30 05:27:52.705184 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 05:27:52.715460 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 05:27:52.717587 augenrules[1390]: No rules
Jan 30 05:27:52.718109 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 05:27:52.719229 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 05:27:52.719918 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 05:27:52.738453 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 05:27:52.738774 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 30 05:27:52.752542 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 05:27:52.752765 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 05:27:52.756983 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 05:27:52.763097 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 05:27:52.763890 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 05:27:52.766150 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 05:27:52.766334 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 05:27:52.780801 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 05:27:52.789181 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 05:27:52.794570 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 05:27:52.823383 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 30 05:27:52.824072 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 05:27:52.848564 systemd-resolved[1348]: Positive Trust Anchors:
Jan 30 05:27:52.849155 systemd-resolved[1348]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 05:27:52.849249 systemd-resolved[1348]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 05:27:52.861455 systemd-resolved[1348]: Using system hostname 'ci-4186-1-0-3-26ada394c1'.
Jan 30 05:27:52.863013 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 05:27:52.863906 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 05:27:52.903851 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 30 05:27:52.916170 systemd-networkd[1403]: lo: Link UP
Jan 30 05:27:52.916185 systemd-networkd[1403]: lo: Gained carrier
Jan 30 05:27:52.919475 systemd-networkd[1403]: Enumeration completed
Jan 30 05:27:52.919578 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 05:27:52.920344 systemd[1]: Reached target network.target - Network.
Jan 30 05:27:52.922661 systemd-networkd[1403]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 05:27:52.922674 systemd-networkd[1403]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 05:27:52.927238 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 05:27:52.927846 systemd-networkd[1403]: eth1: Link UP
Jan 30 05:27:52.927859 systemd-networkd[1403]: eth1: Gained carrier
Jan 30 05:27:52.927880 systemd-networkd[1403]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 05:27:52.944422 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 05:27:52.944440 systemd-networkd[1403]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 05:27:52.949227 systemd-networkd[1403]: eth0: Link UP
Jan 30 05:27:52.949243 systemd-networkd[1403]: eth0: Gained carrier
Jan 30 05:27:52.949265 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 05:27:52.975147 systemd-networkd[1403]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 05:27:52.976830 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection.
Jan 30 05:27:52.990600 systemd-networkd[1403]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 05:27:53.009101 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1415)
Jan 30 05:27:53.012107 systemd-networkd[1403]: eth0: DHCPv4 address 91.107.218.70/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 30 05:27:53.012594 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection.
Jan 30 05:27:53.013174 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection.
Jan 30 05:27:53.058117 kernel: mousedev: PS/2 mouse device common for all mice
Jan 30 05:27:53.063098 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 30 05:27:53.079746 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 30 05:27:53.082529 kernel: ACPI: button: Power Button [PWRF]
Jan 30 05:27:53.091676 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 05:27:53.095175 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Jan 30 05:27:53.095390 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 05:27:53.095599 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 05:27:53.103203 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 05:27:53.106415 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 05:27:53.109180 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 05:27:53.109758 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 05:27:53.109790 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 05:27:53.109802 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 05:27:53.125331 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 05:27:53.125746 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 05:27:53.126574 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 05:27:53.128114 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 05:27:53.131433 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 05:27:53.132635 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 05:27:53.141098 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 05:27:53.141966 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 05:27:53.142641 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 05:27:53.150238 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Jan 30 05:27:53.150270 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Jan 30 05:27:53.154335 kernel: Console: switching to colour dummy device 80x25
Jan 30 05:27:53.155360 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 30 05:27:53.155408 kernel: [drm] features: -context_init
Jan 30 05:27:53.159460 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 30 05:27:53.160254 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 30 05:27:53.160447 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 30 05:27:53.179490 kernel: [drm] number of scanouts: 1
Jan 30 05:27:53.179565 kernel: [drm] number of cap sets: 0
Jan 30 05:27:53.182071 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 30 05:27:53.189148 kernel: EDAC MC: Ver: 3.0.0
Jan 30 05:27:53.189196 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Jan 30 05:27:53.198563 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 05:27:53.218495 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 05:27:53.218873 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 05:27:53.231806 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 30 05:27:53.231847 kernel: Console: switching to colour frame buffer device 160x50
Jan 30 05:27:53.234868 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 05:27:53.236199 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 30 05:27:53.248163 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 05:27:53.248471 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 05:27:53.255514 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 05:27:53.352143 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 05:27:53.402110 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 05:27:53.413393 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 05:27:53.445544 lvm[1465]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 05:27:53.497829 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 05:27:53.498843 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 05:27:53.498965 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 05:27:53.499406 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 30 05:27:53.500251 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 05:27:53.500701 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 05:27:53.500980 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 05:27:53.501099 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 05:27:53.501193 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 30 05:27:53.501227 systemd[1]: Reached target paths.target - Path Units.
Jan 30 05:27:53.501341 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 05:27:53.504257 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 30 05:27:53.506886 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 30 05:27:53.514186 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 30 05:27:53.516331 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 05:27:53.518509 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 30 05:27:53.519261 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 05:27:53.519367 systemd[1]: Reached target basic.target - Basic System.
Jan 30 05:27:53.522067 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 30 05:27:53.522094 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 30 05:27:53.528925 lvm[1469]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 05:27:53.533211 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 30 05:27:53.547390 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 30 05:27:53.557373 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 30 05:27:53.574220 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 30 05:27:53.579254 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 30 05:27:53.579929 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 30 05:27:53.591662 jq[1475]: false
Jan 30 05:27:53.591433 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 30 05:27:53.596066 coreos-metadata[1471]: Jan 30 05:27:53.595 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Jan 30 05:27:53.600360 coreos-metadata[1471]: Jan 30 05:27:53.600 INFO Fetch successful
Jan 30 05:27:53.600723 coreos-metadata[1471]: Jan 30 05:27:53.600 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Jan 30 05:27:53.602350 coreos-metadata[1471]: Jan 30 05:27:53.601 INFO Fetch successful
Jan 30 05:27:53.604237 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 30 05:27:53.610225 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Jan 30 05:27:53.618194 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 30 05:27:53.622225 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 30 05:27:53.633223 dbus-daemon[1472]: [system] SELinux support is enabled
Jan 30 05:27:53.637122 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 30 05:27:53.638221 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 30 05:27:53.638799 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 30 05:27:53.643204 systemd[1]: Starting update-engine.service - Update Engine...
Jan 30 05:27:53.648974 extend-filesystems[1476]: Found loop4
Jan 30 05:27:53.648974 extend-filesystems[1476]: Found loop5
Jan 30 05:27:53.648974 extend-filesystems[1476]: Found loop6
Jan 30 05:27:53.648974 extend-filesystems[1476]: Found loop7
Jan 30 05:27:53.648974 extend-filesystems[1476]: Found sda
Jan 30 05:27:53.648974 extend-filesystems[1476]: Found sda1
Jan 30 05:27:53.648974 extend-filesystems[1476]: Found sda2
Jan 30 05:27:53.648974 extend-filesystems[1476]: Found sda3
Jan 30 05:27:53.648974 extend-filesystems[1476]: Found usr
Jan 30 05:27:53.648974 extend-filesystems[1476]: Found sda4
Jan 30 05:27:53.648974 extend-filesystems[1476]: Found sda6
Jan 30 05:27:53.648974 extend-filesystems[1476]: Found sda7
Jan 30 05:27:53.648974 extend-filesystems[1476]: Found sda9
Jan 30 05:27:53.648974 extend-filesystems[1476]: Checking size of /dev/sda9
Jan 30 05:27:53.663166 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 30 05:27:53.674938 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 30 05:27:53.701605 jq[1488]: true
Jan 30 05:27:53.702639 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 30 05:27:53.709473 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 30 05:27:53.709742 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 30 05:27:53.725136 extend-filesystems[1476]: Resized partition /dev/sda9
Jan 30 05:27:53.725252 systemd[1]: motdgen.service: Deactivated successfully.
Jan 30 05:27:53.725962 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 30 05:27:53.735774 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 30 05:27:53.737005 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 30 05:27:53.738667 extend-filesystems[1504]: resize2fs 1.47.1 (20-May-2024) Jan 30 05:27:53.749223 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 30 05:27:53.758594 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 05:27:53.758661 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 05:27:53.761536 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 05:27:53.761564 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 05:27:53.789182 (ntainerd)[1503]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 05:27:53.792133 systemd-logind[1483]: New seat seat0. Jan 30 05:27:53.793980 update_engine[1486]: I20250130 05:27:53.793452 1486 main.cc:92] Flatcar Update Engine starting Jan 30 05:27:53.794600 jq[1498]: true Jan 30 05:27:53.796022 update_engine[1486]: I20250130 05:27:53.795995 1486 update_check_scheduler.cc:74] Next update check in 8m42s Jan 30 05:27:53.796729 tar[1497]: linux-amd64/helm Jan 30 05:27:53.802921 systemd[1]: Started update-engine.service - Update Engine. Jan 30 05:27:53.812100 systemd-logind[1483]: Watching system buttons on /dev/input/event2 (Power Button) Jan 30 05:27:53.812131 systemd-logind[1483]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 05:27:53.813385 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 05:27:53.825929 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 30 05:27:53.842087 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1409) Jan 30 05:27:53.917517 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 05:27:53.924024 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 05:27:53.941326 systemd-networkd[1403]: eth1: Gained IPv6LL Jan 30 05:27:53.942488 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection. Jan 30 05:27:53.948794 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 05:27:53.953384 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 05:27:53.966552 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:27:53.982160 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 05:27:54.038547 bash[1541]: Updated "/home/core/.ssh/authorized_keys" Jan 30 05:27:54.044522 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 05:27:54.050206 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 30 05:27:54.064570 systemd[1]: Starting sshkeys.service... Jan 30 05:27:54.088318 extend-filesystems[1504]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 30 05:27:54.088318 extend-filesystems[1504]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 30 05:27:54.088318 extend-filesystems[1504]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 30 05:27:54.099295 extend-filesystems[1476]: Resized filesystem in /dev/sda9 Jan 30 05:27:54.099295 extend-filesystems[1476]: Found sr0 Jan 30 05:27:54.098836 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 05:27:54.099259 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jan 30 05:27:54.118308 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 05:27:54.131452 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 05:27:54.135239 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 05:27:54.190278 coreos-metadata[1560]: Jan 30 05:27:54.190 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 30 05:27:54.199059 coreos-metadata[1560]: Jan 30 05:27:54.197 INFO Fetch successful Jan 30 05:27:54.209867 containerd[1503]: time="2025-01-30T05:27:54.206931049Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 30 05:27:54.212712 unknown[1560]: wrote ssh authorized keys file for user: core Jan 30 05:27:54.270110 update-ssh-keys[1567]: Updated "/home/core/.ssh/authorized_keys" Jan 30 05:27:54.273119 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 05:27:54.280004 systemd[1]: Finished sshkeys.service. Jan 30 05:27:54.297743 containerd[1503]: time="2025-01-30T05:27:54.297678955Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 05:27:54.299963 containerd[1503]: time="2025-01-30T05:27:54.299930218Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:27:54.300108 containerd[1503]: time="2025-01-30T05:27:54.300091931Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 05:27:54.300163 containerd[1503]: time="2025-01-30T05:27:54.300151032Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jan 30 05:27:54.300381 containerd[1503]: time="2025-01-30T05:27:54.300365223Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 05:27:54.300442 containerd[1503]: time="2025-01-30T05:27:54.300429584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 05:27:54.300564 containerd[1503]: time="2025-01-30T05:27:54.300548017Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:27:54.300647 containerd[1503]: time="2025-01-30T05:27:54.300632415Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 05:27:54.300901 containerd[1503]: time="2025-01-30T05:27:54.300883085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:27:54.300960 containerd[1503]: time="2025-01-30T05:27:54.300947956Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 05:27:54.301009 containerd[1503]: time="2025-01-30T05:27:54.300996578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:27:54.301079 containerd[1503]: time="2025-01-30T05:27:54.301066318Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 05:27:54.301231 containerd[1503]: time="2025-01-30T05:27:54.301212893Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jan 30 05:27:54.301529 containerd[1503]: time="2025-01-30T05:27:54.301512074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 05:27:54.301714 containerd[1503]: time="2025-01-30T05:27:54.301697883Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:27:54.301765 containerd[1503]: time="2025-01-30T05:27:54.301752977Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 05:27:54.301924 containerd[1503]: time="2025-01-30T05:27:54.301908538Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 05:27:54.302020 containerd[1503]: time="2025-01-30T05:27:54.302007003Z" level=info msg="metadata content store policy set" policy=shared Jan 30 05:27:54.307388 containerd[1503]: time="2025-01-30T05:27:54.307321962Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 05:27:54.307828 containerd[1503]: time="2025-01-30T05:27:54.307489496Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 05:27:54.307828 containerd[1503]: time="2025-01-30T05:27:54.307510926Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 05:27:54.307828 containerd[1503]: time="2025-01-30T05:27:54.307527728Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 05:27:54.307828 containerd[1503]: time="2025-01-30T05:27:54.307582700Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jan 30 05:27:54.307828 containerd[1503]: time="2025-01-30T05:27:54.307769461Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 05:27:54.308317 locksmithd[1522]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 05:27:54.309445 containerd[1503]: time="2025-01-30T05:27:54.308923626Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 05:27:54.311342 containerd[1503]: time="2025-01-30T05:27:54.309057406Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 05:27:54.311941 containerd[1503]: time="2025-01-30T05:27:54.311580950Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 05:27:54.312548 containerd[1503]: time="2025-01-30T05:27:54.312503560Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 05:27:54.312593 containerd[1503]: time="2025-01-30T05:27:54.312552602Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 05:27:54.312593 containerd[1503]: time="2025-01-30T05:27:54.312573281Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 05:27:54.312593 containerd[1503]: time="2025-01-30T05:27:54.312587458Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 05:27:54.312675 containerd[1503]: time="2025-01-30T05:27:54.312604770Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 05:27:54.312675 containerd[1503]: time="2025-01-30T05:27:54.312639365Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 30 05:27:54.312675 containerd[1503]: time="2025-01-30T05:27:54.312652810Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 05:27:54.312675 containerd[1503]: time="2025-01-30T05:27:54.312665854Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 05:27:54.312764 containerd[1503]: time="2025-01-30T05:27:54.312693076Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 05:27:54.312764 containerd[1503]: time="2025-01-30T05:27:54.312716449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 05:27:54.312764 containerd[1503]: time="2025-01-30T05:27:54.312731308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 05:27:54.312764 containerd[1503]: time="2025-01-30T05:27:54.312744021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 05:27:54.312764 containerd[1503]: time="2025-01-30T05:27:54.312758709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 05:27:54.312912 containerd[1503]: time="2025-01-30T05:27:54.312770952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 05:27:54.312912 containerd[1503]: time="2025-01-30T05:27:54.312784216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 05:27:54.312912 containerd[1503]: time="2025-01-30T05:27:54.312798303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 05:27:54.312912 containerd[1503]: time="2025-01-30T05:27:54.312811378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jan 30 05:27:54.312912 containerd[1503]: time="2025-01-30T05:27:54.312823931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 05:27:54.312912 containerd[1503]: time="2025-01-30T05:27:54.312839180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 05:27:54.312912 containerd[1503]: time="2025-01-30T05:27:54.312850752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 05:27:54.312912 containerd[1503]: time="2025-01-30T05:27:54.312861832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 05:27:54.312912 containerd[1503]: time="2025-01-30T05:27:54.312882952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 05:27:54.312912 containerd[1503]: time="2025-01-30T05:27:54.312899012Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 05:27:54.313107 containerd[1503]: time="2025-01-30T05:27:54.312929900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 05:27:54.313107 containerd[1503]: time="2025-01-30T05:27:54.312944628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 05:27:54.313107 containerd[1503]: time="2025-01-30T05:27:54.312954836Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 05:27:54.313107 containerd[1503]: time="2025-01-30T05:27:54.313004841Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 05:27:54.313107 containerd[1503]: time="2025-01-30T05:27:54.313023576Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 05:27:54.313107 containerd[1503]: time="2025-01-30T05:27:54.313036219Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 05:27:54.313107 containerd[1503]: time="2025-01-30T05:27:54.313064502Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 05:27:54.313107 containerd[1503]: time="2025-01-30T05:27:54.313075132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 05:27:54.313107 containerd[1503]: time="2025-01-30T05:27:54.313087876Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 05:27:54.313107 containerd[1503]: time="2025-01-30T05:27:54.313098236Z" level=info msg="NRI interface is disabled by configuration." Jan 30 05:27:54.313107 containerd[1503]: time="2025-01-30T05:27:54.313107883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 05:27:54.316073 containerd[1503]: time="2025-01-30T05:27:54.313408387Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 05:27:54.316073 containerd[1503]: time="2025-01-30T05:27:54.313472688Z" level=info msg="Connect containerd service" Jan 30 05:27:54.316073 containerd[1503]: time="2025-01-30T05:27:54.313508655Z" level=info msg="using legacy CRI server" Jan 30 05:27:54.316073 containerd[1503]: time="2025-01-30T05:27:54.313515438Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 05:27:54.316073 containerd[1503]: time="2025-01-30T05:27:54.313624222Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 05:27:54.320643 containerd[1503]: time="2025-01-30T05:27:54.320599274Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 05:27:54.322167 containerd[1503]: time="2025-01-30T05:27:54.322125838Z" level=info msg="Start subscribing containerd event" Jan 30 05:27:54.322203 containerd[1503]: time="2025-01-30T05:27:54.322177485Z" level=info msg="Start recovering state" Jan 30 05:27:54.322260 containerd[1503]: time="2025-01-30T05:27:54.322238239Z" level=info msg="Start event monitor" Jan 30 05:27:54.322294 containerd[1503]: time="2025-01-30T05:27:54.322263416Z" level=info msg="Start 
snapshots syncer" Jan 30 05:27:54.322294 containerd[1503]: time="2025-01-30T05:27:54.322272032Z" level=info msg="Start cni network conf syncer for default" Jan 30 05:27:54.322294 containerd[1503]: time="2025-01-30T05:27:54.322279536Z" level=info msg="Start streaming server" Jan 30 05:27:54.323933 containerd[1503]: time="2025-01-30T05:27:54.323908131Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 05:27:54.323987 containerd[1503]: time="2025-01-30T05:27:54.323966670Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 05:27:54.324069 containerd[1503]: time="2025-01-30T05:27:54.324034989Z" level=info msg="containerd successfully booted in 0.119292s" Jan 30 05:27:54.325158 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 05:27:54.462866 sshd_keygen[1518]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 05:27:54.501320 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 05:27:54.515277 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 05:27:54.524120 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 05:27:54.524342 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 05:27:54.535668 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 05:27:54.553860 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 05:27:54.565756 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 05:27:54.572522 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 05:27:54.573325 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 05:27:54.642004 tar[1497]: linux-amd64/LICENSE Jan 30 05:27:54.643496 tar[1497]: linux-amd64/README.md Jan 30 05:27:54.656872 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jan 30 05:27:54.709483 systemd-networkd[1403]: eth0: Gained IPv6LL Jan 30 05:27:54.710526 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection. Jan 30 05:27:55.467280 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:27:55.470714 (kubelet)[1602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:27:55.471133 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 05:27:55.474683 systemd[1]: Startup finished in 1.714s (kernel) + 8.197s (initrd) + 5.510s (userspace) = 15.421s. Jan 30 05:27:55.499098 agetty[1593]: failed to open credentials directory Jan 30 05:27:55.508485 agetty[1592]: failed to open credentials directory Jan 30 05:27:56.411570 kubelet[1602]: E0130 05:27:56.411477 1602 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:27:56.415491 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:27:56.415902 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:27:56.416500 systemd[1]: kubelet.service: Consumed 1.450s CPU time. Jan 30 05:28:06.667159 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 05:28:06.674941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:28:06.892502 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 05:28:06.897584 (kubelet)[1621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:28:06.959829 kubelet[1621]: E0130 05:28:06.959588 1621 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:28:06.966253 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:28:06.966919 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:28:17.218087 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 05:28:17.226492 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:28:17.447331 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:28:17.462488 (kubelet)[1637]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:28:17.508885 kubelet[1637]: E0130 05:28:17.508701 1637 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:28:17.515814 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:28:17.516015 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:28:25.605677 systemd-resolved[1348]: Clock change detected. Flushing caches. Jan 30 05:28:25.605961 systemd-timesyncd[1382]: Contacted time server 85.25.148.4:123 (2.flatcar.pool.ntp.org). 
Jan 30 05:28:25.606057 systemd-timesyncd[1382]: Initial clock synchronization to Thu 2025-01-30 05:28:25.605554 UTC. Jan 30 05:28:28.399601 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 05:28:28.405865 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:28:28.581771 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:28:28.583801 (kubelet)[1654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:28:28.633781 kubelet[1654]: E0130 05:28:28.633688 1654 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:28:28.637253 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:28:28.637606 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:28:38.730236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 30 05:28:38.737789 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:28:38.985816 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 05:28:38.987679 (kubelet)[1669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:28:39.027947 kubelet[1669]: E0130 05:28:39.027869 1669 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:28:39.033788 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:28:39.034205 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:28:39.725904 update_engine[1486]: I20250130 05:28:39.725662 1486 update_attempter.cc:509] Updating boot flags... Jan 30 05:28:39.825556 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1686) Jan 30 05:28:39.905817 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1687) Jan 30 05:28:39.965571 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1687) Jan 30 05:28:49.229841 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 30 05:28:49.235988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:28:49.532887 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 05:28:49.536703 (kubelet)[1706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:28:49.618791 kubelet[1706]: E0130 05:28:49.618686 1706 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:28:49.627825 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:28:49.628063 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:28:59.730231 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 30 05:28:59.738014 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:28:59.926807 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:28:59.927635 (kubelet)[1723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:29:00.016590 kubelet[1723]: E0130 05:29:00.016291 1723 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:29:00.025032 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:29:00.025265 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:29:10.230337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 30 05:29:10.238882 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 30 05:29:10.471785 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:29:10.476885 (kubelet)[1740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:29:10.519826 kubelet[1740]: E0130 05:29:10.519651 1740 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:29:10.527587 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:29:10.527803 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:29:20.729917 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 30 05:29:20.736903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:29:20.934423 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:29:20.939659 (kubelet)[1756]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:29:20.980472 kubelet[1756]: E0130 05:29:20.980334 1756 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:29:20.984364 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:29:20.984646 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:29:31.229895 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. 
Jan 30 05:29:31.236848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 05:29:31.419761 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 05:29:31.421871 (kubelet)[1773]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 05:29:31.468004 kubelet[1773]: E0130 05:29:31.467866 1773 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 05:29:31.471121 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 05:29:31.471563 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 05:29:41.480750 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Jan 30 05:29:41.493946 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 05:29:41.729778 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 05:29:41.731256 (kubelet)[1789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 05:29:41.781906 kubelet[1789]: E0130 05:29:41.781790 1789 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 05:29:41.790292 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 05:29:41.790547 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 05:29:45.592008 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 30 05:29:45.602195 systemd[1]: Started sshd@0-91.107.218.70:22-139.178.89.65:58546.service - OpenSSH per-connection server daemon (139.178.89.65:58546).
Jan 30 05:29:46.622716 sshd[1798]: Accepted publickey for core from 139.178.89.65 port 58546 ssh2: RSA SHA256:zZA1z7GFgtbU0jZJU58thBpHspAJycdRX50dJfyXWgo
Jan 30 05:29:46.626791 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:29:46.647315 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 30 05:29:46.653953 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 30 05:29:46.658906 systemd-logind[1483]: New session 1 of user core.
Jan 30 05:29:46.695472 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 30 05:29:46.708037 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 30 05:29:46.727890 (systemd)[1802]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 30 05:29:46.887294 systemd[1802]: Queued start job for default target default.target.
Jan 30 05:29:46.896841 systemd[1802]: Created slice app.slice - User Application Slice.
Jan 30 05:29:46.896865 systemd[1802]: Reached target paths.target - Paths.
Jan 30 05:29:46.896879 systemd[1802]: Reached target timers.target - Timers.
Jan 30 05:29:46.898951 systemd[1802]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 30 05:29:46.922794 systemd[1802]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 30 05:29:46.923044 systemd[1802]: Reached target sockets.target - Sockets.
Jan 30 05:29:46.923076 systemd[1802]: Reached target basic.target - Basic System.
Jan 30 05:29:46.923157 systemd[1802]: Reached target default.target - Main User Target.
Jan 30 05:29:46.923227 systemd[1802]: Startup finished in 181ms.
Jan 30 05:29:46.923722 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 30 05:29:46.931669 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 30 05:29:47.634332 systemd[1]: Started sshd@1-91.107.218.70:22-139.178.89.65:58554.service - OpenSSH per-connection server daemon (139.178.89.65:58554).
Jan 30 05:29:48.638792 sshd[1813]: Accepted publickey for core from 139.178.89.65 port 58554 ssh2: RSA SHA256:zZA1z7GFgtbU0jZJU58thBpHspAJycdRX50dJfyXWgo
Jan 30 05:29:48.642551 sshd-session[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:29:48.651134 systemd-logind[1483]: New session 2 of user core.
Jan 30 05:29:48.661781 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 30 05:29:49.323622 sshd[1815]: Connection closed by 139.178.89.65 port 58554
Jan 30 05:29:49.324994 sshd-session[1813]: pam_unix(sshd:session): session closed for user core
Jan 30 05:29:49.333979 systemd[1]: sshd@1-91.107.218.70:22-139.178.89.65:58554.service: Deactivated successfully.
Jan 30 05:29:49.339240 systemd[1]: session-2.scope: Deactivated successfully.
Jan 30 05:29:49.340727 systemd-logind[1483]: Session 2 logged out. Waiting for processes to exit.
Jan 30 05:29:49.342920 systemd-logind[1483]: Removed session 2.
Jan 30 05:29:49.505940 systemd[1]: Started sshd@2-91.107.218.70:22-139.178.89.65:58560.service - OpenSSH per-connection server daemon (139.178.89.65:58560).
Jan 30 05:29:50.511419 sshd[1820]: Accepted publickey for core from 139.178.89.65 port 58560 ssh2: RSA SHA256:zZA1z7GFgtbU0jZJU58thBpHspAJycdRX50dJfyXWgo
Jan 30 05:29:50.514839 sshd-session[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:29:50.524598 systemd-logind[1483]: New session 3 of user core.
Jan 30 05:29:50.535784 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 30 05:29:51.188210 sshd[1822]: Connection closed by 139.178.89.65 port 58560
Jan 30 05:29:51.189693 sshd-session[1820]: pam_unix(sshd:session): session closed for user core
Jan 30 05:29:51.197816 systemd[1]: sshd@2-91.107.218.70:22-139.178.89.65:58560.service: Deactivated successfully.
Jan 30 05:29:51.204115 systemd[1]: session-3.scope: Deactivated successfully.
Jan 30 05:29:51.205695 systemd-logind[1483]: Session 3 logged out. Waiting for processes to exit.
Jan 30 05:29:51.207761 systemd-logind[1483]: Removed session 3.
Jan 30 05:29:51.374193 systemd[1]: Started sshd@3-91.107.218.70:22-139.178.89.65:37194.service - OpenSSH per-connection server daemon (139.178.89.65:37194).
Jan 30 05:29:51.979740 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Jan 30 05:29:51.991317 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 05:29:52.190165 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 05:29:52.195091 (kubelet)[1837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 05:29:52.280235 kubelet[1837]: E0130 05:29:52.279938 1837 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 05:29:52.288058 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 05:29:52.288538 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 05:29:52.402996 sshd[1827]: Accepted publickey for core from 139.178.89.65 port 37194 ssh2: RSA SHA256:zZA1z7GFgtbU0jZJU58thBpHspAJycdRX50dJfyXWgo
Jan 30 05:29:52.405898 sshd-session[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:29:52.414198 systemd-logind[1483]: New session 4 of user core.
Jan 30 05:29:52.423847 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 30 05:29:53.091071 sshd[1845]: Connection closed by 139.178.89.65 port 37194
Jan 30 05:29:53.092276 sshd-session[1827]: pam_unix(sshd:session): session closed for user core
Jan 30 05:29:53.100828 systemd-logind[1483]: Session 4 logged out. Waiting for processes to exit.
Jan 30 05:29:53.101393 systemd[1]: sshd@3-91.107.218.70:22-139.178.89.65:37194.service: Deactivated successfully.
Jan 30 05:29:53.106181 systemd[1]: session-4.scope: Deactivated successfully.
Jan 30 05:29:53.108402 systemd-logind[1483]: Removed session 4.
Jan 30 05:29:53.269999 systemd[1]: Started sshd@4-91.107.218.70:22-139.178.89.65:37206.service - OpenSSH per-connection server daemon (139.178.89.65:37206).
Jan 30 05:29:54.284141 sshd[1850]: Accepted publickey for core from 139.178.89.65 port 37206 ssh2: RSA SHA256:zZA1z7GFgtbU0jZJU58thBpHspAJycdRX50dJfyXWgo
Jan 30 05:29:54.287598 sshd-session[1850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:29:54.297152 systemd-logind[1483]: New session 5 of user core.
Jan 30 05:29:54.307916 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 30 05:29:54.832855 sudo[1853]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 30 05:29:54.833872 sudo[1853]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 05:29:54.857761 sudo[1853]: pam_unix(sudo:session): session closed for user root
Jan 30 05:29:55.020545 sshd[1852]: Connection closed by 139.178.89.65 port 37206
Jan 30 05:29:55.021275 sshd-session[1850]: pam_unix(sshd:session): session closed for user core
Jan 30 05:29:55.034284 systemd[1]: sshd@4-91.107.218.70:22-139.178.89.65:37206.service: Deactivated successfully.
Jan 30 05:29:55.039006 systemd[1]: session-5.scope: Deactivated successfully.
Jan 30 05:29:55.043841 systemd-logind[1483]: Session 5 logged out. Waiting for processes to exit.
Jan 30 05:29:55.046185 systemd-logind[1483]: Removed session 5.
Jan 30 05:29:55.204558 systemd[1]: Started sshd@5-91.107.218.70:22-139.178.89.65:37216.service - OpenSSH per-connection server daemon (139.178.89.65:37216).
Jan 30 05:29:55.300964 systemd[1]: Started sshd@6-91.107.218.70:22-211.60.122.138:48563.service - OpenSSH per-connection server daemon (211.60.122.138:48563).
Jan 30 05:29:56.232685 sshd[1858]: Accepted publickey for core from 139.178.89.65 port 37216 ssh2: RSA SHA256:zZA1z7GFgtbU0jZJU58thBpHspAJycdRX50dJfyXWgo
Jan 30 05:29:56.235932 sshd-session[1858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:29:56.245234 systemd-logind[1483]: New session 6 of user core.
Jan 30 05:29:56.251713 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 30 05:29:56.770356 sudo[1865]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 30 05:29:56.771331 sudo[1865]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 05:29:56.779695 sudo[1865]: pam_unix(sudo:session): session closed for user root
Jan 30 05:29:56.794137 sudo[1864]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 30 05:29:56.794984 sudo[1864]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 05:29:56.818959 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 30 05:29:56.907218 augenrules[1887]: No rules
Jan 30 05:29:56.909469 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 05:29:56.910105 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 30 05:29:56.913284 sudo[1864]: pam_unix(sudo:session): session closed for user root
Jan 30 05:29:57.076560 sshd[1863]: Connection closed by 139.178.89.65 port 37216
Jan 30 05:29:57.078973 sshd-session[1858]: pam_unix(sshd:session): session closed for user core
Jan 30 05:29:57.088025 systemd[1]: sshd@5-91.107.218.70:22-139.178.89.65:37216.service: Deactivated successfully.
Jan 30 05:29:57.092685 systemd[1]: session-6.scope: Deactivated successfully.
Jan 30 05:29:57.094906 systemd-logind[1483]: Session 6 logged out. Waiting for processes to exit.
Jan 30 05:29:57.097340 systemd-logind[1483]: Removed session 6.
Jan 30 05:29:57.256314 systemd[1]: Started sshd@7-91.107.218.70:22-139.178.89.65:37228.service - OpenSSH per-connection server daemon (139.178.89.65:37228).
Jan 30 05:29:58.252069 sshd[1895]: Accepted publickey for core from 139.178.89.65 port 37228 ssh2: RSA SHA256:zZA1z7GFgtbU0jZJU58thBpHspAJycdRX50dJfyXWgo
Jan 30 05:29:58.255322 sshd-session[1895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:29:58.262067 systemd-logind[1483]: New session 7 of user core.
Jan 30 05:29:58.272690 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 30 05:29:58.777231 sudo[1898]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 30 05:29:58.778270 sudo[1898]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 05:29:59.633086 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 30 05:29:59.647413 (dockerd)[1917]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 30 05:30:00.544462 dockerd[1917]: time="2025-01-30T05:30:00.544326404Z" level=info msg="Starting up"
Jan 30 05:30:00.841517 dockerd[1917]: time="2025-01-30T05:30:00.841266908Z" level=info msg="Loading containers: start."
Jan 30 05:30:01.105914 kernel: Initializing XFRM netlink socket
Jan 30 05:30:01.285184 systemd-networkd[1403]: docker0: Link UP
Jan 30 05:30:01.338398 dockerd[1917]: time="2025-01-30T05:30:01.338293177Z" level=info msg="Loading containers: done."
Jan 30 05:30:01.373599 dockerd[1917]: time="2025-01-30T05:30:01.373441954Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 30 05:30:01.374044 dockerd[1917]: time="2025-01-30T05:30:01.373722030Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jan 30 05:30:01.374044 dockerd[1917]: time="2025-01-30T05:30:01.374015009Z" level=info msg="Daemon has completed initialization"
Jan 30 05:30:01.433769 dockerd[1917]: time="2025-01-30T05:30:01.433545957Z" level=info msg="API listen on /run/docker.sock"
Jan 30 05:30:01.434250 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 30 05:30:02.479954 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Jan 30 05:30:02.491385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 05:30:02.770892 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 05:30:02.784472 (kubelet)[2113]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 05:30:02.871990 kubelet[2113]: E0130 05:30:02.871905 2113 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 05:30:02.877954 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 05:30:02.878356 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 05:30:03.191992 containerd[1503]: time="2025-01-30T05:30:03.191783232Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\""
Jan 30 05:30:03.913213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3245538477.mount: Deactivated successfully.
Jan 30 05:30:05.299659 sshd[1861]: Connection closed by 211.60.122.138 port 48563 [preauth]
Jan 30 05:30:05.302221 systemd[1]: sshd@6-91.107.218.70:22-211.60.122.138:48563.service: Deactivated successfully.
Jan 30 05:30:05.608015 containerd[1503]: time="2025-01-30T05:30:05.607435606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:30:05.609165 containerd[1503]: time="2025-01-30T05:30:05.608882546Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677104"
Jan 30 05:30:05.610821 containerd[1503]: time="2025-01-30T05:30:05.610737601Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:30:05.613465 containerd[1503]: time="2025-01-30T05:30:05.613407205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:30:05.615021 containerd[1503]: time="2025-01-30T05:30:05.614379022Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.42253291s"
Jan 30 05:30:05.615021 containerd[1503]: time="2025-01-30T05:30:05.614411924Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\""
Jan 30 05:30:05.643474 containerd[1503]: time="2025-01-30T05:30:05.643405633Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\""
Jan 30 05:30:07.909804 containerd[1503]: time="2025-01-30T05:30:07.909708277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:30:07.911694 containerd[1503]: time="2025-01-30T05:30:07.911651757Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605765"
Jan 30 05:30:07.913761 containerd[1503]: time="2025-01-30T05:30:07.913721729Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:30:07.919345 containerd[1503]: time="2025-01-30T05:30:07.919247242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:30:07.921591 containerd[1503]: time="2025-01-30T05:30:07.921339545Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.277848279s"
Jan 30 05:30:07.921591 containerd[1503]: time="2025-01-30T05:30:07.921397575Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\""
Jan 30 05:30:07.974196 containerd[1503]: time="2025-01-30T05:30:07.974094458Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\""
Jan 30 05:30:09.493876 containerd[1503]: time="2025-01-30T05:30:09.493759542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:30:09.495635 containerd[1503]: time="2025-01-30T05:30:09.495520634Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783084"
Jan 30 05:30:09.496984 containerd[1503]: time="2025-01-30T05:30:09.496917054Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:30:09.500967 containerd[1503]: time="2025-01-30T05:30:09.500867100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:30:09.503402 containerd[1503]: time="2025-01-30T05:30:09.502824266Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.528655255s"
Jan 30 05:30:09.503402 containerd[1503]: time="2025-01-30T05:30:09.502881754Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\""
Jan 30 05:30:09.543362 containerd[1503]: time="2025-01-30T05:30:09.542866637Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\""
Jan 30 05:30:10.739853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2538543704.mount: Deactivated successfully.
Jan 30 05:30:11.385282 containerd[1503]: time="2025-01-30T05:30:11.385134304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:30:11.387306 containerd[1503]: time="2025-01-30T05:30:11.387194541Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058363"
Jan 30 05:30:11.388954 containerd[1503]: time="2025-01-30T05:30:11.388839320Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:30:11.397319 containerd[1503]: time="2025-01-30T05:30:11.396077419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:30:11.397497 containerd[1503]: time="2025-01-30T05:30:11.397460191Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.854533389s"
Jan 30 05:30:11.397581 containerd[1503]: time="2025-01-30T05:30:11.397558909Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\""
Jan 30 05:30:11.443767 containerd[1503]: time="2025-01-30T05:30:11.443711061Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 30 05:30:12.098804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3556929571.mount: Deactivated successfully.
Jan 30 05:30:12.979301 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Jan 30 05:30:12.992828 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 05:30:13.224676 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 05:30:13.226464 (kubelet)[2270]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 05:30:13.276948 kubelet[2270]: E0130 05:30:13.276738 2270 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 05:30:13.282839 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 05:30:13.283202 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 05:30:13.332425 containerd[1503]: time="2025-01-30T05:30:13.332309577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:30:13.334081 containerd[1503]: time="2025-01-30T05:30:13.334010862Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185841"
Jan 30 05:30:13.335564 containerd[1503]: time="2025-01-30T05:30:13.335515213Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:30:13.339061 containerd[1503]: time="2025-01-30T05:30:13.338955385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:30:13.340220 containerd[1503]: time="2025-01-30T05:30:13.340052393Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.896096819s"
Jan 30 05:30:13.340220 containerd[1503]: time="2025-01-30T05:30:13.340085115Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 30 05:30:13.376727 containerd[1503]: time="2025-01-30T05:30:13.376623433Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 30 05:30:13.939624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1242264511.mount: Deactivated successfully.
Jan 30 05:30:13.950479 containerd[1503]: time="2025-01-30T05:30:13.950368904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:30:13.952632 containerd[1503]: time="2025-01-30T05:30:13.952529810Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322310"
Jan 30 05:30:13.954452 containerd[1503]: time="2025-01-30T05:30:13.954347625Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:30:13.959881 containerd[1503]: time="2025-01-30T05:30:13.959761207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:30:13.962975 containerd[1503]: time="2025-01-30T05:30:13.961641991Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 584.955117ms"
Jan 30 05:30:13.962975 containerd[1503]: time="2025-01-30T05:30:13.961712576Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jan 30 05:30:14.003841 containerd[1503]: time="2025-01-30T05:30:14.003777809Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jan 30 05:30:14.632340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1432115284.mount: Deactivated successfully.
Jan 30 05:30:16.463926 containerd[1503]: time="2025-01-30T05:30:16.463774367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:30:16.466019 containerd[1503]: time="2025-01-30T05:30:16.465919720Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238651"
Jan 30 05:30:16.468946 containerd[1503]: time="2025-01-30T05:30:16.468756393Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:30:16.477841 containerd[1503]: time="2025-01-30T05:30:16.477658603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:30:16.479123 containerd[1503]: time="2025-01-30T05:30:16.478882271Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.474754519s"
Jan 30 05:30:16.479123 containerd[1503]: time="2025-01-30T05:30:16.478930992Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Jan 30 05:30:19.939681 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 05:30:19.950651 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 05:30:19.985260 systemd[1]: Reloading requested from client PID 2403 ('systemctl') (unit session-7.scope)...
Jan 30 05:30:19.985288 systemd[1]: Reloading...
Jan 30 05:30:20.166695 zram_generator::config[2446]: No configuration found.
Jan 30 05:30:20.298835 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 05:30:20.385723 systemd[1]: Reloading finished in 399 ms.
Jan 30 05:30:20.454259 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 05:30:20.465074 (kubelet)[2489]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 05:30:20.466559 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 05:30:20.467435 systemd[1]: kubelet.service: Deactivated successfully.
Jan 30 05:30:20.468019 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 05:30:20.477959 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 05:30:20.663025 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 05:30:20.667116 (kubelet)[2500]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 05:30:20.711780 kubelet[2500]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 05:30:20.711780 kubelet[2500]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 05:30:20.711780 kubelet[2500]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 05:30:20.712614 kubelet[2500]: I0130 05:30:20.711817 2500 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 05:30:20.972804 kubelet[2500]: I0130 05:30:20.972638 2500 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 05:30:20.972804 kubelet[2500]: I0130 05:30:20.972675 2500 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 05:30:20.973001 kubelet[2500]: I0130 05:30:20.972925 2500 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 05:30:21.005329 kubelet[2500]: I0130 05:30:21.004596 2500 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 05:30:21.005941 kubelet[2500]: E0130 05:30:21.005905 2500 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://91.107.218.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 91.107.218.70:6443: connect: connection refused Jan 30 05:30:21.023130 kubelet[2500]: I0130 05:30:21.023088 2500 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 05:30:21.023617 kubelet[2500]: I0130 05:30:21.023562 2500 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 05:30:21.023947 kubelet[2500]: I0130 05:30:21.023618 2500 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-0-3-26ada394c1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 05:30:21.024152 kubelet[2500]: I0130 05:30:21.023967 2500 topology_manager.go:138] "Creating topology manager with none policy" Jan 
30 05:30:21.024152 kubelet[2500]: I0130 05:30:21.023985 2500 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 05:30:21.024262 kubelet[2500]: I0130 05:30:21.024233 2500 state_mem.go:36] "Initialized new in-memory state store" Jan 30 05:30:21.026587 kubelet[2500]: W0130 05:30:21.026385 2500 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.107.218.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-3-26ada394c1&limit=500&resourceVersion=0": dial tcp 91.107.218.70:6443: connect: connection refused Jan 30 05:30:21.026587 kubelet[2500]: E0130 05:30:21.026539 2500 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://91.107.218.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-3-26ada394c1&limit=500&resourceVersion=0": dial tcp 91.107.218.70:6443: connect: connection refused Jan 30 05:30:21.027389 kubelet[2500]: I0130 05:30:21.027342 2500 kubelet.go:400] "Attempting to sync node with API server" Jan 30 05:30:21.027389 kubelet[2500]: I0130 05:30:21.027389 2500 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 05:30:21.027681 kubelet[2500]: I0130 05:30:21.027419 2500 kubelet.go:312] "Adding apiserver pod source" Jan 30 05:30:21.027681 kubelet[2500]: I0130 05:30:21.027445 2500 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 05:30:21.030786 kubelet[2500]: W0130 05:30:21.030067 2500 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.107.218.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 91.107.218.70:6443: connect: connection refused Jan 30 05:30:21.030786 kubelet[2500]: E0130 05:30:21.030111 2500 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://91.107.218.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 91.107.218.70:6443: connect: connection refused Jan 30 05:30:21.032550 kubelet[2500]: I0130 05:30:21.032254 2500 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 05:30:21.035195 kubelet[2500]: I0130 05:30:21.033949 2500 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 05:30:21.035195 kubelet[2500]: W0130 05:30:21.034023 2500 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 05:30:21.035195 kubelet[2500]: I0130 05:30:21.034780 2500 server.go:1264] "Started kubelet" Jan 30 05:30:21.041254 kubelet[2500]: I0130 05:30:21.041110 2500 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 05:30:21.043517 kubelet[2500]: I0130 05:30:21.042868 2500 server.go:455] "Adding debug handlers to kubelet server" Jan 30 05:30:21.044953 kubelet[2500]: I0130 05:30:21.044866 2500 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 05:30:21.045458 kubelet[2500]: I0130 05:30:21.045427 2500 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 05:30:21.046470 kubelet[2500]: E0130 05:30:21.046224 2500 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.107.218.70:6443/api/v1/namespaces/default/events\": dial tcp 91.107.218.70:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186-1-0-3-26ada394c1.181f615a28064a30 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-0-3-26ada394c1,UID:ci-4186-1-0-3-26ada394c1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-3-26ada394c1,},FirstTimestamp:2025-01-30 05:30:21.03475256 +0000 UTC m=+0.362041664,LastTimestamp:2025-01-30 05:30:21.03475256 +0000 UTC m=+0.362041664,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-3-26ada394c1,}" Jan 30 05:30:21.049933 kubelet[2500]: I0130 05:30:21.049059 2500 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 05:30:21.052454 kubelet[2500]: I0130 05:30:21.052428 2500 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 05:30:21.055388 kubelet[2500]: I0130 05:30:21.054380 2500 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 05:30:21.055717 kubelet[2500]: I0130 05:30:21.055689 2500 reconciler.go:26] "Reconciler: start to sync state" Jan 30 05:30:21.060174 kubelet[2500]: E0130 05:30:21.060147 2500 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 05:30:21.060874 kubelet[2500]: W0130 05:30:21.060419 2500 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.107.218.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.107.218.70:6443: connect: connection refused Jan 30 05:30:21.060874 kubelet[2500]: E0130 05:30:21.060508 2500 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://91.107.218.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.107.218.70:6443: connect: connection refused Jan 30 05:30:21.060874 kubelet[2500]: E0130 05:30:21.060596 2500 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.107.218.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-3-26ada394c1?timeout=10s\": dial tcp 91.107.218.70:6443: connect: connection refused" interval="200ms" Jan 30 05:30:21.061590 kubelet[2500]: I0130 05:30:21.061567 2500 factory.go:221] Registration of the systemd container factory successfully Jan 30 05:30:21.061751 kubelet[2500]: I0130 05:30:21.061675 2500 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 05:30:21.063523 kubelet[2500]: I0130 05:30:21.062936 2500 factory.go:221] Registration of the containerd container factory successfully Jan 30 05:30:21.088939 kubelet[2500]: I0130 05:30:21.088871 2500 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 05:30:21.091096 kubelet[2500]: I0130 05:30:21.091073 2500 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 05:30:21.091239 kubelet[2500]: I0130 05:30:21.091224 2500 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 05:30:21.091352 kubelet[2500]: I0130 05:30:21.091336 2500 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 05:30:21.091635 kubelet[2500]: E0130 05:30:21.091606 2500 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 05:30:21.098248 kubelet[2500]: W0130 05:30:21.097479 2500 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.107.218.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.107.218.70:6443: connect: connection refused Jan 30 05:30:21.098248 kubelet[2500]: E0130 05:30:21.098255 2500 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://91.107.218.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.107.218.70:6443: connect: connection refused Jan 30 05:30:21.106983 kubelet[2500]: I0130 05:30:21.106922 2500 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 05:30:21.106983 kubelet[2500]: I0130 05:30:21.106949 2500 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 05:30:21.106983 kubelet[2500]: I0130 05:30:21.106978 2500 state_mem.go:36] "Initialized new in-memory state store" Jan 30 05:30:21.110575 kubelet[2500]: I0130 05:30:21.110549 2500 policy_none.go:49] "None policy: Start" Jan 30 05:30:21.111476 kubelet[2500]: I0130 05:30:21.111462 2500 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 05:30:21.111868 kubelet[2500]: I0130 05:30:21.111589 2500 state_mem.go:35] "Initializing new in-memory state store" Jan 30 05:30:21.117068 kubelet[2500]: E0130 05:30:21.116749 2500 event.go:368] "Unable to write event (may retry after 
sleeping)" err="Post \"https://91.107.218.70:6443/api/v1/namespaces/default/events\": dial tcp 91.107.218.70:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186-1-0-3-26ada394c1.181f615a28064a30 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-0-3-26ada394c1,UID:ci-4186-1-0-3-26ada394c1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-3-26ada394c1,},FirstTimestamp:2025-01-30 05:30:21.03475256 +0000 UTC m=+0.362041664,LastTimestamp:2025-01-30 05:30:21.03475256 +0000 UTC m=+0.362041664,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-3-26ada394c1,}" Jan 30 05:30:21.118958 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 05:30:21.137644 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 05:30:21.142890 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
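The NodeConfig dump a few entries back lists the kubelet's default hard-eviction thresholds, each expressed either as an absolute quantity (memory.available < 100Mi) or as a percentage of capacity (nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A toy sketch of evaluating one such LessThan threshold under those two operand kinds; this illustrates the rule, not the eviction manager's actual code:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Threshold:
    """One hard-eviction threshold: exactly one of quantity/percentage is set."""
    signal: str
    quantity: Optional[int] = None      # absolute units, e.g. bytes (100Mi)
    percentage: Optional[float] = None  # fraction of capacity, e.g. 0.10

def breached(t: Threshold, available: int, capacity: int) -> bool:
    """True when observed `available` falls below the threshold (LessThan)."""
    limit = t.quantity if t.quantity is not None else int(capacity * t.percentage)
    return available < limit

# Two of the thresholds from the dump above.
mem = Threshold("memory.available", quantity=100 * 1024 * 1024)
nodefs = Threshold("nodefs.available", percentage=0.10)
```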
Jan 30 05:30:21.155549 kubelet[2500]: I0130 05:30:21.154173 2500 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 05:30:21.155549 kubelet[2500]: I0130 05:30:21.154575 2500 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 05:30:21.155549 kubelet[2500]: I0130 05:30:21.154788 2500 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 05:30:21.155908 kubelet[2500]: I0130 05:30:21.155811 2500 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-3-26ada394c1" Jan 30 05:30:21.159909 kubelet[2500]: E0130 05:30:21.159853 2500 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://91.107.218.70:6443/api/v1/nodes\": dial tcp 91.107.218.70:6443: connect: connection refused" node="ci-4186-1-0-3-26ada394c1" Jan 30 05:30:21.163864 kubelet[2500]: E0130 05:30:21.160301 2500 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186-1-0-3-26ada394c1\" not found" Jan 30 05:30:21.193173 kubelet[2500]: I0130 05:30:21.192552 2500 topology_manager.go:215] "Topology Admit Handler" podUID="98e06c75162dc1a91b9a4bcf8545ff58" podNamespace="kube-system" podName="kube-apiserver-ci-4186-1-0-3-26ada394c1" Jan 30 05:30:21.196263 kubelet[2500]: I0130 05:30:21.195840 2500 topology_manager.go:215] "Topology Admit Handler" podUID="0d16f4a01addf8c803523afd7d2d65a7" podNamespace="kube-system" podName="kube-controller-manager-ci-4186-1-0-3-26ada394c1" Jan 30 05:30:21.205553 kubelet[2500]: I0130 05:30:21.203455 2500 topology_manager.go:215] "Topology Admit Handler" podUID="1f3527d596fdba06a192a1c65e70a442" podNamespace="kube-system" podName="kube-scheduler-ci-4186-1-0-3-26ada394c1" Jan 30 05:30:21.221942 systemd[1]: Created slice kubepods-burstable-pod98e06c75162dc1a91b9a4bcf8545ff58.slice - libcontainer container 
kubepods-burstable-pod98e06c75162dc1a91b9a4bcf8545ff58.slice. Jan 30 05:30:21.246740 systemd[1]: Created slice kubepods-burstable-pod0d16f4a01addf8c803523afd7d2d65a7.slice - libcontainer container kubepods-burstable-pod0d16f4a01addf8c803523afd7d2d65a7.slice. Jan 30 05:30:21.259870 systemd[1]: Created slice kubepods-burstable-pod1f3527d596fdba06a192a1c65e70a442.slice - libcontainer container kubepods-burstable-pod1f3527d596fdba06a192a1c65e70a442.slice. Jan 30 05:30:21.261605 kubelet[2500]: E0130 05:30:21.261541 2500 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.107.218.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-3-26ada394c1?timeout=10s\": dial tcp 91.107.218.70:6443: connect: connection refused" interval="400ms" Jan 30 05:30:21.357697 kubelet[2500]: I0130 05:30:21.357548 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0d16f4a01addf8c803523afd7d2d65a7-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-0-3-26ada394c1\" (UID: \"0d16f4a01addf8c803523afd7d2d65a7\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-3-26ada394c1" Jan 30 05:30:21.357697 kubelet[2500]: I0130 05:30:21.357659 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d16f4a01addf8c803523afd7d2d65a7-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-3-26ada394c1\" (UID: \"0d16f4a01addf8c803523afd7d2d65a7\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-3-26ada394c1" Jan 30 05:30:21.357697 kubelet[2500]: I0130 05:30:21.357697 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1f3527d596fdba06a192a1c65e70a442-kubeconfig\") pod \"kube-scheduler-ci-4186-1-0-3-26ada394c1\" (UID: 
\"1f3527d596fdba06a192a1c65e70a442\") " pod="kube-system/kube-scheduler-ci-4186-1-0-3-26ada394c1" Jan 30 05:30:21.357697 kubelet[2500]: I0130 05:30:21.357732 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98e06c75162dc1a91b9a4bcf8545ff58-ca-certs\") pod \"kube-apiserver-ci-4186-1-0-3-26ada394c1\" (UID: \"98e06c75162dc1a91b9a4bcf8545ff58\") " pod="kube-system/kube-apiserver-ci-4186-1-0-3-26ada394c1" Jan 30 05:30:21.358225 kubelet[2500]: I0130 05:30:21.357765 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98e06c75162dc1a91b9a4bcf8545ff58-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-3-26ada394c1\" (UID: \"98e06c75162dc1a91b9a4bcf8545ff58\") " pod="kube-system/kube-apiserver-ci-4186-1-0-3-26ada394c1" Jan 30 05:30:21.358225 kubelet[2500]: I0130 05:30:21.357815 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98e06c75162dc1a91b9a4bcf8545ff58-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-0-3-26ada394c1\" (UID: \"98e06c75162dc1a91b9a4bcf8545ff58\") " pod="kube-system/kube-apiserver-ci-4186-1-0-3-26ada394c1" Jan 30 05:30:21.358225 kubelet[2500]: I0130 05:30:21.357850 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d16f4a01addf8c803523afd7d2d65a7-ca-certs\") pod \"kube-controller-manager-ci-4186-1-0-3-26ada394c1\" (UID: \"0d16f4a01addf8c803523afd7d2d65a7\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-3-26ada394c1" Jan 30 05:30:21.358225 kubelet[2500]: I0130 05:30:21.357886 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/0d16f4a01addf8c803523afd7d2d65a7-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-0-3-26ada394c1\" (UID: \"0d16f4a01addf8c803523afd7d2d65a7\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-3-26ada394c1" Jan 30 05:30:21.358225 kubelet[2500]: I0130 05:30:21.357925 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d16f4a01addf8c803523afd7d2d65a7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-3-26ada394c1\" (UID: \"0d16f4a01addf8c803523afd7d2d65a7\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-3-26ada394c1" Jan 30 05:30:21.364441 kubelet[2500]: I0130 05:30:21.364322 2500 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-3-26ada394c1" Jan 30 05:30:21.365161 kubelet[2500]: E0130 05:30:21.365083 2500 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://91.107.218.70:6443/api/v1/nodes\": dial tcp 91.107.218.70:6443: connect: connection refused" node="ci-4186-1-0-3-26ada394c1" Jan 30 05:30:21.543967 containerd[1503]: time="2025-01-30T05:30:21.543775830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-3-26ada394c1,Uid:98e06c75162dc1a91b9a4bcf8545ff58,Namespace:kube-system,Attempt:0,}" Jan 30 05:30:21.551381 containerd[1503]: time="2025-01-30T05:30:21.551245570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-3-26ada394c1,Uid:0d16f4a01addf8c803523afd7d2d65a7,Namespace:kube-system,Attempt:0,}" Jan 30 05:30:21.566603 containerd[1503]: time="2025-01-30T05:30:21.566477655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-3-26ada394c1,Uid:1f3527d596fdba06a192a1c65e70a442,Namespace:kube-system,Attempt:0,}" Jan 30 05:30:21.663327 kubelet[2500]: E0130 05:30:21.663182 2500 controller.go:145] "Failed to ensure 
lease exists, will retry" err="Get \"https://91.107.218.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-3-26ada394c1?timeout=10s\": dial tcp 91.107.218.70:6443: connect: connection refused" interval="800ms" Jan 30 05:30:21.769812 kubelet[2500]: I0130 05:30:21.769746 2500 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-3-26ada394c1" Jan 30 05:30:21.770442 kubelet[2500]: E0130 05:30:21.770366 2500 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://91.107.218.70:6443/api/v1/nodes\": dial tcp 91.107.218.70:6443: connect: connection refused" node="ci-4186-1-0-3-26ada394c1" Jan 30 05:30:21.852678 kubelet[2500]: W0130 05:30:21.852299 2500 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.107.218.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 91.107.218.70:6443: connect: connection refused Jan 30 05:30:21.852678 kubelet[2500]: E0130 05:30:21.852467 2500 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://91.107.218.70:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 91.107.218.70:6443: connect: connection refused Jan 30 05:30:22.052358 kubelet[2500]: W0130 05:30:22.052191 2500 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.107.218.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-3-26ada394c1&limit=500&resourceVersion=0": dial tcp 91.107.218.70:6443: connect: connection refused Jan 30 05:30:22.052358 kubelet[2500]: E0130 05:30:22.052349 2500 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://91.107.218.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-3-26ada394c1&limit=500&resourceVersion=0": dial tcp 91.107.218.70:6443: connect: connection refused Jan 30 
05:30:22.106081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2172625723.mount: Deactivated successfully. Jan 30 05:30:22.116426 containerd[1503]: time="2025-01-30T05:30:22.116254532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:30:22.122667 containerd[1503]: time="2025-01-30T05:30:22.122266382Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076" Jan 30 05:30:22.124465 containerd[1503]: time="2025-01-30T05:30:22.124312993Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:30:22.126197 containerd[1503]: time="2025-01-30T05:30:22.126110252Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:30:22.130311 containerd[1503]: time="2025-01-30T05:30:22.130155482Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 05:30:22.131598 containerd[1503]: time="2025-01-30T05:30:22.131567534Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 05:30:22.131938 containerd[1503]: time="2025-01-30T05:30:22.131800463Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:30:22.134258 containerd[1503]: time="2025-01-30T05:30:22.134123147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:30:22.136450 containerd[1503]: time="2025-01-30T05:30:22.136124241Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 592.154343ms" Jan 30 05:30:22.142404 containerd[1503]: time="2025-01-30T05:30:22.142135631Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 575.445605ms" Jan 30 05:30:22.145689 containerd[1503]: time="2025-01-30T05:30:22.145572261Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 594.119077ms" Jan 30 05:30:22.177105 kubelet[2500]: W0130 05:30:22.176945 2500 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.107.218.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.107.218.70:6443: connect: connection refused Jan 30 05:30:22.177105 kubelet[2500]: E0130 05:30:22.177031 2500 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://91.107.218.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.107.218.70:6443: connect: 
connection refused Jan 30 05:30:22.347622 containerd[1503]: time="2025-01-30T05:30:22.347298964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:30:22.347622 containerd[1503]: time="2025-01-30T05:30:22.347375490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:30:22.347622 containerd[1503]: time="2025-01-30T05:30:22.347408852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:30:22.349918 containerd[1503]: time="2025-01-30T05:30:22.349640092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:30:22.349918 containerd[1503]: time="2025-01-30T05:30:22.349701638Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:30:22.349918 containerd[1503]: time="2025-01-30T05:30:22.349715655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:30:22.349918 containerd[1503]: time="2025-01-30T05:30:22.349806517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:30:22.350540 containerd[1503]: time="2025-01-30T05:30:22.350462818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:30:22.355178 containerd[1503]: time="2025-01-30T05:30:22.355080973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:30:22.355460 containerd[1503]: time="2025-01-30T05:30:22.355145485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:30:22.355460 containerd[1503]: time="2025-01-30T05:30:22.355158459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:30:22.355460 containerd[1503]: time="2025-01-30T05:30:22.355252957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:30:22.390311 systemd[1]: Started cri-containerd-33c788cffa0c3348fff2daa8e3ec615f17c34cdfbf357c1f9ae3a35a981877ae.scope - libcontainer container 33c788cffa0c3348fff2daa8e3ec615f17c34cdfbf357c1f9ae3a35a981877ae. Jan 30 05:30:22.408815 systemd[1]: Started cri-containerd-7ac9ef2d7dcb44972bda98286c0293b7915b647b0227c46fb8fd22ed6c475c74.scope - libcontainer container 7ac9ef2d7dcb44972bda98286c0293b7915b647b0227c46fb8fd22ed6c475c74. Jan 30 05:30:22.412977 systemd[1]: Started cri-containerd-e0336c0d8505cbc59e9be0085556784b994e711365dd2625a195911ed4650912.scope - libcontainer container e0336c0d8505cbc59e9be0085556784b994e711365dd2625a195911ed4650912. 
Jan 30 05:30:22.440942 kubelet[2500]: W0130 05:30:22.440852 2500 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.107.218.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.107.218.70:6443: connect: connection refused Jan 30 05:30:22.440942 kubelet[2500]: E0130 05:30:22.440919 2500 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://91.107.218.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.107.218.70:6443: connect: connection refused Jan 30 05:30:22.467661 kubelet[2500]: E0130 05:30:22.465799 2500 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.107.218.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-3-26ada394c1?timeout=10s\": dial tcp 91.107.218.70:6443: connect: connection refused" interval="1.6s" Jan 30 05:30:22.478008 containerd[1503]: time="2025-01-30T05:30:22.477937049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-3-26ada394c1,Uid:0d16f4a01addf8c803523afd7d2d65a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ac9ef2d7dcb44972bda98286c0293b7915b647b0227c46fb8fd22ed6c475c74\"" Jan 30 05:30:22.480613 containerd[1503]: time="2025-01-30T05:30:22.480549630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-3-26ada394c1,Uid:98e06c75162dc1a91b9a4bcf8545ff58,Namespace:kube-system,Attempt:0,} returns sandbox id \"33c788cffa0c3348fff2daa8e3ec615f17c34cdfbf357c1f9ae3a35a981877ae\"" Jan 30 05:30:22.490990 containerd[1503]: time="2025-01-30T05:30:22.489198818Z" level=info msg="CreateContainer within sandbox \"7ac9ef2d7dcb44972bda98286c0293b7915b647b0227c46fb8fd22ed6c475c74\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 05:30:22.491215 containerd[1503]: 
time="2025-01-30T05:30:22.491149959Z" level=info msg="CreateContainer within sandbox \"33c788cffa0c3348fff2daa8e3ec615f17c34cdfbf357c1f9ae3a35a981877ae\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 05:30:22.515978 containerd[1503]: time="2025-01-30T05:30:22.515898124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-3-26ada394c1,Uid:1f3527d596fdba06a192a1c65e70a442,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0336c0d8505cbc59e9be0085556784b994e711365dd2625a195911ed4650912\"" Jan 30 05:30:22.528216 containerd[1503]: time="2025-01-30T05:30:22.528126101Z" level=info msg="CreateContainer within sandbox \"e0336c0d8505cbc59e9be0085556784b994e711365dd2625a195911ed4650912\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 05:30:22.533283 containerd[1503]: time="2025-01-30T05:30:22.533201240Z" level=info msg="CreateContainer within sandbox \"7ac9ef2d7dcb44972bda98286c0293b7915b647b0227c46fb8fd22ed6c475c74\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7516bde5bcef8096bcc1f56a43cc31bdeb03bd62bdc21c956629cfa749809502\"" Jan 30 05:30:22.534252 containerd[1503]: time="2025-01-30T05:30:22.534194800Z" level=info msg="StartContainer for \"7516bde5bcef8096bcc1f56a43cc31bdeb03bd62bdc21c956629cfa749809502\"" Jan 30 05:30:22.539188 containerd[1503]: time="2025-01-30T05:30:22.539107911Z" level=info msg="CreateContainer within sandbox \"33c788cffa0c3348fff2daa8e3ec615f17c34cdfbf357c1f9ae3a35a981877ae\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1dee02302ca8bb1ae84918162cad2d073bf424a4af833fd1cc00c46044c58558\"" Jan 30 05:30:22.541344 containerd[1503]: time="2025-01-30T05:30:22.541184499Z" level=info msg="StartContainer for \"1dee02302ca8bb1ae84918162cad2d073bf424a4af833fd1cc00c46044c58558\"" Jan 30 05:30:22.560368 containerd[1503]: time="2025-01-30T05:30:22.560160849Z" level=info msg="CreateContainer within 
sandbox \"e0336c0d8505cbc59e9be0085556784b994e711365dd2625a195911ed4650912\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c2ff51555ea6a11fdffcfdc70ae84524af6a3fe30d191ca19af26f95c10f4bec\"" Jan 30 05:30:22.561428 containerd[1503]: time="2025-01-30T05:30:22.561335451Z" level=info msg="StartContainer for \"c2ff51555ea6a11fdffcfdc70ae84524af6a3fe30d191ca19af26f95c10f4bec\"" Jan 30 05:30:22.574769 kubelet[2500]: I0130 05:30:22.574712 2500 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-3-26ada394c1" Jan 30 05:30:22.575307 kubelet[2500]: E0130 05:30:22.575183 2500 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://91.107.218.70:6443/api/v1/nodes\": dial tcp 91.107.218.70:6443: connect: connection refused" node="ci-4186-1-0-3-26ada394c1" Jan 30 05:30:22.595268 systemd[1]: Started cri-containerd-7516bde5bcef8096bcc1f56a43cc31bdeb03bd62bdc21c956629cfa749809502.scope - libcontainer container 7516bde5bcef8096bcc1f56a43cc31bdeb03bd62bdc21c956629cfa749809502. Jan 30 05:30:22.605825 systemd[1]: Started cri-containerd-1dee02302ca8bb1ae84918162cad2d073bf424a4af833fd1cc00c46044c58558.scope - libcontainer container 1dee02302ca8bb1ae84918162cad2d073bf424a4af833fd1cc00c46044c58558. Jan 30 05:30:22.617366 systemd[1]: Started cri-containerd-c2ff51555ea6a11fdffcfdc70ae84524af6a3fe30d191ca19af26f95c10f4bec.scope - libcontainer container c2ff51555ea6a11fdffcfdc70ae84524af6a3fe30d191ca19af26f95c10f4bec. 
Jan 30 05:30:22.703646 containerd[1503]: time="2025-01-30T05:30:22.702211878Z" level=info msg="StartContainer for \"7516bde5bcef8096bcc1f56a43cc31bdeb03bd62bdc21c956629cfa749809502\" returns successfully" Jan 30 05:30:22.703646 containerd[1503]: time="2025-01-30T05:30:22.702440691Z" level=info msg="StartContainer for \"c2ff51555ea6a11fdffcfdc70ae84524af6a3fe30d191ca19af26f95c10f4bec\" returns successfully" Jan 30 05:30:22.716618 containerd[1503]: time="2025-01-30T05:30:22.716529257Z" level=info msg="StartContainer for \"1dee02302ca8bb1ae84918162cad2d073bf424a4af833fd1cc00c46044c58558\" returns successfully" Jan 30 05:30:23.019427 kubelet[2500]: E0130 05:30:23.019270 2500 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://91.107.218.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 91.107.218.70:6443: connect: connection refused Jan 30 05:30:24.178907 kubelet[2500]: I0130 05:30:24.178862 2500 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-3-26ada394c1" Jan 30 05:30:24.716328 kubelet[2500]: E0130 05:30:24.716243 2500 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186-1-0-3-26ada394c1\" not found" node="ci-4186-1-0-3-26ada394c1" Jan 30 05:30:24.813411 kubelet[2500]: I0130 05:30:24.813341 2500 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186-1-0-3-26ada394c1" Jan 30 05:30:25.032929 kubelet[2500]: I0130 05:30:25.032763 2500 apiserver.go:52] "Watching apiserver" Jan 30 05:30:25.056480 kubelet[2500]: I0130 05:30:25.056376 2500 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 05:30:27.169921 systemd[1]: Reloading requested from client PID 2776 ('systemctl') (unit session-7.scope)... Jan 30 05:30:27.169960 systemd[1]: Reloading... 
Jan 30 05:30:27.331718 zram_generator::config[2817]: No configuration found. Jan 30 05:30:27.469151 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 05:30:27.573664 systemd[1]: Reloading finished in 402 ms. Jan 30 05:30:27.628004 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:30:27.628881 kubelet[2500]: I0130 05:30:27.628275 2500 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 05:30:27.651895 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 05:30:27.652590 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:30:27.652841 systemd[1]: kubelet.service: Consumed 1.009s CPU time, 111.9M memory peak, 0B memory swap peak. Jan 30 05:30:27.659864 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:30:27.881840 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:30:27.882369 (kubelet)[2867]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 05:30:28.006460 kubelet[2867]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 05:30:28.006460 kubelet[2867]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 05:30:28.006460 kubelet[2867]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 05:30:28.007109 kubelet[2867]: I0130 05:30:28.006534 2867 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 05:30:28.020012 kubelet[2867]: I0130 05:30:28.019944 2867 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 05:30:28.020012 kubelet[2867]: I0130 05:30:28.019988 2867 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 05:30:28.020411 kubelet[2867]: I0130 05:30:28.020384 2867 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 05:30:28.023621 kubelet[2867]: I0130 05:30:28.023585 2867 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 05:30:28.032539 kubelet[2867]: I0130 05:30:28.031419 2867 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 05:30:28.045535 kubelet[2867]: I0130 05:30:28.043634 2867 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 05:30:28.045535 kubelet[2867]: I0130 05:30:28.043951 2867 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 05:30:28.045535 kubelet[2867]: I0130 05:30:28.043979 2867 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-0-3-26ada394c1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 05:30:28.045535 kubelet[2867]: I0130 05:30:28.044201 2867 topology_manager.go:138] "Creating topology manager with none policy" Jan 
30 05:30:28.045891 kubelet[2867]: I0130 05:30:28.044211 2867 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 05:30:28.045891 kubelet[2867]: I0130 05:30:28.044264 2867 state_mem.go:36] "Initialized new in-memory state store" Jan 30 05:30:28.045891 kubelet[2867]: I0130 05:30:28.044397 2867 kubelet.go:400] "Attempting to sync node with API server" Jan 30 05:30:28.045891 kubelet[2867]: I0130 05:30:28.044411 2867 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 05:30:28.045891 kubelet[2867]: I0130 05:30:28.044435 2867 kubelet.go:312] "Adding apiserver pod source" Jan 30 05:30:28.045891 kubelet[2867]: I0130 05:30:28.044464 2867 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 05:30:28.047637 kubelet[2867]: I0130 05:30:28.047607 2867 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 05:30:28.053164 kubelet[2867]: I0130 05:30:28.051381 2867 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 05:30:28.053164 kubelet[2867]: I0130 05:30:28.052046 2867 server.go:1264] "Started kubelet" Jan 30 05:30:28.054366 kubelet[2867]: I0130 05:30:28.054295 2867 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 05:30:28.055198 kubelet[2867]: I0130 05:30:28.055133 2867 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 05:30:28.056543 kubelet[2867]: I0130 05:30:28.055695 2867 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 05:30:28.061002 kubelet[2867]: I0130 05:30:28.060335 2867 server.go:455] "Adding debug handlers to kubelet server" Jan 30 05:30:28.065700 kubelet[2867]: I0130 05:30:28.065666 2867 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 05:30:28.068209 kubelet[2867]: I0130 05:30:28.068192 2867 
volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 05:30:28.068365 kubelet[2867]: I0130 05:30:28.068353 2867 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 05:30:28.068611 kubelet[2867]: I0130 05:30:28.068599 2867 reconciler.go:26] "Reconciler: start to sync state" Jan 30 05:30:28.071549 kubelet[2867]: I0130 05:30:28.070577 2867 factory.go:221] Registration of the systemd container factory successfully Jan 30 05:30:28.071800 kubelet[2867]: I0130 05:30:28.071777 2867 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 05:30:28.072725 kubelet[2867]: E0130 05:30:28.072407 2867 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 05:30:28.075169 kubelet[2867]: I0130 05:30:28.075133 2867 factory.go:221] Registration of the containerd container factory successfully Jan 30 05:30:28.081249 kubelet[2867]: I0130 05:30:28.081181 2867 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 05:30:28.082990 kubelet[2867]: I0130 05:30:28.082954 2867 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 05:30:28.082990 kubelet[2867]: I0130 05:30:28.082989 2867 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 05:30:28.083078 kubelet[2867]: I0130 05:30:28.083011 2867 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 05:30:28.083078 kubelet[2867]: E0130 05:30:28.083060 2867 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 05:30:28.173925 sudo[2897]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 30 05:30:28.174329 sudo[2897]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 05:30:28.183734 kubelet[2867]: E0130 05:30:28.183282 2867 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 05:30:28.184008 kubelet[2867]: I0130 05:30:28.183819 2867 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-3-26ada394c1" Jan 30 05:30:28.191516 kubelet[2867]: I0130 05:30:28.189241 2867 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 05:30:28.191516 kubelet[2867]: I0130 05:30:28.189256 2867 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 05:30:28.191516 kubelet[2867]: I0130 05:30:28.189282 2867 state_mem.go:36] "Initialized new in-memory state store" Jan 30 05:30:28.191907 kubelet[2867]: I0130 05:30:28.191878 2867 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 05:30:28.192389 kubelet[2867]: I0130 05:30:28.192358 2867 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 05:30:28.192451 kubelet[2867]: I0130 05:30:28.192443 2867 policy_none.go:49] "None policy: Start" Jan 30 05:30:28.194066 kubelet[2867]: I0130 05:30:28.193453 2867 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 05:30:28.194066 kubelet[2867]: I0130 05:30:28.193500 2867 state_mem.go:35] 
"Initializing new in-memory state store" Jan 30 05:30:28.194700 kubelet[2867]: I0130 05:30:28.194652 2867 state_mem.go:75] "Updated machine memory state" Jan 30 05:30:28.199177 kubelet[2867]: I0130 05:30:28.198933 2867 kubelet_node_status.go:112] "Node was previously registered" node="ci-4186-1-0-3-26ada394c1" Jan 30 05:30:28.199177 kubelet[2867]: I0130 05:30:28.198997 2867 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186-1-0-3-26ada394c1" Jan 30 05:30:28.212100 kubelet[2867]: I0130 05:30:28.212045 2867 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 05:30:28.213055 kubelet[2867]: I0130 05:30:28.212300 2867 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 05:30:28.216264 kubelet[2867]: I0130 05:30:28.216235 2867 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 05:30:28.383733 kubelet[2867]: I0130 05:30:28.383631 2867 topology_manager.go:215] "Topology Admit Handler" podUID="1f3527d596fdba06a192a1c65e70a442" podNamespace="kube-system" podName="kube-scheduler-ci-4186-1-0-3-26ada394c1" Jan 30 05:30:28.383952 kubelet[2867]: I0130 05:30:28.383796 2867 topology_manager.go:215] "Topology Admit Handler" podUID="98e06c75162dc1a91b9a4bcf8545ff58" podNamespace="kube-system" podName="kube-apiserver-ci-4186-1-0-3-26ada394c1" Jan 30 05:30:28.383952 kubelet[2867]: I0130 05:30:28.383888 2867 topology_manager.go:215] "Topology Admit Handler" podUID="0d16f4a01addf8c803523afd7d2d65a7" podNamespace="kube-system" podName="kube-controller-manager-ci-4186-1-0-3-26ada394c1" Jan 30 05:30:28.473444 kubelet[2867]: I0130 05:30:28.472934 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1f3527d596fdba06a192a1c65e70a442-kubeconfig\") pod \"kube-scheduler-ci-4186-1-0-3-26ada394c1\" (UID: 
\"1f3527d596fdba06a192a1c65e70a442\") " pod="kube-system/kube-scheduler-ci-4186-1-0-3-26ada394c1" Jan 30 05:30:28.473444 kubelet[2867]: I0130 05:30:28.472987 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98e06c75162dc1a91b9a4bcf8545ff58-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-0-3-26ada394c1\" (UID: \"98e06c75162dc1a91b9a4bcf8545ff58\") " pod="kube-system/kube-apiserver-ci-4186-1-0-3-26ada394c1" Jan 30 05:30:28.473444 kubelet[2867]: I0130 05:30:28.473015 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0d16f4a01addf8c803523afd7d2d65a7-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-0-3-26ada394c1\" (UID: \"0d16f4a01addf8c803523afd7d2d65a7\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-3-26ada394c1" Jan 30 05:30:28.473444 kubelet[2867]: I0130 05:30:28.473044 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0d16f4a01addf8c803523afd7d2d65a7-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-0-3-26ada394c1\" (UID: \"0d16f4a01addf8c803523afd7d2d65a7\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-3-26ada394c1" Jan 30 05:30:28.473444 kubelet[2867]: I0130 05:30:28.473075 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d16f4a01addf8c803523afd7d2d65a7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-3-26ada394c1\" (UID: \"0d16f4a01addf8c803523afd7d2d65a7\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-3-26ada394c1" Jan 30 05:30:28.473845 kubelet[2867]: I0130 05:30:28.473095 2867 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98e06c75162dc1a91b9a4bcf8545ff58-ca-certs\") pod \"kube-apiserver-ci-4186-1-0-3-26ada394c1\" (UID: \"98e06c75162dc1a91b9a4bcf8545ff58\") " pod="kube-system/kube-apiserver-ci-4186-1-0-3-26ada394c1" Jan 30 05:30:28.473845 kubelet[2867]: I0130 05:30:28.473116 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98e06c75162dc1a91b9a4bcf8545ff58-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-3-26ada394c1\" (UID: \"98e06c75162dc1a91b9a4bcf8545ff58\") " pod="kube-system/kube-apiserver-ci-4186-1-0-3-26ada394c1" Jan 30 05:30:28.473845 kubelet[2867]: I0130 05:30:28.473139 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d16f4a01addf8c803523afd7d2d65a7-ca-certs\") pod \"kube-controller-manager-ci-4186-1-0-3-26ada394c1\" (UID: \"0d16f4a01addf8c803523afd7d2d65a7\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-3-26ada394c1" Jan 30 05:30:28.473845 kubelet[2867]: I0130 05:30:28.473159 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d16f4a01addf8c803523afd7d2d65a7-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-3-26ada394c1\" (UID: \"0d16f4a01addf8c803523afd7d2d65a7\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-3-26ada394c1" Jan 30 05:30:28.815864 sudo[2897]: pam_unix(sudo:session): session closed for user root Jan 30 05:30:29.047121 kubelet[2867]: I0130 05:30:29.046122 2867 apiserver.go:52] "Watching apiserver" Jan 30 05:30:29.078579 kubelet[2867]: I0130 05:30:29.074644 2867 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 05:30:29.139531 kubelet[2867]: E0130 05:30:29.139290 2867 
kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186-1-0-3-26ada394c1\" already exists" pod="kube-system/kube-apiserver-ci-4186-1-0-3-26ada394c1" Jan 30 05:30:29.144185 kubelet[2867]: I0130 05:30:29.144112 2867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186-1-0-3-26ada394c1" podStartSLOduration=1.142481803 podStartE2EDuration="1.142481803s" podCreationTimestamp="2025-01-30 05:30:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:30:29.131165209 +0000 UTC m=+1.235012453" watchObservedRunningTime="2025-01-30 05:30:29.142481803 +0000 UTC m=+1.246329038" Jan 30 05:30:29.144383 kubelet[2867]: I0130 05:30:29.144310 2867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186-1-0-3-26ada394c1" podStartSLOduration=1.144305868 podStartE2EDuration="1.144305868s" podCreationTimestamp="2025-01-30 05:30:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:30:29.142096576 +0000 UTC m=+1.245943810" watchObservedRunningTime="2025-01-30 05:30:29.144305868 +0000 UTC m=+1.248153103" Jan 30 05:30:29.170722 kubelet[2867]: I0130 05:30:29.170637 2867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186-1-0-3-26ada394c1" podStartSLOduration=1.17060973 podStartE2EDuration="1.17060973s" podCreationTimestamp="2025-01-30 05:30:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:30:29.159403664 +0000 UTC m=+1.263250899" watchObservedRunningTime="2025-01-30 05:30:29.17060973 +0000 UTC m=+1.274456965" Jan 30 05:30:30.572119 sudo[1898]: pam_unix(sudo:session): session closed for user root Jan 
30 05:30:30.730455 sshd[1897]: Connection closed by 139.178.89.65 port 37228 Jan 30 05:30:30.733203 sshd-session[1895]: pam_unix(sshd:session): session closed for user core Jan 30 05:30:30.738406 systemd[1]: sshd@7-91.107.218.70:22-139.178.89.65:37228.service: Deactivated successfully. Jan 30 05:30:30.743333 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 05:30:30.744019 systemd[1]: session-7.scope: Consumed 6.872s CPU time, 187.5M memory peak, 0B memory swap peak. Jan 30 05:30:30.747754 systemd-logind[1483]: Session 7 logged out. Waiting for processes to exit. Jan 30 05:30:30.750198 systemd-logind[1483]: Removed session 7. Jan 30 05:30:41.505025 kubelet[2867]: I0130 05:30:41.504929 2867 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 05:30:41.506305 containerd[1503]: time="2025-01-30T05:30:41.506130457Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 05:30:41.507518 kubelet[2867]: I0130 05:30:41.507084 2867 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 05:30:42.483010 kubelet[2867]: I0130 05:30:42.482590 2867 topology_manager.go:215] "Topology Admit Handler" podUID="0b97e5e9-0fea-4163-877b-e67ca8f04780" podNamespace="kube-system" podName="kube-proxy-4znmj" Jan 30 05:30:42.491650 kubelet[2867]: I0130 05:30:42.491562 2867 topology_manager.go:215] "Topology Admit Handler" podUID="c6c04acb-168b-4772-81e8-9b6642052623" podNamespace="kube-system" podName="cilium-vfjw7" Jan 30 05:30:42.501939 systemd[1]: Created slice kubepods-besteffort-pod0b97e5e9_0fea_4163_877b_e67ca8f04780.slice - libcontainer container kubepods-besteffort-pod0b97e5e9_0fea_4163_877b_e67ca8f04780.slice. 
Jan 30 05:30:42.510780 kubelet[2867]: W0130 05:30:42.508732 2867 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4186-1-0-3-26ada394c1" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-3-26ada394c1' and this object Jan 30 05:30:42.510780 kubelet[2867]: E0130 05:30:42.508841 2867 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4186-1-0-3-26ada394c1" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-3-26ada394c1' and this object Jan 30 05:30:42.510780 kubelet[2867]: W0130 05:30:42.508909 2867 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4186-1-0-3-26ada394c1" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-3-26ada394c1' and this object Jan 30 05:30:42.510780 kubelet[2867]: E0130 05:30:42.508924 2867 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4186-1-0-3-26ada394c1" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-3-26ada394c1' and this object Jan 30 05:30:42.513097 kubelet[2867]: W0130 05:30:42.511900 2867 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4186-1-0-3-26ada394c1" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 
'ci-4186-1-0-3-26ada394c1' and this object Jan 30 05:30:42.513097 kubelet[2867]: E0130 05:30:42.511932 2867 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4186-1-0-3-26ada394c1" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-3-26ada394c1' and this object Jan 30 05:30:42.528442 systemd[1]: Created slice kubepods-burstable-podc6c04acb_168b_4772_81e8_9b6642052623.slice - libcontainer container kubepods-burstable-podc6c04acb_168b_4772_81e8_9b6642052623.slice. Jan 30 05:30:42.654010 kubelet[2867]: I0130 05:30:42.653964 2867 topology_manager.go:215] "Topology Admit Handler" podUID="fc5babab-146e-463e-84e6-444584a41b6a" podNamespace="kube-system" podName="cilium-operator-599987898-77nbp" Jan 30 05:30:42.661608 systemd[1]: Created slice kubepods-besteffort-podfc5babab_146e_463e_84e6_444584a41b6a.slice - libcontainer container kubepods-besteffort-podfc5babab_146e_463e_84e6_444584a41b6a.slice. 
Jan 30 05:30:42.663458 kubelet[2867]: I0130 05:30:42.663429 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-hostproc\") pod \"cilium-vfjw7\" (UID: \"c6c04acb-168b-4772-81e8-9b6642052623\") " pod="kube-system/cilium-vfjw7" Jan 30 05:30:42.663582 kubelet[2867]: I0130 05:30:42.663461 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zszr\" (UniqueName: \"kubernetes.io/projected/0b97e5e9-0fea-4163-877b-e67ca8f04780-kube-api-access-6zszr\") pod \"kube-proxy-4znmj\" (UID: \"0b97e5e9-0fea-4163-877b-e67ca8f04780\") " pod="kube-system/kube-proxy-4znmj" Jan 30 05:30:42.663582 kubelet[2867]: I0130 05:30:42.663477 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-etc-cni-netd\") pod \"cilium-vfjw7\" (UID: \"c6c04acb-168b-4772-81e8-9b6642052623\") " pod="kube-system/cilium-vfjw7" Jan 30 05:30:42.663582 kubelet[2867]: I0130 05:30:42.663513 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b97e5e9-0fea-4163-877b-e67ca8f04780-xtables-lock\") pod \"kube-proxy-4znmj\" (UID: \"0b97e5e9-0fea-4163-877b-e67ca8f04780\") " pod="kube-system/kube-proxy-4znmj" Jan 30 05:30:42.663582 kubelet[2867]: I0130 05:30:42.663526 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b97e5e9-0fea-4163-877b-e67ca8f04780-lib-modules\") pod \"kube-proxy-4znmj\" (UID: \"0b97e5e9-0fea-4163-877b-e67ca8f04780\") " pod="kube-system/kube-proxy-4znmj" Jan 30 05:30:42.663582 kubelet[2867]: I0130 05:30:42.663539 2867 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6c04acb-168b-4772-81e8-9b6642052623-hubble-tls\") pod \"cilium-vfjw7\" (UID: \"c6c04acb-168b-4772-81e8-9b6642052623\") " pod="kube-system/cilium-vfjw7" Jan 30 05:30:42.663706 kubelet[2867]: I0130 05:30:42.663552 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-455dk\" (UniqueName: \"kubernetes.io/projected/fc5babab-146e-463e-84e6-444584a41b6a-kube-api-access-455dk\") pod \"cilium-operator-599987898-77nbp\" (UID: \"fc5babab-146e-463e-84e6-444584a41b6a\") " pod="kube-system/cilium-operator-599987898-77nbp" Jan 30 05:30:42.663706 kubelet[2867]: I0130 05:30:42.663567 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-cilium-cgroup\") pod \"cilium-vfjw7\" (UID: \"c6c04acb-168b-4772-81e8-9b6642052623\") " pod="kube-system/cilium-vfjw7" Jan 30 05:30:42.663706 kubelet[2867]: I0130 05:30:42.663580 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6c04acb-168b-4772-81e8-9b6642052623-clustermesh-secrets\") pod \"cilium-vfjw7\" (UID: \"c6c04acb-168b-4772-81e8-9b6642052623\") " pod="kube-system/cilium-vfjw7" Jan 30 05:30:42.663706 kubelet[2867]: I0130 05:30:42.663593 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-host-proc-sys-net\") pod \"cilium-vfjw7\" (UID: \"c6c04acb-168b-4772-81e8-9b6642052623\") " pod="kube-system/cilium-vfjw7" Jan 30 05:30:42.663706 kubelet[2867]: I0130 05:30:42.663613 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc5babab-146e-463e-84e6-444584a41b6a-cilium-config-path\") pod \"cilium-operator-599987898-77nbp\" (UID: \"fc5babab-146e-463e-84e6-444584a41b6a\") " pod="kube-system/cilium-operator-599987898-77nbp" Jan 30 05:30:42.663832 kubelet[2867]: I0130 05:30:42.663626 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0b97e5e9-0fea-4163-877b-e67ca8f04780-kube-proxy\") pod \"kube-proxy-4znmj\" (UID: \"0b97e5e9-0fea-4163-877b-e67ca8f04780\") " pod="kube-system/kube-proxy-4znmj" Jan 30 05:30:42.663832 kubelet[2867]: I0130 05:30:42.663640 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-bpf-maps\") pod \"cilium-vfjw7\" (UID: \"c6c04acb-168b-4772-81e8-9b6642052623\") " pod="kube-system/cilium-vfjw7" Jan 30 05:30:42.663832 kubelet[2867]: I0130 05:30:42.663652 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6c04acb-168b-4772-81e8-9b6642052623-cilium-config-path\") pod \"cilium-vfjw7\" (UID: \"c6c04acb-168b-4772-81e8-9b6642052623\") " pod="kube-system/cilium-vfjw7" Jan 30 05:30:42.663832 kubelet[2867]: I0130 05:30:42.663665 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-host-proc-sys-kernel\") pod \"cilium-vfjw7\" (UID: \"c6c04acb-168b-4772-81e8-9b6642052623\") " pod="kube-system/cilium-vfjw7" Jan 30 05:30:42.663832 kubelet[2867]: I0130 05:30:42.663678 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-xtables-lock\") pod \"cilium-vfjw7\" (UID: \"c6c04acb-168b-4772-81e8-9b6642052623\") " pod="kube-system/cilium-vfjw7" Jan 30 05:30:42.663832 kubelet[2867]: I0130 05:30:42.663690 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-cilium-run\") pod \"cilium-vfjw7\" (UID: \"c6c04acb-168b-4772-81e8-9b6642052623\") " pod="kube-system/cilium-vfjw7" Jan 30 05:30:42.663963 kubelet[2867]: I0130 05:30:42.663705 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-cni-path\") pod \"cilium-vfjw7\" (UID: \"c6c04acb-168b-4772-81e8-9b6642052623\") " pod="kube-system/cilium-vfjw7" Jan 30 05:30:42.663963 kubelet[2867]: I0130 05:30:42.663718 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-lib-modules\") pod \"cilium-vfjw7\" (UID: \"c6c04acb-168b-4772-81e8-9b6642052623\") " pod="kube-system/cilium-vfjw7" Jan 30 05:30:42.663963 kubelet[2867]: I0130 05:30:42.663732 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqbfz\" (UniqueName: \"kubernetes.io/projected/c6c04acb-168b-4772-81e8-9b6642052623-kube-api-access-dqbfz\") pod \"cilium-vfjw7\" (UID: \"c6c04acb-168b-4772-81e8-9b6642052623\") " pod="kube-system/cilium-vfjw7" Jan 30 05:30:43.773743 kubelet[2867]: E0130 05:30:43.773651 2867 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 30 05:30:43.774803 kubelet[2867]: E0130 05:30:43.773828 2867 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/0b97e5e9-0fea-4163-877b-e67ca8f04780-kube-proxy podName:0b97e5e9-0fea-4163-877b-e67ca8f04780 nodeName:}" failed. No retries permitted until 2025-01-30 05:30:44.273791938 +0000 UTC m=+16.377639203 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/0b97e5e9-0fea-4163-877b-e67ca8f04780-kube-proxy") pod "kube-proxy-4znmj" (UID: "0b97e5e9-0fea-4163-877b-e67ca8f04780") : failed to sync configmap cache: timed out waiting for the condition Jan 30 05:30:43.793976 kubelet[2867]: E0130 05:30:43.793654 2867 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 30 05:30:43.793976 kubelet[2867]: E0130 05:30:43.793659 2867 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 30 05:30:43.793976 kubelet[2867]: E0130 05:30:43.793792 2867 projected.go:200] Error preparing data for projected volume kube-api-access-dqbfz for pod kube-system/cilium-vfjw7: failed to sync configmap cache: timed out waiting for the condition Jan 30 05:30:43.793976 kubelet[2867]: E0130 05:30:43.793888 2867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c6c04acb-168b-4772-81e8-9b6642052623-kube-api-access-dqbfz podName:c6c04acb-168b-4772-81e8-9b6642052623 nodeName:}" failed. No retries permitted until 2025-01-30 05:30:44.293862023 +0000 UTC m=+16.397709257 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dqbfz" (UniqueName: "kubernetes.io/projected/c6c04acb-168b-4772-81e8-9b6642052623-kube-api-access-dqbfz") pod "cilium-vfjw7" (UID: "c6c04acb-168b-4772-81e8-9b6642052623") : failed to sync configmap cache: timed out waiting for the condition Jan 30 05:30:43.794527 kubelet[2867]: E0130 05:30:43.793733 2867 projected.go:200] Error preparing data for projected volume kube-api-access-6zszr for pod kube-system/kube-proxy-4znmj: failed to sync configmap cache: timed out waiting for the condition Jan 30 05:30:43.794527 kubelet[2867]: E0130 05:30:43.794187 2867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b97e5e9-0fea-4163-877b-e67ca8f04780-kube-api-access-6zszr podName:0b97e5e9-0fea-4163-877b-e67ca8f04780 nodeName:}" failed. No retries permitted until 2025-01-30 05:30:44.294175674 +0000 UTC m=+16.398022918 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6zszr" (UniqueName: "kubernetes.io/projected/0b97e5e9-0fea-4163-877b-e67ca8f04780-kube-api-access-6zszr") pod "kube-proxy-4znmj" (UID: "0b97e5e9-0fea-4163-877b-e67ca8f04780") : failed to sync configmap cache: timed out waiting for the condition Jan 30 05:30:43.794527 kubelet[2867]: E0130 05:30:43.794233 2867 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 30 05:30:43.794527 kubelet[2867]: E0130 05:30:43.794247 2867 projected.go:200] Error preparing data for projected volume kube-api-access-455dk for pod kube-system/cilium-operator-599987898-77nbp: failed to sync configmap cache: timed out waiting for the condition Jan 30 05:30:43.794527 kubelet[2867]: E0130 05:30:43.794278 2867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc5babab-146e-463e-84e6-444584a41b6a-kube-api-access-455dk podName:fc5babab-146e-463e-84e6-444584a41b6a nodeName:}" failed. 
No retries permitted until 2025-01-30 05:30:44.294268078 +0000 UTC m=+16.398115313 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-455dk" (UniqueName: "kubernetes.io/projected/fc5babab-146e-463e-84e6-444584a41b6a-kube-api-access-455dk") pod "cilium-operator-599987898-77nbp" (UID: "fc5babab-146e-463e-84e6-444584a41b6a") : failed to sync configmap cache: timed out waiting for the condition Jan 30 05:30:44.467288 containerd[1503]: time="2025-01-30T05:30:44.467181052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-77nbp,Uid:fc5babab-146e-463e-84e6-444584a41b6a,Namespace:kube-system,Attempt:0,}" Jan 30 05:30:44.519087 containerd[1503]: time="2025-01-30T05:30:44.518913392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:30:44.519087 containerd[1503]: time="2025-01-30T05:30:44.519041694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:30:44.519508 containerd[1503]: time="2025-01-30T05:30:44.519070809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:30:44.519508 containerd[1503]: time="2025-01-30T05:30:44.519273582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:30:44.566778 systemd[1]: Started cri-containerd-657a729b399f1f4102932eb96b38b8ea8832daab6a9bc594b7904dc3417a60dd.scope - libcontainer container 657a729b399f1f4102932eb96b38b8ea8832daab6a9bc594b7904dc3417a60dd. 
Jan 30 05:30:44.624272 containerd[1503]: time="2025-01-30T05:30:44.623471556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4znmj,Uid:0b97e5e9-0fea-4163-877b-e67ca8f04780,Namespace:kube-system,Attempt:0,}" Jan 30 05:30:44.635619 containerd[1503]: time="2025-01-30T05:30:44.635174114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vfjw7,Uid:c6c04acb-168b-4772-81e8-9b6642052623,Namespace:kube-system,Attempt:0,}" Jan 30 05:30:44.644352 containerd[1503]: time="2025-01-30T05:30:44.644097754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-77nbp,Uid:fc5babab-146e-463e-84e6-444584a41b6a,Namespace:kube-system,Attempt:0,} returns sandbox id \"657a729b399f1f4102932eb96b38b8ea8832daab6a9bc594b7904dc3417a60dd\"" Jan 30 05:30:44.662203 containerd[1503]: time="2025-01-30T05:30:44.662160921Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 05:30:44.684260 containerd[1503]: time="2025-01-30T05:30:44.684176839Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:30:44.684680 containerd[1503]: time="2025-01-30T05:30:44.684630605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:30:44.685313 containerd[1503]: time="2025-01-30T05:30:44.685279477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:30:44.685561 containerd[1503]: time="2025-01-30T05:30:44.685477350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:30:44.694150 containerd[1503]: time="2025-01-30T05:30:44.693905838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:30:44.694150 containerd[1503]: time="2025-01-30T05:30:44.693961192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:30:44.694150 containerd[1503]: time="2025-01-30T05:30:44.693971411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:30:44.694150 containerd[1503]: time="2025-01-30T05:30:44.694038418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:30:44.714721 systemd[1]: Started cri-containerd-cef49273fbc221ef542ac7d2df1725088888b04810b0a53254ecf84d30580c99.scope - libcontainer container cef49273fbc221ef542ac7d2df1725088888b04810b0a53254ecf84d30580c99. Jan 30 05:30:44.720226 systemd[1]: Started cri-containerd-ada2bc31dfce65b4b6a40ce952cf3287df9d720cfc07c8f5408b1bfefb41d03e.scope - libcontainer container ada2bc31dfce65b4b6a40ce952cf3287df9d720cfc07c8f5408b1bfefb41d03e. 
Jan 30 05:30:44.758407 containerd[1503]: time="2025-01-30T05:30:44.758160783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4znmj,Uid:0b97e5e9-0fea-4163-877b-e67ca8f04780,Namespace:kube-system,Attempt:0,} returns sandbox id \"cef49273fbc221ef542ac7d2df1725088888b04810b0a53254ecf84d30580c99\"" Jan 30 05:30:44.758694 containerd[1503]: time="2025-01-30T05:30:44.758648061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vfjw7,Uid:c6c04acb-168b-4772-81e8-9b6642052623,Namespace:kube-system,Attempt:0,} returns sandbox id \"ada2bc31dfce65b4b6a40ce952cf3287df9d720cfc07c8f5408b1bfefb41d03e\"" Jan 30 05:30:44.763419 containerd[1503]: time="2025-01-30T05:30:44.763115752Z" level=info msg="CreateContainer within sandbox \"cef49273fbc221ef542ac7d2df1725088888b04810b0a53254ecf84d30580c99\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 05:30:44.786423 containerd[1503]: time="2025-01-30T05:30:44.786374512Z" level=info msg="CreateContainer within sandbox \"cef49273fbc221ef542ac7d2df1725088888b04810b0a53254ecf84d30580c99\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"65e3031fd2e1627753ecb84dc04505c8808de2c661a556e1e1a65d6062eaf5c9\"" Jan 30 05:30:44.789845 containerd[1503]: time="2025-01-30T05:30:44.788257913Z" level=info msg="StartContainer for \"65e3031fd2e1627753ecb84dc04505c8808de2c661a556e1e1a65d6062eaf5c9\"" Jan 30 05:30:44.824621 systemd[1]: Started cri-containerd-65e3031fd2e1627753ecb84dc04505c8808de2c661a556e1e1a65d6062eaf5c9.scope - libcontainer container 65e3031fd2e1627753ecb84dc04505c8808de2c661a556e1e1a65d6062eaf5c9. 
Jan 30 05:30:44.858149 containerd[1503]: time="2025-01-30T05:30:44.858036309Z" level=info msg="StartContainer for \"65e3031fd2e1627753ecb84dc04505c8808de2c661a556e1e1a65d6062eaf5c9\" returns successfully" Jan 30 05:30:45.183255 kubelet[2867]: I0130 05:30:45.182716 2867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4znmj" podStartSLOduration=3.182693382 podStartE2EDuration="3.182693382s" podCreationTimestamp="2025-01-30 05:30:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:30:45.180307565 +0000 UTC m=+17.284154840" watchObservedRunningTime="2025-01-30 05:30:45.182693382 +0000 UTC m=+17.286540637" Jan 30 05:30:47.344064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2961668148.mount: Deactivated successfully. Jan 30 05:30:47.809060 containerd[1503]: time="2025-01-30T05:30:47.809004389Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:30:47.810675 containerd[1503]: time="2025-01-30T05:30:47.810641184Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 30 05:30:47.812616 containerd[1503]: time="2025-01-30T05:30:47.812576070Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:30:47.813787 containerd[1503]: time="2025-01-30T05:30:47.813651737Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", 
repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.150642426s" Jan 30 05:30:47.813787 containerd[1503]: time="2025-01-30T05:30:47.813676413Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 30 05:30:47.820800 containerd[1503]: time="2025-01-30T05:30:47.820277161Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 05:30:47.836228 containerd[1503]: time="2025-01-30T05:30:47.836182341Z" level=info msg="CreateContainer within sandbox \"657a729b399f1f4102932eb96b38b8ea8832daab6a9bc594b7904dc3417a60dd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 05:30:47.853355 containerd[1503]: time="2025-01-30T05:30:47.853312090Z" level=info msg="CreateContainer within sandbox \"657a729b399f1f4102932eb96b38b8ea8832daab6a9bc594b7904dc3417a60dd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"072f4de0890e178b5b245221eff45555fc7f847bdbfb0d2444ac81583cfddd52\"" Jan 30 05:30:47.854053 containerd[1503]: time="2025-01-30T05:30:47.853956545Z" level=info msg="StartContainer for \"072f4de0890e178b5b245221eff45555fc7f847bdbfb0d2444ac81583cfddd52\"" Jan 30 05:30:47.893631 systemd[1]: Started cri-containerd-072f4de0890e178b5b245221eff45555fc7f847bdbfb0d2444ac81583cfddd52.scope - libcontainer container 072f4de0890e178b5b245221eff45555fc7f847bdbfb0d2444ac81583cfddd52. 
Jan 30 05:30:47.931854 containerd[1503]: time="2025-01-30T05:30:47.931767386Z" level=info msg="StartContainer for \"072f4de0890e178b5b245221eff45555fc7f847bdbfb0d2444ac81583cfddd52\" returns successfully" Jan 30 05:30:53.459437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3904788480.mount: Deactivated successfully. Jan 30 05:30:55.362741 containerd[1503]: time="2025-01-30T05:30:55.362656093Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:30:55.365407 containerd[1503]: time="2025-01-30T05:30:55.365327904Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 30 05:30:55.367559 containerd[1503]: time="2025-01-30T05:30:55.367100362Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:30:55.369923 containerd[1503]: time="2025-01-30T05:30:55.369884625Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.549581405s" Jan 30 05:30:55.369923 containerd[1503]: time="2025-01-30T05:30:55.369916264Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 30 05:30:55.375226 containerd[1503]: time="2025-01-30T05:30:55.374131202Z" level=info 
msg="CreateContainer within sandbox \"ada2bc31dfce65b4b6a40ce952cf3287df9d720cfc07c8f5408b1bfefb41d03e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 05:30:55.465118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1934724789.mount: Deactivated successfully. Jan 30 05:30:55.469508 containerd[1503]: time="2025-01-30T05:30:55.469435945Z" level=info msg="CreateContainer within sandbox \"ada2bc31dfce65b4b6a40ce952cf3287df9d720cfc07c8f5408b1bfefb41d03e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"03da9eae64929b63df616ce1567be63885764844262db3803cc96cb4eb7cb846\"" Jan 30 05:30:55.472851 containerd[1503]: time="2025-01-30T05:30:55.472772909Z" level=info msg="StartContainer for \"03da9eae64929b63df616ce1567be63885764844262db3803cc96cb4eb7cb846\"" Jan 30 05:30:55.761642 systemd[1]: Started cri-containerd-03da9eae64929b63df616ce1567be63885764844262db3803cc96cb4eb7cb846.scope - libcontainer container 03da9eae64929b63df616ce1567be63885764844262db3803cc96cb4eb7cb846. Jan 30 05:30:55.792528 containerd[1503]: time="2025-01-30T05:30:55.792376608Z" level=info msg="StartContainer for \"03da9eae64929b63df616ce1567be63885764844262db3803cc96cb4eb7cb846\" returns successfully" Jan 30 05:30:55.809722 systemd[1]: cri-containerd-03da9eae64929b63df616ce1567be63885764844262db3803cc96cb4eb7cb846.scope: Deactivated successfully. 
Jan 30 05:30:55.941228 containerd[1503]: time="2025-01-30T05:30:55.931120208Z" level=info msg="shim disconnected" id=03da9eae64929b63df616ce1567be63885764844262db3803cc96cb4eb7cb846 namespace=k8s.io Jan 30 05:30:55.941228 containerd[1503]: time="2025-01-30T05:30:55.941208074Z" level=warning msg="cleaning up after shim disconnected" id=03da9eae64929b63df616ce1567be63885764844262db3803cc96cb4eb7cb846 namespace=k8s.io Jan 30 05:30:55.941228 containerd[1503]: time="2025-01-30T05:30:55.941226870Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:30:56.211940 containerd[1503]: time="2025-01-30T05:30:56.211467798Z" level=info msg="CreateContainer within sandbox \"ada2bc31dfce65b4b6a40ce952cf3287df9d720cfc07c8f5408b1bfefb41d03e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 05:30:56.240078 containerd[1503]: time="2025-01-30T05:30:56.239997496Z" level=info msg="CreateContainer within sandbox \"ada2bc31dfce65b4b6a40ce952cf3287df9d720cfc07c8f5408b1bfefb41d03e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0009afedf30328118920b0d4b96cb079a891a84af3d52ec1cd588941439e07d8\"" Jan 30 05:30:56.244762 containerd[1503]: time="2025-01-30T05:30:56.244674863Z" level=info msg="StartContainer for \"0009afedf30328118920b0d4b96cb079a891a84af3d52ec1cd588941439e07d8\"" Jan 30 05:30:56.246246 kubelet[2867]: I0130 05:30:56.241169 2867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-77nbp" podStartSLOduration=11.080511882 podStartE2EDuration="14.241134567s" podCreationTimestamp="2025-01-30 05:30:42 +0000 UTC" firstStartedPulling="2025-01-30 05:30:44.658293081 +0000 UTC m=+16.762140345" lastFinishedPulling="2025-01-30 05:30:47.818915796 +0000 UTC m=+19.922763030" observedRunningTime="2025-01-30 05:30:48.190460216 +0000 UTC m=+20.294307481" watchObservedRunningTime="2025-01-30 05:30:56.241134567 +0000 UTC m=+28.344981861" Jan 30 
05:30:56.301827 systemd[1]: Started cri-containerd-0009afedf30328118920b0d4b96cb079a891a84af3d52ec1cd588941439e07d8.scope - libcontainer container 0009afedf30328118920b0d4b96cb079a891a84af3d52ec1cd588941439e07d8. Jan 30 05:30:56.347436 containerd[1503]: time="2025-01-30T05:30:56.347387603Z" level=info msg="StartContainer for \"0009afedf30328118920b0d4b96cb079a891a84af3d52ec1cd588941439e07d8\" returns successfully" Jan 30 05:30:56.364453 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 05:30:56.365437 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 05:30:56.365697 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 05:30:56.373123 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 05:30:56.373427 systemd[1]: cri-containerd-0009afedf30328118920b0d4b96cb079a891a84af3d52ec1cd588941439e07d8.scope: Deactivated successfully. Jan 30 05:30:56.415050 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 30 05:30:56.416747 containerd[1503]: time="2025-01-30T05:30:56.416376813Z" level=info msg="shim disconnected" id=0009afedf30328118920b0d4b96cb079a891a84af3d52ec1cd588941439e07d8 namespace=k8s.io Jan 30 05:30:56.416747 containerd[1503]: time="2025-01-30T05:30:56.416445091Z" level=warning msg="cleaning up after shim disconnected" id=0009afedf30328118920b0d4b96cb079a891a84af3d52ec1cd588941439e07d8 namespace=k8s.io Jan 30 05:30:56.416747 containerd[1503]: time="2025-01-30T05:30:56.416454328Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:30:56.433107 containerd[1503]: time="2025-01-30T05:30:56.433016374Z" level=warning msg="cleanup warnings time=\"2025-01-30T05:30:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 05:30:56.459065 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03da9eae64929b63df616ce1567be63885764844262db3803cc96cb4eb7cb846-rootfs.mount: Deactivated successfully. 
Jan 30 05:30:57.219727 containerd[1503]: time="2025-01-30T05:30:57.219622372Z" level=info msg="CreateContainer within sandbox \"ada2bc31dfce65b4b6a40ce952cf3287df9d720cfc07c8f5408b1bfefb41d03e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 05:30:57.280246 containerd[1503]: time="2025-01-30T05:30:57.279553880Z" level=info msg="CreateContainer within sandbox \"ada2bc31dfce65b4b6a40ce952cf3287df9d720cfc07c8f5408b1bfefb41d03e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1d0b7b1a52faa77487400fda2772821f19b386973c29531fa9de459e454c863c\"" Jan 30 05:30:57.280481 containerd[1503]: time="2025-01-30T05:30:57.280427795Z" level=info msg="StartContainer for \"1d0b7b1a52faa77487400fda2772821f19b386973c29531fa9de459e454c863c\"" Jan 30 05:30:57.339782 systemd[1]: Started cri-containerd-1d0b7b1a52faa77487400fda2772821f19b386973c29531fa9de459e454c863c.scope - libcontainer container 1d0b7b1a52faa77487400fda2772821f19b386973c29531fa9de459e454c863c. Jan 30 05:30:57.403939 containerd[1503]: time="2025-01-30T05:30:57.403849640Z" level=info msg="StartContainer for \"1d0b7b1a52faa77487400fda2772821f19b386973c29531fa9de459e454c863c\" returns successfully" Jan 30 05:30:57.413382 systemd[1]: cri-containerd-1d0b7b1a52faa77487400fda2772821f19b386973c29531fa9de459e454c863c.scope: Deactivated successfully. 
Jan 30 05:30:57.444904 containerd[1503]: time="2025-01-30T05:30:57.444796875Z" level=info msg="shim disconnected" id=1d0b7b1a52faa77487400fda2772821f19b386973c29531fa9de459e454c863c namespace=k8s.io Jan 30 05:30:57.444904 containerd[1503]: time="2025-01-30T05:30:57.444849324Z" level=warning msg="cleaning up after shim disconnected" id=1d0b7b1a52faa77487400fda2772821f19b386973c29531fa9de459e454c863c namespace=k8s.io Jan 30 05:30:57.444904 containerd[1503]: time="2025-01-30T05:30:57.444857970Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:30:57.460062 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d0b7b1a52faa77487400fda2772821f19b386973c29531fa9de459e454c863c-rootfs.mount: Deactivated successfully. Jan 30 05:30:58.225158 containerd[1503]: time="2025-01-30T05:30:58.224941080Z" level=info msg="CreateContainer within sandbox \"ada2bc31dfce65b4b6a40ce952cf3287df9d720cfc07c8f5408b1bfefb41d03e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 05:30:58.267082 containerd[1503]: time="2025-01-30T05:30:58.266875979Z" level=info msg="CreateContainer within sandbox \"ada2bc31dfce65b4b6a40ce952cf3287df9d720cfc07c8f5408b1bfefb41d03e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9d793e5d20f44ec964bb90be3c9e71b8cc462a5744172caacfd5a29cfe862d3d\"" Jan 30 05:30:58.269894 containerd[1503]: time="2025-01-30T05:30:58.269844267Z" level=info msg="StartContainer for \"9d793e5d20f44ec964bb90be3c9e71b8cc462a5744172caacfd5a29cfe862d3d\"" Jan 30 05:30:58.315700 systemd[1]: Started cri-containerd-9d793e5d20f44ec964bb90be3c9e71b8cc462a5744172caacfd5a29cfe862d3d.scope - libcontainer container 9d793e5d20f44ec964bb90be3c9e71b8cc462a5744172caacfd5a29cfe862d3d. Jan 30 05:30:58.357382 systemd[1]: cri-containerd-9d793e5d20f44ec964bb90be3c9e71b8cc462a5744172caacfd5a29cfe862d3d.scope: Deactivated successfully. 
Jan 30 05:30:58.361270 containerd[1503]: time="2025-01-30T05:30:58.360891868Z" level=info msg="StartContainer for \"9d793e5d20f44ec964bb90be3c9e71b8cc462a5744172caacfd5a29cfe862d3d\" returns successfully" Jan 30 05:30:58.397062 containerd[1503]: time="2025-01-30T05:30:58.396975419Z" level=info msg="shim disconnected" id=9d793e5d20f44ec964bb90be3c9e71b8cc462a5744172caacfd5a29cfe862d3d namespace=k8s.io Jan 30 05:30:58.397062 containerd[1503]: time="2025-01-30T05:30:58.397052435Z" level=warning msg="cleaning up after shim disconnected" id=9d793e5d20f44ec964bb90be3c9e71b8cc462a5744172caacfd5a29cfe862d3d namespace=k8s.io Jan 30 05:30:58.397062 containerd[1503]: time="2025-01-30T05:30:58.397065239Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:30:58.460197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d793e5d20f44ec964bb90be3c9e71b8cc462a5744172caacfd5a29cfe862d3d-rootfs.mount: Deactivated successfully. Jan 30 05:30:59.233807 containerd[1503]: time="2025-01-30T05:30:59.233720545Z" level=info msg="CreateContainer within sandbox \"ada2bc31dfce65b4b6a40ce952cf3287df9d720cfc07c8f5408b1bfefb41d03e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 05:30:59.322295 containerd[1503]: time="2025-01-30T05:30:59.322138603Z" level=info msg="CreateContainer within sandbox \"ada2bc31dfce65b4b6a40ce952cf3287df9d720cfc07c8f5408b1bfefb41d03e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1e75e1d645c1f5110a2308c3c0252ee612ddc2377f81cc8e5dec9e1728e404eb\"" Jan 30 05:30:59.324465 containerd[1503]: time="2025-01-30T05:30:59.324311994Z" level=info msg="StartContainer for \"1e75e1d645c1f5110a2308c3c0252ee612ddc2377f81cc8e5dec9e1728e404eb\"" Jan 30 05:30:59.373696 systemd[1]: Started cri-containerd-1e75e1d645c1f5110a2308c3c0252ee612ddc2377f81cc8e5dec9e1728e404eb.scope - libcontainer container 1e75e1d645c1f5110a2308c3c0252ee612ddc2377f81cc8e5dec9e1728e404eb. 
Jan 30 05:30:59.425710 containerd[1503]: time="2025-01-30T05:30:59.425642932Z" level=info msg="StartContainer for \"1e75e1d645c1f5110a2308c3c0252ee612ddc2377f81cc8e5dec9e1728e404eb\" returns successfully" Jan 30 05:30:59.584589 kubelet[2867]: I0130 05:30:59.584258 2867 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 05:30:59.621285 kubelet[2867]: I0130 05:30:59.621214 2867 topology_manager.go:215] "Topology Admit Handler" podUID="af8ff7da-8c79-4a48-b874-63b950999e89" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ltzdf" Jan 30 05:30:59.626124 kubelet[2867]: I0130 05:30:59.625464 2867 topology_manager.go:215] "Topology Admit Handler" podUID="11761383-8bd4-4055-ab51-e99ef53a9247" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fcj9g" Jan 30 05:30:59.631935 systemd[1]: Created slice kubepods-burstable-podaf8ff7da_8c79_4a48_b874_63b950999e89.slice - libcontainer container kubepods-burstable-podaf8ff7da_8c79_4a48_b874_63b950999e89.slice. Jan 30 05:30:59.640788 systemd[1]: Created slice kubepods-burstable-pod11761383_8bd4_4055_ab51_e99ef53a9247.slice - libcontainer container kubepods-burstable-pod11761383_8bd4_4055_ab51_e99ef53a9247.slice. 
Jan 30 05:30:59.793019 kubelet[2867]: I0130 05:30:59.792779 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p88gz\" (UniqueName: \"kubernetes.io/projected/af8ff7da-8c79-4a48-b874-63b950999e89-kube-api-access-p88gz\") pod \"coredns-7db6d8ff4d-ltzdf\" (UID: \"af8ff7da-8c79-4a48-b874-63b950999e89\") " pod="kube-system/coredns-7db6d8ff4d-ltzdf"
Jan 30 05:30:59.793019 kubelet[2867]: I0130 05:30:59.792834 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af8ff7da-8c79-4a48-b874-63b950999e89-config-volume\") pod \"coredns-7db6d8ff4d-ltzdf\" (UID: \"af8ff7da-8c79-4a48-b874-63b950999e89\") " pod="kube-system/coredns-7db6d8ff4d-ltzdf"
Jan 30 05:30:59.793019 kubelet[2867]: I0130 05:30:59.792862 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69bk7\" (UniqueName: \"kubernetes.io/projected/11761383-8bd4-4055-ab51-e99ef53a9247-kube-api-access-69bk7\") pod \"coredns-7db6d8ff4d-fcj9g\" (UID: \"11761383-8bd4-4055-ab51-e99ef53a9247\") " pod="kube-system/coredns-7db6d8ff4d-fcj9g"
Jan 30 05:30:59.793019 kubelet[2867]: I0130 05:30:59.792886 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11761383-8bd4-4055-ab51-e99ef53a9247-config-volume\") pod \"coredns-7db6d8ff4d-fcj9g\" (UID: \"11761383-8bd4-4055-ab51-e99ef53a9247\") " pod="kube-system/coredns-7db6d8ff4d-fcj9g"
Jan 30 05:30:59.939071 containerd[1503]: time="2025-01-30T05:30:59.938672298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ltzdf,Uid:af8ff7da-8c79-4a48-b874-63b950999e89,Namespace:kube-system,Attempt:0,}"
Jan 30 05:30:59.944798 containerd[1503]: time="2025-01-30T05:30:59.944586011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fcj9g,Uid:11761383-8bd4-4055-ab51-e99ef53a9247,Namespace:kube-system,Attempt:0,}"
Jan 30 05:31:01.884626 systemd-networkd[1403]: cilium_host: Link UP
Jan 30 05:31:01.884887 systemd-networkd[1403]: cilium_net: Link UP
Jan 30 05:31:01.884893 systemd-networkd[1403]: cilium_net: Gained carrier
Jan 30 05:31:01.885241 systemd-networkd[1403]: cilium_host: Gained carrier
Jan 30 05:31:02.036699 systemd-networkd[1403]: cilium_vxlan: Link UP
Jan 30 05:31:02.036715 systemd-networkd[1403]: cilium_vxlan: Gained carrier
Jan 30 05:31:02.037795 systemd-networkd[1403]: cilium_host: Gained IPv6LL
Jan 30 05:31:02.366739 systemd-networkd[1403]: cilium_net: Gained IPv6LL
Jan 30 05:31:02.568013 kernel: NET: Registered PF_ALG protocol family
Jan 30 05:31:03.508342 systemd-networkd[1403]: lxc_health: Link UP
Jan 30 05:31:03.517636 systemd-networkd[1403]: lxc_health: Gained carrier
Jan 30 05:31:03.951472 systemd-networkd[1403]: cilium_vxlan: Gained IPv6LL
Jan 30 05:31:04.031697 systemd-networkd[1403]: lxc333e93decb0a: Link UP
Jan 30 05:31:04.042946 systemd-networkd[1403]: lxc02c49b444e9a: Link UP
Jan 30 05:31:04.050161 kernel: eth0: renamed from tmp81d5e
Jan 30 05:31:04.063383 kernel: eth0: renamed from tmp5f3a1
Jan 30 05:31:04.062784 systemd-networkd[1403]: lxc333e93decb0a: Gained carrier
Jan 30 05:31:04.071914 systemd-networkd[1403]: lxc02c49b444e9a: Gained carrier
Jan 30 05:31:04.653757 systemd-networkd[1403]: lxc_health: Gained IPv6LL
Jan 30 05:31:04.672795 kubelet[2867]: I0130 05:31:04.672718 2867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vfjw7" podStartSLOduration=12.061886902 podStartE2EDuration="22.672597941s" podCreationTimestamp="2025-01-30 05:30:42 +0000 UTC" firstStartedPulling="2025-01-30 05:30:44.761126112 +0000 UTC m=+16.864973346" lastFinishedPulling="2025-01-30 05:30:55.371837111 +0000 UTC m=+27.475684385" observedRunningTime="2025-01-30 05:31:00.255025708 +0000 UTC m=+32.358872961" watchObservedRunningTime="2025-01-30 05:31:04.672597941 +0000 UTC m=+36.776445165"
Jan 30 05:31:05.229761 systemd-networkd[1403]: lxc333e93decb0a: Gained IPv6LL
Jan 30 05:31:05.869660 systemd-networkd[1403]: lxc02c49b444e9a: Gained IPv6LL
Jan 30 05:31:07.949390 containerd[1503]: time="2025-01-30T05:31:07.949198213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 05:31:07.952259 containerd[1503]: time="2025-01-30T05:31:07.949678727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 05:31:07.952259 containerd[1503]: time="2025-01-30T05:31:07.949698034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 05:31:07.952259 containerd[1503]: time="2025-01-30T05:31:07.950642911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 05:31:07.987639 systemd[1]: Started cri-containerd-5f3a16827308491c853c5eda782fd6c9cfce047a43f3cf41fadaa173bd8d19e6.scope - libcontainer container 5f3a16827308491c853c5eda782fd6c9cfce047a43f3cf41fadaa173bd8d19e6.
Jan 30 05:31:08.013265 containerd[1503]: time="2025-01-30T05:31:08.012917658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 05:31:08.013844 containerd[1503]: time="2025-01-30T05:31:08.013367395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 05:31:08.013844 containerd[1503]: time="2025-01-30T05:31:08.013604711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 05:31:08.018336 containerd[1503]: time="2025-01-30T05:31:08.017762534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 05:31:08.050650 systemd[1]: Started cri-containerd-81d5ee0b90d61077ba1bdd72c20dd81cccac127eefbcd751549bfdf326d22b65.scope - libcontainer container 81d5ee0b90d61077ba1bdd72c20dd81cccac127eefbcd751549bfdf326d22b65.
Jan 30 05:31:08.093071 containerd[1503]: time="2025-01-30T05:31:08.092738675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ltzdf,Uid:af8ff7da-8c79-4a48-b874-63b950999e89,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f3a16827308491c853c5eda782fd6c9cfce047a43f3cf41fadaa173bd8d19e6\""
Jan 30 05:31:08.099733 containerd[1503]: time="2025-01-30T05:31:08.098754715Z" level=info msg="CreateContainer within sandbox \"5f3a16827308491c853c5eda782fd6c9cfce047a43f3cf41fadaa173bd8d19e6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 05:31:08.130390 containerd[1503]: time="2025-01-30T05:31:08.130231629Z" level=info msg="CreateContainer within sandbox \"5f3a16827308491c853c5eda782fd6c9cfce047a43f3cf41fadaa173bd8d19e6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f9a8d033c12837af114dcdbeab10ee0eee55c10864ac9b8f61c123a85bfc1711\""
Jan 30 05:31:08.132391 containerd[1503]: time="2025-01-30T05:31:08.131610684Z" level=info msg="StartContainer for \"f9a8d033c12837af114dcdbeab10ee0eee55c10864ac9b8f61c123a85bfc1711\""
Jan 30 05:31:08.140164 containerd[1503]: time="2025-01-30T05:31:08.140139351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fcj9g,Uid:11761383-8bd4-4055-ab51-e99ef53a9247,Namespace:kube-system,Attempt:0,} returns sandbox id \"81d5ee0b90d61077ba1bdd72c20dd81cccac127eefbcd751549bfdf326d22b65\""
Jan 30 05:31:08.146111 containerd[1503]: time="2025-01-30T05:31:08.146046786Z" level=info msg="CreateContainer within sandbox \"81d5ee0b90d61077ba1bdd72c20dd81cccac127eefbcd751549bfdf326d22b65\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 05:31:08.166674 containerd[1503]: time="2025-01-30T05:31:08.166625646Z" level=info msg="CreateContainer within sandbox \"81d5ee0b90d61077ba1bdd72c20dd81cccac127eefbcd751549bfdf326d22b65\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5c6305d1fbc9125332d09c3702e111d273b9b79439d2affc1e75c687b7412da0\""
Jan 30 05:31:08.169335 containerd[1503]: time="2025-01-30T05:31:08.168605031Z" level=info msg="StartContainer for \"5c6305d1fbc9125332d09c3702e111d273b9b79439d2affc1e75c687b7412da0\""
Jan 30 05:31:08.193691 systemd[1]: Started cri-containerd-f9a8d033c12837af114dcdbeab10ee0eee55c10864ac9b8f61c123a85bfc1711.scope - libcontainer container f9a8d033c12837af114dcdbeab10ee0eee55c10864ac9b8f61c123a85bfc1711.
Jan 30 05:31:08.215701 systemd[1]: Started cri-containerd-5c6305d1fbc9125332d09c3702e111d273b9b79439d2affc1e75c687b7412da0.scope - libcontainer container 5c6305d1fbc9125332d09c3702e111d273b9b79439d2affc1e75c687b7412da0.
Jan 30 05:31:08.266609 containerd[1503]: time="2025-01-30T05:31:08.264923268Z" level=info msg="StartContainer for \"f9a8d033c12837af114dcdbeab10ee0eee55c10864ac9b8f61c123a85bfc1711\" returns successfully"
Jan 30 05:31:08.274983 containerd[1503]: time="2025-01-30T05:31:08.274948280Z" level=info msg="StartContainer for \"5c6305d1fbc9125332d09c3702e111d273b9b79439d2affc1e75c687b7412da0\" returns successfully"
Jan 30 05:31:09.325520 kubelet[2867]: I0130 05:31:09.325380 2867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-ltzdf" podStartSLOduration=27.325350747 podStartE2EDuration="27.325350747s" podCreationTimestamp="2025-01-30 05:30:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:31:09.297282148 +0000 UTC m=+41.401129422" watchObservedRunningTime="2025-01-30 05:31:09.325350747 +0000 UTC m=+41.429198022"
Jan 30 05:31:09.327832 kubelet[2867]: I0130 05:31:09.325614 2867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fcj9g" podStartSLOduration=27.325604064 podStartE2EDuration="27.325604064s" podCreationTimestamp="2025-01-30 05:30:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:31:09.324874551 +0000 UTC m=+41.428721825" watchObservedRunningTime="2025-01-30 05:31:09.325604064 +0000 UTC m=+41.429451339"
Jan 30 05:33:07.167001 systemd[1]: Started sshd@8-91.107.218.70:22-139.178.89.65:34154.service - OpenSSH per-connection server daemon (139.178.89.65:34154).
Jan 30 05:33:08.221733 sshd[4251]: Accepted publickey for core from 139.178.89.65 port 34154 ssh2: RSA SHA256:zZA1z7GFgtbU0jZJU58thBpHspAJycdRX50dJfyXWgo
Jan 30 05:33:08.223694 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:33:08.232512 systemd-logind[1483]: New session 8 of user core.
Jan 30 05:33:08.239812 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 30 05:33:09.634952 sshd[4255]: Connection closed by 139.178.89.65 port 34154
Jan 30 05:33:09.636032 sshd-session[4251]: pam_unix(sshd:session): session closed for user core
Jan 30 05:33:09.643866 systemd[1]: sshd@8-91.107.218.70:22-139.178.89.65:34154.service: Deactivated successfully.
Jan 30 05:33:09.650060 systemd[1]: session-8.scope: Deactivated successfully.
Jan 30 05:33:09.654665 systemd-logind[1483]: Session 8 logged out. Waiting for processes to exit.
Jan 30 05:33:09.657810 systemd-logind[1483]: Removed session 8.
Jan 30 05:33:14.816028 systemd[1]: Started sshd@9-91.107.218.70:22-139.178.89.65:35592.service - OpenSSH per-connection server daemon (139.178.89.65:35592).
Jan 30 05:33:15.840116 sshd[4267]: Accepted publickey for core from 139.178.89.65 port 35592 ssh2: RSA SHA256:zZA1z7GFgtbU0jZJU58thBpHspAJycdRX50dJfyXWgo
Jan 30 05:33:15.843733 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:33:15.853693 systemd-logind[1483]: New session 9 of user core.
Jan 30 05:33:15.868831 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 30 05:33:16.687303 sshd[4271]: Connection closed by 139.178.89.65 port 35592
Jan 30 05:33:16.689297 sshd-session[4267]: pam_unix(sshd:session): session closed for user core
Jan 30 05:33:16.699009 systemd-logind[1483]: Session 9 logged out. Waiting for processes to exit.
Jan 30 05:33:16.700087 systemd[1]: sshd@9-91.107.218.70:22-139.178.89.65:35592.service: Deactivated successfully.
Jan 30 05:33:16.705942 systemd[1]: session-9.scope: Deactivated successfully.
Jan 30 05:33:16.707909 systemd-logind[1483]: Removed session 9.
Jan 30 05:33:21.865018 systemd[1]: Started sshd@10-91.107.218.70:22-139.178.89.65:38068.service - OpenSSH per-connection server daemon (139.178.89.65:38068).
Jan 30 05:33:22.885803 sshd[4283]: Accepted publickey for core from 139.178.89.65 port 38068 ssh2: RSA SHA256:zZA1z7GFgtbU0jZJU58thBpHspAJycdRX50dJfyXWgo
Jan 30 05:33:22.888608 sshd-session[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:33:22.897047 systemd-logind[1483]: New session 10 of user core.
Jan 30 05:33:22.903867 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 30 05:33:23.707552 sshd[4286]: Connection closed by 139.178.89.65 port 38068
Jan 30 05:33:23.708584 sshd-session[4283]: pam_unix(sshd:session): session closed for user core
Jan 30 05:33:23.717421 systemd[1]: sshd@10-91.107.218.70:22-139.178.89.65:38068.service: Deactivated successfully.
Jan 30 05:33:23.722298 systemd[1]: session-10.scope: Deactivated successfully.
Jan 30 05:33:23.724040 systemd-logind[1483]: Session 10 logged out. Waiting for processes to exit.
Jan 30 05:33:23.725780 systemd-logind[1483]: Removed session 10.
Jan 30 05:33:23.890178 systemd[1]: Started sshd@11-91.107.218.70:22-139.178.89.65:38078.service - OpenSSH per-connection server daemon (139.178.89.65:38078).
Jan 30 05:33:24.910547 sshd[4298]: Accepted publickey for core from 139.178.89.65 port 38078 ssh2: RSA SHA256:zZA1z7GFgtbU0jZJU58thBpHspAJycdRX50dJfyXWgo
Jan 30 05:33:24.912230 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:33:24.921845 systemd-logind[1483]: New session 11 of user core.
Jan 30 05:33:24.931805 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 30 05:33:25.799673 sshd[4300]: Connection closed by 139.178.89.65 port 38078
Jan 30 05:33:25.802216 sshd-session[4298]: pam_unix(sshd:session): session closed for user core
Jan 30 05:33:25.810904 systemd[1]: sshd@11-91.107.218.70:22-139.178.89.65:38078.service: Deactivated successfully.
Jan 30 05:33:25.816207 systemd[1]: session-11.scope: Deactivated successfully.
Jan 30 05:33:25.819227 systemd-logind[1483]: Session 11 logged out. Waiting for processes to exit.
Jan 30 05:33:25.821314 systemd-logind[1483]: Removed session 11.
Jan 30 05:33:25.977984 systemd[1]: Started sshd@12-91.107.218.70:22-139.178.89.65:38082.service - OpenSSH per-connection server daemon (139.178.89.65:38082).
Jan 30 05:33:26.997627 sshd[4309]: Accepted publickey for core from 139.178.89.65 port 38082 ssh2: RSA SHA256:zZA1z7GFgtbU0jZJU58thBpHspAJycdRX50dJfyXWgo
Jan 30 05:33:27.000873 sshd-session[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:33:27.011333 systemd-logind[1483]: New session 12 of user core.
Jan 30 05:33:27.021831 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 30 05:33:27.806523 sshd[4311]: Connection closed by 139.178.89.65 port 38082
Jan 30 05:33:27.807775 sshd-session[4309]: pam_unix(sshd:session): session closed for user core
Jan 30 05:33:27.816343 systemd[1]: sshd@12-91.107.218.70:22-139.178.89.65:38082.service: Deactivated successfully.
Jan 30 05:33:27.820908 systemd[1]: session-12.scope: Deactivated successfully.
Jan 30 05:33:27.823140 systemd-logind[1483]: Session 12 logged out. Waiting for processes to exit.
Jan 30 05:33:27.825810 systemd-logind[1483]: Removed session 12.
Jan 30 05:33:32.986076 systemd[1]: Started sshd@13-91.107.218.70:22-139.178.89.65:53222.service - OpenSSH per-connection server daemon (139.178.89.65:53222).
Jan 30 05:33:34.024289 sshd[4323]: Accepted publickey for core from 139.178.89.65 port 53222 ssh2: RSA SHA256:zZA1z7GFgtbU0jZJU58thBpHspAJycdRX50dJfyXWgo
Jan 30 05:33:34.027723 sshd-session[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:33:34.036951 systemd-logind[1483]: New session 13 of user core.
Jan 30 05:33:34.042865 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 30 05:33:34.825247 sshd[4325]: Connection closed by 139.178.89.65 port 53222
Jan 30 05:33:34.826386 sshd-session[4323]: pam_unix(sshd:session): session closed for user core
Jan 30 05:33:34.833993 systemd[1]: sshd@13-91.107.218.70:22-139.178.89.65:53222.service: Deactivated successfully.
Jan 30 05:33:34.838798 systemd[1]: session-13.scope: Deactivated successfully.
Jan 30 05:33:34.840477 systemd-logind[1483]: Session 13 logged out. Waiting for processes to exit.
Jan 30 05:33:34.842710 systemd-logind[1483]: Removed session 13.
Jan 30 05:33:35.008678 systemd[1]: Started sshd@14-91.107.218.70:22-139.178.89.65:53228.service - OpenSSH per-connection server daemon (139.178.89.65:53228).
Jan 30 05:33:36.028713 sshd[4336]: Accepted publickey for core from 139.178.89.65 port 53228 ssh2: RSA SHA256:zZA1z7GFgtbU0jZJU58thBpHspAJycdRX50dJfyXWgo
Jan 30 05:33:36.031766 sshd-session[4336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:33:36.039833 systemd-logind[1483]: New session 14 of user core.
Jan 30 05:33:36.047753 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 30 05:33:37.088971 sshd[4338]: Connection closed by 139.178.89.65 port 53228
Jan 30 05:33:37.091307 sshd-session[4336]: pam_unix(sshd:session): session closed for user core
Jan 30 05:33:37.100309 systemd[1]: sshd@14-91.107.218.70:22-139.178.89.65:53228.service: Deactivated successfully.
Jan 30 05:33:37.105852 systemd[1]: session-14.scope: Deactivated successfully.
Jan 30 05:33:37.107194 systemd-logind[1483]: Session 14 logged out. Waiting for processes to exit.
Jan 30 05:33:37.109545 systemd-logind[1483]: Removed session 14.
Jan 30 05:33:37.263264 systemd[1]: Started sshd@15-91.107.218.70:22-139.178.89.65:53230.service - OpenSSH per-connection server daemon (139.178.89.65:53230).
Jan 30 05:33:38.261677 sshd[4348]: Accepted publickey for core from 139.178.89.65 port 53230 ssh2: RSA SHA256:zZA1z7GFgtbU0jZJU58thBpHspAJycdRX50dJfyXWgo
Jan 30 05:33:38.264956 sshd-session[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:33:38.276551 systemd-logind[1483]: New session 15 of user core.
Jan 30 05:33:38.281856 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 30 05:33:40.589148 sshd[4351]: Connection closed by 139.178.89.65 port 53230
Jan 30 05:33:40.590110 sshd-session[4348]: pam_unix(sshd:session): session closed for user core
Jan 30 05:33:40.610543 systemd[1]: sshd@15-91.107.218.70:22-139.178.89.65:53230.service: Deactivated successfully.
Jan 30 05:33:40.615715 systemd[1]: session-15.scope: Deactivated successfully.
Jan 30 05:33:40.617347 systemd-logind[1483]: Session 15 logged out. Waiting for processes to exit.
Jan 30 05:33:40.620821 systemd-logind[1483]: Removed session 15.
Jan 30 05:33:40.768103 systemd[1]: Started sshd@16-91.107.218.70:22-139.178.89.65:53234.service - OpenSSH per-connection server daemon (139.178.89.65:53234).
Jan 30 05:33:41.791403 sshd[4367]: Accepted publickey for core from 139.178.89.65 port 53234 ssh2: RSA SHA256:zZA1z7GFgtbU0jZJU58thBpHspAJycdRX50dJfyXWgo
Jan 30 05:33:41.795099 sshd-session[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:33:41.807616 systemd-logind[1483]: New session 16 of user core.
Jan 30 05:33:41.813809 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 30 05:33:42.785690 sshd[4369]: Connection closed by 139.178.89.65 port 53234
Jan 30 05:33:42.786289 sshd-session[4367]: pam_unix(sshd:session): session closed for user core
Jan 30 05:33:42.793189 systemd[1]: sshd@16-91.107.218.70:22-139.178.89.65:53234.service: Deactivated successfully.
Jan 30 05:33:42.800467 systemd[1]: session-16.scope: Deactivated successfully.
Jan 30 05:33:42.805319 systemd-logind[1483]: Session 16 logged out. Waiting for processes to exit.
Jan 30 05:33:42.808544 systemd-logind[1483]: Removed session 16.
Jan 30 05:33:42.968189 systemd[1]: Started sshd@17-91.107.218.70:22-139.178.89.65:41694.service - OpenSSH per-connection server daemon (139.178.89.65:41694).
Jan 30 05:33:43.985662 sshd[4378]: Accepted publickey for core from 139.178.89.65 port 41694 ssh2: RSA SHA256:zZA1z7GFgtbU0jZJU58thBpHspAJycdRX50dJfyXWgo
Jan 30 05:33:43.988753 sshd-session[4378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:33:43.998370 systemd-logind[1483]: New session 17 of user core.
Jan 30 05:33:44.005799 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 30 05:33:44.807541 sshd[4380]: Connection closed by 139.178.89.65 port 41694
Jan 30 05:33:44.808316 sshd-session[4378]: pam_unix(sshd:session): session closed for user core
Jan 30 05:33:44.813927 systemd[1]: sshd@17-91.107.218.70:22-139.178.89.65:41694.service: Deactivated successfully.
Jan 30 05:33:44.818304 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 05:33:44.819285 systemd-logind[1483]: Session 17 logged out. Waiting for processes to exit.
Jan 30 05:33:44.820670 systemd-logind[1483]: Removed session 17.
Jan 30 05:33:49.985036 systemd[1]: Started sshd@18-91.107.218.70:22-139.178.89.65:41698.service - OpenSSH per-connection server daemon (139.178.89.65:41698).
Jan 30 05:33:50.993577 sshd[4396]: Accepted publickey for core from 139.178.89.65 port 41698 ssh2: RSA SHA256:zZA1z7GFgtbU0jZJU58thBpHspAJycdRX50dJfyXWgo
Jan 30 05:33:50.996930 sshd-session[4396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:33:51.007064 systemd-logind[1483]: New session 18 of user core.
Jan 30 05:33:51.014808 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 05:33:51.777933 sshd[4398]: Connection closed by 139.178.89.65 port 41698
Jan 30 05:33:51.779090 sshd-session[4396]: pam_unix(sshd:session): session closed for user core
Jan 30 05:33:51.786815 systemd-logind[1483]: Session 18 logged out. Waiting for processes to exit.
Jan 30 05:33:51.787885 systemd[1]: sshd@18-91.107.218.70:22-139.178.89.65:41698.service: Deactivated successfully.
Jan 30 05:33:51.794199 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 05:33:51.796915 systemd-logind[1483]: Removed session 18.
Jan 30 05:33:52.079785 systemd[1]: Started sshd@19-91.107.218.70:22-111.67.201.36:46978.service - OpenSSH per-connection server daemon (111.67.201.36:46978).
Jan 30 05:33:52.123066 sshd[4409]: Connection closed by 111.67.201.36 port 46978
Jan 30 05:33:52.124562 systemd[1]: sshd@19-91.107.218.70:22-111.67.201.36:46978.service: Deactivated successfully.
Jan 30 05:33:52.376038 systemd[1]: Started sshd@20-91.107.218.70:22-111.67.201.36:50394.service - OpenSSH per-connection server daemon (111.67.201.36:50394).
Jan 30 05:33:53.507802 sshd[4413]: Invalid user kafka from 111.67.201.36 port 50394
Jan 30 05:33:53.773995 sshd[4413]: Connection closed by invalid user kafka 111.67.201.36 port 50394 [preauth]
Jan 30 05:33:53.779804 systemd[1]: sshd@20-91.107.218.70:22-111.67.201.36:50394.service: Deactivated successfully.
Jan 30 05:33:54.017872 systemd[1]: Started sshd@21-91.107.218.70:22-111.67.201.36:52106.service - OpenSSH per-connection server daemon (111.67.201.36:52106).
Jan 30 05:33:54.997551 sshd[4419]: Invalid user test from 111.67.201.36 port 52106
Jan 30 05:33:55.154123 sshd[4419]: Connection closed by invalid user test 111.67.201.36 port 52106 [preauth]
Jan 30 05:33:55.159553 systemd[1]: sshd@21-91.107.218.70:22-111.67.201.36:52106.service: Deactivated successfully.
Jan 30 05:33:55.396114 systemd[1]: Started sshd@22-91.107.218.70:22-111.67.201.36:60368.service - OpenSSH per-connection server daemon (111.67.201.36:60368).
Jan 30 05:33:56.454725 sshd[4425]: Invalid user odoo from 111.67.201.36 port 60368
Jan 30 05:33:56.696568 sshd[4425]: Connection closed by invalid user odoo 111.67.201.36 port 60368 [preauth]
Jan 30 05:33:56.702390 systemd[1]: sshd@22-91.107.218.70:22-111.67.201.36:60368.service: Deactivated successfully.
Jan 30 05:33:56.968053 systemd[1]: Started sshd@23-91.107.218.70:22-139.178.89.65:33206.service - OpenSSH per-connection server daemon (139.178.89.65:33206).
Jan 30 05:33:56.976931 systemd[1]: Started sshd@24-91.107.218.70:22-111.67.201.36:33018.service - OpenSSH per-connection server daemon (111.67.201.36:33018).
Jan 30 05:33:57.961269 sshd[4431]: Accepted publickey for core from 139.178.89.65 port 33206 ssh2: RSA SHA256:zZA1z7GFgtbU0jZJU58thBpHspAJycdRX50dJfyXWgo
Jan 30 05:33:57.964881 sshd-session[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:33:57.973731 systemd-logind[1483]: New session 19 of user core.
Jan 30 05:33:57.980856 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 05:33:58.208537 sshd[4432]: Connection closed by authenticating user root 111.67.201.36 port 33018 [preauth]
Jan 30 05:33:58.214025 systemd[1]: sshd@24-91.107.218.70:22-111.67.201.36:33018.service: Deactivated successfully.
Jan 30 05:33:58.493305 systemd[1]: Started sshd@25-91.107.218.70:22-111.67.201.36:41356.service - OpenSSH per-connection server daemon (111.67.201.36:41356).
Jan 30 05:33:58.767355 sshd[4435]: Connection closed by 139.178.89.65 port 33206
Jan 30 05:33:58.769017 sshd-session[4431]: pam_unix(sshd:session): session closed for user core
Jan 30 05:33:58.774999 systemd[1]: sshd@23-91.107.218.70:22-139.178.89.65:33206.service: Deactivated successfully.
Jan 30 05:33:58.780212 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 05:33:58.784273 systemd-logind[1483]: Session 19 logged out. Waiting for processes to exit.
Jan 30 05:33:58.786676 systemd-logind[1483]: Removed session 19.
Jan 30 05:33:58.945634 systemd[1]: Started sshd@26-91.107.218.70:22-139.178.89.65:33208.service - OpenSSH per-connection server daemon (139.178.89.65:33208).
Jan 30 05:33:59.523298 sshd[4445]: Invalid user postgres from 111.67.201.36 port 41356
Jan 30 05:33:59.780546 sshd[4445]: Connection closed by invalid user postgres 111.67.201.36 port 41356 [preauth]
Jan 30 05:33:59.785793 systemd[1]: sshd@25-91.107.218.70:22-111.67.201.36:41356.service: Deactivated successfully.
Jan 30 05:33:59.959271 sshd[4451]: Accepted publickey for core from 139.178.89.65 port 33208 ssh2: RSA SHA256:zZA1z7GFgtbU0jZJU58thBpHspAJycdRX50dJfyXWgo
Jan 30 05:33:59.962625 sshd-session[4451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:33:59.973961 systemd-logind[1483]: New session 20 of user core.
Jan 30 05:33:59.980882 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 05:34:00.014999 systemd[1]: Started sshd@27-91.107.218.70:22-111.67.201.36:42254.service - OpenSSH per-connection server daemon (111.67.201.36:42254).
Jan 30 05:34:01.052758 sshd[4457]: Invalid user apiserver from 111.67.201.36 port 42254
Jan 30 05:34:01.375708 sshd[4457]: Connection closed by invalid user apiserver 111.67.201.36 port 42254 [preauth]
Jan 30 05:34:01.380216 systemd[1]: sshd@27-91.107.218.70:22-111.67.201.36:42254.service: Deactivated successfully.
Jan 30 05:34:01.665988 systemd[1]: Started sshd@28-91.107.218.70:22-111.67.201.36:50574.service - OpenSSH per-connection server daemon (111.67.201.36:50574).
Jan 30 05:34:01.951765 containerd[1503]: time="2025-01-30T05:34:01.951200419Z" level=info msg="StopContainer for \"072f4de0890e178b5b245221eff45555fc7f847bdbfb0d2444ac81583cfddd52\" with timeout 30 (s)"
Jan 30 05:34:01.954549 containerd[1503]: time="2025-01-30T05:34:01.954480321Z" level=info msg="Stop container \"072f4de0890e178b5b245221eff45555fc7f847bdbfb0d2444ac81583cfddd52\" with signal terminated"
Jan 30 05:34:01.963051 containerd[1503]: time="2025-01-30T05:34:01.962998583Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 05:34:01.977584 systemd[1]: cri-containerd-072f4de0890e178b5b245221eff45555fc7f847bdbfb0d2444ac81583cfddd52.scope: Deactivated successfully.
Jan 30 05:34:02.005630 containerd[1503]: time="2025-01-30T05:34:02.005289863Z" level=info msg="StopContainer for \"1e75e1d645c1f5110a2308c3c0252ee612ddc2377f81cc8e5dec9e1728e404eb\" with timeout 2 (s)"
Jan 30 05:34:02.007538 containerd[1503]: time="2025-01-30T05:34:02.006894056Z" level=info msg="Stop container \"1e75e1d645c1f5110a2308c3c0252ee612ddc2377f81cc8e5dec9e1728e404eb\" with signal terminated"
Jan 30 05:34:02.022626 systemd-networkd[1403]: lxc_health: Link DOWN
Jan 30 05:34:02.023598 systemd-networkd[1403]: lxc_health: Lost carrier
Jan 30 05:34:02.049624 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-072f4de0890e178b5b245221eff45555fc7f847bdbfb0d2444ac81583cfddd52-rootfs.mount: Deactivated successfully.
Jan 30 05:34:02.054386 systemd[1]: cri-containerd-1e75e1d645c1f5110a2308c3c0252ee612ddc2377f81cc8e5dec9e1728e404eb.scope: Deactivated successfully.
Jan 30 05:34:02.054798 systemd[1]: cri-containerd-1e75e1d645c1f5110a2308c3c0252ee612ddc2377f81cc8e5dec9e1728e404eb.scope: Consumed 8.591s CPU time.
Jan 30 05:34:02.063806 containerd[1503]: time="2025-01-30T05:34:02.063589976Z" level=info msg="shim disconnected" id=072f4de0890e178b5b245221eff45555fc7f847bdbfb0d2444ac81583cfddd52 namespace=k8s.io
Jan 30 05:34:02.063806 containerd[1503]: time="2025-01-30T05:34:02.063653345Z" level=warning msg="cleaning up after shim disconnected" id=072f4de0890e178b5b245221eff45555fc7f847bdbfb0d2444ac81583cfddd52 namespace=k8s.io
Jan 30 05:34:02.063806 containerd[1503]: time="2025-01-30T05:34:02.063661129Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:34:02.079250 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e75e1d645c1f5110a2308c3c0252ee612ddc2377f81cc8e5dec9e1728e404eb-rootfs.mount: Deactivated successfully.
Jan 30 05:34:02.093852 containerd[1503]: time="2025-01-30T05:34:02.093248082Z" level=info msg="shim disconnected" id=1e75e1d645c1f5110a2308c3c0252ee612ddc2377f81cc8e5dec9e1728e404eb namespace=k8s.io
Jan 30 05:34:02.093852 containerd[1503]: time="2025-01-30T05:34:02.093307162Z" level=warning msg="cleaning up after shim disconnected" id=1e75e1d645c1f5110a2308c3c0252ee612ddc2377f81cc8e5dec9e1728e404eb namespace=k8s.io
Jan 30 05:34:02.093852 containerd[1503]: time="2025-01-30T05:34:02.093317492Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:34:02.094623 containerd[1503]: time="2025-01-30T05:34:02.094599410Z" level=info msg="StopContainer for \"072f4de0890e178b5b245221eff45555fc7f847bdbfb0d2444ac81583cfddd52\" returns successfully"
Jan 30 05:34:02.095917 containerd[1503]: time="2025-01-30T05:34:02.095898601Z" level=info msg="StopPodSandbox for \"657a729b399f1f4102932eb96b38b8ea8832daab6a9bc594b7904dc3417a60dd\""
Jan 30 05:34:02.096051 containerd[1503]: time="2025-01-30T05:34:02.096003728Z" level=info msg="Container to stop \"072f4de0890e178b5b245221eff45555fc7f847bdbfb0d2444ac81583cfddd52\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 05:34:02.098208 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-657a729b399f1f4102932eb96b38b8ea8832daab6a9bc594b7904dc3417a60dd-shm.mount: Deactivated successfully.
Jan 30 05:34:02.109162 containerd[1503]: time="2025-01-30T05:34:02.108625379Z" level=warning msg="cleanup warnings time=\"2025-01-30T05:34:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 30 05:34:02.112317 containerd[1503]: time="2025-01-30T05:34:02.112284042Z" level=info msg="StopContainer for \"1e75e1d645c1f5110a2308c3c0252ee612ddc2377f81cc8e5dec9e1728e404eb\" returns successfully"
Jan 30 05:34:02.112778 containerd[1503]: time="2025-01-30T05:34:02.112711556Z" level=info msg="StopPodSandbox for \"ada2bc31dfce65b4b6a40ce952cf3287df9d720cfc07c8f5408b1bfefb41d03e\""
Jan 30 05:34:02.112778 containerd[1503]: time="2025-01-30T05:34:02.112751461Z" level=info msg="Container to stop \"0009afedf30328118920b0d4b96cb079a891a84af3d52ec1cd588941439e07d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 05:34:02.112778 containerd[1503]: time="2025-01-30T05:34:02.112780255Z" level=info msg="Container to stop \"1d0b7b1a52faa77487400fda2772821f19b386973c29531fa9de459e454c863c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 05:34:02.112778 containerd[1503]: time="2025-01-30T05:34:02.112787869Z" level=info msg="Container to stop \"9d793e5d20f44ec964bb90be3c9e71b8cc462a5744172caacfd5a29cfe862d3d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 05:34:02.113043 containerd[1503]: time="2025-01-30T05:34:02.112795313Z" level=info msg="Container to stop \"1e75e1d645c1f5110a2308c3c0252ee612ddc2377f81cc8e5dec9e1728e404eb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 05:34:02.113043 containerd[1503]: time="2025-01-30T05:34:02.112802907Z" level=info msg="Container to stop \"03da9eae64929b63df616ce1567be63885764844262db3803cc96cb4eb7cb846\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 05:34:02.115772 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ada2bc31dfce65b4b6a40ce952cf3287df9d720cfc07c8f5408b1bfefb41d03e-shm.mount: Deactivated successfully.
Jan 30 05:34:02.117058 systemd[1]: cri-containerd-657a729b399f1f4102932eb96b38b8ea8832daab6a9bc594b7904dc3417a60dd.scope: Deactivated successfully.
Jan 30 05:34:02.129912 systemd[1]: cri-containerd-ada2bc31dfce65b4b6a40ce952cf3287df9d720cfc07c8f5408b1bfefb41d03e.scope: Deactivated successfully.
Jan 30 05:34:02.164445 containerd[1503]: time="2025-01-30T05:34:02.164196235Z" level=info msg="shim disconnected" id=657a729b399f1f4102932eb96b38b8ea8832daab6a9bc594b7904dc3417a60dd namespace=k8s.io
Jan 30 05:34:02.164445 containerd[1503]: time="2025-01-30T05:34:02.164439081Z" level=warning msg="cleaning up after shim disconnected" id=657a729b399f1f4102932eb96b38b8ea8832daab6a9bc594b7904dc3417a60dd namespace=k8s.io
Jan 30 05:34:02.164445 containerd[1503]: time="2025-01-30T05:34:02.164448148Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:34:02.165599 containerd[1503]: time="2025-01-30T05:34:02.164290612Z" level=info msg="shim disconnected" id=ada2bc31dfce65b4b6a40ce952cf3287df9d720cfc07c8f5408b1bfefb41d03e namespace=k8s.io
Jan 30 05:34:02.165599 containerd[1503]: time="2025-01-30T05:34:02.165011336Z" level=warning msg="cleaning up after shim disconnected" id=ada2bc31dfce65b4b6a40ce952cf3287df9d720cfc07c8f5408b1bfefb41d03e namespace=k8s.io
Jan 30 05:34:02.165599 containerd[1503]: time="2025-01-30T05:34:02.165018680Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:34:02.183912 containerd[1503]: time="2025-01-30T05:34:02.183868792Z" level=info msg="TearDown network for sandbox \"657a729b399f1f4102932eb96b38b8ea8832daab6a9bc594b7904dc3417a60dd\" successfully"
Jan 30 05:34:02.183912 containerd[1503]: time="2025-01-30T05:34:02.183901784Z" level=info msg="StopPodSandbox for \"657a729b399f1f4102932eb96b38b8ea8832daab6a9bc594b7904dc3417a60dd\" returns successfully"
Jan 30 05:34:02.186498 containerd[1503]: time="2025-01-30T05:34:02.186450793Z" level=info msg="TearDown network for sandbox \"ada2bc31dfce65b4b6a40ce952cf3287df9d720cfc07c8f5408b1bfefb41d03e\" successfully"
Jan 30 05:34:02.186498 containerd[1503]: time="2025-01-30T05:34:02.186470891Z" level=info msg="StopPodSandbox for \"ada2bc31dfce65b4b6a40ce952cf3287df9d720cfc07c8f5408b1bfefb41d03e\" returns successfully"
Jan 30 05:34:02.350469 kubelet[2867]: I0130 05:34:02.349618 2867 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-xtables-lock\") pod \"c6c04acb-168b-4772-81e8-9b6642052623\" (UID: \"c6c04acb-168b-4772-81e8-9b6642052623\") "
Jan 30 05:34:02.350469 kubelet[2867]: I0130 05:34:02.349711 2867 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-bpf-maps\") pod \"c6c04acb-168b-4772-81e8-9b6642052623\" (UID: \"c6c04acb-168b-4772-81e8-9b6642052623\") "
Jan 30 05:34:02.350469 kubelet[2867]: I0130 05:34:02.349764 2867 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-cni-path\") pod \"c6c04acb-168b-4772-81e8-9b6642052623\" (UID: \"c6c04acb-168b-4772-81e8-9b6642052623\") "
Jan 30 05:34:02.350469 kubelet[2867]: I0130 05:34:02.349793 2867 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-lib-modules\") pod \"c6c04acb-168b-4772-81e8-9b6642052623\" (UID: \"c6c04acb-168b-4772-81e8-9b6642052623\") "
Jan 30 05:34:02.350469 kubelet[2867]: I0130 05:34:02.349836 2867 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6c04acb-168b-4772-81e8-9b6642052623-hubble-tls\") pod \"c6c04acb-168b-4772-81e8-9b6642052623\" (UID: \"c6c04acb-168b-4772-81e8-9b6642052623\") "
Jan 30 05:34:02.350469 kubelet[2867]: I0130 05:34:02.349862 2867 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-cilium-cgroup\") pod \"c6c04acb-168b-4772-81e8-9b6642052623\" (UID: \"c6c04acb-168b-4772-81e8-9b6642052623\") "
Jan 30 05:34:02.351261 kubelet[2867]: I0130 05:34:02.349893 2867 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc5babab-146e-463e-84e6-444584a41b6a-cilium-config-path\") pod \"fc5babab-146e-463e-84e6-444584a41b6a\" (UID: \"fc5babab-146e-463e-84e6-444584a41b6a\") "
Jan 30 05:34:02.351261 kubelet[2867]: I0130 05:34:02.349927 2867 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-455dk\" (UniqueName: \"kubernetes.io/projected/fc5babab-146e-463e-84e6-444584a41b6a-kube-api-access-455dk\") pod \"fc5babab-146e-463e-84e6-444584a41b6a\" (UID: \"fc5babab-146e-463e-84e6-444584a41b6a\") "
Jan 30 05:34:02.351261 kubelet[2867]: I0130 05:34:02.349954 2867 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-cilium-run\") pod \"c6c04acb-168b-4772-81e8-9b6642052623\" (UID: \"c6c04acb-168b-4772-81e8-9b6642052623\") "
Jan 30 05:34:02.351261 kubelet[2867]: I0130 05:34:02.349986 2867 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6c04acb-168b-4772-81e8-9b6642052623-clustermesh-secrets\") pod \"c6c04acb-168b-4772-81e8-9b6642052623\" (UID: \"c6c04acb-168b-4772-81e8-9b6642052623\") " Jan 30 05:34:02.351261 kubelet[2867]: I0130 05:34:02.350013 2867 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-host-proc-sys-kernel\") pod \"c6c04acb-168b-4772-81e8-9b6642052623\" (UID: \"c6c04acb-168b-4772-81e8-9b6642052623\") " Jan 30 05:34:02.351261 kubelet[2867]: I0130 05:34:02.350039 2867 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-hostproc\") pod \"c6c04acb-168b-4772-81e8-9b6642052623\" (UID: \"c6c04acb-168b-4772-81e8-9b6642052623\") " Jan 30 05:34:02.351467 kubelet[2867]: I0130 05:34:02.350069 2867 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqbfz\" (UniqueName: \"kubernetes.io/projected/c6c04acb-168b-4772-81e8-9b6642052623-kube-api-access-dqbfz\") pod \"c6c04acb-168b-4772-81e8-9b6642052623\" (UID: \"c6c04acb-168b-4772-81e8-9b6642052623\") " Jan 30 05:34:02.351467 kubelet[2867]: I0130 05:34:02.350095 2867 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-etc-cni-netd\") pod \"c6c04acb-168b-4772-81e8-9b6642052623\" (UID: \"c6c04acb-168b-4772-81e8-9b6642052623\") " Jan 30 05:34:02.351467 kubelet[2867]: I0130 05:34:02.350122 2867 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-host-proc-sys-net\") pod 
\"c6c04acb-168b-4772-81e8-9b6642052623\" (UID: \"c6c04acb-168b-4772-81e8-9b6642052623\") " Jan 30 05:34:02.351467 kubelet[2867]: I0130 05:34:02.350156 2867 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6c04acb-168b-4772-81e8-9b6642052623-cilium-config-path\") pod \"c6c04acb-168b-4772-81e8-9b6642052623\" (UID: \"c6c04acb-168b-4772-81e8-9b6642052623\") " Jan 30 05:34:02.351467 kubelet[2867]: I0130 05:34:02.346607 2867 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c6c04acb-168b-4772-81e8-9b6642052623" (UID: "c6c04acb-168b-4772-81e8-9b6642052623"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:34:02.368113 kubelet[2867]: I0130 05:34:02.366992 2867 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c6c04acb-168b-4772-81e8-9b6642052623" (UID: "c6c04acb-168b-4772-81e8-9b6642052623"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:34:02.368713 kubelet[2867]: I0130 05:34:02.368668 2867 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c6c04acb-168b-4772-81e8-9b6642052623" (UID: "c6c04acb-168b-4772-81e8-9b6642052623"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:34:02.369201 kubelet[2867]: I0130 05:34:02.368871 2867 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-hostproc" (OuterVolumeSpecName: "hostproc") pod "c6c04acb-168b-4772-81e8-9b6642052623" (UID: "c6c04acb-168b-4772-81e8-9b6642052623"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:34:02.373778 kubelet[2867]: I0130 05:34:02.373693 2867 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c6c04acb-168b-4772-81e8-9b6642052623" (UID: "c6c04acb-168b-4772-81e8-9b6642052623"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:34:02.374026 kubelet[2867]: I0130 05:34:02.373969 2867 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-cni-path" (OuterVolumeSpecName: "cni-path") pod "c6c04acb-168b-4772-81e8-9b6642052623" (UID: "c6c04acb-168b-4772-81e8-9b6642052623"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:34:02.374194 kubelet[2867]: I0130 05:34:02.374147 2867 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c6c04acb-168b-4772-81e8-9b6642052623" (UID: "c6c04acb-168b-4772-81e8-9b6642052623"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:34:02.374658 kubelet[2867]: I0130 05:34:02.374290 2867 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c6c04acb-168b-4772-81e8-9b6642052623" (UID: "c6c04acb-168b-4772-81e8-9b6642052623"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:34:02.374977 kubelet[2867]: I0130 05:34:02.374309 2867 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c6c04acb-168b-4772-81e8-9b6642052623" (UID: "c6c04acb-168b-4772-81e8-9b6642052623"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:34:02.374977 kubelet[2867]: I0130 05:34:02.374461 2867 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6c04acb-168b-4772-81e8-9b6642052623-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c6c04acb-168b-4772-81e8-9b6642052623" (UID: "c6c04acb-168b-4772-81e8-9b6642052623"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:34:02.374977 kubelet[2867]: I0130 05:34:02.374471 2867 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5babab-146e-463e-84e6-444584a41b6a-kube-api-access-455dk" (OuterVolumeSpecName: "kube-api-access-455dk") pod "fc5babab-146e-463e-84e6-444584a41b6a" (UID: "fc5babab-146e-463e-84e6-444584a41b6a"). InnerVolumeSpecName "kube-api-access-455dk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:34:02.375343 kubelet[2867]: I0130 05:34:02.375172 2867 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c04acb-168b-4772-81e8-9b6642052623-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c6c04acb-168b-4772-81e8-9b6642052623" (UID: "c6c04acb-168b-4772-81e8-9b6642052623"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:34:02.375343 kubelet[2867]: I0130 05:34:02.375223 2867 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c6c04acb-168b-4772-81e8-9b6642052623" (UID: "c6c04acb-168b-4772-81e8-9b6642052623"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:34:02.380651 kubelet[2867]: I0130 05:34:02.380460 2867 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6c04acb-168b-4772-81e8-9b6642052623-kube-api-access-dqbfz" (OuterVolumeSpecName: "kube-api-access-dqbfz") pod "c6c04acb-168b-4772-81e8-9b6642052623" (UID: "c6c04acb-168b-4772-81e8-9b6642052623"). InnerVolumeSpecName "kube-api-access-dqbfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:34:02.382891 kubelet[2867]: I0130 05:34:02.382812 2867 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6c04acb-168b-4772-81e8-9b6642052623-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c6c04acb-168b-4772-81e8-9b6642052623" (UID: "c6c04acb-168b-4772-81e8-9b6642052623"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:34:02.384147 kubelet[2867]: I0130 05:34:02.384064 2867 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc5babab-146e-463e-84e6-444584a41b6a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fc5babab-146e-463e-84e6-444584a41b6a" (UID: "fc5babab-146e-463e-84e6-444584a41b6a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:34:02.457354 kubelet[2867]: I0130 05:34:02.457274 2867 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6c04acb-168b-4772-81e8-9b6642052623-clustermesh-secrets\") on node \"ci-4186-1-0-3-26ada394c1\" DevicePath \"\"" Jan 30 05:34:02.457354 kubelet[2867]: I0130 05:34:02.457356 2867 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-host-proc-sys-kernel\") on node \"ci-4186-1-0-3-26ada394c1\" DevicePath \"\"" Jan 30 05:34:02.457852 kubelet[2867]: I0130 05:34:02.457389 2867 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-hostproc\") on node \"ci-4186-1-0-3-26ada394c1\" DevicePath \"\"" Jan 30 05:34:02.457852 kubelet[2867]: I0130 05:34:02.457413 2867 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-dqbfz\" (UniqueName: \"kubernetes.io/projected/c6c04acb-168b-4772-81e8-9b6642052623-kube-api-access-dqbfz\") on node \"ci-4186-1-0-3-26ada394c1\" DevicePath \"\"" Jan 30 05:34:02.457852 kubelet[2867]: I0130 05:34:02.457441 2867 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-etc-cni-netd\") on node \"ci-4186-1-0-3-26ada394c1\" DevicePath \"\"" Jan 30 05:34:02.457852 kubelet[2867]: I0130 
05:34:02.457465 2867 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-host-proc-sys-net\") on node \"ci-4186-1-0-3-26ada394c1\" DevicePath \"\"" Jan 30 05:34:02.457852 kubelet[2867]: I0130 05:34:02.457532 2867 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6c04acb-168b-4772-81e8-9b6642052623-cilium-config-path\") on node \"ci-4186-1-0-3-26ada394c1\" DevicePath \"\"" Jan 30 05:34:02.457852 kubelet[2867]: I0130 05:34:02.457560 2867 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-xtables-lock\") on node \"ci-4186-1-0-3-26ada394c1\" DevicePath \"\"" Jan 30 05:34:02.457852 kubelet[2867]: I0130 05:34:02.457583 2867 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-bpf-maps\") on node \"ci-4186-1-0-3-26ada394c1\" DevicePath \"\"" Jan 30 05:34:02.457852 kubelet[2867]: I0130 05:34:02.457604 2867 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-cni-path\") on node \"ci-4186-1-0-3-26ada394c1\" DevicePath \"\"" Jan 30 05:34:02.458436 kubelet[2867]: I0130 05:34:02.457626 2867 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc5babab-146e-463e-84e6-444584a41b6a-cilium-config-path\") on node \"ci-4186-1-0-3-26ada394c1\" DevicePath \"\"" Jan 30 05:34:02.458436 kubelet[2867]: I0130 05:34:02.457648 2867 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-lib-modules\") on node \"ci-4186-1-0-3-26ada394c1\" DevicePath \"\"" Jan 30 05:34:02.458436 kubelet[2867]: I0130 
05:34:02.457672 2867 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6c04acb-168b-4772-81e8-9b6642052623-hubble-tls\") on node \"ci-4186-1-0-3-26ada394c1\" DevicePath \"\"" Jan 30 05:34:02.458436 kubelet[2867]: I0130 05:34:02.457693 2867 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-cilium-cgroup\") on node \"ci-4186-1-0-3-26ada394c1\" DevicePath \"\"" Jan 30 05:34:02.458436 kubelet[2867]: I0130 05:34:02.457715 2867 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-455dk\" (UniqueName: \"kubernetes.io/projected/fc5babab-146e-463e-84e6-444584a41b6a-kube-api-access-455dk\") on node \"ci-4186-1-0-3-26ada394c1\" DevicePath \"\"" Jan 30 05:34:02.458436 kubelet[2867]: I0130 05:34:02.457764 2867 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6c04acb-168b-4772-81e8-9b6642052623-cilium-run\") on node \"ci-4186-1-0-3-26ada394c1\" DevicePath \"\"" Jan 30 05:34:02.721394 kubelet[2867]: I0130 05:34:02.720471 2867 scope.go:117] "RemoveContainer" containerID="1e75e1d645c1f5110a2308c3c0252ee612ddc2377f81cc8e5dec9e1728e404eb" Jan 30 05:34:02.733465 systemd[1]: Removed slice kubepods-burstable-podc6c04acb_168b_4772_81e8_9b6642052623.slice - libcontainer container kubepods-burstable-podc6c04acb_168b_4772_81e8_9b6642052623.slice. Jan 30 05:34:02.733951 systemd[1]: kubepods-burstable-podc6c04acb_168b_4772_81e8_9b6642052623.slice: Consumed 8.723s CPU time. Jan 30 05:34:02.768444 containerd[1503]: time="2025-01-30T05:34:02.767815137Z" level=info msg="RemoveContainer for \"1e75e1d645c1f5110a2308c3c0252ee612ddc2377f81cc8e5dec9e1728e404eb\"" Jan 30 05:34:02.787824 systemd[1]: Removed slice kubepods-besteffort-podfc5babab_146e_463e_84e6_444584a41b6a.slice - libcontainer container kubepods-besteffort-podfc5babab_146e_463e_84e6_444584a41b6a.slice. 
Jan 30 05:34:02.790571 containerd[1503]: time="2025-01-30T05:34:02.790462313Z" level=info msg="RemoveContainer for \"1e75e1d645c1f5110a2308c3c0252ee612ddc2377f81cc8e5dec9e1728e404eb\" returns successfully" Jan 30 05:34:02.791108 kubelet[2867]: I0130 05:34:02.791038 2867 scope.go:117] "RemoveContainer" containerID="9d793e5d20f44ec964bb90be3c9e71b8cc462a5744172caacfd5a29cfe862d3d" Jan 30 05:34:02.824084 containerd[1503]: time="2025-01-30T05:34:02.823664907Z" level=info msg="RemoveContainer for \"9d793e5d20f44ec964bb90be3c9e71b8cc462a5744172caacfd5a29cfe862d3d\"" Jan 30 05:34:02.829513 containerd[1503]: time="2025-01-30T05:34:02.829436890Z" level=info msg="RemoveContainer for \"9d793e5d20f44ec964bb90be3c9e71b8cc462a5744172caacfd5a29cfe862d3d\" returns successfully" Jan 30 05:34:02.829764 kubelet[2867]: I0130 05:34:02.829709 2867 scope.go:117] "RemoveContainer" containerID="1d0b7b1a52faa77487400fda2772821f19b386973c29531fa9de459e454c863c" Jan 30 05:34:02.831387 containerd[1503]: time="2025-01-30T05:34:02.831016327Z" level=info msg="RemoveContainer for \"1d0b7b1a52faa77487400fda2772821f19b386973c29531fa9de459e454c863c\"" Jan 30 05:34:02.835725 containerd[1503]: time="2025-01-30T05:34:02.835644742Z" level=info msg="RemoveContainer for \"1d0b7b1a52faa77487400fda2772821f19b386973c29531fa9de459e454c863c\" returns successfully" Jan 30 05:34:02.836081 kubelet[2867]: I0130 05:34:02.835840 2867 scope.go:117] "RemoveContainer" containerID="0009afedf30328118920b0d4b96cb079a891a84af3d52ec1cd588941439e07d8" Jan 30 05:34:02.837005 containerd[1503]: time="2025-01-30T05:34:02.836970243Z" level=info msg="RemoveContainer for \"0009afedf30328118920b0d4b96cb079a891a84af3d52ec1cd588941439e07d8\"" Jan 30 05:34:02.841659 containerd[1503]: time="2025-01-30T05:34:02.841621280Z" level=info msg="RemoveContainer for \"0009afedf30328118920b0d4b96cb079a891a84af3d52ec1cd588941439e07d8\" returns successfully" Jan 30 05:34:02.841911 kubelet[2867]: I0130 05:34:02.841814 2867 scope.go:117] 
"RemoveContainer" containerID="03da9eae64929b63df616ce1567be63885764844262db3803cc96cb4eb7cb846" Jan 30 05:34:02.843052 containerd[1503]: time="2025-01-30T05:34:02.843011862Z" level=info msg="RemoveContainer for \"03da9eae64929b63df616ce1567be63885764844262db3803cc96cb4eb7cb846\"" Jan 30 05:34:02.848287 containerd[1503]: time="2025-01-30T05:34:02.848217552Z" level=info msg="RemoveContainer for \"03da9eae64929b63df616ce1567be63885764844262db3803cc96cb4eb7cb846\" returns successfully" Jan 30 05:34:02.848696 kubelet[2867]: I0130 05:34:02.848505 2867 scope.go:117] "RemoveContainer" containerID="1e75e1d645c1f5110a2308c3c0252ee612ddc2377f81cc8e5dec9e1728e404eb" Jan 30 05:34:02.849358 containerd[1503]: time="2025-01-30T05:34:02.849053732Z" level=error msg="ContainerStatus for \"1e75e1d645c1f5110a2308c3c0252ee612ddc2377f81cc8e5dec9e1728e404eb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1e75e1d645c1f5110a2308c3c0252ee612ddc2377f81cc8e5dec9e1728e404eb\": not found" Jan 30 05:34:02.860888 kubelet[2867]: E0130 05:34:02.860829 2867 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1e75e1d645c1f5110a2308c3c0252ee612ddc2377f81cc8e5dec9e1728e404eb\": not found" containerID="1e75e1d645c1f5110a2308c3c0252ee612ddc2377f81cc8e5dec9e1728e404eb" Jan 30 05:34:02.862929 kubelet[2867]: I0130 05:34:02.862808 2867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1e75e1d645c1f5110a2308c3c0252ee612ddc2377f81cc8e5dec9e1728e404eb"} err="failed to get container status \"1e75e1d645c1f5110a2308c3c0252ee612ddc2377f81cc8e5dec9e1728e404eb\": rpc error: code = NotFound desc = an error occurred when try to find container \"1e75e1d645c1f5110a2308c3c0252ee612ddc2377f81cc8e5dec9e1728e404eb\": not found" Jan 30 05:34:02.862929 kubelet[2867]: I0130 05:34:02.862922 2867 scope.go:117] "RemoveContainer" 
containerID="9d793e5d20f44ec964bb90be3c9e71b8cc462a5744172caacfd5a29cfe862d3d" Jan 30 05:34:02.863293 containerd[1503]: time="2025-01-30T05:34:02.863253378Z" level=error msg="ContainerStatus for \"9d793e5d20f44ec964bb90be3c9e71b8cc462a5744172caacfd5a29cfe862d3d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9d793e5d20f44ec964bb90be3c9e71b8cc462a5744172caacfd5a29cfe862d3d\": not found" Jan 30 05:34:02.863578 kubelet[2867]: E0130 05:34:02.863524 2867 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9d793e5d20f44ec964bb90be3c9e71b8cc462a5744172caacfd5a29cfe862d3d\": not found" containerID="9d793e5d20f44ec964bb90be3c9e71b8cc462a5744172caacfd5a29cfe862d3d" Jan 30 05:34:02.863578 kubelet[2867]: I0130 05:34:02.863550 2867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9d793e5d20f44ec964bb90be3c9e71b8cc462a5744172caacfd5a29cfe862d3d"} err="failed to get container status \"9d793e5d20f44ec964bb90be3c9e71b8cc462a5744172caacfd5a29cfe862d3d\": rpc error: code = NotFound desc = an error occurred when try to find container \"9d793e5d20f44ec964bb90be3c9e71b8cc462a5744172caacfd5a29cfe862d3d\": not found" Jan 30 05:34:02.863578 kubelet[2867]: I0130 05:34:02.863568 2867 scope.go:117] "RemoveContainer" containerID="1d0b7b1a52faa77487400fda2772821f19b386973c29531fa9de459e454c863c" Jan 30 05:34:02.864110 kubelet[2867]: E0130 05:34:02.863961 2867 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d0b7b1a52faa77487400fda2772821f19b386973c29531fa9de459e454c863c\": not found" containerID="1d0b7b1a52faa77487400fda2772821f19b386973c29531fa9de459e454c863c" Jan 30 05:34:02.864110 kubelet[2867]: I0130 05:34:02.863990 2867 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"1d0b7b1a52faa77487400fda2772821f19b386973c29531fa9de459e454c863c"} err="failed to get container status \"1d0b7b1a52faa77487400fda2772821f19b386973c29531fa9de459e454c863c\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d0b7b1a52faa77487400fda2772821f19b386973c29531fa9de459e454c863c\": not found" Jan 30 05:34:02.864110 kubelet[2867]: I0130 05:34:02.864009 2867 scope.go:117] "RemoveContainer" containerID="0009afedf30328118920b0d4b96cb079a891a84af3d52ec1cd588941439e07d8" Jan 30 05:34:02.864218 containerd[1503]: time="2025-01-30T05:34:02.863838928Z" level=error msg="ContainerStatus for \"1d0b7b1a52faa77487400fda2772821f19b386973c29531fa9de459e454c863c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1d0b7b1a52faa77487400fda2772821f19b386973c29531fa9de459e454c863c\": not found" Jan 30 05:34:02.864218 containerd[1503]: time="2025-01-30T05:34:02.864176201Z" level=error msg="ContainerStatus for \"0009afedf30328118920b0d4b96cb079a891a84af3d52ec1cd588941439e07d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0009afedf30328118920b0d4b96cb079a891a84af3d52ec1cd588941439e07d8\": not found" Jan 30 05:34:02.864294 kubelet[2867]: E0130 05:34:02.864280 2867 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0009afedf30328118920b0d4b96cb079a891a84af3d52ec1cd588941439e07d8\": not found" containerID="0009afedf30328118920b0d4b96cb079a891a84af3d52ec1cd588941439e07d8" Jan 30 05:34:02.864327 kubelet[2867]: I0130 05:34:02.864301 2867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0009afedf30328118920b0d4b96cb079a891a84af3d52ec1cd588941439e07d8"} err="failed to get container status \"0009afedf30328118920b0d4b96cb079a891a84af3d52ec1cd588941439e07d8\": rpc error: 
code = NotFound desc = an error occurred when try to find container \"0009afedf30328118920b0d4b96cb079a891a84af3d52ec1cd588941439e07d8\": not found" Jan 30 05:34:02.864327 kubelet[2867]: I0130 05:34:02.864318 2867 scope.go:117] "RemoveContainer" containerID="03da9eae64929b63df616ce1567be63885764844262db3803cc96cb4eb7cb846" Jan 30 05:34:02.864553 containerd[1503]: time="2025-01-30T05:34:02.864515078Z" level=error msg="ContainerStatus for \"03da9eae64929b63df616ce1567be63885764844262db3803cc96cb4eb7cb846\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"03da9eae64929b63df616ce1567be63885764844262db3803cc96cb4eb7cb846\": not found" Jan 30 05:34:02.864707 kubelet[2867]: E0130 05:34:02.864662 2867 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"03da9eae64929b63df616ce1567be63885764844262db3803cc96cb4eb7cb846\": not found" containerID="03da9eae64929b63df616ce1567be63885764844262db3803cc96cb4eb7cb846" Jan 30 05:34:02.864707 kubelet[2867]: I0130 05:34:02.864688 2867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"03da9eae64929b63df616ce1567be63885764844262db3803cc96cb4eb7cb846"} err="failed to get container status \"03da9eae64929b63df616ce1567be63885764844262db3803cc96cb4eb7cb846\": rpc error: code = NotFound desc = an error occurred when try to find container \"03da9eae64929b63df616ce1567be63885764844262db3803cc96cb4eb7cb846\": not found" Jan 30 05:34:02.864707 kubelet[2867]: I0130 05:34:02.864704 2867 scope.go:117] "RemoveContainer" containerID="072f4de0890e178b5b245221eff45555fc7f847bdbfb0d2444ac81583cfddd52" Jan 30 05:34:02.865824 containerd[1503]: time="2025-01-30T05:34:02.865776919Z" level=info msg="RemoveContainer for \"072f4de0890e178b5b245221eff45555fc7f847bdbfb0d2444ac81583cfddd52\"" Jan 30 05:34:02.871412 containerd[1503]: 
time="2025-01-30T05:34:02.871368945Z" level=info msg="RemoveContainer for \"072f4de0890e178b5b245221eff45555fc7f847bdbfb0d2444ac81583cfddd52\" returns successfully" Jan 30 05:34:02.871771 kubelet[2867]: I0130 05:34:02.871650 2867 scope.go:117] "RemoveContainer" containerID="072f4de0890e178b5b245221eff45555fc7f847bdbfb0d2444ac81583cfddd52" Jan 30 05:34:02.872110 containerd[1503]: time="2025-01-30T05:34:02.872033703Z" level=error msg="ContainerStatus for \"072f4de0890e178b5b245221eff45555fc7f847bdbfb0d2444ac81583cfddd52\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"072f4de0890e178b5b245221eff45555fc7f847bdbfb0d2444ac81583cfddd52\": not found" Jan 30 05:34:02.872219 kubelet[2867]: E0130 05:34:02.872165 2867 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"072f4de0890e178b5b245221eff45555fc7f847bdbfb0d2444ac81583cfddd52\": not found" containerID="072f4de0890e178b5b245221eff45555fc7f847bdbfb0d2444ac81583cfddd52" Jan 30 05:34:02.872219 kubelet[2867]: I0130 05:34:02.872193 2867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"072f4de0890e178b5b245221eff45555fc7f847bdbfb0d2444ac81583cfddd52"} err="failed to get container status \"072f4de0890e178b5b245221eff45555fc7f847bdbfb0d2444ac81583cfddd52\": rpc error: code = NotFound desc = an error occurred when try to find container \"072f4de0890e178b5b245221eff45555fc7f847bdbfb0d2444ac81583cfddd52\": not found" Jan 30 05:34:02.942613 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ada2bc31dfce65b4b6a40ce952cf3287df9d720cfc07c8f5408b1bfefb41d03e-rootfs.mount: Deactivated successfully. Jan 30 05:34:02.942856 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-657a729b399f1f4102932eb96b38b8ea8832daab6a9bc594b7904dc3417a60dd-rootfs.mount: Deactivated successfully. 
Jan 30 05:34:02.943019 systemd[1]: var-lib-kubelet-pods-c6c04acb\x2d168b\x2d4772\x2d81e8\x2d9b6642052623-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddqbfz.mount: Deactivated successfully. Jan 30 05:34:02.943206 systemd[1]: var-lib-kubelet-pods-fc5babab\x2d146e\x2d463e\x2d84e6\x2d444584a41b6a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d455dk.mount: Deactivated successfully. Jan 30 05:34:02.943383 systemd[1]: var-lib-kubelet-pods-c6c04acb\x2d168b\x2d4772\x2d81e8\x2d9b6642052623-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 30 05:34:02.943611 systemd[1]: var-lib-kubelet-pods-c6c04acb\x2d168b\x2d4772\x2d81e8\x2d9b6642052623-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 30 05:34:03.280877 kubelet[2867]: E0130 05:34:03.280651 2867 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 05:34:03.394388 kubelet[2867]: I0130 05:34:03.394280 2867 setters.go:580] "Node became not ready" node="ci-4186-1-0-3-26ada394c1" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T05:34:03Z","lastTransitionTime":"2025-01-30T05:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 30 05:34:03.570985 sshd[4467]: Invalid user kubelet from 111.67.201.36 port 50574 Jan 30 05:34:03.840332 sshd[4467]: Connection closed by invalid user kubelet 111.67.201.36 port 50574 [preauth] Jan 30 05:34:03.846057 systemd[1]: sshd@28-91.107.218.70:22-111.67.201.36:50574.service: Deactivated successfully. 
Jan 30 05:34:03.941419 sshd[4455]: Connection closed by 139.178.89.65 port 33208
Jan 30 05:34:03.942230 sshd-session[4451]: pam_unix(sshd:session): session closed for user core
Jan 30 05:34:03.948695 systemd[1]: sshd@26-91.107.218.70:22-139.178.89.65:33208.service: Deactivated successfully.
Jan 30 05:34:03.953164 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 05:34:03.955160 systemd-logind[1483]: Session 20 logged out. Waiting for processes to exit.
Jan 30 05:34:03.957029 systemd-logind[1483]: Removed session 20.
Jan 30 05:34:04.087580 kubelet[2867]: I0130 05:34:04.086360 2867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6c04acb-168b-4772-81e8-9b6642052623" path="/var/lib/kubelet/pods/c6c04acb-168b-4772-81e8-9b6642052623/volumes"
Jan 30 05:34:04.087580 kubelet[2867]: I0130 05:34:04.087339 2867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc5babab-146e-463e-84e6-444584a41b6a" path="/var/lib/kubelet/pods/fc5babab-146e-463e-84e6-444584a41b6a/volumes"
Jan 30 05:34:04.135168 systemd[1]: Started sshd@29-91.107.218.70:22-139.178.89.65:53252.service - OpenSSH per-connection server daemon (139.178.89.65:53252).
Jan 30 05:34:04.144580 systemd[1]: Started sshd@30-91.107.218.70:22-111.67.201.36:59388.service - OpenSSH per-connection server daemon (111.67.201.36:59388).
Jan 30 05:34:05.135582 sshd[4626]: Accepted publickey for core from 139.178.89.65 port 53252 ssh2: RSA SHA256:zZA1z7GFgtbU0jZJU58thBpHspAJycdRX50dJfyXWgo
Jan 30 05:34:05.138941 sshd-session[4626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:34:05.151102 systemd-logind[1483]: New session 21 of user core.
Jan 30 05:34:05.161235 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 05:34:05.405212 sshd[4627]: Connection closed by authenticating user root 111.67.201.36 port 59388 [preauth]
Jan 30 05:34:05.408612 systemd[1]: sshd@30-91.107.218.70:22-111.67.201.36:59388.service: Deactivated successfully.
Jan 30 05:34:05.680129 systemd[1]: Started sshd@31-91.107.218.70:22-111.67.201.36:60740.service - OpenSSH per-connection server daemon (111.67.201.36:60740).
Jan 30 05:34:06.312595 kubelet[2867]: I0130 05:34:06.308774 2867 topology_manager.go:215] "Topology Admit Handler" podUID="e98ddede-02f0-4931-ac86-9b7cb1bbe669" podNamespace="kube-system" podName="cilium-p6nv6"
Jan 30 05:34:06.316544 kubelet[2867]: E0130 05:34:06.316186 2867 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c6c04acb-168b-4772-81e8-9b6642052623" containerName="clean-cilium-state"
Jan 30 05:34:06.316544 kubelet[2867]: E0130 05:34:06.316535 2867 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c6c04acb-168b-4772-81e8-9b6642052623" containerName="apply-sysctl-overwrites"
Jan 30 05:34:06.316544 kubelet[2867]: E0130 05:34:06.316544 2867 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c6c04acb-168b-4772-81e8-9b6642052623" containerName="mount-bpf-fs"
Jan 30 05:34:06.316544 kubelet[2867]: E0130 05:34:06.316551 2867 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fc5babab-146e-463e-84e6-444584a41b6a" containerName="cilium-operator"
Jan 30 05:34:06.316702 kubelet[2867]: E0130 05:34:06.316557 2867 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c6c04acb-168b-4772-81e8-9b6642052623" containerName="mount-cgroup"
Jan 30 05:34:06.316702 kubelet[2867]: E0130 05:34:06.316564 2867 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c6c04acb-168b-4772-81e8-9b6642052623" containerName="cilium-agent"
Jan 30 05:34:06.316702 kubelet[2867]: I0130 05:34:06.316606 2867 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc5babab-146e-463e-84e6-444584a41b6a" containerName="cilium-operator"
Jan 30 05:34:06.316702 kubelet[2867]: I0130 05:34:06.316613 2867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c04acb-168b-4772-81e8-9b6642052623" containerName="cilium-agent"
Jan 30 05:34:06.373021 systemd[1]: Created slice kubepods-burstable-pode98ddede_02f0_4931_ac86_9b7cb1bbe669.slice - libcontainer container kubepods-burstable-pode98ddede_02f0_4931_ac86_9b7cb1bbe669.slice.
Jan 30 05:34:06.482315 sshd[4630]: Connection closed by 139.178.89.65 port 53252
Jan 30 05:34:06.483294 sshd-session[4626]: pam_unix(sshd:session): session closed for user core
Jan 30 05:34:06.489451 systemd[1]: sshd@29-91.107.218.70:22-139.178.89.65:53252.service: Deactivated successfully.
Jan 30 05:34:06.491686 kubelet[2867]: I0130 05:34:06.491616 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e98ddede-02f0-4931-ac86-9b7cb1bbe669-cilium-cgroup\") pod \"cilium-p6nv6\" (UID: \"e98ddede-02f0-4931-ac86-9b7cb1bbe669\") " pod="kube-system/cilium-p6nv6"
Jan 30 05:34:06.492251 kubelet[2867]: I0130 05:34:06.492215 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e98ddede-02f0-4931-ac86-9b7cb1bbe669-cilium-ipsec-secrets\") pod \"cilium-p6nv6\" (UID: \"e98ddede-02f0-4931-ac86-9b7cb1bbe669\") " pod="kube-system/cilium-p6nv6"
Jan 30 05:34:06.492581 kubelet[2867]: I0130 05:34:06.492553 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e98ddede-02f0-4931-ac86-9b7cb1bbe669-etc-cni-netd\") pod \"cilium-p6nv6\" (UID: \"e98ddede-02f0-4931-ac86-9b7cb1bbe669\") " pod="kube-system/cilium-p6nv6"
Jan 30 05:34:06.493260 kubelet[2867]: I0130 05:34:06.493231 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e98ddede-02f0-4931-ac86-9b7cb1bbe669-clustermesh-secrets\") pod \"cilium-p6nv6\" (UID: \"e98ddede-02f0-4931-ac86-9b7cb1bbe669\") " pod="kube-system/cilium-p6nv6"
Jan 30 05:34:06.493462 kubelet[2867]: I0130 05:34:06.493435 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e98ddede-02f0-4931-ac86-9b7cb1bbe669-cilium-config-path\") pod \"cilium-p6nv6\" (UID: \"e98ddede-02f0-4931-ac86-9b7cb1bbe669\") " pod="kube-system/cilium-p6nv6"
Jan 30 05:34:06.493871 kubelet[2867]: I0130 05:34:06.493767 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e98ddede-02f0-4931-ac86-9b7cb1bbe669-hostproc\") pod \"cilium-p6nv6\" (UID: \"e98ddede-02f0-4931-ac86-9b7cb1bbe669\") " pod="kube-system/cilium-p6nv6"
Jan 30 05:34:06.494320 kubelet[2867]: I0130 05:34:06.494239 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e98ddede-02f0-4931-ac86-9b7cb1bbe669-cni-path\") pod \"cilium-p6nv6\" (UID: \"e98ddede-02f0-4931-ac86-9b7cb1bbe669\") " pod="kube-system/cilium-p6nv6"
Jan 30 05:34:06.494614 kubelet[2867]: I0130 05:34:06.494452 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e98ddede-02f0-4931-ac86-9b7cb1bbe669-host-proc-sys-net\") pod \"cilium-p6nv6\" (UID: \"e98ddede-02f0-4931-ac86-9b7cb1bbe669\") " pod="kube-system/cilium-p6nv6"
Jan 30 05:34:06.494614 kubelet[2867]: I0130 05:34:06.494552 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e98ddede-02f0-4931-ac86-9b7cb1bbe669-bpf-maps\") pod \"cilium-p6nv6\" (UID: \"e98ddede-02f0-4931-ac86-9b7cb1bbe669\") " pod="kube-system/cilium-p6nv6"
Jan 30 05:34:06.495063 kubelet[2867]: I0130 05:34:06.494585 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxdfj\" (UniqueName: \"kubernetes.io/projected/e98ddede-02f0-4931-ac86-9b7cb1bbe669-kube-api-access-mxdfj\") pod \"cilium-p6nv6\" (UID: \"e98ddede-02f0-4931-ac86-9b7cb1bbe669\") " pod="kube-system/cilium-p6nv6"
Jan 30 05:34:06.495339 kubelet[2867]: I0130 05:34:06.495184 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e98ddede-02f0-4931-ac86-9b7cb1bbe669-xtables-lock\") pod \"cilium-p6nv6\" (UID: \"e98ddede-02f0-4931-ac86-9b7cb1bbe669\") " pod="kube-system/cilium-p6nv6"
Jan 30 05:34:06.495581 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 05:34:06.495905 kubelet[2867]: I0130 05:34:06.495567 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e98ddede-02f0-4931-ac86-9b7cb1bbe669-host-proc-sys-kernel\") pod \"cilium-p6nv6\" (UID: \"e98ddede-02f0-4931-ac86-9b7cb1bbe669\") " pod="kube-system/cilium-p6nv6"
Jan 30 05:34:06.495905 kubelet[2867]: I0130 05:34:06.495679 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e98ddede-02f0-4931-ac86-9b7cb1bbe669-hubble-tls\") pod \"cilium-p6nv6\" (UID: \"e98ddede-02f0-4931-ac86-9b7cb1bbe669\") " pod="kube-system/cilium-p6nv6"
Jan 30 05:34:06.495905 kubelet[2867]: I0130 05:34:06.495725 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e98ddede-02f0-4931-ac86-9b7cb1bbe669-cilium-run\") pod \"cilium-p6nv6\" (UID: \"e98ddede-02f0-4931-ac86-9b7cb1bbe669\") " pod="kube-system/cilium-p6nv6"
Jan 30 05:34:06.496452 kubelet[2867]: I0130 05:34:06.496039 2867 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e98ddede-02f0-4931-ac86-9b7cb1bbe669-lib-modules\") pod \"cilium-p6nv6\" (UID: \"e98ddede-02f0-4931-ac86-9b7cb1bbe669\") " pod="kube-system/cilium-p6nv6"
Jan 30 05:34:06.499593 systemd-logind[1483]: Session 21 logged out. Waiting for processes to exit.
Jan 30 05:34:06.502696 systemd-logind[1483]: Removed session 21.
Jan 30 05:34:06.674011 systemd[1]: Started sshd@32-91.107.218.70:22-139.178.89.65:53266.service - OpenSSH per-connection server daemon (139.178.89.65:53266).
Jan 30 05:34:06.679473 containerd[1503]: time="2025-01-30T05:34:06.679421764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p6nv6,Uid:e98ddede-02f0-4931-ac86-9b7cb1bbe669,Namespace:kube-system,Attempt:0,}"
Jan 30 05:34:06.728050 containerd[1503]: time="2025-01-30T05:34:06.727881681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 05:34:06.728050 containerd[1503]: time="2025-01-30T05:34:06.727983632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 05:34:06.729751 containerd[1503]: time="2025-01-30T05:34:06.728021183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 05:34:06.729751 containerd[1503]: time="2025-01-30T05:34:06.728132391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 05:34:06.774758 systemd[1]: Started cri-containerd-80cf2ad3ea3f27ae43a84642d370b518393ea18c055f518ad10a5cd119671a13.scope - libcontainer container 80cf2ad3ea3f27ae43a84642d370b518393ea18c055f518ad10a5cd119671a13.
Jan 30 05:34:06.809611 sshd[4638]: Invalid user mcserver from 111.67.201.36 port 60740
Jan 30 05:34:06.821114 containerd[1503]: time="2025-01-30T05:34:06.821073291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p6nv6,Uid:e98ddede-02f0-4931-ac86-9b7cb1bbe669,Namespace:kube-system,Attempt:0,} returns sandbox id \"80cf2ad3ea3f27ae43a84642d370b518393ea18c055f518ad10a5cd119671a13\""
Jan 30 05:34:06.828819 containerd[1503]: time="2025-01-30T05:34:06.828553123Z" level=info msg="CreateContainer within sandbox \"80cf2ad3ea3f27ae43a84642d370b518393ea18c055f518ad10a5cd119671a13\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 30 05:34:06.845445 containerd[1503]: time="2025-01-30T05:34:06.845369664Z" level=info msg="CreateContainer within sandbox \"80cf2ad3ea3f27ae43a84642d370b518393ea18c055f518ad10a5cd119671a13\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"607e53d1dc9e8861d1bd35d297779e998770f9fdc561d762ee848b9dbbbb5827\""
Jan 30 05:34:06.848632 containerd[1503]: time="2025-01-30T05:34:06.847621845Z" level=info msg="StartContainer for \"607e53d1dc9e8861d1bd35d297779e998770f9fdc561d762ee848b9dbbbb5827\""
Jan 30 05:34:06.892839 systemd[1]: Started cri-containerd-607e53d1dc9e8861d1bd35d297779e998770f9fdc561d762ee848b9dbbbb5827.scope - libcontainer container 607e53d1dc9e8861d1bd35d297779e998770f9fdc561d762ee848b9dbbbb5827.
Jan 30 05:34:06.944828 containerd[1503]: time="2025-01-30T05:34:06.944685830Z" level=info msg="StartContainer for \"607e53d1dc9e8861d1bd35d297779e998770f9fdc561d762ee848b9dbbbb5827\" returns successfully"
Jan 30 05:34:06.966234 systemd[1]: cri-containerd-607e53d1dc9e8861d1bd35d297779e998770f9fdc561d762ee848b9dbbbb5827.scope: Deactivated successfully.
Jan 30 05:34:07.019559 containerd[1503]: time="2025-01-30T05:34:07.019441337Z" level=info msg="shim disconnected" id=607e53d1dc9e8861d1bd35d297779e998770f9fdc561d762ee848b9dbbbb5827 namespace=k8s.io
Jan 30 05:34:07.020030 containerd[1503]: time="2025-01-30T05:34:07.019630312Z" level=warning msg="cleaning up after shim disconnected" id=607e53d1dc9e8861d1bd35d297779e998770f9fdc561d762ee848b9dbbbb5827 namespace=k8s.io
Jan 30 05:34:07.020030 containerd[1503]: time="2025-01-30T05:34:07.019645710Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:34:07.048284 sshd[4638]: Connection closed by invalid user mcserver 111.67.201.36 port 60740 [preauth]
Jan 30 05:34:07.052242 systemd[1]: sshd@31-91.107.218.70:22-111.67.201.36:60740.service: Deactivated successfully.
Jan 30 05:34:07.333053 systemd[1]: Started sshd@33-91.107.218.70:22-111.67.201.36:40394.service - OpenSSH per-connection server daemon (111.67.201.36:40394).
Jan 30 05:34:07.683775 sshd[4649]: Accepted publickey for core from 139.178.89.65 port 53266 ssh2: RSA SHA256:zZA1z7GFgtbU0jZJU58thBpHspAJycdRX50dJfyXWgo
Jan 30 05:34:07.687193 sshd-session[4649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:34:07.698340 systemd-logind[1483]: New session 22 of user core.
Jan 30 05:34:07.707722 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 05:34:07.804046 containerd[1503]: time="2025-01-30T05:34:07.803979424Z" level=info msg="CreateContainer within sandbox \"80cf2ad3ea3f27ae43a84642d370b518393ea18c055f518ad10a5cd119671a13\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 30 05:34:07.835083 containerd[1503]: time="2025-01-30T05:34:07.833409021Z" level=info msg="CreateContainer within sandbox \"80cf2ad3ea3f27ae43a84642d370b518393ea18c055f518ad10a5cd119671a13\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"47c1c19d4cdead285584de1b41f1a2cdbdda3aeadd189803e96838e0b849b32a\""
Jan 30 05:34:07.840379 containerd[1503]: time="2025-01-30T05:34:07.840303714Z" level=info msg="StartContainer for \"47c1c19d4cdead285584de1b41f1a2cdbdda3aeadd189803e96838e0b849b32a\""
Jan 30 05:34:07.893645 systemd[1]: Started cri-containerd-47c1c19d4cdead285584de1b41f1a2cdbdda3aeadd189803e96838e0b849b32a.scope - libcontainer container 47c1c19d4cdead285584de1b41f1a2cdbdda3aeadd189803e96838e0b849b32a.
Jan 30 05:34:07.926953 containerd[1503]: time="2025-01-30T05:34:07.926897429Z" level=info msg="StartContainer for \"47c1c19d4cdead285584de1b41f1a2cdbdda3aeadd189803e96838e0b849b32a\" returns successfully"
Jan 30 05:34:07.936805 systemd[1]: cri-containerd-47c1c19d4cdead285584de1b41f1a2cdbdda3aeadd189803e96838e0b849b32a.scope: Deactivated successfully.
Jan 30 05:34:07.978611 containerd[1503]: time="2025-01-30T05:34:07.978532300Z" level=info msg="shim disconnected" id=47c1c19d4cdead285584de1b41f1a2cdbdda3aeadd189803e96838e0b849b32a namespace=k8s.io
Jan 30 05:34:07.978611 containerd[1503]: time="2025-01-30T05:34:07.978583116Z" level=warning msg="cleaning up after shim disconnected" id=47c1c19d4cdead285584de1b41f1a2cdbdda3aeadd189803e96838e0b849b32a namespace=k8s.io
Jan 30 05:34:07.978611 containerd[1503]: time="2025-01-30T05:34:07.978590881Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:34:08.282158 kubelet[2867]: E0130 05:34:08.282091 2867 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 30 05:34:08.368901 sshd[4759]: Connection closed by 139.178.89.65 port 53266
Jan 30 05:34:08.370004 sshd-session[4649]: pam_unix(sshd:session): session closed for user core
Jan 30 05:34:08.378140 systemd[1]: sshd@32-91.107.218.70:22-139.178.89.65:53266.service: Deactivated successfully.
Jan 30 05:34:08.384823 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 05:34:08.386159 systemd-logind[1483]: Session 22 logged out. Waiting for processes to exit.
Jan 30 05:34:08.388283 systemd-logind[1483]: Removed session 22.
Jan 30 05:34:08.546997 systemd[1]: Started sshd@34-91.107.218.70:22-139.178.89.65:53282.service - OpenSSH per-connection server daemon (139.178.89.65:53282).
Jan 30 05:34:08.609055 systemd[1]: run-containerd-runc-k8s.io-47c1c19d4cdead285584de1b41f1a2cdbdda3aeadd189803e96838e0b849b32a-runc.BbH8S7.mount: Deactivated successfully.
Jan 30 05:34:08.609172 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47c1c19d4cdead285584de1b41f1a2cdbdda3aeadd189803e96838e0b849b32a-rootfs.mount: Deactivated successfully.
Jan 30 05:34:08.813692 containerd[1503]: time="2025-01-30T05:34:08.813188499Z" level=info msg="CreateContainer within sandbox \"80cf2ad3ea3f27ae43a84642d370b518393ea18c055f518ad10a5cd119671a13\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 30 05:34:08.846259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3797544096.mount: Deactivated successfully.
Jan 30 05:34:08.849272 containerd[1503]: time="2025-01-30T05:34:08.849210430Z" level=info msg="CreateContainer within sandbox \"80cf2ad3ea3f27ae43a84642d370b518393ea18c055f518ad10a5cd119671a13\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2710394b8a3043b0dae489c3522743793e39f936fee2027e9334f3e2f594625f\""
Jan 30 05:34:08.850106 containerd[1503]: time="2025-01-30T05:34:08.850062230Z" level=info msg="StartContainer for \"2710394b8a3043b0dae489c3522743793e39f936fee2027e9334f3e2f594625f\""
Jan 30 05:34:08.891644 systemd[1]: Started cri-containerd-2710394b8a3043b0dae489c3522743793e39f936fee2027e9334f3e2f594625f.scope - libcontainer container 2710394b8a3043b0dae489c3522743793e39f936fee2027e9334f3e2f594625f.
Jan 30 05:34:08.937976 containerd[1503]: time="2025-01-30T05:34:08.937660633Z" level=info msg="StartContainer for \"2710394b8a3043b0dae489c3522743793e39f936fee2027e9334f3e2f594625f\" returns successfully"
Jan 30 05:34:08.946000 systemd[1]: cri-containerd-2710394b8a3043b0dae489c3522743793e39f936fee2027e9334f3e2f594625f.scope: Deactivated successfully.
Jan 30 05:34:08.976266 sshd[4757]: Connection closed by authenticating user root 111.67.201.36 port 40394 [preauth]
Jan 30 05:34:08.980049 systemd[1]: sshd@33-91.107.218.70:22-111.67.201.36:40394.service: Deactivated successfully.
Jan 30 05:34:08.990562 containerd[1503]: time="2025-01-30T05:34:08.990452899Z" level=info msg="shim disconnected" id=2710394b8a3043b0dae489c3522743793e39f936fee2027e9334f3e2f594625f namespace=k8s.io
Jan 30 05:34:08.990562 containerd[1503]: time="2025-01-30T05:34:08.990519454Z" level=warning msg="cleaning up after shim disconnected" id=2710394b8a3043b0dae489c3522743793e39f936fee2027e9334f3e2f594625f namespace=k8s.io
Jan 30 05:34:08.990562 containerd[1503]: time="2025-01-30T05:34:08.990528501Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:34:09.193984 systemd[1]: Started sshd@35-91.107.218.70:22-111.67.201.36:47046.service - OpenSSH per-connection server daemon (111.67.201.36:47046).
Jan 30 05:34:09.556479 sshd[4828]: Accepted publickey for core from 139.178.89.65 port 53282 ssh2: RSA SHA256:zZA1z7GFgtbU0jZJU58thBpHspAJycdRX50dJfyXWgo
Jan 30 05:34:09.562388 sshd-session[4828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:34:09.576819 systemd-logind[1483]: New session 23 of user core.
Jan 30 05:34:09.583718 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 05:34:09.611746 systemd[1]: run-containerd-runc-k8s.io-2710394b8a3043b0dae489c3522743793e39f936fee2027e9334f3e2f594625f-runc.tYvMdO.mount: Deactivated successfully.
Jan 30 05:34:09.611997 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2710394b8a3043b0dae489c3522743793e39f936fee2027e9334f3e2f594625f-rootfs.mount: Deactivated successfully.
Jan 30 05:34:09.825982 containerd[1503]: time="2025-01-30T05:34:09.825870609Z" level=info msg="CreateContainer within sandbox \"80cf2ad3ea3f27ae43a84642d370b518393ea18c055f518ad10a5cd119671a13\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 30 05:34:09.854976 containerd[1503]: time="2025-01-30T05:34:09.854706108Z" level=info msg="CreateContainer within sandbox \"80cf2ad3ea3f27ae43a84642d370b518393ea18c055f518ad10a5cd119671a13\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f8cb3950d802757e32c51018b46ab3ebd27ff6db6fdef0b2f5e868b133c9fc9c\""
Jan 30 05:34:09.856240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4155338157.mount: Deactivated successfully.
Jan 30 05:34:09.858015 containerd[1503]: time="2025-01-30T05:34:09.856858923Z" level=info msg="StartContainer for \"f8cb3950d802757e32c51018b46ab3ebd27ff6db6fdef0b2f5e868b133c9fc9c\""
Jan 30 05:34:09.902758 systemd[1]: Started cri-containerd-f8cb3950d802757e32c51018b46ab3ebd27ff6db6fdef0b2f5e868b133c9fc9c.scope - libcontainer container f8cb3950d802757e32c51018b46ab3ebd27ff6db6fdef0b2f5e868b133c9fc9c.
Jan 30 05:34:09.942904 systemd[1]: cri-containerd-f8cb3950d802757e32c51018b46ab3ebd27ff6db6fdef0b2f5e868b133c9fc9c.scope: Deactivated successfully.
Jan 30 05:34:09.943952 containerd[1503]: time="2025-01-30T05:34:09.943892966Z" level=info msg="StartContainer for \"f8cb3950d802757e32c51018b46ab3ebd27ff6db6fdef0b2f5e868b133c9fc9c\" returns successfully"
Jan 30 05:34:09.983629 containerd[1503]: time="2025-01-30T05:34:09.983523345Z" level=info msg="shim disconnected" id=f8cb3950d802757e32c51018b46ab3ebd27ff6db6fdef0b2f5e868b133c9fc9c namespace=k8s.io
Jan 30 05:34:09.983629 containerd[1503]: time="2025-01-30T05:34:09.983593768Z" level=warning msg="cleaning up after shim disconnected" id=f8cb3950d802757e32c51018b46ab3ebd27ff6db6fdef0b2f5e868b133c9fc9c namespace=k8s.io
Jan 30 05:34:09.983629 containerd[1503]: time="2025-01-30T05:34:09.983606892Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:34:10.334103 sshd[4892]: Invalid user centos from 111.67.201.36 port 47046
Jan 30 05:34:10.569428 sshd[4892]: Connection closed by invalid user centos 111.67.201.36 port 47046 [preauth]
Jan 30 05:34:10.574588 systemd[1]: sshd@35-91.107.218.70:22-111.67.201.36:47046.service: Deactivated successfully.
Jan 30 05:34:10.611637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8cb3950d802757e32c51018b46ab3ebd27ff6db6fdef0b2f5e868b133c9fc9c-rootfs.mount: Deactivated successfully.
Jan 30 05:34:10.859394 containerd[1503]: time="2025-01-30T05:34:10.858979571Z" level=info msg="CreateContainer within sandbox \"80cf2ad3ea3f27ae43a84642d370b518393ea18c055f518ad10a5cd119671a13\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 05:34:10.860970 systemd[1]: Started sshd@36-91.107.218.70:22-111.67.201.36:49934.service - OpenSSH per-connection server daemon (111.67.201.36:49934).
Jan 30 05:34:10.911110 containerd[1503]: time="2025-01-30T05:34:10.909862469Z" level=info msg="CreateContainer within sandbox \"80cf2ad3ea3f27ae43a84642d370b518393ea18c055f518ad10a5cd119671a13\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9c702988d2a76722c54b5a3ff52005cbd3e2bd8fcf42ae14abe2ea2837160729\""
Jan 30 05:34:10.911519 containerd[1503]: time="2025-01-30T05:34:10.911499495Z" level=info msg="StartContainer for \"9c702988d2a76722c54b5a3ff52005cbd3e2bd8fcf42ae14abe2ea2837160729\""
Jan 30 05:34:10.951645 systemd[1]: Started cri-containerd-9c702988d2a76722c54b5a3ff52005cbd3e2bd8fcf42ae14abe2ea2837160729.scope - libcontainer container 9c702988d2a76722c54b5a3ff52005cbd3e2bd8fcf42ae14abe2ea2837160729.
Jan 30 05:34:10.988006 containerd[1503]: time="2025-01-30T05:34:10.987880946Z" level=info msg="StartContainer for \"9c702988d2a76722c54b5a3ff52005cbd3e2bd8fcf42ae14abe2ea2837160729\" returns successfully"
Jan 30 05:34:11.084404 kubelet[2867]: E0130 05:34:11.084339 2867 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-fcj9g" podUID="11761383-8bd4-4055-ab51-e99ef53a9247"
Jan 30 05:34:11.756562 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 30 05:34:12.202160 sshd[4958]: Invalid user usr from 111.67.201.36 port 49934
Jan 30 05:34:12.435112 sshd[4958]: Connection closed by invalid user usr 111.67.201.36 port 49934 [preauth]
Jan 30 05:34:12.439018 systemd[1]: sshd@36-91.107.218.70:22-111.67.201.36:49934.service: Deactivated successfully.
Jan 30 05:34:12.711782 systemd[1]: Started sshd@37-91.107.218.70:22-111.67.201.36:58236.service - OpenSSH per-connection server daemon (111.67.201.36:58236).
Jan 30 05:34:13.095009 kubelet[2867]: E0130 05:34:13.094879 2867 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-fcj9g" podUID="11761383-8bd4-4055-ab51-e99ef53a9247"
Jan 30 05:34:13.979328 sshd[5121]: Connection closed by authenticating user root 111.67.201.36 port 58236 [preauth]
Jan 30 05:34:13.984245 systemd[1]: sshd@37-91.107.218.70:22-111.67.201.36:58236.service: Deactivated successfully.
Jan 30 05:34:14.219883 systemd[1]: Started sshd@38-91.107.218.70:22-111.67.201.36:59152.service - OpenSSH per-connection server daemon (111.67.201.36:59152).
Jan 30 05:34:14.895730 systemd[1]: run-containerd-runc-k8s.io-9c702988d2a76722c54b5a3ff52005cbd3e2bd8fcf42ae14abe2ea2837160729-runc.2k1rKR.mount: Deactivated successfully.
Jan 30 05:34:15.065684 systemd-networkd[1403]: lxc_health: Link UP
Jan 30 05:34:15.072805 systemd-networkd[1403]: lxc_health: Gained carrier
Jan 30 05:34:15.399034 sshd[5334]: Invalid user collector from 111.67.201.36 port 59152
Jan 30 05:34:15.636567 sshd[5334]: Connection closed by invalid user collector 111.67.201.36 port 59152 [preauth]
Jan 30 05:34:15.643318 systemd[1]: sshd@38-91.107.218.70:22-111.67.201.36:59152.service: Deactivated successfully.
Jan 30 05:34:15.871057 systemd[1]: Started sshd@39-91.107.218.70:22-111.67.201.36:39268.service - OpenSSH per-connection server daemon (111.67.201.36:39268).
Jan 30 05:34:16.653934 systemd-networkd[1403]: lxc_health: Gained IPv6LL
Jan 30 05:34:16.702480 kubelet[2867]: I0130 05:34:16.702133 2867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p6nv6" podStartSLOduration=10.702111067 podStartE2EDuration="10.702111067s" podCreationTimestamp="2025-01-30 05:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:34:11.867135777 +0000 UTC m=+223.970983041" watchObservedRunningTime="2025-01-30 05:34:16.702111067 +0000 UTC m=+228.805958301"
Jan 30 05:34:17.079842 sshd[5576]: Invalid user odoo from 111.67.201.36 port 39268
Jan 30 05:34:17.291510 sshd[5576]: Connection closed by invalid user odoo 111.67.201.36 port 39268 [preauth]
Jan 30 05:34:17.298960 systemd[1]: sshd@39-91.107.218.70:22-111.67.201.36:39268.service: Deactivated successfully.
Jan 30 05:34:17.556767 systemd[1]: Started sshd@40-91.107.218.70:22-111.67.201.36:40046.service - OpenSSH per-connection server daemon (111.67.201.36:40046).
Jan 30 05:34:18.788353 sshd[5611]: Invalid user devops from 111.67.201.36 port 40046
Jan 30 05:34:19.066689 sshd[5611]: Connection closed by invalid user devops 111.67.201.36 port 40046 [preauth]
Jan 30 05:34:19.070242 systemd[1]: sshd@40-91.107.218.70:22-111.67.201.36:40046.service: Deactivated successfully.
Jan 30 05:34:19.333781 systemd[1]: Started sshd@41-91.107.218.70:22-111.67.201.36:48352.service - OpenSSH per-connection server daemon (111.67.201.36:48352).
Jan 30 05:34:20.496625 sshd[5651]: Invalid user test from 111.67.201.36 port 48352
Jan 30 05:34:20.768301 sshd[5651]: Connection closed by invalid user test 111.67.201.36 port 48352 [preauth]
Jan 30 05:34:20.770816 systemd[1]: sshd@41-91.107.218.70:22-111.67.201.36:48352.service: Deactivated successfully.
Jan 30 05:34:21.043089 systemd[1]: Started sshd@42-91.107.218.70:22-111.67.201.36:50712.service - OpenSSH per-connection server daemon (111.67.201.36:50712).
Jan 30 05:34:21.718584 sshd[4894]: Connection closed by 139.178.89.65 port 53282
Jan 30 05:34:21.722722 sshd-session[4828]: pam_unix(sshd:session): session closed for user core
Jan 30 05:34:21.739085 systemd[1]: sshd@34-91.107.218.70:22-139.178.89.65:53282.service: Deactivated successfully.
Jan 30 05:34:21.745636 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 05:34:21.748463 systemd-logind[1483]: Session 23 logged out. Waiting for processes to exit.
Jan 30 05:34:21.752841 systemd-logind[1483]: Removed session 23.
Jan 30 05:34:22.134992 sshd[5662]: Invalid user nfsnobody from 111.67.201.36 port 50712
Jan 30 05:34:22.396622 sshd[5662]: Connection closed by invalid user nfsnobody 111.67.201.36 port 50712 [preauth]
Jan 30 05:34:22.403289 systemd[1]: sshd@42-91.107.218.70:22-111.67.201.36:50712.service: Deactivated successfully.
Jan 30 05:34:22.688770 systemd[1]: Started sshd@43-91.107.218.70:22-111.67.201.36:57458.service - OpenSSH per-connection server daemon (111.67.201.36:57458).
Jan 30 05:34:23.889650 sshd[5689]: Invalid user test from 111.67.201.36 port 57458
Jan 30 05:34:24.418463 sshd[5689]: Connection closed by invalid user test 111.67.201.36 port 57458 [preauth]
Jan 30 05:34:24.424159 systemd[1]: sshd@43-91.107.218.70:22-111.67.201.36:57458.service: Deactivated successfully.
Jan 30 05:34:24.834088 systemd[1]: Started sshd@44-91.107.218.70:22-111.67.201.36:37572.service - OpenSSH per-connection server daemon (111.67.201.36:37572).
Jan 30 05:34:25.854399 sshd[5694]: Invalid user kvm from 111.67.201.36 port 37572
Jan 30 05:34:26.111834 sshd[5694]: Connection closed by invalid user kvm 111.67.201.36 port 37572 [preauth]
Jan 30 05:34:26.114638 systemd[1]: sshd@44-91.107.218.70:22-111.67.201.36:37572.service: Deactivated successfully.
Jan 30 05:34:26.377119 systemd[1]: Started sshd@45-91.107.218.70:22-111.67.201.36:38574.service - OpenSSH per-connection server daemon (111.67.201.36:38574).
Jan 30 05:34:27.302736 sshd[5699]: Invalid user metricbeat from 111.67.201.36 port 38574
Jan 30 05:34:27.573799 sshd[5699]: Connection closed by invalid user metricbeat 111.67.201.36 port 38574 [preauth]
Jan 30 05:34:27.576576 systemd[1]: sshd@45-91.107.218.70:22-111.67.201.36:38574.service: Deactivated successfully.
Jan 30 05:34:27.845934 systemd[1]: Started sshd@46-91.107.218.70:22-111.67.201.36:46836.service - OpenSSH per-connection server daemon (111.67.201.36:46836).
Jan 30 05:34:28.129967 containerd[1503]: time="2025-01-30T05:34:28.129757589Z" level=info msg="StopPodSandbox for \"657a729b399f1f4102932eb96b38b8ea8832daab6a9bc594b7904dc3417a60dd\""
Jan 30 05:34:28.129967 containerd[1503]: time="2025-01-30T05:34:28.129948057Z" level=info msg="TearDown network for sandbox \"657a729b399f1f4102932eb96b38b8ea8832daab6a9bc594b7904dc3417a60dd\" successfully"
Jan 30 05:34:28.131694 containerd[1503]: time="2025-01-30T05:34:28.130009332Z" level=info msg="StopPodSandbox for \"657a729b399f1f4102932eb96b38b8ea8832daab6a9bc594b7904dc3417a60dd\" returns successfully"
Jan 30 05:34:28.131694 containerd[1503]: time="2025-01-30T05:34:28.131071758Z" level=info msg="RemovePodSandbox for \"657a729b399f1f4102932eb96b38b8ea8832daab6a9bc594b7904dc3417a60dd\""
Jan 30 05:34:28.139990 containerd[1503]: time="2025-01-30T05:34:28.139919289Z" level=info msg="Forcibly stopping sandbox \"657a729b399f1f4102932eb96b38b8ea8832daab6a9bc594b7904dc3417a60dd\""
Jan 30 05:34:28.140173 containerd[1503]: time="2025-01-30T05:34:28.140095500Z" level=info msg="TearDown network for sandbox \"657a729b399f1f4102932eb96b38b8ea8832daab6a9bc594b7904dc3417a60dd\" successfully"
Jan 30 05:34:28.148465 containerd[1503]: time="2025-01-30T05:34:28.148411622Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"657a729b399f1f4102932eb96b38b8ea8832daab6a9bc594b7904dc3417a60dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 05:34:28.148465 containerd[1503]: time="2025-01-30T05:34:28.148464622Z" level=info msg="RemovePodSandbox \"657a729b399f1f4102932eb96b38b8ea8832daab6a9bc594b7904dc3417a60dd\" returns successfully"
Jan 30 05:34:28.148948 containerd[1503]: time="2025-01-30T05:34:28.148905751Z" level=info msg="StopPodSandbox for \"ada2bc31dfce65b4b6a40ce952cf3287df9d720cfc07c8f5408b1bfefb41d03e\""
Jan 30 05:34:28.149081 containerd[1503]: time="2025-01-30T05:34:28.149049380Z" level=info msg="TearDown network for sandbox \"ada2bc31dfce65b4b6a40ce952cf3287df9d720cfc07c8f5408b1bfefb41d03e\" successfully"
Jan 30 05:34:28.149115 containerd[1503]: time="2025-01-30T05:34:28.149075400Z" level=info msg="StopPodSandbox for \"ada2bc31dfce65b4b6a40ce952cf3287df9d720cfc07c8f5408b1bfefb41d03e\" returns successfully"
Jan 30 05:34:28.149474 containerd[1503]: time="2025-01-30T05:34:28.149437691Z" level=info msg="RemovePodSandbox for \"ada2bc31dfce65b4b6a40ce952cf3287df9d720cfc07c8f5408b1bfefb41d03e\""
Jan 30 05:34:28.149548 containerd[1503]: time="2025-01-30T05:34:28.149477064Z" level=info msg="Forcibly stopping sandbox \"ada2bc31dfce65b4b6a40ce952cf3287df9d720cfc07c8f5408b1bfefb41d03e\""
Jan 30 05:34:28.149660 containerd[1503]: time="2025-01-30T05:34:28.149600476Z" level=info msg="TearDown network for sandbox \"ada2bc31dfce65b4b6a40ce952cf3287df9d720cfc07c8f5408b1bfefb41d03e\" successfully"
Jan 30 05:34:28.155847 containerd[1503]: time="2025-01-30T05:34:28.155804471Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ada2bc31dfce65b4b6a40ce952cf3287df9d720cfc07c8f5408b1bfefb41d03e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 05:34:28.155915 containerd[1503]: time="2025-01-30T05:34:28.155866237Z" level=info msg="RemovePodSandbox \"ada2bc31dfce65b4b6a40ce952cf3287df9d720cfc07c8f5408b1bfefb41d03e\" returns successfully"
Jan 30 05:34:29.262930 sshd[5704]: Connection closed by authenticating user root 111.67.201.36 port 46836 [preauth]
Jan 30 05:34:29.265364 systemd[1]: sshd@46-91.107.218.70:22-111.67.201.36:46836.service: Deactivated successfully.
Jan 30 05:34:29.544160 systemd[1]: Started sshd@47-91.107.218.70:22-111.67.201.36:49436.service - OpenSSH per-connection server daemon (111.67.201.36:49436).
Jan 30 05:34:30.768935 sshd[5711]: Connection closed by authenticating user root 111.67.201.36 port 49436 [preauth]
Jan 30 05:34:30.774259 systemd[1]: sshd@47-91.107.218.70:22-111.67.201.36:49436.service: Deactivated successfully.
Jan 30 05:34:31.035155 systemd[1]: Started sshd@48-91.107.218.70:22-111.67.201.36:55768.service - OpenSSH per-connection server daemon (111.67.201.36:55768).
Jan 30 05:34:32.178262 sshd[5716]: Invalid user usr from 111.67.201.36 port 55768
Jan 30 05:34:32.528931 sshd[5716]: Connection closed by invalid user usr 111.67.201.36 port 55768 [preauth]
Jan 30 05:34:32.534301 systemd[1]: sshd@48-91.107.218.70:22-111.67.201.36:55768.service: Deactivated successfully.
Jan 30 05:34:32.847105 systemd[1]: Started sshd@49-91.107.218.70:22-111.67.201.36:34144.service - OpenSSH per-connection server daemon (111.67.201.36:34144).
Jan 30 05:34:34.010243 sshd[5721]: Invalid user user from 111.67.201.36 port 34144
Jan 30 05:34:34.265307 sshd[5721]: Connection closed by invalid user user 111.67.201.36 port 34144 [preauth]
Jan 30 05:34:34.268979 systemd[1]: sshd@49-91.107.218.70:22-111.67.201.36:34144.service: Deactivated successfully.
Jan 30 05:34:34.555931 systemd[1]: Started sshd@50-91.107.218.70:22-111.67.201.36:36756.service - OpenSSH per-connection server daemon (111.67.201.36:36756).
Jan 30 05:34:35.624701 sshd[5726]: Invalid user metricbeat from 111.67.201.36 port 36756
Jan 30 05:34:35.888423 sshd[5726]: Connection closed by invalid user metricbeat 111.67.201.36 port 36756 [preauth]
Jan 30 05:34:35.893727 systemd[1]: sshd@50-91.107.218.70:22-111.67.201.36:36756.service: Deactivated successfully.
Jan 30 05:34:36.151026 systemd[1]: Started sshd@51-91.107.218.70:22-111.67.201.36:45042.service - OpenSSH per-connection server daemon (111.67.201.36:45042).
Jan 30 05:34:37.197079 sshd[5731]: Invalid user oracle from 111.67.201.36 port 45042
Jan 30 05:34:37.440007 sshd[5731]: Connection closed by invalid user oracle 111.67.201.36 port 45042 [preauth]
Jan 30 05:34:37.445633 systemd[1]: sshd@51-91.107.218.70:22-111.67.201.36:45042.service: Deactivated successfully.
Jan 30 05:34:37.726252 systemd[1]: Started sshd@52-91.107.218.70:22-111.67.201.36:45886.service - OpenSSH per-connection server daemon (111.67.201.36:45886).
Jan 30 05:34:39.129691 sshd[5736]: Invalid user olm from 111.67.201.36 port 45886
Jan 30 05:34:39.406068 sshd[5736]: Connection closed by invalid user olm 111.67.201.36 port 45886 [preauth]
Jan 30 05:34:39.411656 systemd[1]: sshd@52-91.107.218.70:22-111.67.201.36:45886.service: Deactivated successfully.
Jan 30 05:34:39.670964 systemd[1]: Started sshd@53-91.107.218.70:22-111.67.201.36:54452.service - OpenSSH per-connection server daemon (111.67.201.36:54452).
Jan 30 05:34:40.647880 sshd[5741]: Invalid user oracle from 111.67.201.36 port 54452
Jan 30 05:34:40.955316 sshd[5741]: Connection closed by invalid user oracle 111.67.201.36 port 54452 [preauth]
Jan 30 05:34:40.961599 systemd[1]: sshd@53-91.107.218.70:22-111.67.201.36:54452.service: Deactivated successfully.
Jan 30 05:34:41.241935 systemd[1]: Started sshd@54-91.107.218.70:22-111.67.201.36:60418.service - OpenSSH per-connection server daemon (111.67.201.36:60418).
Jan 30 05:34:42.329423 sshd[5746]: Invalid user centos from 111.67.201.36 port 60418
Jan 30 05:34:42.610474 sshd[5746]: Connection closed by invalid user centos 111.67.201.36 port 60418 [preauth]
Jan 30 05:34:42.615883 systemd[1]: sshd@54-91.107.218.70:22-111.67.201.36:60418.service: Deactivated successfully.
Jan 30 05:34:42.846924 systemd[1]: Started sshd@55-91.107.218.70:22-111.67.201.36:35264.service - OpenSSH per-connection server daemon (111.67.201.36:35264).
Jan 30 05:34:44.550587 sshd[5751]: Connection closed by authenticating user root 111.67.201.36 port 35264 [preauth]
Jan 30 05:34:44.556720 systemd[1]: sshd@55-91.107.218.70:22-111.67.201.36:35264.service: Deactivated successfully.
Jan 30 05:34:44.864055 systemd[1]: Started sshd@56-91.107.218.70:22-111.67.201.36:43848.service - OpenSSH per-connection server daemon (111.67.201.36:43848).
Jan 30 05:34:46.003741 sshd[5756]: Invalid user deploy from 111.67.201.36 port 43848
Jan 30 05:34:46.561688 sshd[5756]: Connection closed by invalid user deploy 111.67.201.36 port 43848 [preauth]
Jan 30 05:34:46.566865 systemd[1]: sshd@56-91.107.218.70:22-111.67.201.36:43848.service: Deactivated successfully.
Jan 30 05:34:46.862974 systemd[1]: Started sshd@57-91.107.218.70:22-111.67.201.36:51734.service - OpenSSH per-connection server daemon (111.67.201.36:51734).
Jan 30 05:34:47.731756 sshd[5763]: Invalid user cluster from 111.67.201.36 port 51734
Jan 30 05:34:47.910564 sshd[5763]: Connection closed by invalid user cluster 111.67.201.36 port 51734 [preauth]
Jan 30 05:34:47.916017 systemd[1]: sshd@57-91.107.218.70:22-111.67.201.36:51734.service: Deactivated successfully.
Jan 30 05:34:48.134807 systemd[1]: Started sshd@58-91.107.218.70:22-111.67.201.36:52736.service - OpenSSH per-connection server daemon (111.67.201.36:52736).
Jan 30 05:34:49.820017 sshd[5768]: Connection closed by authenticating user root 111.67.201.36 port 52736 [preauth]
Jan 30 05:34:49.826088 systemd[1]: sshd@58-91.107.218.70:22-111.67.201.36:52736.service: Deactivated successfully.
Jan 30 05:34:50.103753 systemd[1]: Started sshd@59-91.107.218.70:22-111.67.201.36:32896.service - OpenSSH per-connection server daemon (111.67.201.36:32896).
Jan 30 05:34:51.152429 sshd[5773]: Invalid user admin from 111.67.201.36 port 32896
Jan 30 05:34:51.389317 sshd[5773]: Connection closed by invalid user admin 111.67.201.36 port 32896 [preauth]
Jan 30 05:34:51.395172 systemd[1]: sshd@59-91.107.218.70:22-111.67.201.36:32896.service: Deactivated successfully.
Jan 30 05:34:51.671061 systemd[1]: Started sshd@60-91.107.218.70:22-111.67.201.36:33692.service - OpenSSH per-connection server daemon (111.67.201.36:33692).
Jan 30 05:34:53.160652 sshd[5778]: Connection closed by authenticating user root 111.67.201.36 port 33692 [preauth]
Jan 30 05:34:53.166447 systemd[1]: sshd@60-91.107.218.70:22-111.67.201.36:33692.service: Deactivated successfully.
Jan 30 05:34:53.449027 systemd[1]: Started sshd@61-91.107.218.70:22-111.67.201.36:42198.service - OpenSSH per-connection server daemon (111.67.201.36:42198).
Jan 30 05:34:54.624925 sshd[5783]: Invalid user oracle from 111.67.201.36 port 42198
Jan 30 05:34:54.976167 sshd[5783]: Connection closed by invalid user oracle 111.67.201.36 port 42198 [preauth]
Jan 30 05:34:54.978532 systemd[1]: sshd@61-91.107.218.70:22-111.67.201.36:42198.service: Deactivated successfully.
Jan 30 05:34:55.260182 systemd[1]: Started sshd@62-91.107.218.70:22-111.67.201.36:50534.service - OpenSSH per-connection server daemon (111.67.201.36:50534).
Jan 30 05:34:56.222797 systemd[1]: cri-containerd-c2ff51555ea6a11fdffcfdc70ae84524af6a3fe30d191ca19af26f95c10f4bec.scope: Deactivated successfully.
Jan 30 05:34:56.224057 systemd[1]: cri-containerd-c2ff51555ea6a11fdffcfdc70ae84524af6a3fe30d191ca19af26f95c10f4bec.scope: Consumed 1.959s CPU time, 16.1M memory peak, 0B memory swap peak.
Jan 30 05:34:56.236692 kubelet[2867]: E0130 05:34:56.236577 2867 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:44962->10.0.0.2:2379: read: connection timed out"
Jan 30 05:34:56.281414 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2ff51555ea6a11fdffcfdc70ae84524af6a3fe30d191ca19af26f95c10f4bec-rootfs.mount: Deactivated successfully.
Jan 30 05:34:56.289041 sshd[5788]: Invalid user vyos from 111.67.201.36 port 50534
Jan 30 05:34:56.307759 containerd[1503]: time="2025-01-30T05:34:56.307674070Z" level=info msg="shim disconnected" id=c2ff51555ea6a11fdffcfdc70ae84524af6a3fe30d191ca19af26f95c10f4bec namespace=k8s.io
Jan 30 05:34:56.308427 containerd[1503]: time="2025-01-30T05:34:56.308347235Z" level=warning msg="cleaning up after shim disconnected" id=c2ff51555ea6a11fdffcfdc70ae84524af6a3fe30d191ca19af26f95c10f4bec namespace=k8s.io
Jan 30 05:34:56.308427 containerd[1503]: time="2025-01-30T05:34:56.308413901Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:34:56.543576 sshd[5788]: Connection closed by invalid user vyos 111.67.201.36 port 50534 [preauth]
Jan 30 05:34:56.549629 systemd[1]: sshd@62-91.107.218.70:22-111.67.201.36:50534.service: Deactivated successfully.
Jan 30 05:34:56.825091 systemd[1]: Started sshd@63-91.107.218.70:22-111.67.201.36:51402.service - OpenSSH per-connection server daemon (111.67.201.36:51402).
Jan 30 05:34:56.964667 kubelet[2867]: I0130 05:34:56.964266 2867 scope.go:117] "RemoveContainer" containerID="c2ff51555ea6a11fdffcfdc70ae84524af6a3fe30d191ca19af26f95c10f4bec"
Jan 30 05:34:56.965755 systemd[1]: cri-containerd-7516bde5bcef8096bcc1f56a43cc31bdeb03bd62bdc21c956629cfa749809502.scope: Deactivated successfully.
Jan 30 05:34:56.968023 systemd[1]: cri-containerd-7516bde5bcef8096bcc1f56a43cc31bdeb03bd62bdc21c956629cfa749809502.scope: Consumed 6.336s CPU time, 22.1M memory peak, 0B memory swap peak.
Jan 30 05:34:56.975363 containerd[1503]: time="2025-01-30T05:34:56.975303978Z" level=info msg="CreateContainer within sandbox \"e0336c0d8505cbc59e9be0085556784b994e711365dd2625a195911ed4650912\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 30 05:34:57.013941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount842057175.mount: Deactivated successfully.
Jan 30 05:34:57.021058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2183339503.mount: Deactivated successfully.
Jan 30 05:34:57.025087 containerd[1503]: time="2025-01-30T05:34:57.025013022Z" level=info msg="CreateContainer within sandbox \"e0336c0d8505cbc59e9be0085556784b994e711365dd2625a195911ed4650912\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"3a9382a63fd42880220da5afa5a31560c9a4c9fb09512ed2f0f1c6c06975737e\""
Jan 30 05:34:57.028095 containerd[1503]: time="2025-01-30T05:34:57.028036071Z" level=info msg="StartContainer for \"3a9382a63fd42880220da5afa5a31560c9a4c9fb09512ed2f0f1c6c06975737e\""
Jan 30 05:34:57.046605 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7516bde5bcef8096bcc1f56a43cc31bdeb03bd62bdc21c956629cfa749809502-rootfs.mount: Deactivated successfully.
Jan 30 05:34:57.062530 containerd[1503]: time="2025-01-30T05:34:57.062370341Z" level=info msg="shim disconnected" id=7516bde5bcef8096bcc1f56a43cc31bdeb03bd62bdc21c956629cfa749809502 namespace=k8s.io
Jan 30 05:34:57.063018 containerd[1503]: time="2025-01-30T05:34:57.062800509Z" level=warning msg="cleaning up after shim disconnected" id=7516bde5bcef8096bcc1f56a43cc31bdeb03bd62bdc21c956629cfa749809502 namespace=k8s.io
Jan 30 05:34:57.063018 containerd[1503]: time="2025-01-30T05:34:57.062814265Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:34:57.069729 systemd[1]: Started cri-containerd-3a9382a63fd42880220da5afa5a31560c9a4c9fb09512ed2f0f1c6c06975737e.scope - libcontainer container 3a9382a63fd42880220da5afa5a31560c9a4c9fb09512ed2f0f1c6c06975737e.
Jan 30 05:34:57.120654 containerd[1503]: time="2025-01-30T05:34:57.120216571Z" level=info msg="StartContainer for \"3a9382a63fd42880220da5afa5a31560c9a4c9fb09512ed2f0f1c6c06975737e\" returns successfully"
Jan 30 05:34:57.970303 kubelet[2867]: I0130 05:34:57.968699 2867 scope.go:117] "RemoveContainer" containerID="7516bde5bcef8096bcc1f56a43cc31bdeb03bd62bdc21c956629cfa749809502"
Jan 30 05:34:57.971737 containerd[1503]: time="2025-01-30T05:34:57.971690326Z" level=info msg="CreateContainer within sandbox \"7ac9ef2d7dcb44972bda98286c0293b7915b647b0227c46fb8fd22ed6c475c74\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 30 05:34:57.996037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2397682624.mount: Deactivated successfully.
Jan 30 05:34:58.003575 containerd[1503]: time="2025-01-30T05:34:58.003526124Z" level=info msg="CreateContainer within sandbox \"7ac9ef2d7dcb44972bda98286c0293b7915b647b0227c46fb8fd22ed6c475c74\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2a3615e5b79d96322c38a6f7b81da8a3549db7848794c4e4536f27f42bdf271e\""
Jan 30 05:34:58.004773 containerd[1503]: time="2025-01-30T05:34:58.004742038Z" level=info msg="StartContainer for \"2a3615e5b79d96322c38a6f7b81da8a3549db7848794c4e4536f27f42bdf271e\""
Jan 30 05:34:58.048641 systemd[1]: Started cri-containerd-2a3615e5b79d96322c38a6f7b81da8a3549db7848794c4e4536f27f42bdf271e.scope - libcontainer container 2a3615e5b79d96322c38a6f7b81da8a3549db7848794c4e4536f27f42bdf271e.
Jan 30 05:34:58.100269 containerd[1503]: time="2025-01-30T05:34:58.100222426Z" level=info msg="StartContainer for \"2a3615e5b79d96322c38a6f7b81da8a3549db7848794c4e4536f27f42bdf271e\" returns successfully"
Jan 30 05:34:58.189466 sshd[5819]: Invalid user kvm from 111.67.201.36 port 51402
Jan 30 05:34:58.435171 sshd[5819]: Connection closed by invalid user kvm 111.67.201.36 port 51402 [preauth]
Jan 30 05:34:58.440908 systemd[1]: sshd@63-91.107.218.70:22-111.67.201.36:51402.service: Deactivated successfully.
Jan 30 05:34:58.721704 systemd[1]: Started sshd@64-91.107.218.70:22-111.67.201.36:59772.service - OpenSSH per-connection server daemon (111.67.201.36:59772).
Jan 30 05:34:59.744778 sshd[5921]: Invalid user deploy from 111.67.201.36 port 59772
Jan 30 05:35:00.078904 sshd[5921]: Connection closed by invalid user deploy 111.67.201.36 port 59772 [preauth]
Jan 30 05:35:00.086472 systemd[1]: sshd@64-91.107.218.70:22-111.67.201.36:59772.service: Deactivated successfully.
Jan 30 05:35:00.368577 systemd[1]: Started sshd@65-91.107.218.70:22-111.67.201.36:35034.service - OpenSSH per-connection server daemon (111.67.201.36:35034).
Jan 30 05:35:00.807819 kubelet[2867]: E0130 05:35:00.807587 2867 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout" event="&Event{ObjectMeta:{kube-apiserver-ci-4186-1-0-3-26ada394c1.181f6198f76fce71 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4186-1-0-3-26ada394c1,UID:98e06c75162dc1a91b9a4bcf8545ff58,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-3-26ada394c1,},FirstTimestamp:2025-01-30 05:34:50.802523761 +0000 UTC m=+262.906371035,LastTimestamp:2025-01-30 05:34:50.802523761 +0000 UTC m=+262.906371035,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-3-26ada394c1,}"
Jan 30 05:35:01.429601 kubelet[2867]: I0130 05:35:01.429480 2867 status_manager.go:853] "Failed to get status for pod" podUID="1f3527d596fdba06a192a1c65e70a442" pod="kube-system/kube-scheduler-ci-4186-1-0-3-26ada394c1" err="rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout"
Jan 30 05:35:01.555482 sshd[5926]: Invalid user postgres from 111.67.201.36 port 35034
Jan 30 05:35:01.810581 sshd[5926]: Connection closed by invalid user postgres 111.67.201.36 port 35034 [preauth]
Jan 30 05:35:01.817108 systemd[1]: sshd@65-91.107.218.70:22-111.67.201.36:35034.service: Deactivated successfully.