Jan 29 16:34:27.887689 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 14:51:22 -00 2025
Jan 29 16:34:27.887711 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:34:27.887722 kernel: BIOS-provided physical RAM map:
Jan 29 16:34:27.887728 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 16:34:27.887733 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 16:34:27.887738 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 16:34:27.887744 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Jan 29 16:34:27.887750 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Jan 29 16:34:27.887758 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 29 16:34:27.887763 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 29 16:34:27.887769 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 16:34:27.887774 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 16:34:27.887779 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 29 16:34:27.887785 kernel: NX (Execute Disable) protection: active
Jan 29 16:34:27.887794 kernel: APIC: Static calls initialized
Jan 29 16:34:27.887800 kernel: SMBIOS 3.0.0 present.
Jan 29 16:34:27.887806 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Jan 29 16:34:27.887812 kernel: Hypervisor detected: KVM
Jan 29 16:34:27.887817 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 16:34:27.887823 kernel: kvm-clock: using sched offset of 2808614063 cycles
Jan 29 16:34:27.887829 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 16:34:27.887835 kernel: tsc: Detected 2445.404 MHz processor
Jan 29 16:34:27.887841 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 16:34:27.887848 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 16:34:27.887856 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Jan 29 16:34:27.887862 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 16:34:27.887868 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 16:34:27.887874 kernel: Using GB pages for direct mapping
Jan 29 16:34:27.887880 kernel: ACPI: Early table checksum verification disabled
Jan 29 16:34:27.887886 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS )
Jan 29 16:34:27.887892 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:34:27.887898 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:34:27.887904 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:34:27.887912 kernel: ACPI: FACS 0x000000007CFE0000 000040
Jan 29 16:34:27.887918 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:34:27.887924 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:34:27.887930 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:34:27.887936 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:34:27.887942 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Jan 29 16:34:27.887948 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Jan 29 16:34:27.887959 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Jan 29 16:34:27.887965 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Jan 29 16:34:27.887971 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Jan 29 16:34:27.891290 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Jan 29 16:34:27.891297 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Jan 29 16:34:27.891304 kernel: No NUMA configuration found
Jan 29 16:34:27.891310 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Jan 29 16:34:27.891321 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Jan 29 16:34:27.891327 kernel: Zone ranges:
Jan 29 16:34:27.891334 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 16:34:27.891340 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Jan 29 16:34:27.891346 kernel: Normal empty
Jan 29 16:34:27.891352 kernel: Movable zone start for each node
Jan 29 16:34:27.891359 kernel: Early memory node ranges
Jan 29 16:34:27.891365 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 16:34:27.891371 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Jan 29 16:34:27.891380 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Jan 29 16:34:27.891386 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 16:34:27.891392 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 16:34:27.891398 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 29 16:34:27.891404 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 16:34:27.891411 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 16:34:27.891417 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 16:34:27.891423 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 16:34:27.891429 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 16:34:27.891437 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 16:34:27.891444 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 16:34:27.891450 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 16:34:27.891456 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 16:34:27.891462 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 29 16:34:27.891469 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 29 16:34:27.891475 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 16:34:27.891481 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 29 16:34:27.891487 kernel: Booting paravirtualized kernel on KVM
Jan 29 16:34:27.891494 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 16:34:27.891502 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 29 16:34:27.891509 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 29 16:34:27.891515 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 29 16:34:27.891521 kernel: pcpu-alloc: [0] 0 1
Jan 29 16:34:27.891527 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 29 16:34:27.891534 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:34:27.891541 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 16:34:27.891547 kernel: random: crng init done
Jan 29 16:34:27.891556 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 16:34:27.891562 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 29 16:34:27.891568 kernel: Fallback order for Node 0: 0
Jan 29 16:34:27.891575 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Jan 29 16:34:27.891581 kernel: Policy zone: DMA32
Jan 29 16:34:27.891587 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 16:34:27.891594 kernel: Memory: 1920000K/2047464K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43472K init, 1600K bss, 127204K reserved, 0K cma-reserved)
Jan 29 16:34:27.891600 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 16:34:27.891606 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 29 16:34:27.891615 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 16:34:27.891621 kernel: Dynamic Preempt: voluntary
Jan 29 16:34:27.891627 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 16:34:27.891637 kernel: rcu: RCU event tracing is enabled.
Jan 29 16:34:27.891644 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 16:34:27.891663 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 16:34:27.891670 kernel: Rude variant of Tasks RCU enabled.
Jan 29 16:34:27.891676 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 16:34:27.891683 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 16:34:27.891691 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 16:34:27.891698 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 29 16:34:27.891704 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 16:34:27.891710 kernel: Console: colour VGA+ 80x25
Jan 29 16:34:27.891716 kernel: printk: console [tty0] enabled
Jan 29 16:34:27.891722 kernel: printk: console [ttyS0] enabled
Jan 29 16:34:27.891728 kernel: ACPI: Core revision 20230628
Jan 29 16:34:27.891735 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 29 16:34:27.891741 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 16:34:27.891749 kernel: x2apic enabled
Jan 29 16:34:27.891756 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 16:34:27.891762 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 29 16:34:27.891768 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 29 16:34:27.891774 kernel: Calibrating delay loop (skipped) preset value.. 4890.80 BogoMIPS (lpj=2445404)
Jan 29 16:34:27.891781 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 29 16:34:27.891787 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 29 16:34:27.891793 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 29 16:34:27.891808 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 16:34:27.891815 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 16:34:27.891821 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 16:34:27.891828 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 16:34:27.891836 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 29 16:34:27.891843 kernel: RETBleed: Mitigation: untrained return thunk
Jan 29 16:34:27.891850 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 16:34:27.891856 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 16:34:27.891863 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 29 16:34:27.891873 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 29 16:34:27.891879 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 29 16:34:27.891886 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 16:34:27.891892 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 16:34:27.891899 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 16:34:27.891905 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 16:34:27.891912 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 29 16:34:27.891918 kernel: Freeing SMP alternatives memory: 32K
Jan 29 16:34:27.891927 kernel: pid_max: default: 32768 minimum: 301
Jan 29 16:34:27.891934 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 16:34:27.891940 kernel: landlock: Up and running.
Jan 29 16:34:27.891947 kernel: SELinux: Initializing.
Jan 29 16:34:27.891953 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 16:34:27.891960 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 16:34:27.891966 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 29 16:34:27.891988 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:34:27.891995 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:34:27.892005 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:34:27.892011 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 29 16:34:27.892018 kernel: ... version: 0
Jan 29 16:34:27.892024 kernel: ... bit width: 48
Jan 29 16:34:27.892030 kernel: ... generic registers: 6
Jan 29 16:34:27.892037 kernel: ... value mask: 0000ffffffffffff
Jan 29 16:34:27.892043 kernel: ... max period: 00007fffffffffff
Jan 29 16:34:27.892050 kernel: ... fixed-purpose events: 0
Jan 29 16:34:27.892056 kernel: ... event mask: 000000000000003f
Jan 29 16:34:27.892065 kernel: signal: max sigframe size: 1776
Jan 29 16:34:27.892071 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 16:34:27.892078 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 16:34:27.892085 kernel: smp: Bringing up secondary CPUs ...
Jan 29 16:34:27.892091 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 16:34:27.892097 kernel: .... node #0, CPUs: #1
Jan 29 16:34:27.892104 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 16:34:27.892110 kernel: smpboot: Max logical packages: 1
Jan 29 16:34:27.892117 kernel: smpboot: Total of 2 processors activated (9781.61 BogoMIPS)
Jan 29 16:34:27.892125 kernel: devtmpfs: initialized
Jan 29 16:34:27.892132 kernel: x86/mm: Memory block size: 128MB
Jan 29 16:34:27.892139 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 16:34:27.892145 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 16:34:27.892152 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 16:34:27.892158 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 16:34:27.892164 kernel: audit: initializing netlink subsys (disabled)
Jan 29 16:34:27.892171 kernel: audit: type=2000 audit(1738168467.717:1): state=initialized audit_enabled=0 res=1
Jan 29 16:34:27.892178 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 16:34:27.892186 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 16:34:27.892192 kernel: cpuidle: using governor menu
Jan 29 16:34:27.892199 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 16:34:27.892205 kernel: dca service started, version 1.12.1
Jan 29 16:34:27.892212 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 29 16:34:27.892218 kernel: PCI: Using configuration type 1 for base access
Jan 29 16:34:27.892225 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 16:34:27.892232 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 16:34:27.892238 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 16:34:27.892246 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 16:34:27.892253 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 16:34:27.892259 kernel: ACPI: Added _OSI(Module Device)
Jan 29 16:34:27.892266 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 16:34:27.892272 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 16:34:27.892279 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 16:34:27.892285 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 16:34:27.892292 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 16:34:27.892298 kernel: ACPI: Interpreter enabled
Jan 29 16:34:27.892307 kernel: ACPI: PM: (supports S0 S5)
Jan 29 16:34:27.892313 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 16:34:27.892324 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 16:34:27.892335 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 16:34:27.892346 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 29 16:34:27.892356 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 16:34:27.892536 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 16:34:27.892674 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 29 16:34:27.892820 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 29 16:34:27.892840 kernel: PCI host bridge to bus 0000:00
Jan 29 16:34:27.894240 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 16:34:27.894361 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 16:34:27.894463 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 16:34:27.894561 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Jan 29 16:34:27.894673 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 16:34:27.894777 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 29 16:34:27.894875 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 16:34:27.896101 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 29 16:34:27.896238 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Jan 29 16:34:27.896350 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Jan 29 16:34:27.896457 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Jan 29 16:34:27.896570 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Jan 29 16:34:27.896689 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Jan 29 16:34:27.896796 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 16:34:27.896909 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 29 16:34:27.899058 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Jan 29 16:34:27.899193 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 29 16:34:27.899304 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Jan 29 16:34:27.899425 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 29 16:34:27.899533 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Jan 29 16:34:27.899659 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 29 16:34:27.899770 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Jan 29 16:34:27.899885 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 29 16:34:27.900036 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Jan 29 16:34:27.900160 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 29 16:34:27.900266 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Jan 29 16:34:27.900378 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 29 16:34:27.900483 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Jan 29 16:34:27.900594 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 29 16:34:27.900715 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Jan 29 16:34:27.900833 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 29 16:34:27.900940 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Jan 29 16:34:27.902123 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 29 16:34:27.902239 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 29 16:34:27.902353 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 29 16:34:27.902461 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Jan 29 16:34:27.902578 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Jan 29 16:34:27.902710 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 29 16:34:27.902818 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 29 16:34:27.902938 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 29 16:34:27.904082 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Jan 29 16:34:27.904199 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 29 16:34:27.904309 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Jan 29 16:34:27.904421 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 29 16:34:27.904526 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 29 16:34:27.904630 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Jan 29 16:34:27.904762 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 29 16:34:27.904873 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Jan 29 16:34:27.906007 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 29 16:34:27.906133 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 29 16:34:27.906241 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 16:34:27.906362 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 29 16:34:27.906477 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Jan 29 16:34:27.906586 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Jan 29 16:34:27.906713 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 29 16:34:27.906821 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 29 16:34:27.906926 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 16:34:27.908086 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 29 16:34:27.908203 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 29 16:34:27.908310 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 29 16:34:27.908415 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 29 16:34:27.908520 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 16:34:27.908637 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 29 16:34:27.908767 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
Jan 29 16:34:27.908884 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Jan 29 16:34:27.910021 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 29 16:34:27.910137 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 29 16:34:27.910243 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 16:34:27.910365 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 29 16:34:27.910482 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Jan 29 16:34:27.910592 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Jan 29 16:34:27.910722 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 29 16:34:27.910828 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Jan 29 16:34:27.910931 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 16:34:27.910941 kernel: acpiphp: Slot [0] registered
Jan 29 16:34:27.912092 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 29 16:34:27.912208 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Jan 29 16:34:27.912317 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Jan 29 16:34:27.912433 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Jan 29 16:34:27.912539 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 29 16:34:27.912643 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 29 16:34:27.912761 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 16:34:27.912771 kernel: acpiphp: Slot [0-2] registered
Jan 29 16:34:27.912876 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 29 16:34:27.916133 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Jan 29 16:34:27.916251 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 16:34:27.916261 kernel: acpiphp: Slot [0-3] registered
Jan 29 16:34:27.916373 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 29 16:34:27.916481 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 29 16:34:27.916586 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 16:34:27.916595 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 16:34:27.916601 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 16:34:27.916608 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 16:34:27.916615 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 16:34:27.916622 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 29 16:34:27.916631 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 29 16:34:27.916638 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 29 16:34:27.916660 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 29 16:34:27.916667 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 29 16:34:27.916674 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 29 16:34:27.916680 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 29 16:34:27.916687 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 29 16:34:27.916694 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 29 16:34:27.916700 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 29 16:34:27.916709 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 29 16:34:27.916716 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 29 16:34:27.916722 kernel: iommu: Default domain type: Translated
Jan 29 16:34:27.916729 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 16:34:27.916736 kernel: PCI: Using ACPI for IRQ routing
Jan 29 16:34:27.916743 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 16:34:27.916750 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 16:34:27.916756 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Jan 29 16:34:27.916866 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 29 16:34:27.917001 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 29 16:34:27.917114 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 16:34:27.917123 kernel: vgaarb: loaded
Jan 29 16:34:27.917130 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 29 16:34:27.917137 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 29 16:34:27.917144 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 16:34:27.917151 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 16:34:27.917158 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 16:34:27.917164 kernel: pnp: PnP ACPI init
Jan 29 16:34:27.917284 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 29 16:34:27.917296 kernel: pnp: PnP ACPI: found 5 devices
Jan 29 16:34:27.917303 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 16:34:27.917310 kernel: NET: Registered PF_INET protocol family
Jan 29 16:34:27.917317 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 16:34:27.917323 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 29 16:34:27.917330 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 16:34:27.917337 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 16:34:27.917346 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 29 16:34:27.917353 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 29 16:34:27.917360 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 16:34:27.917366 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 16:34:27.917373 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 16:34:27.917380 kernel: NET: Registered PF_XDP protocol family
Jan 29 16:34:27.917509 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 29 16:34:27.917645 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 29 16:34:27.917777 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 29 16:34:27.917884 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Jan 29 16:34:27.918039 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Jan 29 16:34:27.918149 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Jan 29 16:34:27.918255 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 29 16:34:27.918360 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 29 16:34:27.918463 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Jan 29 16:34:27.918568 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 29 16:34:27.918694 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 29 16:34:27.918799 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 16:34:27.918904 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 29 16:34:27.919024 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 29 16:34:27.919130 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 16:34:27.919235 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 29 16:34:27.919346 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 29 16:34:27.919468 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 16:34:27.919579 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 29 16:34:27.919699 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 29 16:34:27.919805 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 16:34:27.919910 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 29 16:34:27.924185 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Jan 29 16:34:27.924300 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 16:34:27.924406 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 29 16:34:27.924511 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Jan 29 16:34:27.924621 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 29 16:34:27.924741 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 16:34:27.924847 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 29 16:34:27.924950 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Jan 29 16:34:27.925114 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Jan 29 16:34:27.925220 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 16:34:27.925329 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 29 16:34:27.925433 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Jan 29 16:34:27.925536 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 29 16:34:27.925639 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 16:34:27.925763 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 16:34:27.925867 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 16:34:27.925964 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 16:34:27.926078 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Jan 29 16:34:27.926175 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 29 16:34:27.926269 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 29 16:34:27.926382 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 29 16:34:27.926485 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Jan 29 16:34:27.926599 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 29 16:34:27.926721 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 16:34:27.926831 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 29 16:34:27.926933 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 16:34:27.930421 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 29 16:34:27.930531 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 16:34:27.930666 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 29 16:34:27.930774 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 16:34:27.930884 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 29 16:34:27.931014 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 16:34:27.931127 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Jan 29 16:34:27.931230 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 29 16:34:27.931330 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 16:34:27.931444 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Jan 29 16:34:27.931546 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Jan 29 16:34:27.931660 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 16:34:27.931772 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Jan 29 16:34:27.931874 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 29 16:34:27.931988 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 16:34:27.932004 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 29 16:34:27.932012 kernel: PCI: CLS 0 bytes, default 64
Jan 29 16:34:27.932019 kernel: Initialise system trusted keyrings
Jan 29 16:34:27.932026 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 29 16:34:27.932033 kernel: Key type asymmetric registered
Jan 29 16:34:27.932040 kernel: Asymmetric key parser 'x509' registered
Jan 29 16:34:27.932047 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 16:34:27.932054 kernel: io scheduler mq-deadline registered
Jan 29 16:34:27.932061 kernel: io scheduler kyber registered
Jan 29 16:34:27.932068 kernel: io scheduler bfq registered
Jan 29 16:34:27.932185 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Jan 29 16:34:27.932294 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Jan 29 16:34:27.932400 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Jan 29 16:34:27.932506 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Jan 29 16:34:27.932613 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jan 29 16:34:27.932736 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jan 29 16:34:27.932844 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jan 29 16:34:27.932949 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jan 29 16:34:27.934592 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jan 29 16:34:27.934726 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jan 29 16:34:27.934834 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jan 29 16:34:27.934938 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jan 29 16:34:27.935062 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jan 29 16:34:27.935168 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jan 29 16:34:27.935274 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jan 29 16:34:27.935378 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jan 29 16:34:27.935393 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 29 16:34:27.935498 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Jan 29 16:34:27.935603 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Jan 29 16:34:27.935612 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 16:34:27.935619 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Jan 29 16:34:27.935626 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 16:34:27.935634 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 16:34:27.935641 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 16:34:27.935660 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 16:34:27.935671 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 16:34:27.935788 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 29 16:34:27.935800 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 16:34:27.935897 kernel: rtc_cmos 00:03: registered as rtc0
Jan 29 16:34:27.936429 kernel: rtc_cmos 00:03: setting system clock to 2025-01-29T16:34:27 UTC (1738168467)
Jan 29 16:34:27.936539 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 29 16:34:27.936550 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 29 16:34:27.936558 kernel: NET: Registered PF_INET6 protocol family
Jan 29 16:34:27.936570 kernel: Segment Routing with IPv6
Jan 29 16:34:27.936578 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 16:34:27.936585 kernel: NET: Registered PF_PACKET protocol family
Jan 29 16:34:27.936592 kernel: Key type dns_resolver registered
Jan 29 16:34:27.936600 kernel: IPI shorthand broadcast: enabled
Jan 29 16:34:27.936607 kernel: sched_clock: Marking stable (1136008703, 135173614)->(1280611312, -9428995)
Jan 29 16:34:27.936616 kernel: registered taskstats version 1
Jan 29 16:34:27.936624 kernel: Loading compiled-in X.509 certificates
Jan 29 16:34:27.936631 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 68134fdf6dac3690da6e3bc9c22b042a5c364340'
Jan 29 16:34:27.936640 kernel: Key type .fscrypt registered
Jan 29 16:34:27.936661 kernel: Key type fscrypt-provisioning registered
Jan 29 16:34:27.936669 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 16:34:27.936676 kernel: ima: Allocated hash algorithm: sha1
Jan 29 16:34:27.936682 kernel: ima: No architecture policies found
Jan 29 16:34:27.936689 kernel: clk: Disabling unused clocks
Jan 29 16:34:27.936696 kernel: Freeing unused kernel image (initmem) memory: 43472K
Jan 29 16:34:27.936703 kernel: Write protecting the kernel read-only data: 38912k
Jan 29 16:34:27.936711 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K
Jan 29 16:34:27.936720 kernel: Run /init as init process
Jan 29 16:34:27.936727 kernel: with arguments:
Jan 29 16:34:27.936734 kernel: /init
Jan 29 16:34:27.936741 kernel: with environment:
Jan 29 16:34:27.936750 kernel: HOME=/
Jan 29 16:34:27.936756 kernel: TERM=linux
Jan 29 16:34:27.936763 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 16:34:27.936771 systemd[1]: Successfully made /usr/ read-only.
Jan 29 16:34:27.936784 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:34:27.936792 systemd[1]: Detected virtualization kvm.
Jan 29 16:34:27.936799 systemd[1]: Detected architecture x86-64.
Jan 29 16:34:27.936806 systemd[1]: Running in initrd.
Jan 29 16:34:27.936813 systemd[1]: No hostname configured, using default hostname.
Jan 29 16:34:27.936821 systemd[1]: Hostname set to <localhost>.
Jan 29 16:34:27.936828 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 16:34:27.936835 systemd[1]: Queued start job for default target initrd.target.
Jan 29 16:34:27.936845 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:34:27.936852 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:34:27.936860 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 16:34:27.936868 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:34:27.936875 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 16:34:27.936884 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 16:34:27.936894 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 16:34:27.936901 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 16:34:27.936909 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:34:27.936916 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:34:27.936923 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:34:27.936931 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:34:27.936938 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:34:27.936945 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:34:27.936952 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:34:27.936962 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:34:27.936970 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 16:34:27.937018 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 29 16:34:27.937025 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:34:27.937033 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:34:27.937040 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:34:27.937047 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:34:27.937054 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 16:34:27.937065 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:34:27.937072 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 16:34:27.937079 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 16:34:27.937087 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:34:27.937094 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:34:27.937101 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:34:27.937108 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 16:34:27.937116 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:34:27.937126 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 16:34:27.937158 systemd-journald[188]: Collecting audit messages is disabled.
Jan 29 16:34:27.937181 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 16:34:27.937189 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:34:27.937197 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:34:27.937204 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 16:34:27.937213 systemd-journald[188]: Journal started
Jan 29 16:34:27.937235 systemd-journald[188]: Runtime Journal (/run/log/journal/df5025e58bb243c9bf7751f1e29c085a) is 4.8M, max 38.3M, 33.5M free.
Jan 29 16:34:27.938556 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:34:27.895018 systemd-modules-load[189]: Inserted module 'overlay'
Jan 29 16:34:27.941827 kernel: Bridge firewalling registered
Jan 29 16:34:27.945037 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:34:27.944465 systemd-modules-load[189]: Inserted module 'br_netfilter'
Jan 29 16:34:27.946454 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:34:27.947145 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:34:27.959167 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:34:27.961148 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:34:27.964123 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:34:27.975803 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:34:27.980628 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:34:27.986125 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 16:34:27.986773 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:34:27.991146 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:34:28.002125 dracut-cmdline[221]: dracut-dracut-053
Jan 29 16:34:28.006053 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:34:28.024895 systemd-resolved[223]: Positive Trust Anchors:
Jan 29 16:34:28.025543 systemd-resolved[223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:34:28.025573 systemd-resolved[223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:34:28.030321 systemd-resolved[223]: Defaulting to hostname 'linux'.
Jan 29 16:34:28.031612 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:34:28.032125 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:34:28.079026 kernel: SCSI subsystem initialized
Jan 29 16:34:28.087005 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 16:34:28.096010 kernel: iscsi: registered transport (tcp)
Jan 29 16:34:28.115427 kernel: iscsi: registered transport (qla4xxx)
Jan 29 16:34:28.115503 kernel: QLogic iSCSI HBA Driver
Jan 29 16:34:28.151695 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:34:28.156095 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 16:34:28.178318 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 16:34:28.178381 kernel: device-mapper: uevent: version 1.0.3
Jan 29 16:34:28.178392 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 16:34:28.218019 kernel: raid6: avx2x4 gen() 32562 MB/s
Jan 29 16:34:28.235020 kernel: raid6: avx2x2 gen() 32285 MB/s
Jan 29 16:34:28.252126 kernel: raid6: avx2x1 gen() 22214 MB/s
Jan 29 16:34:28.252211 kernel: raid6: using algorithm avx2x4 gen() 32562 MB/s
Jan 29 16:34:28.270212 kernel: raid6: .... xor() 4343 MB/s, rmw enabled
Jan 29 16:34:28.270294 kernel: raid6: using avx2x2 recovery algorithm
Jan 29 16:34:28.289011 kernel: xor: automatically using best checksumming function avx
Jan 29 16:34:28.408016 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 16:34:28.420664 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:34:28.426146 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:34:28.457553 systemd-udevd[406]: Using default interface naming scheme 'v255'.
Jan 29 16:34:28.463696 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:34:28.472236 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 16:34:28.483335 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation
Jan 29 16:34:28.513869 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:34:28.522158 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:34:28.595682 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:34:28.607089 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 16:34:28.627539 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:34:28.629753 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:34:28.632045 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:34:28.633060 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:34:28.639303 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 16:34:28.651367 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:34:28.706060 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 16:34:28.706126 kernel: scsi host0: Virtio SCSI HBA
Jan 29 16:34:28.725642 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 29 16:34:28.761368 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:34:28.764274 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 16:34:28.761490 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:34:28.762655 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:34:28.763109 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:34:28.771634 kernel: ACPI: bus type USB registered
Jan 29 16:34:28.771654 kernel: AES CTR mode by8 optimization enabled
Jan 29 16:34:28.771664 kernel: usbcore: registered new interface driver usbfs
Jan 29 16:34:28.763223 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:34:28.765635 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:34:28.775227 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:34:28.779192 kernel: usbcore: registered new interface driver hub
Jan 29 16:34:28.779219 kernel: usbcore: registered new device driver usb
Jan 29 16:34:28.798016 kernel: libata version 3.00 loaded.
Jan 29 16:34:28.839011 kernel: ahci 0000:00:1f.2: version 3.0
Jan 29 16:34:28.888461 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 29 16:34:28.888478 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 29 16:34:28.888646 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 29 16:34:28.888775 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 29 16:34:28.888907 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 29 16:34:28.889056 kernel: scsi host1: ahci
Jan 29 16:34:28.889199 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 29 16:34:28.889330 kernel: scsi host2: ahci
Jan 29 16:34:28.889458 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 29 16:34:28.889600 kernel: scsi host3: ahci
Jan 29 16:34:28.889739 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 29 16:34:28.889867 kernel: sd 0:0:0:0: Power-on or device reset occurred
Jan 29 16:34:28.890078 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 29 16:34:28.890211 kernel: scsi host4: ahci
Jan 29 16:34:28.890344 kernel: hub 1-0:1.0: USB hub found
Jan 29 16:34:28.891192 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jan 29 16:34:28.891344 kernel: hub 1-0:1.0: 4 ports detected
Jan 29 16:34:28.891483 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 29 16:34:28.891643 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 29 16:34:28.891846 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Jan 29 16:34:28.892084 kernel: hub 2-0:1.0: USB hub found
Jan 29 16:34:28.892304 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 29 16:34:28.892450 kernel: hub 2-0:1.0: 4 ports detected
Jan 29 16:34:28.892604 kernel: scsi host5: ahci
Jan 29 16:34:28.892736 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 16:34:28.892747 kernel: scsi host6: ahci
Jan 29 16:34:28.892884 kernel: GPT:17805311 != 80003071
Jan 29 16:34:28.892895 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48
Jan 29 16:34:28.892905 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 16:34:28.892913 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48
Jan 29 16:34:28.892922 kernel: GPT:17805311 != 80003071
Jan 29 16:34:28.892931 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48
Jan 29 16:34:28.892939 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 16:34:28.892948 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 16:34:28.892957 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48
Jan 29 16:34:28.892968 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 29 16:34:28.893179 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48
Jan 29 16:34:28.893190 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48
Jan 29 16:34:28.891040 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:34:28.895654 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:34:28.922497 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (461)
Jan 29 16:34:28.921854 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:34:28.927993 kernel: BTRFS: device fsid b756ea5d-2d08-456f-8231-a684aa2555c3 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (450)
Jan 29 16:34:28.954011 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 29 16:34:28.967487 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 29 16:34:28.974148 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 29 16:34:28.974671 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 29 16:34:28.983692 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 29 16:34:28.989097 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 16:34:28.995046 disk-uuid[573]: Primary Header is updated.
Jan 29 16:34:28.995046 disk-uuid[573]: Secondary Entries is updated.
Jan 29 16:34:28.995046 disk-uuid[573]: Secondary Header is updated.
Jan 29 16:34:28.999992 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 16:34:29.006003 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 16:34:29.108182 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 29 16:34:29.197017 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 29 16:34:29.206883 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 29 16:34:29.206917 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 29 16:34:29.206927 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 29 16:34:29.206948 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 29 16:34:29.209841 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 29 16:34:29.209858 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 29 16:34:29.209867 kernel: ata1.00: applying bridge limits
Jan 29 16:34:29.211010 kernel: ata1.00: configured for UDMA/100
Jan 29 16:34:29.213012 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 29 16:34:29.246016 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 16:34:29.252255 kernel: usbcore: registered new interface driver usbhid
Jan 29 16:34:29.252286 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 29 16:34:29.263720 kernel: usbhid: USB HID core driver
Jan 29 16:34:29.263737 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 29 16:34:29.263751 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3
Jan 29 16:34:29.263764 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Jan 29 16:34:29.264042 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Jan 29 16:34:30.014046 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 16:34:30.015735 disk-uuid[574]: The operation has completed successfully.
Jan 29 16:34:30.068871 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 16:34:30.068993 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 16:34:30.102086 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 16:34:30.105608 sh[592]: Success
Jan 29 16:34:30.120054 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 29 16:34:30.169475 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 16:34:30.183077 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 16:34:30.185595 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 16:34:30.201174 kernel: BTRFS info (device dm-0): first mount of filesystem b756ea5d-2d08-456f-8231-a684aa2555c3
Jan 29 16:34:30.201223 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:34:30.203832 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 16:34:30.203850 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 16:34:30.206032 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 16:34:30.213003 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 29 16:34:30.215265 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 16:34:30.216271 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 16:34:30.222155 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 16:34:30.224110 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 16:34:30.241078 kernel: BTRFS info (device sda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:34:30.241124 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:34:30.241141 kernel: BTRFS info (device sda6): using free space tree
Jan 29 16:34:30.246676 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 29 16:34:30.246709 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 16:34:30.258283 kernel: BTRFS info (device sda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:34:30.257940 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 16:34:30.264348 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 16:34:30.271128 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 16:34:30.343378 ignition[689]: Ignition 2.20.0
Jan 29 16:34:30.343389 ignition[689]: Stage: fetch-offline
Jan 29 16:34:30.343423 ignition[689]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:34:30.345465 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 16:34:30.343432 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 16:34:30.343529 ignition[689]: parsed url from cmdline: ""
Jan 29 16:34:30.348615 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:34:30.343533 ignition[689]: no config URL provided
Jan 29 16:34:30.343538 ignition[689]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 16:34:30.343547 ignition[689]: no config at "/usr/lib/ignition/user.ign"
Jan 29 16:34:30.343552 ignition[689]: failed to fetch config: resource requires networking
Jan 29 16:34:30.343707 ignition[689]: Ignition finished successfully
Jan 29 16:34:30.356104 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:34:30.379566 systemd-networkd[781]: lo: Link UP
Jan 29 16:34:30.379576 systemd-networkd[781]: lo: Gained carrier
Jan 29 16:34:30.382123 systemd-networkd[781]: Enumeration completed
Jan 29 16:34:30.382279 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:34:30.382785 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:34:30.382790 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:34:30.383430 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:34:30.383434 systemd-networkd[781]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:34:30.384073 systemd-networkd[781]: eth0: Link UP
Jan 29 16:34:30.384077 systemd-networkd[781]: eth0: Gained carrier
Jan 29 16:34:30.384083 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:34:30.384427 systemd[1]: Reached target network.target - Network.
Jan 29 16:34:30.387050 systemd-networkd[781]: eth1: Link UP
Jan 29 16:34:30.387054 systemd-networkd[781]: eth1: Gained carrier
Jan 29 16:34:30.387062 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:34:30.391137 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 16:34:30.401108 ignition[784]: Ignition 2.20.0
Jan 29 16:34:30.401118 ignition[784]: Stage: fetch
Jan 29 16:34:30.401244 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:34:30.401254 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 16:34:30.401331 ignition[784]: parsed url from cmdline: ""
Jan 29 16:34:30.401335 ignition[784]: no config URL provided
Jan 29 16:34:30.401339 ignition[784]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 16:34:30.401348 ignition[784]: no config at "/usr/lib/ignition/user.ign"
Jan 29 16:34:30.401366 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jan 29 16:34:30.401516 ignition[784]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 29 16:34:30.435051 systemd-networkd[781]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 16:34:30.449024 systemd-networkd[781]: eth0: DHCPv4 address 142.132.231.50/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 29 16:34:30.601672 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jan 29 16:34:30.605845 ignition[784]: GET result: OK
Jan 29 16:34:30.605922 ignition[784]: parsing config with SHA512: 5194ce29da159b30f6dbf175501aa009c579df8df51a64e810d57206b443d573b9aeda8f4092dcb56a5239ae84e6c4bd5423d031d42c9707e5ae634aa6b9a420
Jan 29 16:34:30.610367 unknown[784]: fetched base config from "system"
Jan 29 16:34:30.610926 ignition[784]: fetch: fetch complete
Jan 29 16:34:30.610387 unknown[784]: fetched base config from "system"
Jan 29 16:34:30.610936 ignition[784]: fetch: fetch passed
Jan 29 16:34:30.610395 unknown[784]: fetched user config from "hetzner"
Jan 29 16:34:30.611005 ignition[784]: Ignition finished successfully
Jan 29 16:34:30.614074 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 16:34:30.621200 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 16:34:30.634247 ignition[791]: Ignition 2.20.0
Jan 29 16:34:30.634263 ignition[791]: Stage: kargs
Jan 29 16:34:30.634422 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:34:30.634433 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 16:34:30.635212 ignition[791]: kargs: kargs passed
Jan 29 16:34:30.636586 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 16:34:30.635254 ignition[791]: Ignition finished successfully
Jan 29 16:34:30.646117 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 16:34:30.663623 ignition[798]: Ignition 2.20.0
Jan 29 16:34:30.663638 ignition[798]: Stage: disks
Jan 29 16:34:30.663863 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:34:30.663877 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 16:34:30.664654 ignition[798]: disks: disks passed
Jan 29 16:34:30.666047 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 16:34:30.664705 ignition[798]: Ignition finished successfully
Jan 29 16:34:30.667497 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 16:34:30.668555 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 16:34:30.669554 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:34:30.670794 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:34:30.671950 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:34:30.677120 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 16:34:30.691506 systemd-fsck[807]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 29 16:34:30.693523 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 16:34:31.210134 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 16:34:31.316291 kernel: EXT4-fs (sda9): mounted filesystem 93ea9bb6-d6ba-4a18-a828-f0002683a7b4 r/w with ordered data mode. Quota mode: none.
Jan 29 16:34:31.316899 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 16:34:31.317802 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:34:31.329078 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:34:31.332064 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 16:34:31.336120 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 29 16:34:31.337610 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 16:34:31.338663 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:34:31.342177 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 16:34:31.353242 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (815)
Jan 29 16:34:31.353262 kernel: BTRFS info (device sda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:34:31.353273 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:34:31.353282 kernel: BTRFS info (device sda6): using free space tree
Jan 29 16:34:31.353297 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 29 16:34:31.353306 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 16:34:31.354594 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:34:31.365113 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 16:34:31.406312 coreos-metadata[817]: Jan 29 16:34:31.406 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jan 29 16:34:31.407395 coreos-metadata[817]: Jan 29 16:34:31.406 INFO Fetch successful
Jan 29 16:34:31.409326 coreos-metadata[817]: Jan 29 16:34:31.408 INFO wrote hostname ci-4230-0-0-6-6baf09a0d0 to /sysroot/etc/hostname
Jan 29 16:34:31.410206 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 16:34:31.410093 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 16:34:31.416023 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory
Jan 29 16:34:31.420127 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 16:34:31.424000 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 16:34:31.436095 systemd-networkd[781]: eth1: Gained IPv6LL
Jan 29 16:34:31.511081 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 16:34:31.517068 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 16:34:31.520329 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 16:34:31.525998 kernel: BTRFS info (device sda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:34:31.547445 ignition[933]: INFO : Ignition 2.20.0
Jan 29 16:34:31.547445 ignition[933]: INFO : Stage: mount
Jan 29 16:34:31.549137 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:34:31.549137 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 16:34:31.549137 ignition[933]: INFO : mount: mount passed
Jan 29 16:34:31.549137 ignition[933]: INFO : Ignition finished successfully
Jan 29 16:34:31.550510 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 16:34:31.558095 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 16:34:31.559484 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 16:34:32.199705 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 16:34:32.204131 systemd-networkd[781]: eth0: Gained IPv6LL
Jan 29 16:34:32.210228 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:34:32.220302 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (945)
Jan 29 16:34:32.220343 kernel: BTRFS info (device sda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:34:32.222922 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:34:32.222949 kernel: BTRFS info (device sda6): using free space tree
Jan 29 16:34:32.228472 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 29 16:34:32.228500 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 16:34:32.231093 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:34:32.252561 ignition[961]: INFO : Ignition 2.20.0
Jan 29 16:34:32.253561 ignition[961]: INFO : Stage: files
Jan 29 16:34:32.254339 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:34:32.254339 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 16:34:32.255628 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 16:34:32.256792 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 16:34:32.256792 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 16:34:32.260481 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 16:34:32.261236 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 16:34:32.262080 unknown[961]: wrote ssh authorized keys file for user: core
Jan 29 16:34:32.263111 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 16:34:32.264694 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 16:34:32.266244 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 29 16:34:32.377687 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 16:34:32.797931 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 16:34:32.797931 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 16:34:32.797931 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 29 16:34:33.419336 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 29 16:34:33.555416 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 16:34:33.556725 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 16:34:33.556725 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 16:34:33.556725 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 16:34:33.556725 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 16:34:33.556725 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 16:34:33.556725 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 16:34:33.556725 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 16:34:33.556725 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 16:34:33.569812 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 16:34:33.569812 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 16:34:33.569812 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 16:34:33.569812 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 16:34:33.569812 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 16:34:33.569812 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 29 16:34:33.935413 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 29 16:34:34.341549 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 16:34:34.341549 ignition[961]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 29 16:34:34.343382 ignition[961]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 16:34:34.343382 ignition[961]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 16:34:34.343382 ignition[961]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 29 16:34:34.343382 ignition[961]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 29 16:34:34.343382 ignition[961]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 29 16:34:34.343382 ignition[961]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 29 16:34:34.343382 ignition[961]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 29 16:34:34.343382 ignition[961]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 16:34:34.343382 ignition[961]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 16:34:34.343382 ignition[961]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 16:34:34.343382 ignition[961]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 16:34:34.343382 ignition[961]: INFO : files: files passed
Jan 29 16:34:34.343382 ignition[961]: INFO : Ignition finished successfully
Jan 29 16:34:34.344849 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 16:34:34.353629 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 16:34:34.356390 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 16:34:34.357201 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 16:34:34.357308 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 16:34:34.368195 initrd-setup-root-after-ignition[991]: grep:
Jan 29 16:34:34.368195 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:34:34.370955 initrd-setup-root-after-ignition[991]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:34:34.370955 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:34:34.370467 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 16:34:34.371209 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 16:34:34.378122 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 16:34:34.407104 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 16:34:34.407221 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 16:34:34.408607 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 16:34:34.409785 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 16:34:34.410287 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 16:34:34.416091 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 16:34:34.426425 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 16:34:34.432103 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 16:34:34.440349 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:34:34.441171 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:34:34.442428 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 16:34:34.443444 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 16:34:34.443570 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 16:34:34.444674 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 16:34:34.445281 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 16:34:34.446302 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 16:34:34.447234 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:34:34.448166 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 16:34:34.449166 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 16:34:34.450246 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:34:34.451320 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 16:34:34.452302 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 16:34:34.453346 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 16:34:34.454290 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 16:34:34.454383 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:34:34.455642 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:34:34.456400 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:34:34.457360 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 16:34:34.459060 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:34:34.459634 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 16:34:34.459729 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:34:34.461071 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 16:34:34.461214 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 16:34:34.462279 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 16:34:34.462425 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 16:34:34.463512 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 29 16:34:34.463733 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 16:34:34.470197 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 16:34:34.471569 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 16:34:34.474040 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 16:34:34.474746 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:34:34.479184 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 16:34:34.479339 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:34:34.484873 ignition[1015]: INFO : Ignition 2.20.0
Jan 29 16:34:34.484873 ignition[1015]: INFO : Stage: umount
Jan 29 16:34:34.487640 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:34:34.487640 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 16:34:34.489238 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 16:34:34.491174 ignition[1015]: INFO : umount: umount passed
Jan 29 16:34:34.491174 ignition[1015]: INFO : Ignition finished successfully
Jan 29 16:34:34.490027 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 16:34:34.492055 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 16:34:34.492174 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 16:34:34.495658 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 16:34:34.495732 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 16:34:34.496509 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 16:34:34.496578 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 16:34:34.498818 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 29 16:34:34.498864 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 29 16:34:34.499717 systemd[1]: Stopped target network.target - Network.
Jan 29 16:34:34.500122 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 16:34:34.500173 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 16:34:34.500778 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 16:34:34.503267 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 16:34:34.507042 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:34:34.507601 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 16:34:34.508244 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 16:34:34.510196 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 16:34:34.510256 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:34:34.510709 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 16:34:34.510745 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:34:34.511189 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 16:34:34.511287 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 16:34:34.511722 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 16:34:34.511768 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 16:34:34.512898 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 16:34:34.513826 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 16:34:34.517845 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 16:34:34.518478 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 16:34:34.518604 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 16:34:34.522650 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 29 16:34:34.523092 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 16:34:34.523195 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 16:34:34.524604 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 16:34:34.524682 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 16:34:34.525841 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 16:34:34.525890 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:34:34.528835 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:34:34.529183 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 16:34:34.529316 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 16:34:34.531012 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 29 16:34:34.531532 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 16:34:34.531622 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:34:34.541471 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 16:34:34.541947 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 16:34:34.542026 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:34:34.542636 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 16:34:34.542683 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:34:34.543287 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 16:34:34.543332 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:34:34.544135 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:34:34.547661 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 29 16:34:34.565714 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 16:34:34.566397 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:34:34.567203 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 16:34:34.567276 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:34:34.567819 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 16:34:34.567855 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:34:34.568934 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 16:34:34.569021 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:34:34.570402 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 16:34:34.570448 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:34:34.571577 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:34:34.571625 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:34:34.578167 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 16:34:34.578653 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 16:34:34.578705 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:34:34.581079 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 29 16:34:34.581126 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:34:34.581816 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 16:34:34.581862 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:34:34.582379 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:34:34.582424 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:34:34.584355 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 16:34:34.584455 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 16:34:34.585418 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 16:34:34.585506 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 16:34:34.587249 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 16:34:34.596394 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 16:34:34.603096 systemd[1]: Switching root.
Jan 29 16:34:34.642416 systemd-journald[188]: Journal stopped
Jan 29 16:34:35.707683 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
Jan 29 16:34:35.707745 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 16:34:35.707762 kernel: SELinux: policy capability open_perms=1
Jan 29 16:34:35.707772 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 16:34:35.707781 kernel: SELinux: policy capability always_check_network=0
Jan 29 16:34:35.707791 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 16:34:35.707808 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 16:34:35.707817 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 16:34:35.707830 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 16:34:35.707840 kernel: audit: type=1403 audit(1738168474.748:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 16:34:35.707851 systemd[1]: Successfully loaded SELinux policy in 43.339ms.
Jan 29 16:34:35.707873 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.373ms.
Jan 29 16:34:35.707885 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:34:35.707897 systemd[1]: Detected virtualization kvm.
Jan 29 16:34:35.707907 systemd[1]: Detected architecture x86-64.
Jan 29 16:34:35.707917 systemd[1]: Detected first boot.
Jan 29 16:34:35.707927 systemd[1]: Hostname set to <ci-4230-0-0-6-6baf09a0d0>.
Jan 29 16:34:35.707938 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 16:34:35.707948 zram_generator::config[1059]: No configuration found.
Jan 29 16:34:35.707961 kernel: Guest personality initialized and is inactive
Jan 29 16:34:35.707971 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jan 29 16:34:35.708233 kernel: Initialized host personality
Jan 29 16:34:35.708246 kernel: NET: Registered PF_VSOCK protocol family
Jan 29 16:34:35.708257 systemd[1]: Populated /etc with preset unit settings.
Jan 29 16:34:35.708269 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 29 16:34:35.708280 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 16:34:35.708290 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 16:34:35.708300 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 16:34:35.708319 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 16:34:35.708330 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 16:34:35.708340 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 16:34:35.708350 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 16:34:35.708361 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 16:34:35.708371 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 16:34:35.708381 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 16:34:35.708391 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 16:34:35.708404 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:34:35.708414 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:34:35.708425 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 16:34:35.708665 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 16:34:35.708679 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 16:34:35.708690 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:34:35.708704 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 16:34:35.708715 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:34:35.708726 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 16:34:35.708737 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 16:34:35.708751 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:34:35.708764 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 16:34:35.708776 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:34:35.708787 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:34:35.708798 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:34:35.708808 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:34:35.708819 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 16:34:35.708829 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 16:34:35.708839 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 29 16:34:35.708849 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:34:35.710024 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:34:35.710043 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:34:35.710054 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 16:34:35.710065 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 16:34:35.710076 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 16:34:35.710087 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 16:34:35.710098 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:34:35.710108 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 16:34:35.710119 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 16:34:35.710129 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 16:34:35.710157 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 16:34:35.710168 systemd[1]: Reached target machines.target - Containers.
Jan 29 16:34:35.710179 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 16:34:35.710190 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:34:35.710200 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:34:35.710210 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 16:34:35.710221 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:34:35.710231 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 16:34:35.710244 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:34:35.710254 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 16:34:35.710265 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:34:35.710275 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 16:34:35.710285 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 16:34:35.710296 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 16:34:35.710306 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 16:34:35.710317 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 16:34:35.710328 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:34:35.710341 kernel: fuse: init (API version 7.39)
Jan 29 16:34:35.710352 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:34:35.710362 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:34:35.710373 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 16:34:35.710383 kernel: loop: module loaded
Jan 29 16:34:35.710393 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 16:34:35.710404 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 29 16:34:35.710438 systemd-journald[1143]: Collecting audit messages is disabled.
Jan 29 16:34:35.710460 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:34:35.710471 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 16:34:35.710482 systemd[1]: Stopped verity-setup.service.
Jan 29 16:34:35.710492 systemd-journald[1143]: Journal started
Jan 29 16:34:35.710514 systemd-journald[1143]: Runtime Journal (/run/log/journal/df5025e58bb243c9bf7751f1e29c085a) is 4.8M, max 38.3M, 33.5M free.
Jan 29 16:34:35.438541 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 16:34:35.451610 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 29 16:34:35.452320 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 16:34:35.732791 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:34:35.732861 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:34:35.724815 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 16:34:35.725398 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 16:34:35.725934 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 16:34:35.726482 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 16:34:35.729175 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 16:34:35.729762 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 16:34:35.730544 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 16:34:35.731334 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:34:35.732105 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 16:34:35.732339 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 16:34:35.734253 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:34:35.734864 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:34:35.736333 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:34:35.736518 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:34:35.739237 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 16:34:35.739444 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 16:34:35.740182 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:34:35.740398 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:34:35.741185 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 16:34:35.741963 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 16:34:35.749451 kernel: ACPI: bus type drm_connector registered
Jan 29 16:34:35.751026 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:34:35.752814 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 16:34:35.753123 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 16:34:35.767693 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 29 16:34:35.773120 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 16:34:35.780239 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 16:34:35.786033 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 16:34:35.786661 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 16:34:35.786753 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:34:35.789423 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 29 16:34:35.796167 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 16:34:35.798495 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 16:34:35.800175 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:34:35.801807 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 16:34:35.809823 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 16:34:35.810860 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:34:35.813117 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 16:34:35.814152 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:34:35.824145 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:34:35.828687 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 16:34:35.849062 systemd-journald[1143]: Time spent on flushing to /var/log/journal/df5025e58bb243c9bf7751f1e29c085a is 77.045ms for 1150 entries.
Jan 29 16:34:35.849062 systemd-journald[1143]: System Journal (/var/log/journal/df5025e58bb243c9bf7751f1e29c085a) is 8M, max 584.8M, 576.8M free.
Jan 29 16:34:35.950379 systemd-journald[1143]: Received client request to flush runtime journal.
Jan 29 16:34:35.950433 kernel: loop0: detected capacity change from 0 to 205544
Jan 29 16:34:35.837209 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 16:34:35.842284 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 16:34:35.842879 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 16:34:35.845314 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 16:34:35.894227 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 16:34:35.897776 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 16:34:35.905180 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 29 16:34:35.920462 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:34:35.951274 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 29 16:34:35.954466 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 16:34:35.956303 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 16:34:35.963117 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Jan 29 16:34:35.963155 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Jan 29 16:34:35.978708 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:34:35.985972 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 16:34:35.988006 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:34:36.003188 kernel: loop1: detected capacity change from 0 to 138176
Jan 29 16:34:35.999668 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 16:34:36.016423 udevadm[1202]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 29 16:34:36.050137 kernel: loop2: detected capacity change from 0 to 8
Jan 29 16:34:36.068562 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 16:34:36.083188 kernel: loop3: detected capacity change from 0 to 147912
Jan 29 16:34:36.079521 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:34:36.129047 kernel: loop4: detected capacity change from 0 to 205544
Jan 29 16:34:36.130669 systemd-tmpfiles[1210]: ACLs are not supported, ignoring.
Jan 29 16:34:36.131089 systemd-tmpfiles[1210]: ACLs are not supported, ignoring.
Jan 29 16:34:36.139766 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:34:36.159046 kernel: loop5: detected capacity change from 0 to 138176
Jan 29 16:34:36.188023 kernel: loop6: detected capacity change from 0 to 8
Jan 29 16:34:36.191034 kernel: loop7: detected capacity change from 0 to 147912
Jan 29 16:34:36.211115 (sd-merge)[1213]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jan 29 16:34:36.211815 (sd-merge)[1213]: Merged extensions into '/usr'.
Jan 29 16:34:36.216552 systemd[1]: Reload requested from client PID 1185 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 16:34:36.216663 systemd[1]: Reloading...
Jan 29 16:34:36.313136 zram_generator::config[1242]: No configuration found.
Jan 29 16:34:36.447330 ldconfig[1180]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 16:34:36.465744 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:34:36.534679 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 16:34:36.535292 systemd[1]: Reloading finished in 317 ms.
Jan 29 16:34:36.554626 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 16:34:36.555759 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 16:34:36.570220 systemd[1]: Starting ensure-sysext.service...
Jan 29 16:34:36.574382 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:34:36.596515 systemd[1]: Reload requested from client PID 1285 ('systemctl') (unit ensure-sysext.service)...
Jan 29 16:34:36.596530 systemd[1]: Reloading...
Jan 29 16:34:36.612332 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 16:34:36.612941 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 16:34:36.613887 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 16:34:36.614235 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Jan 29 16:34:36.614366 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Jan 29 16:34:36.618103 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 16:34:36.618190 systemd-tmpfiles[1286]: Skipping /boot
Jan 29 16:34:36.632695 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 16:34:36.632718 systemd-tmpfiles[1286]: Skipping /boot
Jan 29 16:34:36.678007 zram_generator::config[1315]: No configuration found.
Jan 29 16:34:36.787149 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:34:36.848966 systemd[1]: Reloading finished in 252 ms.
Jan 29 16:34:36.859952 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 16:34:36.872392 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:34:36.884272 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 16:34:36.889052 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 16:34:36.891287 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 16:34:36.895847 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:34:36.900211 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:34:36.905166 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 16:34:36.908782 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:34:36.908935 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:34:36.918268 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:34:36.926588 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:34:36.932044 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:34:36.932709 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:34:36.932807 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:34:36.932912 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:34:36.934111 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:34:36.934307 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:34:36.935197 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:34:36.935396 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:34:36.941273 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:34:36.949302 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 16:34:36.955831 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:34:36.956243 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:34:36.966274 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:34:36.976436 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:34:36.977164 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:34:36.977386 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:34:36.977593 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:34:36.980021 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 16:34:36.981383 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:34:36.982051 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:34:36.983690 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:34:36.984351 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:34:36.987562 systemd-udevd[1367]: Using default interface naming scheme 'v255'. Jan 29 16:34:36.994410 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 16:34:36.995614 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:34:36.995892 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:34:37.002070 systemd[1]: Finished ensure-sysext.service. Jan 29 16:34:37.005288 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:34:37.005515 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:34:37.010624 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:34:37.014120 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:34:37.020151 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:34:37.020694 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:34:37.020724 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:34:37.020765 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:34:37.026184 augenrules[1402]: No rules Jan 29 16:34:37.031254 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 16:34:37.036299 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 16:34:37.038534 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:34:37.039166 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:34:37.039377 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:34:37.040165 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:34:37.040363 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:34:37.050389 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:34:37.051064 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:34:37.052253 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:34:37.053082 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 29 16:34:37.054367 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:34:37.057786 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 16:34:37.064483 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 16:34:37.065441 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:34:37.071179 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 16:34:37.074008 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 16:34:37.076784 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 16:34:37.194015 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 16:34:37.234077 systemd-networkd[1421]: lo: Link UP Jan 29 16:34:37.234090 systemd-networkd[1421]: lo: Gained carrier Jan 29 16:34:37.235877 systemd-networkd[1421]: Enumeration completed Jan 29 16:34:37.236061 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 16:34:37.243141 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 29 16:34:37.251170 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 16:34:37.270177 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 29 16:34:37.276310 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 29 16:34:37.275841 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 16:34:37.276902 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 16:34:37.279820 systemd-resolved[1364]: Positive Trust Anchors: Jan 29 16:34:37.279837 systemd-resolved[1364]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 16:34:37.279881 systemd-resolved[1364]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 16:34:37.288341 systemd-resolved[1364]: Using system hostname 'ci-4230-0-0-6-6baf09a0d0'. Jan 29 16:34:37.290062 kernel: ACPI: button: Power Button [PWRF] Jan 29 16:34:37.293671 systemd-networkd[1421]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:34:37.293994 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 16:34:37.294140 systemd-networkd[1421]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:34:37.294597 systemd[1]: Reached target network.target - Network. Jan 29 16:34:37.295187 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 29 16:34:37.296122 systemd-networkd[1421]: eth0: Link UP Jan 29 16:34:37.296177 systemd-networkd[1421]: eth0: Gained carrier Jan 29 16:34:37.296438 systemd-networkd[1421]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:34:37.308550 systemd-networkd[1421]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:34:37.308562 systemd-networkd[1421]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:34:37.309029 systemd-networkd[1421]: eth1: Link UP Jan 29 16:34:37.309049 systemd-networkd[1421]: eth1: Gained carrier Jan 29 16:34:37.309061 systemd-networkd[1421]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:34:37.324018 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 16:34:37.331723 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 29 16:34:37.331773 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:34:37.331872 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:34:37.342432 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:34:37.345278 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:34:37.349138 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:34:37.349701 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:34:37.349728 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:34:37.349752 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 16:34:37.349763 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:34:37.350194 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:34:37.350386 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:34:37.357433 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:34:37.357643 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:34:37.358107 systemd-networkd[1421]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 16:34:37.359492 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:34:37.360151 systemd-networkd[1421]: eth0: DHCPv4 address 142.132.231.50/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 29 16:34:37.361330 systemd-timesyncd[1403]: Network configuration changed, trying to establish connection. 
Jan 29 16:34:37.367994 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jan 29 16:34:37.371811 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jan 29 16:34:37.374152 kernel: Console: switching to colour dummy device 80x25 Jan 29 16:34:37.374190 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 29 16:34:37.374206 kernel: [drm] features: -context_init Jan 29 16:34:37.376426 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:34:37.379603 kernel: [drm] number of scanouts: 1 Jan 29 16:34:37.379632 kernel: [drm] number of cap sets: 0 Jan 29 16:34:37.378275 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:34:37.379418 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:34:37.390998 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1422) Jan 29 16:34:37.394003 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Jan 29 16:34:37.396667 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 29 16:34:37.396705 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 29 16:34:37.399882 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 29 16:34:37.400110 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 29 16:34:37.400258 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 29 16:34:37.400276 kernel: Console: switching to colour frame buffer device 160x50 Jan 29 16:34:37.420008 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 29 16:34:38.681874 systemd-resolved[1364]: Clock change detected. Flushing caches. Jan 29 16:34:38.681982 systemd-timesyncd[1403]: Contacted time server 88.99.30.99:123 (0.flatcar.pool.ntp.org). Jan 29 16:34:38.682040 systemd-timesyncd[1403]: Initial clock synchronization to Wed 2025-01-29 16:34:38.681437 UTC. Jan 29 16:34:38.697006 kernel: EDAC MC: Ver: 3.0.0 Jan 29 16:34:38.709978 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:34:38.769484 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:34:38.769925 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:34:38.776507 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 29 16:34:38.791962 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 16:34:38.794402 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:34:38.806919 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 16:34:38.837220 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:34:38.922530 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 16:34:38.928093 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 16:34:38.939140 lvm[1484]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:34:38.975174 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
Jan 29 16:34:38.975601 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:34:38.975695 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 16:34:38.975955 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 16:34:38.976113 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 16:34:38.976495 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 16:34:38.976728 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 16:34:38.977054 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 16:34:38.977963 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 16:34:38.978011 systemd[1]: Reached target paths.target - Path Units. Jan 29 16:34:38.978253 systemd[1]: Reached target timers.target - Timer Units. Jan 29 16:34:38.982334 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 16:34:38.984513 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 16:34:38.988685 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 29 16:34:38.990651 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 29 16:34:38.991202 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 29 16:34:39.000639 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 16:34:39.001761 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 29 16:34:39.007993 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 16:34:39.018009 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 16:34:39.018724 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 16:34:39.019296 systemd[1]: Reached target basic.target - Basic System. Jan 29 16:34:39.020734 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:34:39.020782 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:34:39.024144 lvm[1488]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:34:39.026935 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 16:34:39.039978 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 16:34:39.043269 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 16:34:39.049656 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 16:34:39.053942 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 16:34:39.056584 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 16:34:39.062368 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 16:34:39.067517 jq[1494]: false Jan 29 16:34:39.073353 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 16:34:39.088972 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. 
Jan 29 16:34:39.092433 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 16:34:39.099740 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 16:34:39.110947 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 16:34:39.112760 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 16:34:39.113341 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 16:34:39.114251 dbus-daemon[1491]: [system] SELinux support is enabled Jan 29 16:34:39.117989 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 16:34:39.122619 extend-filesystems[1495]: Found loop4 Jan 29 16:34:39.122619 extend-filesystems[1495]: Found loop5 Jan 29 16:34:39.122619 extend-filesystems[1495]: Found loop6 Jan 29 16:34:39.122619 extend-filesystems[1495]: Found loop7 Jan 29 16:34:39.122619 extend-filesystems[1495]: Found sda Jan 29 16:34:39.122619 extend-filesystems[1495]: Found sda1 Jan 29 16:34:39.122619 extend-filesystems[1495]: Found sda2 Jan 29 16:34:39.122619 extend-filesystems[1495]: Found sda3 Jan 29 16:34:39.122619 extend-filesystems[1495]: Found usr Jan 29 16:34:39.122619 extend-filesystems[1495]: Found sda4 Jan 29 16:34:39.122619 extend-filesystems[1495]: Found sda6 Jan 29 16:34:39.122619 extend-filesystems[1495]: Found sda7 Jan 29 16:34:39.122619 extend-filesystems[1495]: Found sda9 Jan 29 16:34:39.122619 extend-filesystems[1495]: Checking size of /dev/sda9 Jan 29 16:34:39.183033 coreos-metadata[1490]: Jan 29 16:34:39.124 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 29 16:34:39.183033 coreos-metadata[1490]: Jan 29 16:34:39.129 INFO Fetch successful Jan 29 16:34:39.183033 coreos-metadata[1490]: Jan 29 16:34:39.129 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 29 16:34:39.183033 coreos-metadata[1490]: Jan 29 16:34:39.131 INFO Fetch successful Jan 29 16:34:39.183204 extend-filesystems[1495]: Resized partition /dev/sda9 Jan 29 16:34:39.125902 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 16:34:39.132266 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 16:34:39.141444 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 16:34:39.185747 update_engine[1510]: I20250129 16:34:39.146152 1510 main.cc:92] Flatcar Update Engine starting Jan 29 16:34:39.185747 update_engine[1510]: I20250129 16:34:39.148727 1510 update_check_scheduler.cc:74] Next update check in 2m13s Jan 29 16:34:39.162208 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 16:34:39.192022 jq[1512]: true Jan 29 16:34:39.192218 extend-filesystems[1521]: resize2fs 1.47.1 (20-May-2024) Jan 29 16:34:39.162901 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 16:34:39.163209 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 16:34:39.163895 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 16:34:39.175832 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 16:34:39.176076 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 29 16:34:39.205711 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 29 16:34:39.211223 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 16:34:39.211265 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 16:34:39.213870 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 16:34:39.213895 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 16:34:39.216592 systemd[1]: Started update-engine.service - Update Engine. Jan 29 16:34:39.222224 (ntainerd)[1525]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 16:34:39.226064 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 16:34:39.240660 jq[1524]: true Jan 29 16:34:39.265881 tar[1520]: linux-amd64/helm Jan 29 16:34:39.295681 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 16:34:39.300554 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 16:34:39.330705 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1430) Jan 29 16:34:39.330658 systemd-logind[1509]: New seat seat0. Jan 29 16:34:39.337968 systemd-logind[1509]: Watching system buttons on /dev/input/event2 (Power Button) Jan 29 16:34:39.338110 systemd-logind[1509]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 16:34:39.338749 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 16:34:39.391138 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 29 16:34:39.409928 bash[1560]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:34:39.410661 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 16:34:39.415281 extend-filesystems[1521]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 29 16:34:39.415281 extend-filesystems[1521]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 29 16:34:39.415281 extend-filesystems[1521]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 29 16:34:39.429207 extend-filesystems[1495]: Resized filesystem in /dev/sda9 Jan 29 16:34:39.429207 extend-filesystems[1495]: Found sr0 Jan 29 16:34:39.423004 systemd[1]: Starting sshkeys.service... Jan 29 16:34:39.427133 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 16:34:39.444104 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 16:34:39.492910 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 16:34:39.502479 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 29 16:34:39.536579 containerd[1525]: time="2025-01-29T16:34:39.536461601Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 16:34:39.567091 coreos-metadata[1570]: Jan 29 16:34:39.566 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 29 16:34:39.571829 coreos-metadata[1570]: Jan 29 16:34:39.569 INFO Fetch successful Jan 29 16:34:39.573094 unknown[1570]: wrote ssh authorized keys file for user: core Jan 29 16:34:39.597056 containerd[1525]: time="2025-01-29T16:34:39.597012710Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:34:39.601609 containerd[1525]: time="2025-01-29T16:34:39.600218964Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:34:39.601609 containerd[1525]: time="2025-01-29T16:34:39.600246696Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 16:34:39.601609 containerd[1525]: time="2025-01-29T16:34:39.600262586Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 16:34:39.601609 containerd[1525]: time="2025-01-29T16:34:39.600409582Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 16:34:39.601609 containerd[1525]: time="2025-01-29T16:34:39.600424249Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 16:34:39.601609 containerd[1525]: time="2025-01-29T16:34:39.600491325Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:34:39.601609 containerd[1525]: time="2025-01-29T16:34:39.600502296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:34:39.601609 containerd[1525]: time="2025-01-29T16:34:39.600732127Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:34:39.601609 containerd[1525]: time="2025-01-29T16:34:39.600745532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 16:34:39.601609 containerd[1525]: time="2025-01-29T16:34:39.600756522Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:34:39.601609 containerd[1525]: time="2025-01-29T16:34:39.600765018Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 16:34:39.602438 containerd[1525]: time="2025-01-29T16:34:39.601966182Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:34:39.602715 containerd[1525]: time="2025-01-29T16:34:39.602690100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 29 16:34:39.602907 containerd[1525]: time="2025-01-29T16:34:39.602870458Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:34:39.602907 containerd[1525]: time="2025-01-29T16:34:39.602892720Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 16:34:39.603018 containerd[1525]: time="2025-01-29T16:34:39.602983310Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 16:34:39.603065 containerd[1525]: time="2025-01-29T16:34:39.603043944Z" level=info msg="metadata content store policy set" policy=shared Jan 29 16:34:39.607739 update-ssh-keys[1576]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:34:39.610248 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 16:34:39.616992 systemd[1]: Finished sshkeys.service. Jan 29 16:34:39.620355 containerd[1525]: time="2025-01-29T16:34:39.620317617Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 16:34:39.620602 containerd[1525]: time="2025-01-29T16:34:39.620556145Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 16:34:39.620626 containerd[1525]: time="2025-01-29T16:34:39.620608443Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 16:34:39.620656 containerd[1525]: time="2025-01-29T16:34:39.620627449Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 16:34:39.620704 containerd[1525]: time="2025-01-29T16:34:39.620682522Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 16:34:39.621049 containerd[1525]: time="2025-01-29T16:34:39.621015697Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 16:34:39.621323 containerd[1525]: time="2025-01-29T16:34:39.621302014Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 16:34:39.621824 containerd[1525]: time="2025-01-29T16:34:39.621428742Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 16:34:39.621824 containerd[1525]: time="2025-01-29T16:34:39.621465721Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 16:34:39.621824 containerd[1525]: time="2025-01-29T16:34:39.621479657Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 16:34:39.621824 containerd[1525]: time="2025-01-29T16:34:39.621492642Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 16:34:39.621824 containerd[1525]: time="2025-01-29T16:34:39.621504905Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 16:34:39.621824 containerd[1525]: time="2025-01-29T16:34:39.621517699Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jan 29 16:34:39.622153 containerd[1525]: time="2025-01-29T16:34:39.621528889Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 16:34:39.622182 containerd[1525]: time="2025-01-29T16:34:39.622156056Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 16:34:39.622182 containerd[1525]: time="2025-01-29T16:34:39.622169512Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 16:34:39.622182 containerd[1525]: time="2025-01-29T16:34:39.622180352Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 16:34:39.622238 containerd[1525]: time="2025-01-29T16:34:39.622190551Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 16:34:39.622238 containerd[1525]: time="2025-01-29T16:34:39.622227941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 16:34:39.622274 containerd[1525]: time="2025-01-29T16:34:39.622240395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 16:34:39.622274 containerd[1525]: time="2025-01-29T16:34:39.622252578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 16:34:39.622274 containerd[1525]: time="2025-01-29T16:34:39.622263638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 16:34:39.622333 containerd[1525]: time="2025-01-29T16:34:39.622273967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 16:34:39.622333 containerd[1525]: time="2025-01-29T16:34:39.622308792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 16:34:39.622333 containerd[1525]: time="2025-01-29T16:34:39.622324853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 16:34:39.622378 containerd[1525]: time="2025-01-29T16:34:39.622338959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 16:34:39.622378 containerd[1525]: time="2025-01-29T16:34:39.622349780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 16:34:39.622422 containerd[1525]: time="2025-01-29T16:34:39.622380006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 16:34:39.622422 containerd[1525]: time="2025-01-29T16:34:39.622392059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 16:34:39.622422 containerd[1525]: time="2025-01-29T16:34:39.622401617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 16:34:39.622422 containerd[1525]: time="2025-01-29T16:34:39.622415413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 16:34:39.622490 containerd[1525]: time="2025-01-29T16:34:39.622427004Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jan 29 16:34:39.623505 containerd[1525]: time="2025-01-29T16:34:39.622726376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 16:34:39.623505 containerd[1525]: time="2025-01-29T16:34:39.622747787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 16:34:39.623505 containerd[1525]: time="2025-01-29T16:34:39.622757274Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 16:34:39.623505 containerd[1525]: time="2025-01-29T16:34:39.622843676Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 16:34:39.623505 containerd[1525]: time="2025-01-29T16:34:39.622861589Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 16:34:39.623505 containerd[1525]: time="2025-01-29T16:34:39.622870246Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 16:34:39.623903 containerd[1525]: time="2025-01-29T16:34:39.622880716Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 16:34:39.623929 containerd[1525]: time="2025-01-29T16:34:39.623901931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 16:34:39.623955 containerd[1525]: time="2025-01-29T16:34:39.623928581Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 16:34:39.623994 containerd[1525]: time="2025-01-29T16:34:39.623971642Z" level=info msg="NRI interface is disabled by configuration." Jan 29 16:34:39.624018 containerd[1525]: time="2025-01-29T16:34:39.623995808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 16:34:39.624333 containerd[1525]: time="2025-01-29T16:34:39.624274250Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 16:34:39.624447 containerd[1525]: time="2025-01-29T16:34:39.624335955Z" level=info msg="Connect containerd service" Jan 29 16:34:39.624447 containerd[1525]: time="2025-01-29T16:34:39.624390127Z" level=info msg="using legacy CRI server" Jan 29 16:34:39.624447 containerd[1525]: time="2025-01-29T16:34:39.624398253Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 16:34:39.624537 containerd[1525]: time="2025-01-29T16:34:39.624516053Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 16:34:39.625292 containerd[1525]: time="2025-01-29T16:34:39.625265299Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:34:39.625924 
containerd[1525]: time="2025-01-29T16:34:39.625881706Z" level=info msg="Start subscribing containerd event" Jan 29 16:34:39.625954 containerd[1525]: time="2025-01-29T16:34:39.625932801Z" level=info msg="Start recovering state" Jan 29 16:34:39.626005 containerd[1525]: time="2025-01-29T16:34:39.625984688Z" level=info msg="Start event monitor" Jan 29 16:34:39.626038 containerd[1525]: time="2025-01-29T16:34:39.626014183Z" level=info msg="Start snapshots syncer" Jan 29 16:34:39.626038 containerd[1525]: time="2025-01-29T16:34:39.626022800Z" level=info msg="Start cni network conf syncer for default" Jan 29 16:34:39.626038 containerd[1525]: time="2025-01-29T16:34:39.626030745Z" level=info msg="Start streaming server" Jan 29 16:34:39.629719 containerd[1525]: time="2025-01-29T16:34:39.627971656Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 16:34:39.629719 containerd[1525]: time="2025-01-29T16:34:39.628037189Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 16:34:39.629719 containerd[1525]: time="2025-01-29T16:34:39.628665637Z" level=info msg="containerd successfully booted in 0.094950s" Jan 29 16:34:39.628526 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 16:34:39.684283 locksmithd[1537]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 16:34:39.742306 sshd_keygen[1523]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 16:34:39.765024 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 16:34:39.774081 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 16:34:39.783212 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 16:34:39.783470 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 16:34:39.797837 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 16:34:39.804950 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 16:34:39.816300 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 16:34:39.825080 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 16:34:39.825606 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 16:34:39.912524 tar[1520]: linux-amd64/LICENSE Jan 29 16:34:39.912612 tar[1520]: linux-amd64/README.md Jan 29 16:34:39.924452 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 16:34:39.924916 systemd-networkd[1421]: eth0: Gained IPv6LL Jan 29 16:34:39.930717 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 16:34:39.931550 systemd-networkd[1421]: eth1: Gained IPv6LL Jan 29 16:34:39.935767 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 16:34:39.945178 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:34:39.951412 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 16:34:39.982288 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 16:34:40.637340 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:34:40.641166 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jan 29 16:34:40.643468 (kubelet)[1621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:34:40.645556 systemd[1]: Startup finished in 1.258s (kernel) + 7.058s (initrd) + 4.682s (userspace) = 12.999s. Jan 29 16:34:41.101883 kubelet[1621]: E0129 16:34:41.101727 1621 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:34:41.105369 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:34:41.105601 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:34:41.106111 systemd[1]: kubelet.service: Consumed 807ms CPU time, 235.4M memory peak. Jan 29 16:34:51.356272 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 16:34:51.369222 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:34:51.483248 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:34:51.487155 (kubelet)[1640]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:34:51.522026 kubelet[1640]: E0129 16:34:51.521951 1640 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:34:51.527339 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:34:51.527523 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:34:51.527865 systemd[1]: kubelet.service: Consumed 152ms CPU time, 96.2M memory peak. Jan 29 16:35:01.778230 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 16:35:01.788058 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:35:01.930794 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:35:01.935607 (kubelet)[1655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:35:01.985036 kubelet[1655]: E0129 16:35:01.984918 1655 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:35:01.988970 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:35:01.989175 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:35:01.989615 systemd[1]: kubelet.service: Consumed 181ms CPU time, 97.5M memory peak. Jan 29 16:35:08.775880 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 16:35:08.787057 systemd[1]: Started sshd@0-142.132.231.50:22-103.31.39.159:50310.service - OpenSSH per-connection server daemon (103.31.39.159:50310). 
Jan 29 16:35:09.925935 sshd[1663]: Invalid user taylor from 103.31.39.159 port 50310 Jan 29 16:35:10.146891 sshd[1663]: Received disconnect from 103.31.39.159 port 50310:11: Bye Bye [preauth] Jan 29 16:35:10.146891 sshd[1663]: Disconnected from invalid user taylor 103.31.39.159 port 50310 [preauth] Jan 29 16:35:10.148948 systemd[1]: sshd@0-142.132.231.50:22-103.31.39.159:50310.service: Deactivated successfully. Jan 29 16:35:11.998558 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 16:35:12.003937 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:35:12.144068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:35:12.144180 (kubelet)[1675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:35:12.181396 kubelet[1675]: E0129 16:35:12.181328 1675 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:35:12.185511 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:35:12.185710 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:35:12.186105 systemd[1]: kubelet.service: Consumed 169ms CPU time, 97.6M memory peak. Jan 29 16:35:17.062100 systemd[1]: Started sshd@1-142.132.231.50:22-72.240.125.133:60066.service - OpenSSH per-connection server daemon (72.240.125.133:60066). Jan 29 16:35:17.804703 sshd[1683]: Invalid user debian from 72.240.125.133 port 60066 Jan 29 16:35:17.940496 sshd[1683]: Received disconnect from 72.240.125.133 port 60066:11: Bye Bye [preauth] Jan 29 16:35:17.940496 sshd[1683]: Disconnected from invalid user debian 72.240.125.133 port 60066 [preauth] Jan 29 16:35:17.942987 systemd[1]: sshd@1-142.132.231.50:22-72.240.125.133:60066.service: Deactivated successfully. Jan 29 16:35:22.248849 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 29 16:35:22.254412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:35:22.386359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:35:22.389979 (kubelet)[1695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:35:22.420441 kubelet[1695]: E0129 16:35:22.420339 1695 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:35:22.424350 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:35:22.424543 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:35:22.425110 systemd[1]: kubelet.service: Consumed 158ms CPU time, 97.5M memory peak. Jan 29 16:35:24.650078 update_engine[1510]: I20250129 16:35:24.649987 1510 update_attempter.cc:509] Updating boot flags... 
Jan 29 16:35:24.687866 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1711) Jan 29 16:35:24.751538 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1715) Jan 29 16:35:24.804840 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1715) Jan 29 16:35:32.498723 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 29 16:35:32.503998 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:35:32.632644 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:35:32.655175 (kubelet)[1731]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:35:32.696086 kubelet[1731]: E0129 16:35:32.696023 1731 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:35:32.699854 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:35:32.700040 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:35:32.700348 systemd[1]: kubelet.service: Consumed 163ms CPU time, 98M memory peak. Jan 29 16:35:42.748659 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 29 16:35:42.753982 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:35:42.893141 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:35:42.905170 (kubelet)[1747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:35:42.938903 kubelet[1747]: E0129 16:35:42.938842 1747 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:35:42.942122 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:35:42.942302 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:35:42.942660 systemd[1]: kubelet.service: Consumed 162ms CPU time, 95.3M memory peak. Jan 29 16:35:52.998641 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 29 16:35:53.004054 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:35:53.159909 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:35:53.165641 (kubelet)[1762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:35:53.211883 kubelet[1762]: E0129 16:35:53.211767 1762 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:35:53.215727 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:35:53.216058 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:35:53.216582 systemd[1]: kubelet.service: Consumed 181ms CPU time, 97.2M memory peak. Jan 29 16:35:57.400041 systemd[1]: Started sshd@2-142.132.231.50:22-152.32.133.149:32452.service - OpenSSH per-connection server daemon (152.32.133.149:32452). Jan 29 16:35:58.838136 sshd[1770]: Invalid user ftpuser from 152.32.133.149 port 32452 Jan 29 16:35:59.116600 sshd[1770]: Received disconnect from 152.32.133.149 port 32452:11: Bye Bye [preauth] Jan 29 16:35:59.116600 sshd[1770]: Disconnected from invalid user ftpuser 152.32.133.149 port 32452 [preauth] Jan 29 16:35:59.119723 systemd[1]: sshd@2-142.132.231.50:22-152.32.133.149:32452.service: Deactivated successfully. Jan 29 16:36:03.248682 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 29 16:36:03.254281 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:36:03.379670 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:36:03.383869 (kubelet)[1782]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:36:03.420854 kubelet[1782]: E0129 16:36:03.420759 1782 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:36:03.423960 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:36:03.424156 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:36:03.424597 systemd[1]: kubelet.service: Consumed 165ms CPU time, 97.6M memory peak. Jan 29 16:36:13.498773 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 29 16:36:13.503967 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:36:13.639102 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:36:13.643457 (kubelet)[1797]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:36:13.680680 kubelet[1797]: E0129 16:36:13.680601 1797 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:36:13.684654 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:36:13.684871 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:36:13.685213 systemd[1]: kubelet.service: Consumed 168ms CPU time, 93.8M memory peak. Jan 29 16:36:23.748700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 29 16:36:23.754994 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:36:23.887233 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:36:23.892187 (kubelet)[1812]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:36:23.928136 kubelet[1812]: E0129 16:36:23.928042 1812 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:36:23.930357 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:36:23.930536 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:36:23.930975 systemd[1]: kubelet.service: Consumed 167ms CPU time, 94.1M memory peak. Jan 29 16:36:28.283143 systemd[1]: Started sshd@3-142.132.231.50:22-72.240.125.133:57376.service - OpenSSH per-connection server daemon (72.240.125.133:57376). Jan 29 16:36:28.997223 sshd[1820]: Invalid user alex from 72.240.125.133 port 57376 Jan 29 16:36:29.143482 sshd[1820]: Received disconnect from 72.240.125.133 port 57376:11: Bye Bye [preauth] Jan 29 16:36:29.143482 sshd[1820]: Disconnected from invalid user alex 72.240.125.133 port 57376 [preauth] Jan 29 16:36:29.146406 systemd[1]: sshd@3-142.132.231.50:22-72.240.125.133:57376.service: Deactivated successfully. Jan 29 16:36:33.999434 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 29 16:36:34.004996 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:36:34.134520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:36:34.137926 (kubelet)[1832]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:36:34.174719 kubelet[1832]: E0129 16:36:34.174637 1832 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:36:34.177735 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:36:34.177955 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:36:34.178435 systemd[1]: kubelet.service: Consumed 161ms CPU time, 93.6M memory peak. Jan 29 16:36:39.489111 systemd[1]: Started sshd@4-142.132.231.50:22-147.75.109.163:49130.service - OpenSSH per-connection server daemon (147.75.109.163:49130). Jan 29 16:36:40.470392 sshd[1840]: Accepted publickey for core from 147.75.109.163 port 49130 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME Jan 29 16:36:40.472309 sshd-session[1840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:36:40.483691 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 16:36:40.495213 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 16:36:40.497875 systemd-logind[1509]: New session 1 of user core. Jan 29 16:36:40.508657 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 16:36:40.517173 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 16:36:40.523658 (systemd)[1844]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 16:36:40.526548 systemd-logind[1509]: New session c1 of user core. Jan 29 16:36:40.660338 systemd[1844]: Queued start job for default target default.target. Jan 29 16:36:40.666986 systemd[1844]: Created slice app.slice - User Application Slice. Jan 29 16:36:40.667012 systemd[1844]: Reached target paths.target - Paths. Jan 29 16:36:40.667053 systemd[1844]: Reached target timers.target - Timers. Jan 29 16:36:40.668491 systemd[1844]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 16:36:40.680721 systemd[1844]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 16:36:40.680879 systemd[1844]: Reached target sockets.target - Sockets. Jan 29 16:36:40.680922 systemd[1844]: Reached target basic.target - Basic System. Jan 29 16:36:40.680971 systemd[1844]: Reached target default.target - Main User Target. Jan 29 16:36:40.681005 systemd[1844]: Startup finished in 146ms. Jan 29 16:36:40.681302 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 16:36:40.688924 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 16:36:41.392068 systemd[1]: Started sshd@5-142.132.231.50:22-147.75.109.163:49136.service - OpenSSH per-connection server daemon (147.75.109.163:49136). Jan 29 16:36:42.370070 sshd[1855]: Accepted publickey for core from 147.75.109.163 port 49136 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME Jan 29 16:36:42.371708 sshd-session[1855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:36:42.376960 systemd-logind[1509]: New session 2 of user core. Jan 29 16:36:42.387017 systemd[1]: Started session-2.scope - Session 2 of User core. 
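sshd records the accepted key only as a SHA256 fingerprint. The same string can be recomputed from the matching authorized_keys entry, for example with golang.org/x/crypto/ssh (the key path below is an assumption for illustration):

    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Hypothetical path; core's keys typically live under ~core/.ssh/.
    	line, err := os.ReadFile("/home/core/.ssh/authorized_keys")
    	if err != nil {
    		log.Fatal(err)
    	}
    	pk, _, _, _, err := ssh.ParseAuthorizedKey(line)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Prints e.g. "SHA256:3p2XIZ6Xbehx...", matching the sshd log line.
    	fmt.Println(ssh.FingerprintSHA256(pk))
    }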
Jan 29 16:36:43.049667 sshd[1857]: Connection closed by 147.75.109.163 port 49136 Jan 29 16:36:43.050558 sshd-session[1855]: pam_unix(sshd:session): session closed for user core Jan 29 16:36:43.054988 systemd-logind[1509]: Session 2 logged out. Waiting for processes to exit. Jan 29 16:36:43.055748 systemd[1]: sshd@5-142.132.231.50:22-147.75.109.163:49136.service: Deactivated successfully. Jan 29 16:36:43.058258 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 16:36:43.059732 systemd-logind[1509]: Removed session 2. Jan 29 16:36:43.223075 systemd[1]: Started sshd@6-142.132.231.50:22-147.75.109.163:49144.service - OpenSSH per-connection server daemon (147.75.109.163:49144). Jan 29 16:36:44.205731 sshd[1863]: Accepted publickey for core from 147.75.109.163 port 49144 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME Jan 29 16:36:44.207521 sshd-session[1863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:36:44.208668 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 29 16:36:44.217078 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:36:44.223354 systemd-logind[1509]: New session 3 of user core. Jan 29 16:36:44.226228 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 16:36:44.354263 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:36:44.357960 (kubelet)[1874]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:36:44.385607 kubelet[1874]: E0129 16:36:44.385544 1874 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:36:44.389196 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:36:44.389396 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:36:44.389715 systemd[1]: kubelet.service: Consumed 161ms CPU time, 97.5M memory peak. Jan 29 16:36:44.881195 sshd[1868]: Connection closed by 147.75.109.163 port 49144 Jan 29 16:36:44.882360 sshd-session[1863]: pam_unix(sshd:session): session closed for user core Jan 29 16:36:44.885708 systemd[1]: sshd@6-142.132.231.50:22-147.75.109.163:49144.service: Deactivated successfully. Jan 29 16:36:44.888093 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 16:36:44.890345 systemd-logind[1509]: Session 3 logged out. Waiting for processes to exit. Jan 29 16:36:44.891684 systemd-logind[1509]: Removed session 3. Jan 29 16:36:45.057082 systemd[1]: Started sshd@7-142.132.231.50:22-147.75.109.163:49160.service - OpenSSH per-connection server daemon (147.75.109.163:49160). Jan 29 16:36:46.033791 sshd[1887]: Accepted publickey for core from 147.75.109.163 port 49160 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME Jan 29 16:36:46.035469 sshd-session[1887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:36:46.041722 systemd-logind[1509]: New session 4 of user core. Jan 29 16:36:46.048985 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 29 16:36:46.712447 sshd[1889]: Connection closed by 147.75.109.163 port 49160 Jan 29 16:36:46.713081 sshd-session[1887]: pam_unix(sshd:session): session closed for user core Jan 29 16:36:46.716995 systemd-logind[1509]: Session 4 logged out. Waiting for processes to exit. Jan 29 16:36:46.717832 systemd[1]: sshd@7-142.132.231.50:22-147.75.109.163:49160.service: Deactivated successfully. Jan 29 16:36:46.719878 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 16:36:46.721057 systemd-logind[1509]: Removed session 4. Jan 29 16:36:46.890158 systemd[1]: Started sshd@8-142.132.231.50:22-147.75.109.163:49164.service - OpenSSH per-connection server daemon (147.75.109.163:49164). Jan 29 16:36:47.870990 sshd[1895]: Accepted publickey for core from 147.75.109.163 port 49164 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME Jan 29 16:36:47.872916 sshd-session[1895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:36:47.878551 systemd-logind[1509]: New session 5 of user core. Jan 29 16:36:47.894000 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 16:36:48.400778 sudo[1898]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 16:36:48.401234 sudo[1898]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:36:48.419011 sudo[1898]: pam_unix(sudo:session): session closed for user root Jan 29 16:36:48.578063 sshd[1897]: Connection closed by 147.75.109.163 port 49164 Jan 29 16:36:48.578964 sshd-session[1895]: pam_unix(sshd:session): session closed for user core Jan 29 16:36:48.583618 systemd[1]: sshd@8-142.132.231.50:22-147.75.109.163:49164.service: Deactivated successfully. Jan 29 16:36:48.585632 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 16:36:48.586518 systemd-logind[1509]: Session 5 logged out. Waiting for processes to exit. Jan 29 16:36:48.587843 systemd-logind[1509]: Removed session 5. Jan 29 16:36:48.757360 systemd[1]: Started sshd@9-142.132.231.50:22-147.75.109.163:36744.service - OpenSSH per-connection server daemon (147.75.109.163:36744). Jan 29 16:36:49.740867 sshd[1904]: Accepted publickey for core from 147.75.109.163 port 36744 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME Jan 29 16:36:49.742529 sshd-session[1904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:36:49.748081 systemd-logind[1509]: New session 6 of user core. Jan 29 16:36:49.761995 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 16:36:50.263651 sudo[1908]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 16:36:50.264034 sudo[1908]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:36:50.268296 sudo[1908]: pam_unix(sudo:session): session closed for user root Jan 29 16:36:50.274960 sudo[1907]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 16:36:50.275312 sudo[1907]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:36:50.299072 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:36:50.327402 augenrules[1930]: No rules Jan 29 16:36:50.327997 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:36:50.328261 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 29 16:36:50.329377 sudo[1907]: pam_unix(sudo:session): session closed for user root Jan 29 16:36:50.488313 sshd[1906]: Connection closed by 147.75.109.163 port 36744 Jan 29 16:36:50.489014 sshd-session[1904]: pam_unix(sshd:session): session closed for user core Jan 29 16:36:50.492351 systemd[1]: sshd@9-142.132.231.50:22-147.75.109.163:36744.service: Deactivated successfully. Jan 29 16:36:50.494497 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 16:36:50.496298 systemd-logind[1509]: Session 6 logged out. Waiting for processes to exit. Jan 29 16:36:50.497487 systemd-logind[1509]: Removed session 6. Jan 29 16:36:50.663258 systemd[1]: Started sshd@10-142.132.231.50:22-147.75.109.163:36752.service - OpenSSH per-connection server daemon (147.75.109.163:36752). Jan 29 16:36:51.658844 sshd[1939]: Accepted publickey for core from 147.75.109.163 port 36752 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME Jan 29 16:36:51.660470 sshd-session[1939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:36:51.665942 systemd-logind[1509]: New session 7 of user core. Jan 29 16:36:51.674000 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 16:36:51.793042 systemd[1]: Started sshd@11-142.132.231.50:22-103.31.39.159:33264.service - OpenSSH per-connection server daemon (103.31.39.159:33264). Jan 29 16:36:52.178193 sudo[1945]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 16:36:52.178593 sudo[1945]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:36:52.439172 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 16:36:52.440663 (dockerd)[1962]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 16:36:52.646828 update_engine[1510]: I20250129 16:36:52.646708 1510 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 29 16:36:52.646828 update_engine[1510]: I20250129 16:36:52.646754 1510 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 29 16:36:52.648047 update_engine[1510]: I20250129 16:36:52.647303 1510 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 29 16:36:52.648047 update_engine[1510]: I20250129 16:36:52.647821 1510 omaha_request_params.cc:62] Current group set to alpha Jan 29 16:36:52.648047 update_engine[1510]: I20250129 16:36:52.647914 1510 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 29 16:36:52.648047 update_engine[1510]: I20250129 16:36:52.647924 1510 update_attempter.cc:643] Scheduling an action processor start. 
Jan 29 16:36:52.648047 update_engine[1510]: I20250129 16:36:52.647938 1510 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 29 16:36:52.648047 update_engine[1510]: I20250129 16:36:52.647964 1510 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 29 16:36:52.648047 update_engine[1510]: I20250129 16:36:52.648018 1510 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 29 16:36:52.648047 update_engine[1510]: I20250129 16:36:52.648027 1510 omaha_request_action.cc:272] Request: <?xml version="1.0" encoding="UTF-8"?> Jan 29 16:36:52.648047 update_engine[1510]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1"> Jan 29 16:36:52.648047 update_engine[1510]: <os version="Chateau" platform="CoreOS" sp="4230.0.0_x86_64"></os> Jan 29 16:36:52.648047 update_engine[1510]: <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="4230.0.0" track="alpha" bootid="{e739d810-a8ae-4d91-acbf-da76f77c7853}" oem="hetzner" oemversion="0" alephversion="4230.0.0" machineid="df5025e58bb243c9bf7751f1e29c085a" machinealias="" lang="en-US" board="amd64-usr" hardware_class="" delta_okay="false" > Jan 29 16:36:52.648047 update_engine[1510]: <ping active="1"></ping> Jan 29 16:36:52.648047 update_engine[1510]: <updatecheck></updatecheck> Jan 29 16:36:52.648047 update_engine[1510]: <event eventtype="3" eventresult="2" previousversion="0.0.0.0"></event> Jan 29 16:36:52.648047 update_engine[1510]: </app> Jan 29 16:36:52.648047 update_engine[1510]: </request> Jan 29 16:36:52.648047 update_engine[1510]: I20250129 16:36:52.648034 1510 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 16:36:52.649321 locksmithd[1537]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 29 16:36:52.649520 update_engine[1510]: I20250129 16:36:52.649474 1510 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 16:36:52.649795 update_engine[1510]: I20250129 16:36:52.649765 1510 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 16:36:52.650928 update_engine[1510]: E20250129 16:36:52.650848 1510 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 16:36:52.650928 update_engine[1510]: I20250129 16:36:52.650905 1510 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 29 16:36:52.665256 dockerd[1962]: time="2025-01-29T16:36:52.665194074Z" level=info msg="Starting up" Jan 29 16:36:52.724131 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport561200731-merged.mount: Deactivated successfully. Jan 29 16:36:52.757648 dockerd[1962]: time="2025-01-29T16:36:52.757600308Z" level=info msg="Loading containers: start." Jan 29 16:36:52.919056 kernel: Initializing XFRM netlink socket Jan 29 16:36:52.960468 sshd[1943]: Invalid user jaewon from 103.31.39.159 port 33264 Jan 29 16:36:52.998007 systemd-networkd[1421]: docker0: Link UP Jan 29 16:36:53.025295 dockerd[1962]: time="2025-01-29T16:36:53.025233313Z" level=info msg="Loading containers: done." 
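The Omaha exchange above fails by design: this host posts its update check to the literal URL "disabled" (on Flatcar, SERVER=disabled in update.conf is the conventional way to switch updates off), hence curl's "Could not resolve host: disabled". The envelope update_engine builds is plain Omaha v3 XML; a trimmed sketch with encoding/xml (struct names are mine, attributes reduced to a few of those visible in the logged request):

    package main

    import (
    	"encoding/xml"
    	"fmt"
    )

    // Trimmed to a subset of the attributes in the logged request.
    type omahaRequest struct {
    	XMLName  xml.Name `xml:"request"`
    	Protocol string   `xml:"protocol,attr"`
    	App      struct {
    		AppID   string `xml:"appid,attr"`
    		Version string `xml:"version,attr"`
    		Track   string `xml:"track,attr"`
    		Ping    struct {
    			Active string `xml:"active,attr"`
    		} `xml:"ping"`
    		UpdateCheck struct{} `xml:"updatecheck"`
    	} `xml:"app"`
    }

    func main() {
    	req := omahaRequest{Protocol: "3.0"}
    	req.App.AppID = "{e96281a6-d1af-4bde-9a0a-97b76e56dc57}"
    	req.App.Version = "4230.0.0"
    	req.App.Track = "alpha"
    	req.App.Ping.Active = "1"
    	out, _ := xml.MarshalIndent(req, "", "  ")
    	fmt.Println(string(out))
    }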
Jan 29 16:36:53.042606 dockerd[1962]: time="2025-01-29T16:36:53.042551529Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 16:36:53.042790 dockerd[1962]: time="2025-01-29T16:36:53.042666022Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 29 16:36:53.042858 dockerd[1962]: time="2025-01-29T16:36:53.042790595Z" level=info msg="Daemon has completed initialization" Jan 29 16:36:53.073402 dockerd[1962]: time="2025-01-29T16:36:53.073238203Z" level=info msg="API listen on /run/docker.sock" Jan 29 16:36:53.073945 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 16:36:53.178587 sshd[1943]: Received disconnect from 103.31.39.159 port 33264:11: Bye Bye [preauth] Jan 29 16:36:53.178587 sshd[1943]: Disconnected from invalid user jaewon 103.31.39.159 port 33264 [preauth] Jan 29 16:36:53.180989 systemd[1]: sshd@11-142.132.231.50:22-103.31.39.159:33264.service: Deactivated successfully. Jan 29 16:36:53.719823 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1431307507-merged.mount: Deactivated successfully. Jan 29 16:36:54.049083 containerd[1525]: time="2025-01-29T16:36:54.049021260Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 29 16:36:54.498700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Jan 29 16:36:54.505234 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:36:54.617155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3142119084.mount: Deactivated successfully. Jan 29 16:36:54.636917 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:36:54.646107 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:36:54.680220 kubelet[2165]: E0129 16:36:54.680171 2165 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:36:54.683468 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:36:54.683651 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:36:54.683985 systemd[1]: kubelet.service: Consumed 152ms CPU time, 95.5M memory peak. 
Jan 29 16:36:55.499538 containerd[1525]: time="2025-01-29T16:36:55.499480554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:36:55.500439 containerd[1525]: time="2025-01-29T16:36:55.500391694Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976813" Jan 29 16:36:55.501136 containerd[1525]: time="2025-01-29T16:36:55.501085514Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:36:55.503464 containerd[1525]: time="2025-01-29T16:36:55.503418448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:36:55.504729 containerd[1525]: time="2025-01-29T16:36:55.504337014Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 1.455260557s" Jan 29 16:36:55.504729 containerd[1525]: time="2025-01-29T16:36:55.504374386Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 29 16:36:55.505718 containerd[1525]: time="2025-01-29T16:36:55.505630117Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 29 16:36:56.625712 containerd[1525]: time="2025-01-29T16:36:56.625662517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:36:56.626757 containerd[1525]: time="2025-01-29T16:36:56.626552276Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701163" Jan 29 16:36:56.627423 containerd[1525]: time="2025-01-29T16:36:56.627382818Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:36:56.630048 containerd[1525]: time="2025-01-29T16:36:56.630000243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:36:56.630953 containerd[1525]: time="2025-01-29T16:36:56.630845254Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 1.125190489s" Jan 29 16:36:56.630953 containerd[1525]: time="2025-01-29T16:36:56.630871014Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 29 16:36:56.631404 
containerd[1525]: time="2025-01-29T16:36:56.631371446Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 29 16:36:57.607042 containerd[1525]: time="2025-01-29T16:36:57.606978827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:36:57.607824 containerd[1525]: time="2025-01-29T16:36:57.607772667Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652073" Jan 29 16:36:57.608419 containerd[1525]: time="2025-01-29T16:36:57.608382411Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:36:57.610763 containerd[1525]: time="2025-01-29T16:36:57.610722111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:36:57.611968 containerd[1525]: time="2025-01-29T16:36:57.611829761Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 980.331249ms" Jan 29 16:36:57.611968 containerd[1525]: time="2025-01-29T16:36:57.611860691Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 29 16:36:57.612572 containerd[1525]: time="2025-01-29T16:36:57.612261619Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 16:36:58.591889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3851796883.mount: Deactivated successfully. 
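Each "Pulled image" line pairs bytes read with wall time, so per-image throughput falls straight out; the kube-apiserver pull above (27976813 bytes in 1.455260557s) comes to about 19 MB/s:

    package main

    import "fmt"

    func main() {
    	// Figures from the kube-apiserver pull logged above.
    	bytes := 27976813.0    // "bytes read=27976813"
    	seconds := 1.455260557 // "in 1.455260557s"
    	fmt.Printf("%.1f MB/s\n", bytes/seconds/1e6) // ≈ 19.2 MB/s
    }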
Jan 29 16:36:58.897362 containerd[1525]: time="2025-01-29T16:36:58.897069788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:36:58.898419 containerd[1525]: time="2025-01-29T16:36:58.898122960Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231154" Jan 29 16:36:58.899187 containerd[1525]: time="2025-01-29T16:36:58.899145253Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:36:58.901092 containerd[1525]: time="2025-01-29T16:36:58.901063614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:36:58.901944 containerd[1525]: time="2025-01-29T16:36:58.901773560Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 1.289485019s" Jan 29 16:36:58.901944 containerd[1525]: time="2025-01-29T16:36:58.901819218Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 29 16:36:58.902372 containerd[1525]: time="2025-01-29T16:36:58.902331442Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 16:36:59.393837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1879524000.mount: Deactivated successfully. 
Jan 29 16:37:00.059465 containerd[1525]: time="2025-01-29T16:37:00.059371792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:37:00.060602 containerd[1525]: time="2025-01-29T16:37:00.060554473Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185841" Jan 29 16:37:00.061615 containerd[1525]: time="2025-01-29T16:37:00.061573046Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:37:00.067548 containerd[1525]: time="2025-01-29T16:37:00.067520976Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.165152012s" Jan 29 16:37:00.067908 containerd[1525]: time="2025-01-29T16:37:00.067628514Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 16:37:00.068000 containerd[1525]: time="2025-01-29T16:37:00.067959204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:37:00.069154 containerd[1525]: time="2025-01-29T16:37:00.068907662Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 16:37:00.588481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3617690259.mount: Deactivated successfully. 
Jan 29 16:37:00.594629 containerd[1525]: time="2025-01-29T16:37:00.594583476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:37:00.595541 containerd[1525]: time="2025-01-29T16:37:00.595496966Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321158" Jan 29 16:37:00.596209 containerd[1525]: time="2025-01-29T16:37:00.596162474Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:37:00.598693 containerd[1525]: time="2025-01-29T16:37:00.598630725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:37:00.599607 containerd[1525]: time="2025-01-29T16:37:00.599485721Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 530.549684ms" Jan 29 16:37:00.599607 containerd[1525]: time="2025-01-29T16:37:00.599516190Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 29 16:37:00.600271 containerd[1525]: time="2025-01-29T16:37:00.600089350Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 29 16:37:01.147010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1467847472.mount: Deactivated successfully. 
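pause:3.10 is fetched because every pod sandbox is parented by a tiny pause process that holds the pod's namespaces open. The shipped binary is C and also reaps zombie children; stripped to its essence the behavior is "block until told to stop", roughly as below (a loose sketch, not the registry.k8s.io implementation):

    package main

    import (
    	"os"
    	"os/signal"
    	"syscall"
    )

    func main() {
    	// Hold the pod sandbox open until asked to stop.
    	sig := make(chan os.Signal, 1)
    	signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM)
    	<-sig
    }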
Jan 29 16:37:02.504381 containerd[1525]: time="2025-01-29T16:37:02.504314478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:37:02.505450 containerd[1525]: time="2025-01-29T16:37:02.505390058Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780035" Jan 29 16:37:02.506340 containerd[1525]: time="2025-01-29T16:37:02.506296351Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:37:02.509134 containerd[1525]: time="2025-01-29T16:37:02.509075206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:37:02.510388 containerd[1525]: time="2025-01-29T16:37:02.510152409Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 1.91003782s" Jan 29 16:37:02.510388 containerd[1525]: time="2025-01-29T16:37:02.510179893Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 29 16:37:02.655958 update_engine[1510]: I20250129 16:37:02.655853 1510 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 16:37:02.656444 update_engine[1510]: I20250129 16:37:02.656143 1510 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 16:37:02.656496 update_engine[1510]: I20250129 16:37:02.656440 1510 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 16:37:02.656794 update_engine[1510]: E20250129 16:37:02.656754 1510 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 16:37:02.656846 update_engine[1510]: I20250129 16:37:02.656832 1510 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 29 16:37:04.748475 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Jan 29 16:37:04.757408 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:37:04.891123 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:37:04.895784 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:37:04.897326 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:37:04.897574 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:37:04.897799 systemd[1]: kubelet.service: Consumed 109ms CPU time, 83.7M memory peak. Jan 29 16:37:04.905012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:37:04.924676 systemd[1]: Reload requested from client PID 2374 ('systemctl') (unit session-7.scope)... Jan 29 16:37:04.924696 systemd[1]: Reloading... Jan 29 16:37:05.054838 zram_generator::config[2422]: No configuration found. Jan 29 16:37:05.161335 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
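update_engine's fetcher logs "retry 1" at 16:36:52 and "retry 2" at 16:37:02, so it simply re-attempts the unresolvable host on a timer. A bare-bones sketch of that observed behavior (the 10 s spacing is read off the journal; the real retry policy lives in libcurl_http_fetcher.cc and is more involved):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	url := "https://disabled/" // deliberately unresolvable, as on this host
    	for attempt := 1; attempt <= 3; attempt++ {
    		resp, err := http.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			return
    		}
    		fmt.Printf("no HTTP response, retry %d: %v\n", attempt, err)
    		time.Sleep(10 * time.Second) // spacing observed in the journal
    	}
    }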
Jan 29 16:37:05.257308 systemd[1]: Reloading finished in 332 ms. Jan 29 16:37:05.308621 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:37:05.313534 (kubelet)[2464]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:37:05.314306 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:37:05.315461 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:37:05.315697 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:37:05.315730 systemd[1]: kubelet.service: Consumed 112ms CPU time, 83M memory peak. Jan 29 16:37:05.318141 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:37:05.458448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:37:05.462929 (kubelet)[2476]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:37:05.495975 kubelet[2476]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:37:05.495975 kubelet[2476]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 16:37:05.495975 kubelet[2476]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:37:05.497147 kubelet[2476]: I0129 16:37:05.497103 2476 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:37:05.727376 kubelet[2476]: I0129 16:37:05.727241 2476 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 16:37:05.727376 kubelet[2476]: I0129 16:37:05.727269 2476 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:37:05.728853 kubelet[2476]: I0129 16:37:05.728658 2476 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 16:37:05.752847 kubelet[2476]: I0129 16:37:05.752723 2476 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:37:05.753203 kubelet[2476]: E0129 16:37:05.753174 2476 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://142.132.231.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 142.132.231.50:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:37:05.763094 kubelet[2476]: E0129 16:37:05.763045 2476 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 16:37:05.763094 kubelet[2476]: I0129 16:37:05.763084 2476 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
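The three deprecation warnings say those flags should migrate into the file the kubelet just started reading, /var/lib/kubelet/config.yaml; the kubelet accepts that file as YAML or JSON. A sketch of the equivalent stanza rendered as JSON (field names from the kubelet.config.k8s.io/v1beta1 API; the endpoint value is an assumption, the plugin dir matches the path probed later in this journal, and --pod-infra-container-image has no config-file equivalent):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	cfg := map[string]any{
    		"apiVersion":               "kubelet.config.k8s.io/v1beta1",
    		"kind":                     "KubeletConfiguration",
    		"containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
    		"volumePluginDir":          "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
    	}
    	out, _ := json.MarshalIndent(cfg, "", "  ")
    	fmt.Println(string(out))
    }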
Jan 29 16:37:05.768852 kubelet[2476]: I0129 16:37:05.768829 2476 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 16:37:05.769917 kubelet[2476]: I0129 16:37:05.769887 2476 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 16:37:05.770100 kubelet[2476]: I0129 16:37:05.770058 2476 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:37:05.770253 kubelet[2476]: I0129 16:37:05.770090 2476 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-0-6-6baf09a0d0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 16:37:05.770253 kubelet[2476]: I0129 16:37:05.770249 2476 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:37:05.770357 kubelet[2476]: I0129 16:37:05.770257 2476 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 16:37:05.770414 kubelet[2476]: I0129 16:37:05.770362 2476 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:37:05.772205 kubelet[2476]: I0129 16:37:05.772007 2476 kubelet.go:408] "Attempting to sync node with API server" Jan 29 16:37:05.772205 kubelet[2476]: I0129 16:37:05.772038 2476 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:37:05.772205 kubelet[2476]: I0129 16:37:05.772067 2476 kubelet.go:314] "Adding apiserver pod source" Jan 29 16:37:05.772205 kubelet[2476]: I0129 16:37:05.772081 2476 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:37:05.779552 kubelet[2476]: W0129 16:37:05.779001 2476 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://142.132.231.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-6-6baf09a0d0&limit=500&resourceVersion=0": dial tcp 142.132.231.50:6443: connect: connection refused Jan 29 16:37:05.779552 kubelet[2476]: E0129 16:37:05.779060 2476 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://142.132.231.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-6-6baf09a0d0&limit=500&resourceVersion=0\": dial tcp 142.132.231.50:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:37:05.779552 kubelet[2476]: W0129 16:37:05.779337 2476 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://142.132.231.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 142.132.231.50:6443: connect: connection refused Jan 29 16:37:05.779552 kubelet[2476]: E0129 16:37:05.779363 2476 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://142.132.231.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 142.132.231.50:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:37:05.779819 kubelet[2476]: I0129 16:37:05.779777 2476 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:37:05.784910 kubelet[2476]: I0129 16:37:05.784891 2476 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:37:05.785584 kubelet[2476]: W0129 16:37:05.785557 2476 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 16:37:05.787180 kubelet[2476]: I0129 16:37:05.787166 2476 server.go:1269] "Started kubelet" Jan 29 16:37:05.790570 kubelet[2476]: I0129 16:37:05.790555 2476 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:37:05.795728 kubelet[2476]: E0129 16:37:05.792283 2476 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://142.132.231.50:6443/api/v1/namespaces/default/events\": dial tcp 142.132.231.50:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-0-0-6-6baf09a0d0.181f3727ebc5552d default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-0-6-6baf09a0d0,UID:ci-4230-0-0-6-6baf09a0d0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-0-0-6-6baf09a0d0,},FirstTimestamp:2025-01-29 16:37:05.787131181 +0000 UTC m=+0.320724149,LastTimestamp:2025-01-29 16:37:05.787131181 +0000 UTC m=+0.320724149,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-0-6-6baf09a0d0,}" Jan 29 16:37:05.797150 kubelet[2476]: I0129 16:37:05.797137 2476 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 16:37:05.797436 kubelet[2476]: E0129 16:37:05.797411 2476 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-0-0-6-6baf09a0d0\" not found" Jan 29 16:37:05.798123 kubelet[2476]: I0129 16:37:05.798080 2476 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:37:05.798956 kubelet[2476]: I0129 16:37:05.798934 2476 server.go:460] "Adding debug handlers to kubelet server" Jan 29 16:37:05.799684 kubelet[2476]: I0129 16:37:05.799631 2476 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:37:05.800316 
kubelet[2476]: I0129 16:37:05.799875 2476 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:37:05.801307 kubelet[2476]: I0129 16:37:05.800778 2476 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 16:37:05.801307 kubelet[2476]: E0129 16:37:05.801067 2476 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://142.132.231.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-6-6baf09a0d0?timeout=10s\": dial tcp 142.132.231.50:6443: connect: connection refused" interval="200ms" Jan 29 16:37:05.801410 kubelet[2476]: I0129 16:37:05.801352 2476 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:37:05.802028 kubelet[2476]: I0129 16:37:05.802012 2476 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 16:37:05.802573 kubelet[2476]: W0129 16:37:05.802474 2476 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://142.132.231.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 142.132.231.50:6443: connect: connection refused Jan 29 16:37:05.802573 kubelet[2476]: E0129 16:37:05.802534 2476 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://142.132.231.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 142.132.231.50:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:37:05.803198 kubelet[2476]: I0129 16:37:05.802997 2476 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:37:05.803198 kubelet[2476]: I0129 16:37:05.803059 2476 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:37:05.804636 kubelet[2476]: E0129 16:37:05.804564 2476 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:37:05.804743 kubelet[2476]: I0129 16:37:05.804672 2476 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:37:05.812924 kubelet[2476]: I0129 16:37:05.812777 2476 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:37:05.813965 kubelet[2476]: I0129 16:37:05.813950 2476 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 16:37:05.814256 kubelet[2476]: I0129 16:37:05.814024 2476 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:37:05.814256 kubelet[2476]: I0129 16:37:05.814051 2476 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 16:37:05.814256 kubelet[2476]: E0129 16:37:05.814086 2476 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:37:05.822181 kubelet[2476]: W0129 16:37:05.822025 2476 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://142.132.231.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 142.132.231.50:6443: connect: connection refused Jan 29 16:37:05.822181 kubelet[2476]: E0129 16:37:05.822119 2476 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://142.132.231.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 142.132.231.50:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:37:05.833288 kubelet[2476]: I0129 16:37:05.833255 2476 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 16:37:05.833288 kubelet[2476]: I0129 16:37:05.833273 2476 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 16:37:05.833436 kubelet[2476]: I0129 16:37:05.833297 2476 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:37:05.835428 kubelet[2476]: I0129 16:37:05.835401 2476 policy_none.go:49] "None policy: Start" Jan 29 16:37:05.836139 kubelet[2476]: I0129 16:37:05.836104 2476 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:37:05.836193 kubelet[2476]: I0129 16:37:05.836142 2476 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:37:05.843131 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 16:37:05.856041 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 16:37:05.859232 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 16:37:05.868022 kubelet[2476]: I0129 16:37:05.867978 2476 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:37:05.868284 kubelet[2476]: I0129 16:37:05.868258 2476 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 16:37:05.868320 kubelet[2476]: I0129 16:37:05.868285 2476 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:37:05.869629 kubelet[2476]: I0129 16:37:05.869412 2476 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:37:05.871866 kubelet[2476]: E0129 16:37:05.871775 2476 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-0-0-6-6baf09a0d0\" not found" Jan 29 16:37:05.927927 systemd[1]: Created slice kubepods-burstable-pod074ef48a87697e5d3bc73b4c972f1e05.slice - libcontainer container kubepods-burstable-pod074ef48a87697e5d3bc73b4c972f1e05.slice. Jan 29 16:37:05.948964 systemd[1]: Created slice kubepods-burstable-pod918c54d808a697f1e5909038f67127de.slice - libcontainer container kubepods-burstable-pod918c54d808a697f1e5909038f67127de.slice. 
Jan 29 16:37:05.960057 systemd[1]: Created slice kubepods-burstable-pod2fdacadf288f818292d20354530a44bf.slice - libcontainer container kubepods-burstable-pod2fdacadf288f818292d20354530a44bf.slice. Jan 29 16:37:05.971017 kubelet[2476]: I0129 16:37:05.970975 2476 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:05.971277 kubelet[2476]: E0129 16:37:05.971244 2476 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://142.132.231.50:6443/api/v1/nodes\": dial tcp 142.132.231.50:6443: connect: connection refused" node="ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:06.002480 kubelet[2476]: I0129 16:37:06.002443 2476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/074ef48a87697e5d3bc73b4c972f1e05-ca-certs\") pod \"kube-apiserver-ci-4230-0-0-6-6baf09a0d0\" (UID: \"074ef48a87697e5d3bc73b4c972f1e05\") " pod="kube-system/kube-apiserver-ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:06.002896 kubelet[2476]: E0129 16:37:06.002784 2476 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://142.132.231.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-6-6baf09a0d0?timeout=10s\": dial tcp 142.132.231.50:6443: connect: connection refused" interval="400ms" Jan 29 16:37:06.103564 kubelet[2476]: I0129 16:37:06.103456 2476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/074ef48a87697e5d3bc73b4c972f1e05-k8s-certs\") pod \"kube-apiserver-ci-4230-0-0-6-6baf09a0d0\" (UID: \"074ef48a87697e5d3bc73b4c972f1e05\") " pod="kube-system/kube-apiserver-ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:06.103564 kubelet[2476]: I0129 16:37:06.103529 2476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/074ef48a87697e5d3bc73b4c972f1e05-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-0-6-6baf09a0d0\" (UID: \"074ef48a87697e5d3bc73b4c972f1e05\") " pod="kube-system/kube-apiserver-ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:06.103564 kubelet[2476]: I0129 16:37:06.103549 2476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/918c54d808a697f1e5909038f67127de-ca-certs\") pod \"kube-controller-manager-ci-4230-0-0-6-6baf09a0d0\" (UID: \"918c54d808a697f1e5909038f67127de\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:06.103564 kubelet[2476]: I0129 16:37:06.103566 2476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/918c54d808a697f1e5909038f67127de-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-0-6-6baf09a0d0\" (UID: \"918c54d808a697f1e5909038f67127de\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:06.103564 kubelet[2476]: I0129 16:37:06.103581 2476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/918c54d808a697f1e5909038f67127de-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-0-6-6baf09a0d0\" (UID: \"918c54d808a697f1e5909038f67127de\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-6-6baf09a0d0" Jan 29 
16:37:06.104139 kubelet[2476]: I0129 16:37:06.103595 2476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/918c54d808a697f1e5909038f67127de-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-0-6-6baf09a0d0\" (UID: \"918c54d808a697f1e5909038f67127de\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:06.104139 kubelet[2476]: I0129 16:37:06.103684 2476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/918c54d808a697f1e5909038f67127de-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-0-6-6baf09a0d0\" (UID: \"918c54d808a697f1e5909038f67127de\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:06.104139 kubelet[2476]: I0129 16:37:06.103715 2476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fdacadf288f818292d20354530a44bf-kubeconfig\") pod \"kube-scheduler-ci-4230-0-0-6-6baf09a0d0\" (UID: \"2fdacadf288f818292d20354530a44bf\") " pod="kube-system/kube-scheduler-ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:06.173584 kubelet[2476]: I0129 16:37:06.173486 2476 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:06.173947 kubelet[2476]: E0129 16:37:06.173903 2476 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://142.132.231.50:6443/api/v1/nodes\": dial tcp 142.132.231.50:6443: connect: connection refused" node="ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:06.246494 containerd[1525]: time="2025-01-29T16:37:06.246428191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-0-6-6baf09a0d0,Uid:074ef48a87697e5d3bc73b4c972f1e05,Namespace:kube-system,Attempt:0,}" Jan 29 16:37:06.257451 containerd[1525]: time="2025-01-29T16:37:06.257154771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-0-6-6baf09a0d0,Uid:918c54d808a697f1e5909038f67127de,Namespace:kube-system,Attempt:0,}" Jan 29 16:37:06.264241 containerd[1525]: time="2025-01-29T16:37:06.264079784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-0-6-6baf09a0d0,Uid:2fdacadf288f818292d20354530a44bf,Namespace:kube-system,Attempt:0,}" Jan 29 16:37:06.404151 kubelet[2476]: E0129 16:37:06.404097 2476 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://142.132.231.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-6-6baf09a0d0?timeout=10s\": dial tcp 142.132.231.50:6443: connect: connection refused" interval="800ms" Jan 29 16:37:06.576179 kubelet[2476]: I0129 16:37:06.576043 2476 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:06.576587 kubelet[2476]: E0129 16:37:06.576436 2476 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://142.132.231.50:6443/api/v1/nodes\": dial tcp 142.132.231.50:6443: connect: connection refused" node="ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:06.753443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount125474820.mount: Deactivated successfully. 
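While 142.132.231.50:6443 keeps refusing connections, the lease controller's retry interval doubles on each failure, visible above as 200ms, then 400ms, then 800ms. A sketch of that schedule (the cap is an assumption; the journal only shows the first three steps):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	interval := 200 * time.Millisecond
    	for i := 0; i < 6; i++ {
    		fmt.Println(interval) // 200ms, 400ms, 800ms, ... as in the journal
    		interval *= 2
    		if max := 7 * time.Second; interval > max { // cap is an assumption
    			interval = max
    		}
    	}
    }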
Jan 29 16:37:06.764256 containerd[1525]: time="2025-01-29T16:37:06.764207448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:37:06.799327 containerd[1525]: time="2025-01-29T16:37:06.799269300Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076" Jan 29 16:37:06.800529 containerd[1525]: time="2025-01-29T16:37:06.800494304Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:37:06.803992 containerd[1525]: time="2025-01-29T16:37:06.803798260Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:37:06.804563 containerd[1525]: time="2025-01-29T16:37:06.804506186Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:37:06.805101 containerd[1525]: time="2025-01-29T16:37:06.805060696Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:37:06.806123 containerd[1525]: time="2025-01-29T16:37:06.806095764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:37:06.806839 containerd[1525]: time="2025-01-29T16:37:06.806564869Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 558.504651ms" Jan 29 16:37:06.807134 containerd[1525]: time="2025-01-29T16:37:06.807102326Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:37:06.810691 containerd[1525]: time="2025-01-29T16:37:06.810656636Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 546.510333ms" Jan 29 16:37:06.813381 containerd[1525]: time="2025-01-29T16:37:06.813326468Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 556.085502ms" Jan 29 16:37:06.866069 kubelet[2476]: W0129 16:37:06.865895 2476 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://142.132.231.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 142.132.231.50:6443: connect: connection refused Jan 29 
16:37:06.866069 kubelet[2476]: E0129 16:37:06.865958 2476 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://142.132.231.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 142.132.231.50:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:37:06.946505 containerd[1525]: time="2025-01-29T16:37:06.946024538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:37:06.946505 containerd[1525]: time="2025-01-29T16:37:06.946070516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:37:06.946505 containerd[1525]: time="2025-01-29T16:37:06.946083000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:37:06.946505 containerd[1525]: time="2025-01-29T16:37:06.946148518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:37:06.946505 containerd[1525]: time="2025-01-29T16:37:06.942394362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:37:06.946505 containerd[1525]: time="2025-01-29T16:37:06.945903925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:37:06.946505 containerd[1525]: time="2025-01-29T16:37:06.945915768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:37:06.946505 containerd[1525]: time="2025-01-29T16:37:06.945985072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:37:06.952364 containerd[1525]: time="2025-01-29T16:37:06.952013505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:37:06.952364 containerd[1525]: time="2025-01-29T16:37:06.952052119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:37:06.952364 containerd[1525]: time="2025-01-29T16:37:06.952065135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:37:06.952364 containerd[1525]: time="2025-01-29T16:37:06.952127264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:37:06.979035 systemd[1]: Started cri-containerd-295de29410d4a553a2d5a232d1bc5cf634178988ee7cf26251c2eac533937cf3.scope - libcontainer container 295de29410d4a553a2d5a232d1bc5cf634178988ee7cf26251c2eac533937cf3. Jan 29 16:37:06.985460 systemd[1]: Started cri-containerd-6d598aa82e805f03b265c89079ecdaf51385841005a5be994a234a6fecf4877e.scope - libcontainer container 6d598aa82e805f03b265c89079ecdaf51385841005a5be994a234a6fecf4877e. 
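
These ImageCreate/ImageUpdate events are containerd resolving registry.k8s.io/pause:3.8, the sandbox ("pause") image each static pod needs before its RunPodSandbox can complete, which is presumably why three "Pulled image" lines appear, one per sandbox. A sketch of the equivalent pull through the containerd Go client, assuming the default socket path:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed images live in containerd's "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.8", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}
```
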
Jan 29 16:37:06.988611 systemd[1]: Started cri-containerd-c1362370f34165baba74077f086d241c4fb7e0cd8eb9b2d7bf43ea25f5e84c4c.scope - libcontainer container c1362370f34165baba74077f086d241c4fb7e0cd8eb9b2d7bf43ea25f5e84c4c. Jan 29 16:37:07.043635 containerd[1525]: time="2025-01-29T16:37:07.043584914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-0-6-6baf09a0d0,Uid:918c54d808a697f1e5909038f67127de,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1362370f34165baba74077f086d241c4fb7e0cd8eb9b2d7bf43ea25f5e84c4c\"" Jan 29 16:37:07.049923 containerd[1525]: time="2025-01-29T16:37:07.049819840Z" level=info msg="CreateContainer within sandbox \"c1362370f34165baba74077f086d241c4fb7e0cd8eb9b2d7bf43ea25f5e84c4c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 16:37:07.059013 kubelet[2476]: W0129 16:37:07.058936 2476 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://142.132.231.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-6-6baf09a0d0&limit=500&resourceVersion=0": dial tcp 142.132.231.50:6443: connect: connection refused Jan 29 16:37:07.059249 kubelet[2476]: E0129 16:37:07.059125 2476 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://142.132.231.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-6-6baf09a0d0&limit=500&resourceVersion=0\": dial tcp 142.132.231.50:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:37:07.067672 containerd[1525]: time="2025-01-29T16:37:07.067293292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-0-6-6baf09a0d0,Uid:074ef48a87697e5d3bc73b4c972f1e05,Namespace:kube-system,Attempt:0,} returns sandbox id \"295de29410d4a553a2d5a232d1bc5cf634178988ee7cf26251c2eac533937cf3\"" Jan 29 16:37:07.070754 containerd[1525]: time="2025-01-29T16:37:07.070710002Z" level=info msg="CreateContainer within sandbox \"295de29410d4a553a2d5a232d1bc5cf634178988ee7cf26251c2eac533937cf3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 16:37:07.071537 containerd[1525]: time="2025-01-29T16:37:07.071033887Z" level=info msg="CreateContainer within sandbox \"c1362370f34165baba74077f086d241c4fb7e0cd8eb9b2d7bf43ea25f5e84c4c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7a240faf10c2fa5b7a14e137e8f8e85d3efd0509206b066434a0c3c50c219395\"" Jan 29 16:37:07.071593 containerd[1525]: time="2025-01-29T16:37:07.071578087Z" level=info msg="StartContainer for \"7a240faf10c2fa5b7a14e137e8f8e85d3efd0509206b066434a0c3c50c219395\"" Jan 29 16:37:07.083064 containerd[1525]: time="2025-01-29T16:37:07.082997933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-0-6-6baf09a0d0,Uid:2fdacadf288f818292d20354530a44bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d598aa82e805f03b265c89079ecdaf51385841005a5be994a234a6fecf4877e\"" Jan 29 16:37:07.086561 containerd[1525]: time="2025-01-29T16:37:07.086453116Z" level=info msg="CreateContainer within sandbox \"6d598aa82e805f03b265c89079ecdaf51385841005a5be994a234a6fecf4877e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 16:37:07.093000 containerd[1525]: time="2025-01-29T16:37:07.092941492Z" level=info msg="CreateContainer within sandbox \"295de29410d4a553a2d5a232d1bc5cf634178988ee7cf26251c2eac533937cf3\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d923b4e0d769480b714443af7dc80823d17698d9b6bf8a5ab3f78f13912ce711\"" Jan 29 16:37:07.093766 containerd[1525]: time="2025-01-29T16:37:07.093688672Z" level=info msg="StartContainer for \"d923b4e0d769480b714443af7dc80823d17698d9b6bf8a5ab3f78f13912ce711\"" Jan 29 16:37:07.105954 systemd[1]: Started cri-containerd-7a240faf10c2fa5b7a14e137e8f8e85d3efd0509206b066434a0c3c50c219395.scope - libcontainer container 7a240faf10c2fa5b7a14e137e8f8e85d3efd0509206b066434a0c3c50c219395. Jan 29 16:37:07.108338 kubelet[2476]: W0129 16:37:07.108222 2476 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://142.132.231.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 142.132.231.50:6443: connect: connection refused Jan 29 16:37:07.108338 kubelet[2476]: E0129 16:37:07.108281 2476 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://142.132.231.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 142.132.231.50:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:37:07.111120 containerd[1525]: time="2025-01-29T16:37:07.111033336Z" level=info msg="CreateContainer within sandbox \"6d598aa82e805f03b265c89079ecdaf51385841005a5be994a234a6fecf4877e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f52daf43dec4c88f524936309bd5881573d623febd610a560b9d43d5d79173f4\"" Jan 29 16:37:07.112930 containerd[1525]: time="2025-01-29T16:37:07.112162473Z" level=info msg="StartContainer for \"f52daf43dec4c88f524936309bd5881573d623febd610a560b9d43d5d79173f4\"" Jan 29 16:37:07.148411 systemd[1]: Started cri-containerd-d923b4e0d769480b714443af7dc80823d17698d9b6bf8a5ab3f78f13912ce711.scope - libcontainer container d923b4e0d769480b714443af7dc80823d17698d9b6bf8a5ab3f78f13912ce711. Jan 29 16:37:07.170973 systemd[1]: Started cri-containerd-f52daf43dec4c88f524936309bd5881573d623febd610a560b9d43d5d79173f4.scope - libcontainer container f52daf43dec4c88f524936309bd5881573d623febd610a560b9d43d5d79173f4. 
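
The entries above trace the CRI lifecycle end to end: RunPodSandbox returns a sandbox id, CreateContainer is issued inside that sandbox and returns a container id, and StartContainer runs it. A sketch of the same sequence over the CRI gRPC API, assuming the cri-api Go bindings and containerd's socket; the image tag and the mostly-empty configs are placeholders, since a real request carries mounts, env, and security context (as the container spec dump at 16:37:19 below shows):

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Sandbox metadata as it appears in the RunPodSandbox log entry.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-apiserver-ci-4230-0-0-6-6baf09a0d0",
			Namespace: "kube-system",
			Uid:       "074ef48a87697e5d3bc73b4c972f1e05",
		},
	}

	// 1. RunPodSandbox: returns the sandbox id the later calls reference.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer within that sandbox. The image tag is assumed
	// from the logged kubelet version; command, mounts, and env are elided.
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-apiserver"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-apiserver:v1.31.0"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer: the "returns successfully" lines in the log.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```
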
Jan 29 16:37:07.183600 containerd[1525]: time="2025-01-29T16:37:07.180065091Z" level=info msg="StartContainer for \"7a240faf10c2fa5b7a14e137e8f8e85d3efd0509206b066434a0c3c50c219395\" returns successfully" Jan 29 16:37:07.204992 kubelet[2476]: E0129 16:37:07.204943 2476 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://142.132.231.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-6-6baf09a0d0?timeout=10s\": dial tcp 142.132.231.50:6443: connect: connection refused" interval="1.6s" Jan 29 16:37:07.236986 containerd[1525]: time="2025-01-29T16:37:07.236917235Z" level=info msg="StartContainer for \"d923b4e0d769480b714443af7dc80823d17698d9b6bf8a5ab3f78f13912ce711\" returns successfully" Jan 29 16:37:07.258997 containerd[1525]: time="2025-01-29T16:37:07.258874695Z" level=info msg="StartContainer for \"f52daf43dec4c88f524936309bd5881573d623febd610a560b9d43d5d79173f4\" returns successfully" Jan 29 16:37:07.277509 kubelet[2476]: W0129 16:37:07.277102 2476 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://142.132.231.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 142.132.231.50:6443: connect: connection refused Jan 29 16:37:07.277509 kubelet[2476]: E0129 16:37:07.277171 2476 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://142.132.231.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 142.132.231.50:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:37:07.381346 kubelet[2476]: I0129 16:37:07.381302 2476 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:07.383058 kubelet[2476]: E0129 16:37:07.381582 2476 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://142.132.231.50:6443/api/v1/nodes\": dial tcp 142.132.231.50:6443: connect: connection refused" node="ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:08.844473 kubelet[2476]: E0129 16:37:08.844418 2476 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-0-0-6-6baf09a0d0\" not found" node="ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:08.984642 kubelet[2476]: I0129 16:37:08.984367 2476 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:08.998128 kubelet[2476]: I0129 16:37:08.998107 2476 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:08.998289 kubelet[2476]: E0129 16:37:08.998274 2476 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4230-0-0-6-6baf09a0d0\": node \"ci-4230-0-0-6-6baf09a0d0\" not found" Jan 29 16:37:09.009914 kubelet[2476]: E0129 16:37:09.009843 2476 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-0-0-6-6baf09a0d0\" not found" Jan 29 16:37:09.110553 kubelet[2476]: E0129 16:37:09.110408 2476 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-0-0-6-6baf09a0d0\" not found" Jan 29 16:37:09.211260 kubelet[2476]: E0129 16:37:09.211193 2476 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-0-0-6-6baf09a0d0\" not found" Jan 29 16:37:09.311498 kubelet[2476]: E0129 16:37:09.311439 2476 kubelet_node_status.go:453] "Error getting 
the current node from lister" err="node \"ci-4230-0-0-6-6baf09a0d0\" not found" Jan 29 16:37:09.412258 kubelet[2476]: E0129 16:37:09.412135 2476 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-0-0-6-6baf09a0d0\" not found" Jan 29 16:37:09.513010 kubelet[2476]: E0129 16:37:09.512935 2476 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-0-0-6-6baf09a0d0\" not found" Jan 29 16:37:09.613765 kubelet[2476]: E0129 16:37:09.613698 2476 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-0-0-6-6baf09a0d0\" not found" Jan 29 16:37:09.783417 kubelet[2476]: I0129 16:37:09.783353 2476 apiserver.go:52] "Watching apiserver" Jan 29 16:37:09.802491 kubelet[2476]: I0129 16:37:09.802370 2476 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 16:37:10.925572 systemd[1]: Reload requested from client PID 2750 ('systemctl') (unit session-7.scope)... Jan 29 16:37:10.925605 systemd[1]: Reloading... Jan 29 16:37:11.021874 zram_generator::config[2795]: No configuration found. Jan 29 16:37:11.120601 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:37:11.231583 systemd[1]: Reloading finished in 305 ms. Jan 29 16:37:11.258072 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:37:11.269560 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:37:11.269869 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:37:11.269927 systemd[1]: kubelet.service: Consumed 729ms CPU time, 114.7M memory peak. Jan 29 16:37:11.274120 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:37:11.408065 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:37:11.412937 (kubelet)[2846]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:37:11.465301 kubelet[2846]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:37:11.465301 kubelet[2846]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 16:37:11.465301 kubelet[2846]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
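
Also visible across these entries is the lease controller's backoff: each "Failed to ensure lease exists, will retry" doubles the interval (400ms, then 800ms, then 1.6s) until the API server finally answers and registration succeeds. A small sketch of the same doubling policy with apimachinery's backoff helper; the condition function here is a stand-in for the lease GET:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	backoff := wait.Backoff{
		Duration: 400 * time.Millisecond, // first retry interval, as in the log
		Factor:   2.0,                    // double on each failure: 400ms, 800ms, 1.6s, ...
		Steps:    5,                      // give up after five attempts
	}

	attempt := 0
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		fmt.Printf("attempt %d: ensuring lease exists...\n", attempt)
		// Return true once the API server answers; false keeps backing off.
		return attempt >= 4, nil
	})
	if err != nil {
		fmt.Println("gave up:", err)
	}
}
```
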
Jan 29 16:37:11.465301 kubelet[2846]: I0129 16:37:11.464193 2846 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:37:11.473835 kubelet[2846]: I0129 16:37:11.473265 2846 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 16:37:11.473835 kubelet[2846]: I0129 16:37:11.473288 2846 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:37:11.473835 kubelet[2846]: I0129 16:37:11.473498 2846 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 16:37:11.474836 kubelet[2846]: I0129 16:37:11.474820 2846 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 16:37:11.479038 kubelet[2846]: I0129 16:37:11.479021 2846 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:37:11.482883 kubelet[2846]: E0129 16:37:11.482705 2846 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 16:37:11.482883 kubelet[2846]: I0129 16:37:11.482751 2846 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 16:37:11.486227 kubelet[2846]: I0129 16:37:11.486208 2846 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 16:37:11.486926 kubelet[2846]: I0129 16:37:11.486897 2846 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 16:37:11.487072 kubelet[2846]: I0129 16:37:11.487036 2846 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:37:11.487207 kubelet[2846]: I0129 16:37:11.487063 2846 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4230-0-0-6-6baf09a0d0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 16:37:11.487207 kubelet[2846]: I0129 16:37:11.487203 2846 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:37:11.487328 kubelet[2846]: I0129 16:37:11.487212 2846 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 16:37:11.489280 kubelet[2846]: I0129 16:37:11.489255 2846 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:37:11.489391 kubelet[2846]: I0129 16:37:11.489375 2846 kubelet.go:408] "Attempting to sync node with API server" Jan 29 16:37:11.489391 kubelet[2846]: I0129 16:37:11.489391 2846 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:37:11.489577 kubelet[2846]: I0129 16:37:11.489417 2846 kubelet.go:314] "Adding apiserver pod source" Jan 29 16:37:11.489577 kubelet[2846]: I0129 16:37:11.489432 2846 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:37:11.500280 kubelet[2846]: I0129 16:37:11.500247 2846 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:37:11.500935 kubelet[2846]: I0129 16:37:11.500677 2846 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:37:11.501502 kubelet[2846]: I0129 16:37:11.501469 2846 server.go:1269] "Started kubelet" Jan 29 16:37:11.504841 kubelet[2846]: I0129 16:37:11.502696 2846 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:37:11.505022 kubelet[2846]: I0129 16:37:11.504980 2846 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:37:11.505248 kubelet[2846]: I0129 16:37:11.505225 2846 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:37:11.506762 kubelet[2846]: I0129 16:37:11.506738 2846 server.go:460] "Adding debug handlers to kubelet server" Jan 29 16:37:11.507101 kubelet[2846]: I0129 16:37:11.507058 2846 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:37:11.511226 kubelet[2846]: I0129 16:37:11.510507 2846 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 16:37:11.513976 kubelet[2846]: E0129 16:37:11.513780 2846 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:37:11.518490 kubelet[2846]: I0129 16:37:11.517379 2846 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 16:37:11.518490 kubelet[2846]: I0129 16:37:11.517474 2846 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 16:37:11.518490 kubelet[2846]: I0129 16:37:11.517588 2846 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:37:11.518990 kubelet[2846]: I0129 16:37:11.518962 2846 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:37:11.519237 kubelet[2846]: I0129 16:37:11.519064 2846 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:37:11.522008 kubelet[2846]: I0129 16:37:11.521501 2846 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:37:11.524957 kubelet[2846]: I0129 16:37:11.524927 2846 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:37:11.526070 kubelet[2846]: I0129 16:37:11.526053 2846 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 16:37:11.526145 kubelet[2846]: I0129 16:37:11.526135 2846 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:37:11.526216 kubelet[2846]: I0129 16:37:11.526206 2846 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 16:37:11.526326 kubelet[2846]: E0129 16:37:11.526296 2846 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:37:11.571784 kubelet[2846]: I0129 16:37:11.571750 2846 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 16:37:11.571894 kubelet[2846]: I0129 16:37:11.571835 2846 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 16:37:11.571894 kubelet[2846]: I0129 16:37:11.571855 2846 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:37:11.572038 kubelet[2846]: I0129 16:37:11.571993 2846 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 16:37:11.572038 kubelet[2846]: I0129 16:37:11.572013 2846 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 16:37:11.572038 kubelet[2846]: I0129 16:37:11.572032 2846 policy_none.go:49] "None policy: Start" Jan 29 16:37:11.572651 kubelet[2846]: I0129 16:37:11.572628 2846 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:37:11.573006 kubelet[2846]: I0129 16:37:11.572742 2846 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:37:11.573006 kubelet[2846]: I0129 16:37:11.572946 2846 state_mem.go:75] "Updated machine memory state" Jan 29 16:37:11.577541 kubelet[2846]: I0129 16:37:11.577514 2846 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:37:11.577708 kubelet[2846]: I0129 16:37:11.577685 2846 eviction_manager.go:189] "Eviction manager: 
starting control loop" Jan 29 16:37:11.577757 kubelet[2846]: I0129 16:37:11.577705 2846 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:37:11.578170 kubelet[2846]: I0129 16:37:11.578157 2846 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:37:11.688175 kubelet[2846]: I0129 16:37:11.688105 2846 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:11.698170 kubelet[2846]: I0129 16:37:11.698124 2846 kubelet_node_status.go:111] "Node was previously registered" node="ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:11.698295 kubelet[2846]: I0129 16:37:11.698201 2846 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:11.718097 kubelet[2846]: I0129 16:37:11.718061 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/074ef48a87697e5d3bc73b4c972f1e05-ca-certs\") pod \"kube-apiserver-ci-4230-0-0-6-6baf09a0d0\" (UID: \"074ef48a87697e5d3bc73b4c972f1e05\") " pod="kube-system/kube-apiserver-ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:11.718097 kubelet[2846]: I0129 16:37:11.718095 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/074ef48a87697e5d3bc73b4c972f1e05-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-0-6-6baf09a0d0\" (UID: \"074ef48a87697e5d3bc73b4c972f1e05\") " pod="kube-system/kube-apiserver-ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:11.718260 kubelet[2846]: I0129 16:37:11.718115 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/918c54d808a697f1e5909038f67127de-ca-certs\") pod \"kube-controller-manager-ci-4230-0-0-6-6baf09a0d0\" (UID: \"918c54d808a697f1e5909038f67127de\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:11.718260 kubelet[2846]: I0129 16:37:11.718132 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/918c54d808a697f1e5909038f67127de-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-0-6-6baf09a0d0\" (UID: \"918c54d808a697f1e5909038f67127de\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:11.718260 kubelet[2846]: I0129 16:37:11.718146 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/918c54d808a697f1e5909038f67127de-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-0-6-6baf09a0d0\" (UID: \"918c54d808a697f1e5909038f67127de\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:11.718260 kubelet[2846]: I0129 16:37:11.718163 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/918c54d808a697f1e5909038f67127de-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-0-6-6baf09a0d0\" (UID: \"918c54d808a697f1e5909038f67127de\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:11.718260 kubelet[2846]: I0129 16:37:11.718178 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/918c54d808a697f1e5909038f67127de-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-0-6-6baf09a0d0\" (UID: \"918c54d808a697f1e5909038f67127de\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:11.718375 kubelet[2846]: I0129 16:37:11.718195 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fdacadf288f818292d20354530a44bf-kubeconfig\") pod \"kube-scheduler-ci-4230-0-0-6-6baf09a0d0\" (UID: \"2fdacadf288f818292d20354530a44bf\") " pod="kube-system/kube-scheduler-ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:11.718375 kubelet[2846]: I0129 16:37:11.718209 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/074ef48a87697e5d3bc73b4c972f1e05-k8s-certs\") pod \"kube-apiserver-ci-4230-0-0-6-6baf09a0d0\" (UID: \"074ef48a87697e5d3bc73b4c972f1e05\") " pod="kube-system/kube-apiserver-ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:11.934026 sudo[2879]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 16:37:11.934496 sudo[2879]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 16:37:12.459690 sudo[2879]: pam_unix(sudo:session): session closed for user root Jan 29 16:37:12.492272 kubelet[2846]: I0129 16:37:12.491969 2846 apiserver.go:52] "Watching apiserver" Jan 29 16:37:12.518641 kubelet[2846]: I0129 16:37:12.518580 2846 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 16:37:12.561721 kubelet[2846]: E0129 16:37:12.560532 2846 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230-0-0-6-6baf09a0d0\" already exists" pod="kube-system/kube-apiserver-ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:12.565244 kubelet[2846]: E0129 16:37:12.565219 2846 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230-0-0-6-6baf09a0d0\" already exists" pod="kube-system/kube-controller-manager-ci-4230-0-0-6-6baf09a0d0" Jan 29 16:37:12.592529 kubelet[2846]: I0129 16:37:12.592318 2846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-0-0-6-6baf09a0d0" podStartSLOduration=1.5922872 podStartE2EDuration="1.5922872s" podCreationTimestamp="2025-01-29 16:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:37:12.580481065 +0000 UTC m=+1.160855443" watchObservedRunningTime="2025-01-29 16:37:12.5922872 +0000 UTC m=+1.172661549" Jan 29 16:37:12.603486 kubelet[2846]: I0129 16:37:12.602711 2846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-0-0-6-6baf09a0d0" podStartSLOduration=1.602701266 podStartE2EDuration="1.602701266s" podCreationTimestamp="2025-01-29 16:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:37:12.593091476 +0000 UTC m=+1.173465846" watchObservedRunningTime="2025-01-29 16:37:12.602701266 +0000 UTC m=+1.183075615" Jan 29 16:37:12.603766 kubelet[2846]: I0129 16:37:12.603623 2846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-0-0-6-6baf09a0d0" 
podStartSLOduration=1.603616797 podStartE2EDuration="1.603616797s" podCreationTimestamp="2025-01-29 16:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:37:12.602556578 +0000 UTC m=+1.182930926" watchObservedRunningTime="2025-01-29 16:37:12.603616797 +0000 UTC m=+1.183991146" Jan 29 16:37:12.646624 update_engine[1510]: I20250129 16:37:12.646136 1510 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 16:37:12.646624 update_engine[1510]: I20250129 16:37:12.646342 1510 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 16:37:12.646624 update_engine[1510]: I20250129 16:37:12.646533 1510 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 16:37:12.648206 update_engine[1510]: E20250129 16:37:12.647716 1510 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 16:37:12.648206 update_engine[1510]: I20250129 16:37:12.647763 1510 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 29 16:37:13.642679 sudo[1945]: pam_unix(sudo:session): session closed for user root Jan 29 16:37:13.800031 sshd[1941]: Connection closed by 147.75.109.163 port 36752 Jan 29 16:37:13.802242 sshd-session[1939]: pam_unix(sshd:session): session closed for user core Jan 29 16:37:13.806602 systemd[1]: sshd@10-142.132.231.50:22-147.75.109.163:36752.service: Deactivated successfully. Jan 29 16:37:13.809248 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 16:37:13.809727 systemd[1]: session-7.scope: Consumed 3.696s CPU time, 218.6M memory peak. Jan 29 16:37:13.811338 systemd-logind[1509]: Session 7 logged out. Waiting for processes to exit. Jan 29 16:37:13.813676 systemd-logind[1509]: Removed session 7. Jan 29 16:37:16.679783 kubelet[2846]: I0129 16:37:16.679718 2846 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 16:37:16.680949 kubelet[2846]: I0129 16:37:16.680356 2846 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 16:37:16.680994 containerd[1525]: time="2025-01-29T16:37:16.680072402Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 16:37:17.505354 systemd[1]: Created slice kubepods-burstable-podd5b196d2_5245_42fc_b1bc_8b384cc3fae1.slice - libcontainer container kubepods-burstable-podd5b196d2_5245_42fc_b1bc_8b384cc3fae1.slice. Jan 29 16:37:17.525769 systemd[1]: Created slice kubepods-besteffort-pode515de87_dd3f_4311_ba87_d858d600463e.slice - libcontainer container kubepods-besteffort-pode515de87_dd3f_4311_ba87_d858d600463e.slice. 
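
The "Updating runtime config through cri with podcidr" entry above corresponds to a single CRI call pushing the node's newly assigned PodCIDR down to containerd, which then logs that it is still waiting for a CNI config to appear. A sketch of that call with the cri-api bindings (socket path assumed, error handling minimal):

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	_, err = rt.UpdateRuntimeConfig(context.Background(),
		&runtimeapi.UpdateRuntimeConfigRequest{
			RuntimeConfig: &runtimeapi.RuntimeConfig{
				NetworkConfig: &runtimeapi.NetworkConfig{
					PodCidr: "192.168.0.0/24", // the CIDR from the log
				},
			},
		})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("runtime config updated")
}
```

Until a CNI plugin (here, cilium) drops its config into place, containerd keeps waiting, which is why the cilium pods are created immediately afterwards.
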
Jan 29 16:37:17.654266 kubelet[2846]: I0129 16:37:17.654210 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e515de87-dd3f-4311-ba87-d858d600463e-lib-modules\") pod \"kube-proxy-9w2gw\" (UID: \"e515de87-dd3f-4311-ba87-d858d600463e\") " pod="kube-system/kube-proxy-9w2gw" Jan 29 16:37:17.654266 kubelet[2846]: I0129 16:37:17.654255 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-cilium-cgroup\") pod \"cilium-qlcdh\" (UID: \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\") " pod="kube-system/cilium-qlcdh" Jan 29 16:37:17.654266 kubelet[2846]: I0129 16:37:17.654270 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-host-proc-sys-net\") pod \"cilium-qlcdh\" (UID: \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\") " pod="kube-system/cilium-qlcdh" Jan 29 16:37:17.654457 kubelet[2846]: I0129 16:37:17.654289 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e515de87-dd3f-4311-ba87-d858d600463e-xtables-lock\") pod \"kube-proxy-9w2gw\" (UID: \"e515de87-dd3f-4311-ba87-d858d600463e\") " pod="kube-system/kube-proxy-9w2gw" Jan 29 16:37:17.654457 kubelet[2846]: I0129 16:37:17.654306 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-etc-cni-netd\") pod \"cilium-qlcdh\" (UID: \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\") " pod="kube-system/cilium-qlcdh" Jan 29 16:37:17.654457 kubelet[2846]: I0129 16:37:17.654319 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-hubble-tls\") pod \"cilium-qlcdh\" (UID: \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\") " pod="kube-system/cilium-qlcdh" Jan 29 16:37:17.654457 kubelet[2846]: I0129 16:37:17.654331 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcfj5\" (UniqueName: \"kubernetes.io/projected/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-kube-api-access-fcfj5\") pod \"cilium-qlcdh\" (UID: \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\") " pod="kube-system/cilium-qlcdh" Jan 29 16:37:17.654457 kubelet[2846]: I0129 16:37:17.654351 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-cilium-run\") pod \"cilium-qlcdh\" (UID: \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\") " pod="kube-system/cilium-qlcdh" Jan 29 16:37:17.654457 kubelet[2846]: I0129 16:37:17.654374 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-hostproc\") pod \"cilium-qlcdh\" (UID: \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\") " pod="kube-system/cilium-qlcdh" Jan 29 16:37:17.654586 kubelet[2846]: I0129 16:37:17.654397 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-host-proc-sys-kernel\") pod \"cilium-qlcdh\" (UID: \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\") " pod="kube-system/cilium-qlcdh" Jan 29 16:37:17.654586 kubelet[2846]: I0129 16:37:17.654414 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkqf6\" (UniqueName: \"kubernetes.io/projected/e515de87-dd3f-4311-ba87-d858d600463e-kube-api-access-nkqf6\") pod \"kube-proxy-9w2gw\" (UID: \"e515de87-dd3f-4311-ba87-d858d600463e\") " pod="kube-system/kube-proxy-9w2gw" Jan 29 16:37:17.654586 kubelet[2846]: I0129 16:37:17.654426 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-bpf-maps\") pod \"cilium-qlcdh\" (UID: \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\") " pod="kube-system/cilium-qlcdh" Jan 29 16:37:17.654586 kubelet[2846]: I0129 16:37:17.654446 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-cni-path\") pod \"cilium-qlcdh\" (UID: \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\") " pod="kube-system/cilium-qlcdh" Jan 29 16:37:17.654586 kubelet[2846]: I0129 16:37:17.654459 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-lib-modules\") pod \"cilium-qlcdh\" (UID: \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\") " pod="kube-system/cilium-qlcdh" Jan 29 16:37:17.654586 kubelet[2846]: I0129 16:37:17.654475 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e515de87-dd3f-4311-ba87-d858d600463e-kube-proxy\") pod \"kube-proxy-9w2gw\" (UID: \"e515de87-dd3f-4311-ba87-d858d600463e\") " pod="kube-system/kube-proxy-9w2gw" Jan 29 16:37:17.654702 kubelet[2846]: I0129 16:37:17.654489 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-xtables-lock\") pod \"cilium-qlcdh\" (UID: \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\") " pod="kube-system/cilium-qlcdh" Jan 29 16:37:17.654702 kubelet[2846]: I0129 16:37:17.654501 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-cilium-config-path\") pod \"cilium-qlcdh\" (UID: \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\") " pod="kube-system/cilium-qlcdh" Jan 29 16:37:17.654702 kubelet[2846]: I0129 16:37:17.654515 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-clustermesh-secrets\") pod \"cilium-qlcdh\" (UID: \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\") " pod="kube-system/cilium-qlcdh" Jan 29 16:37:17.817777 containerd[1525]: time="2025-01-29T16:37:17.817133174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qlcdh,Uid:d5b196d2-5245-42fc-b1bc-8b384cc3fae1,Namespace:kube-system,Attempt:0,}" Jan 29 16:37:17.840108 containerd[1525]: time="2025-01-29T16:37:17.840042939Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-9w2gw,Uid:e515de87-dd3f-4311-ba87-d858d600463e,Namespace:kube-system,Attempt:0,}" Jan 29 16:37:17.879113 systemd[1]: Created slice kubepods-besteffort-podf8a1c440_7300_40bc_9ce1_4c0c6cabc043.slice - libcontainer container kubepods-besteffort-podf8a1c440_7300_40bc_9ce1_4c0c6cabc043.slice. Jan 29 16:37:17.892055 containerd[1525]: time="2025-01-29T16:37:17.891367479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:37:17.892055 containerd[1525]: time="2025-01-29T16:37:17.891429980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:37:17.892055 containerd[1525]: time="2025-01-29T16:37:17.891442914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:37:17.892055 containerd[1525]: time="2025-01-29T16:37:17.891520183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:37:17.906128 containerd[1525]: time="2025-01-29T16:37:17.905136795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:37:17.906128 containerd[1525]: time="2025-01-29T16:37:17.905204626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:37:17.906128 containerd[1525]: time="2025-01-29T16:37:17.905221739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:37:17.906128 containerd[1525]: time="2025-01-29T16:37:17.905288637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:37:17.910974 systemd[1]: Started cri-containerd-e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624.scope - libcontainer container e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624. Jan 29 16:37:17.930991 systemd[1]: Started cri-containerd-d7df117e2cfccab1394c33c45800b620d02937e8e779d26f9c1471ce426f2527.scope - libcontainer container d7df117e2cfccab1394c33c45800b620d02937e8e779d26f9c1471ce426f2527. 
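
A side note on the "Created slice" entries here and at the top of this bootstrap: with CgroupDriver "systemd" (see the nodeConfig dump at 16:37:11), each pod's cgroup becomes a systemd slice named from its QoS class and UID, with dashes escaped to underscores. A small sketch re-deriving the names seen in this log; this illustrates the convention and is not the kubelet's own helper:

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName mirrors the naming convention visible in the journal:
// "kubepods-<qos>-pod<uid>.slice", with "-" in the UID escaped to "_".
func podSliceName(qos, uid string) string {
	escaped := strings.ReplaceAll(uid, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
}

func main() {
	// Matches: kubepods-besteffort-pode515de87_dd3f_4311_ba87_d858d600463e.slice
	fmt.Println(podSliceName("besteffort", "e515de87-dd3f-4311-ba87-d858d600463e"))
	// Static pods use the config hash as UID, so there is nothing to escape:
	fmt.Println(podSliceName("burstable", "2fdacadf288f818292d20354530a44bf"))
}
```

Running it reproduces exactly the two unit names journald logged above.
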
Jan 29 16:37:17.954061 containerd[1525]: time="2025-01-29T16:37:17.954029888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qlcdh,Uid:d5b196d2-5245-42fc-b1bc-8b384cc3fae1,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624\"" Jan 29 16:37:17.960450 containerd[1525]: time="2025-01-29T16:37:17.960403264Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 16:37:17.967694 containerd[1525]: time="2025-01-29T16:37:17.967667551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9w2gw,Uid:e515de87-dd3f-4311-ba87-d858d600463e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7df117e2cfccab1394c33c45800b620d02937e8e779d26f9c1471ce426f2527\"" Jan 29 16:37:17.970507 containerd[1525]: time="2025-01-29T16:37:17.970470934Z" level=info msg="CreateContainer within sandbox \"d7df117e2cfccab1394c33c45800b620d02937e8e779d26f9c1471ce426f2527\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 16:37:17.986585 containerd[1525]: time="2025-01-29T16:37:17.986527672Z" level=info msg="CreateContainer within sandbox \"d7df117e2cfccab1394c33c45800b620d02937e8e779d26f9c1471ce426f2527\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b94642e01985e659663b81ca9e63d4d35ac083da007c4daef62782eebb27a4a8\"" Jan 29 16:37:17.988564 containerd[1525]: time="2025-01-29T16:37:17.987401017Z" level=info msg="StartContainer for \"b94642e01985e659663b81ca9e63d4d35ac083da007c4daef62782eebb27a4a8\"" Jan 29 16:37:18.018463 systemd[1]: Started cri-containerd-b94642e01985e659663b81ca9e63d4d35ac083da007c4daef62782eebb27a4a8.scope - libcontainer container b94642e01985e659663b81ca9e63d4d35ac083da007c4daef62782eebb27a4a8. Jan 29 16:37:18.053992 containerd[1525]: time="2025-01-29T16:37:18.053951992Z" level=info msg="StartContainer for \"b94642e01985e659663b81ca9e63d4d35ac083da007c4daef62782eebb27a4a8\" returns successfully" Jan 29 16:37:18.056623 kubelet[2846]: I0129 16:37:18.056542 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xzps\" (UniqueName: \"kubernetes.io/projected/f8a1c440-7300-40bc-9ce1-4c0c6cabc043-kube-api-access-2xzps\") pod \"cilium-operator-5d85765b45-d9dnb\" (UID: \"f8a1c440-7300-40bc-9ce1-4c0c6cabc043\") " pod="kube-system/cilium-operator-5d85765b45-d9dnb" Jan 29 16:37:18.056623 kubelet[2846]: I0129 16:37:18.056572 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f8a1c440-7300-40bc-9ce1-4c0c6cabc043-cilium-config-path\") pod \"cilium-operator-5d85765b45-d9dnb\" (UID: \"f8a1c440-7300-40bc-9ce1-4c0c6cabc043\") " pod="kube-system/cilium-operator-5d85765b45-d9dnb" Jan 29 16:37:18.185567 containerd[1525]: time="2025-01-29T16:37:18.185416690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-d9dnb,Uid:f8a1c440-7300-40bc-9ce1-4c0c6cabc043,Namespace:kube-system,Attempt:0,}" Jan 29 16:37:18.213912 containerd[1525]: time="2025-01-29T16:37:18.213712195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:37:18.214069 containerd[1525]: time="2025-01-29T16:37:18.213936675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:37:18.214069 containerd[1525]: time="2025-01-29T16:37:18.213956894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:37:18.214482 containerd[1525]: time="2025-01-29T16:37:18.214269745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:37:18.239319 systemd[1]: Started cri-containerd-4a8042925720d93cdd7eba888f2452bc07fa8872188fc542b42a0e3a8ef44886.scope - libcontainer container 4a8042925720d93cdd7eba888f2452bc07fa8872188fc542b42a0e3a8ef44886. Jan 29 16:37:18.297025 containerd[1525]: time="2025-01-29T16:37:18.296916101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-d9dnb,Uid:f8a1c440-7300-40bc-9ce1-4c0c6cabc043,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a8042925720d93cdd7eba888f2452bc07fa8872188fc542b42a0e3a8ef44886\"" Jan 29 16:37:19.374081 containerd[1525]: time="2025-01-29T16:37:19.374024664Z" level=error msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" failed" error="failed to pull and unpack image \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/cilium/blobs/sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b: 502 Bad Gateway" Jan 29 16:37:19.374639 containerd[1525]: time="2025-01-29T16:37:19.374148572Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=2687" Jan 29 16:37:19.374677 kubelet[2846]: E0129 16:37:19.374267 2846 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/cilium/blobs/sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b: 502 Bad Gateway" image="quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5" Jan 29 16:37:19.374677 kubelet[2846]: E0129 16:37:19.374328 2846 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/cilium/blobs/sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b: 502 Bad Gateway" image="quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5" Jan 29 16:37:19.374677 kubelet[2846]: E0129 16:37:19.374546 2846 kuberuntime_manager.go:1272] "Unhandled Error" err=< Jan 29 16:37:19.374677 kubelet[2846]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jan 29 16:37:19.374677 kubelet[2846]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jan 29 16:37:19.374677 kubelet[2846]: rm /hostbin/cilium-mount Jan 
29 16:37:19.375187 kubelet[2846]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fcfj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-qlcdh_kube-system(d5b196d2-5245-42fc-b1bc-8b384cc3fae1): ErrImagePull: failed to pull and unpack image "quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/cilium/blobs/sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b: 502 Bad Gateway
Jan 29 16:37:19.375187 kubelet[2846]: > logger="UnhandledError"
Jan 29 16:37:19.375877 containerd[1525]: time="2025-01-29T16:37:19.375592642Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 29 16:37:19.376086 kubelet[2846]: E0129 16:37:19.375965 2846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with ErrImagePull: \"failed to pull and unpack image \\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://quay.io/v2/cilium/cilium/blobs/sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b: 502 Bad Gateway\"" pod="kube-system/cilium-qlcdh" podUID="d5b196d2-5245-42fc-b1bc-8b384cc3fae1"
Jan 29 16:37:19.571933 kubelet[2846]: E0129 16:37:19.571760 2846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"\"" pod="kube-system/cilium-qlcdh" podUID="d5b196d2-5245-42fc-b1bc-8b384cc3fae1"
Jan 29 16:37:19.588058 kubelet[2846]: I0129 16:37:19.588006 2846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9w2gw" podStartSLOduration=2.5879915970000003 podStartE2EDuration="2.587991597s" podCreationTimestamp="2025-01-29 16:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:37:18.57790229 +0000 UTC m=+7.158276638" watchObservedRunningTime="2025-01-29 16:37:19.587991597 +0000 UTC m=+8.168365946"
Jan 29 16:37:20.889444 systemd[1]: Started sshd@12-142.132.231.50:22-152.32.133.149:58626.service - OpenSSH per-connection server daemon (152.32.133.149:58626).
Jan 29 16:37:21.915620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1368753041.mount: Deactivated successfully.
Jan 29 16:37:22.315277 sshd[3213]: Invalid user test from 152.32.133.149 port 58626
Jan 29 16:37:22.585492 sshd[3213]: Received disconnect from 152.32.133.149 port 58626:11: Bye Bye [preauth]
Jan 29 16:37:22.585492 sshd[3213]: Disconnected from invalid user test 152.32.133.149 port 58626 [preauth]
Jan 29 16:37:22.588594 systemd[1]: sshd@12-142.132.231.50:22-152.32.133.149:58626.service: Deactivated successfully.
Jan 29 16:37:22.650376 update_engine[1510]: I20250129 16:37:22.650295 1510 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 29 16:37:22.650853 update_engine[1510]: I20250129 16:37:22.650560 1510 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 29 16:37:22.650959 update_engine[1510]: I20250129 16:37:22.650921 1510 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 29 16:37:22.651208 update_engine[1510]: E20250129 16:37:22.651172 1510 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 29 16:37:22.651267 update_engine[1510]: I20250129 16:37:22.651226 1510 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 29 16:37:22.651267 update_engine[1510]: I20250129 16:37:22.651237 1510 omaha_request_action.cc:617] Omaha request response:
Jan 29 16:37:22.651321 update_engine[1510]: E20250129 16:37:22.651307 1510 omaha_request_action.cc:636] Omaha request network transfer failed.
Jan 29 16:37:22.651383 update_engine[1510]: I20250129 16:37:22.651356 1510 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jan 29 16:37:22.651383 update_engine[1510]: I20250129 16:37:22.651369 1510 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 29 16:37:22.651383 update_engine[1510]: I20250129 16:37:22.651376 1510 update_attempter.cc:306] Processing Done.
Jan 29 16:37:22.651464 update_engine[1510]: E20250129 16:37:22.651391 1510 update_attempter.cc:619] Update failed.
Jan 29 16:37:22.651464 update_engine[1510]: I20250129 16:37:22.651397 1510 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jan 29 16:37:22.651464 update_engine[1510]: I20250129 16:37:22.651403 1510 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jan 29 16:37:22.651464 update_engine[1510]: I20250129 16:37:22.651409 1510 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jan 29 16:37:22.651565 update_engine[1510]: I20250129 16:37:22.651473 1510 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 29 16:37:22.651565 update_engine[1510]: I20250129 16:37:22.651491 1510 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 29 16:37:22.651565 update_engine[1510]: I20250129 16:37:22.651497 1510 omaha_request_action.cc:272] Request: <?xml version="1.0" encoding="UTF-8"?>
Jan 29 16:37:22.651565 update_engine[1510]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1">
Jan 29 16:37:22.651565 update_engine[1510]: <os version="Chateau" platform="CoreOS" sp="4230.0.0_x86_64"></os>
Jan 29 16:37:22.651565 update_engine[1510]: <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="4230.0.0" track="alpha" bootid="{e739d810-a8ae-4d91-acbf-da76f77c7853}" oem="hetzner" oemversion="0" alephversion="4230.0.0" machineid="df5025e58bb243c9bf7751f1e29c085a" machinealias="" lang="en-US" board="amd64-usr" hardware_class="" delta_okay="false" >
Jan 29 16:37:22.651565 update_engine[1510]: <event eventtype="3" eventresult="0" errorcode="268437456"></event>
Jan 29 16:37:22.651565 update_engine[1510]: </app>
Jan 29 16:37:22.651565 update_engine[1510]: </request>
Jan 29 16:37:22.651565 update_engine[1510]: I20250129 16:37:22.651504 1510 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 29 16:37:22.651781 update_engine[1510]: I20250129 16:37:22.651624 1510 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 29 16:37:22.651781 update_engine[1510]: I20250129 16:37:22.651746 1510 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 29 16:37:22.652093 locksmithd[1537]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jan 29 16:37:22.652389 update_engine[1510]: E20250129 16:37:22.652055 1510 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 29 16:37:22.652389 update_engine[1510]: I20250129 16:37:22.652107 1510 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 29 16:37:22.652389 update_engine[1510]: I20250129 16:37:22.652160 1510 omaha_request_action.cc:617] Omaha request response:
Jan 29 16:37:22.652389 update_engine[1510]: I20250129 16:37:22.652176 1510 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 29 16:37:22.652389 update_engine[1510]: I20250129 16:37:22.652184 1510 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 29 16:37:22.652389 update_engine[1510]: I20250129 16:37:22.652189 1510 update_attempter.cc:306] Processing Done.
Jan 29 16:37:22.652389 update_engine[1510]: I20250129 16:37:22.652196 1510 update_attempter.cc:310] Error event sent.
Jan 29 16:37:22.652389 update_engine[1510]: I20250129 16:37:22.652205 1510 update_check_scheduler.cc:74] Next update check in 47m57s
Jan 29 16:37:22.652648 locksmithd[1537]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jan 29 16:37:36.060254 systemd[1]: Started sshd@13-142.132.231.50:22-72.240.125.133:54686.service - OpenSSH per-connection server daemon (72.240.125.133:54686).
Jan 29 16:37:36.763488 sshd[3226]: Invalid user sammy from 72.240.125.133 port 54686
Jan 29 16:37:36.891552 sshd[3226]: Received disconnect from 72.240.125.133 port 54686:11: Bye Bye [preauth]
Jan 29 16:37:36.891552 sshd[3226]: Disconnected from invalid user sammy 72.240.125.133 port 54686 [preauth]
Jan 29 16:37:36.895056 systemd[1]: sshd@13-142.132.231.50:22-72.240.125.133:54686.service: Deactivated successfully.
Jan 29 16:37:57.910128 systemd[1]: Started sshd@14-142.132.231.50:22-83.222.191.62:34926.service - OpenSSH per-connection server daemon (83.222.191.62:34926).
Jan 29 16:38:00.964079 sshd[3235]: Invalid user ruolin from 83.222.191.62 port 34926
Jan 29 16:38:01.719705 sshd[3235]: Connection closed by invalid user ruolin 83.222.191.62 port 34926 [preauth]
Jan 29 16:38:01.722919 systemd[1]: sshd@14-142.132.231.50:22-83.222.191.62:34926.service: Deactivated successfully.
Jan 29 16:38:01.768126 systemd[1]: Started sshd@15-142.132.231.50:22-83.222.191.62:25700.service - OpenSSH per-connection server daemon (83.222.191.62:25700).
Jan 29 16:38:04.286246 sshd[3240]: Invalid user rust from 83.222.191.62 port 25700
Jan 29 16:38:05.092904 sshd[3240]: Connection closed by invalid user rust 83.222.191.62 port 25700 [preauth]
Jan 29 16:38:05.094928 systemd[1]: sshd@15-142.132.231.50:22-83.222.191.62:25700.service: Deactivated successfully.
Jan 29 16:38:05.142110 systemd[1]: Started sshd@16-142.132.231.50:22-83.222.191.62:25716.service - OpenSSH per-connection server daemon (83.222.191.62:25716).
Jan 29 16:38:08.103762 sshd[3245]: Invalid user rust from 83.222.191.62 port 25716
Jan 29 16:38:08.997015 sshd[3245]: Connection closed by invalid user rust 83.222.191.62 port 25716 [preauth]
Jan 29 16:38:09.000254 systemd[1]: sshd@16-142.132.231.50:22-83.222.191.62:25716.service: Deactivated successfully.
Jan 29 16:38:09.048074 systemd[1]: Started sshd@17-142.132.231.50:22-83.222.191.62:25736.service - OpenSSH per-connection server daemon (83.222.191.62:25736).
Jan 29 16:38:12.284716 sshd[3250]: Invalid user rust from 83.222.191.62 port 25736
Jan 29 16:38:12.960525 sshd[3250]: Connection closed by invalid user rust 83.222.191.62 port 25736 [preauth]
Jan 29 16:38:12.963782 systemd[1]: sshd@17-142.132.231.50:22-83.222.191.62:25736.service: Deactivated successfully.
Jan 29 16:38:13.013178 systemd[1]: Started sshd@18-142.132.231.50:22-83.222.191.62:55658.service - OpenSSH per-connection server daemon (83.222.191.62:55658).
Jan 29 16:38:15.648427 sshd[3257]: Invalid user rust from 83.222.191.62 port 55658
Jan 29 16:38:16.298348 sshd[3257]: Connection closed by invalid user rust 83.222.191.62 port 55658 [preauth]
Jan 29 16:38:16.301083 systemd[1]: sshd@18-142.132.231.50:22-83.222.191.62:55658.service: Deactivated successfully.
Jan 29 16:38:16.347118 systemd[1]: Started sshd@19-142.132.231.50:22-83.222.191.62:55676.service - OpenSSH per-connection server daemon (83.222.191.62:55676).
Jan 29 16:38:19.007963 sshd[3262]: Invalid user rust from 83.222.191.62 port 55676
Jan 29 16:38:19.738262 sshd[3262]: Connection closed by invalid user rust 83.222.191.62 port 55676 [preauth]
Jan 29 16:38:19.742199 systemd[1]: sshd@19-142.132.231.50:22-83.222.191.62:55676.service: Deactivated successfully.
Jan 29 16:38:19.784234 systemd[1]: Started sshd@20-142.132.231.50:22-83.222.191.62:55712.service - OpenSSH per-connection server daemon (83.222.191.62:55712).
Jan 29 16:38:22.351516 sshd[3270]: Invalid user rust from 83.222.191.62 port 55712
Jan 29 16:38:22.998974 sshd[3270]: Connection closed by invalid user rust 83.222.191.62 port 55712 [preauth]
Jan 29 16:38:23.003057 systemd[1]: sshd@20-142.132.231.50:22-83.222.191.62:55712.service: Deactivated successfully.
Jan 29 16:38:23.047108 systemd[1]: Started sshd@21-142.132.231.50:22-83.222.191.62:64540.service - OpenSSH per-connection server daemon (83.222.191.62:64540).
Jan 29 16:38:25.939982 sshd[3275]: Invalid user rust from 83.222.191.62 port 64540
Jan 29 16:38:26.704472 sshd[3275]: Connection closed by invalid user rust 83.222.191.62 port 64540 [preauth]
Jan 29 16:38:26.707647 systemd[1]: sshd@21-142.132.231.50:22-83.222.191.62:64540.service: Deactivated successfully.
Jan 29 16:38:26.746163 systemd[1]: Started sshd@22-142.132.231.50:22-83.222.191.62:64564.service - OpenSSH per-connection server daemon (83.222.191.62:64564).
Jan 29 16:38:29.381645 sshd[3280]: Invalid user rust from 83.222.191.62 port 64564
Jan 29 16:38:30.005699 sshd[3280]: Connection closed by invalid user rust 83.222.191.62 port 64564 [preauth]
Jan 29 16:38:30.008481 systemd[1]: sshd@22-142.132.231.50:22-83.222.191.62:64564.service: Deactivated successfully.
Jan 29 16:38:30.050077 systemd[1]: Started sshd@23-142.132.231.50:22-83.222.191.62:64572.service - OpenSSH per-connection server daemon (83.222.191.62:64572).
Jan 29 16:38:33.515942 sshd[3285]: Invalid user rustserver from 83.222.191.62 port 64572
Jan 29 16:38:34.095252 sshd[3285]: Connection closed by invalid user rustserver 83.222.191.62 port 64572 [preauth]
Jan 29 16:38:34.097190 systemd[1]: sshd@23-142.132.231.50:22-83.222.191.62:64572.service: Deactivated successfully.
Jan 29 16:38:34.143087 systemd[1]: Started sshd@24-142.132.231.50:22-83.222.191.62:7214.service - OpenSSH per-connection server daemon (83.222.191.62:7214).
Jan 29 16:38:37.139966 sshd[3290]: Invalid user rustserver from 83.222.191.62 port 7214
Jan 29 16:38:37.664311 sshd[3290]: Connection closed by invalid user rustserver 83.222.191.62 port 7214 [preauth]
Jan 29 16:38:37.667377 systemd[1]: sshd@24-142.132.231.50:22-83.222.191.62:7214.service: Deactivated successfully.
Jan 29 16:38:37.712095 systemd[1]: Started sshd@25-142.132.231.50:22-83.222.191.62:7228.service - OpenSSH per-connection server daemon (83.222.191.62:7228).
Jan 29 16:38:40.720172 sshd[3295]: Invalid user rustserver from 83.222.191.62 port 7228
Jan 29 16:38:41.326156 sshd[3295]: Connection closed by invalid user rustserver 83.222.191.62 port 7228 [preauth]
Jan 29 16:38:41.330695 systemd[1]: sshd@25-142.132.231.50:22-83.222.191.62:7228.service: Deactivated successfully.
Jan 29 16:38:41.376226 systemd[1]: Started sshd@26-142.132.231.50:22-83.222.191.62:47396.service - OpenSSH per-connection server daemon (83.222.191.62:47396).
Jan 29 16:38:43.671103 systemd[1]: Started sshd@27-142.132.231.50:22-72.240.125.133:51994.service - OpenSSH per-connection server daemon (72.240.125.133:51994).
Jan 29 16:38:43.857112 systemd[1]: Started sshd@28-142.132.231.50:22-152.32.133.149:29800.service - OpenSSH per-connection server daemon (152.32.133.149:29800).
Jan 29 16:38:44.321147 sshd[3300]: Invalid user rustserver from 83.222.191.62 port 47396
Jan 29 16:38:44.445665 sshd[3303]: Invalid user user from 72.240.125.133 port 51994
Jan 29 16:38:44.580049 sshd[3303]: Received disconnect from 72.240.125.133 port 51994:11: Bye Bye [preauth]
Jan 29 16:38:44.580049 sshd[3303]: Disconnected from invalid user user 72.240.125.133 port 51994 [preauth]
Jan 29 16:38:44.582516 systemd[1]: sshd@27-142.132.231.50:22-72.240.125.133:51994.service: Deactivated successfully.
Jan 29 16:38:44.983253 sshd[3300]: Connection closed by invalid user rustserver 83.222.191.62 port 47396 [preauth]
Jan 29 16:38:44.986857 systemd[1]: sshd@26-142.132.231.50:22-83.222.191.62:47396.service: Deactivated successfully.
Jan 29 16:38:45.035080 systemd[1]: Started sshd@29-142.132.231.50:22-83.222.191.62:47418.service - OpenSSH per-connection server daemon (83.222.191.62:47418).
Jan 29 16:38:46.689620 sshd[3306]: Invalid user steam from 152.32.133.149 port 29800
Jan 29 16:38:46.958959 sshd[3306]: Received disconnect from 152.32.133.149 port 29800:11: Bye Bye [preauth]
Jan 29 16:38:46.958959 sshd[3306]: Disconnected from invalid user steam 152.32.133.149 port 29800 [preauth]
Jan 29 16:38:46.962277 systemd[1]: sshd@28-142.132.231.50:22-152.32.133.149:29800.service: Deactivated successfully.
Jan 29 16:38:47.701057 systemd[1]: Started sshd@30-142.132.231.50:22-103.31.39.159:40590.service - OpenSSH per-connection server daemon (103.31.39.159:40590).
Jan 29 16:38:48.516321 sshd[3313]: Invalid user rustserver from 83.222.191.62 port 47418
Jan 29 16:38:49.034911 sshd[3313]: Connection closed by invalid user rustserver 83.222.191.62 port 47418 [preauth]
Jan 29 16:38:49.036762 systemd[1]: sshd@29-142.132.231.50:22-83.222.191.62:47418.service: Deactivated successfully.
Jan 29 16:38:49.060635 sshd[3318]: Received disconnect from 103.31.39.159 port 40590:11: Bye Bye [preauth]
Jan 29 16:38:49.060635 sshd[3318]: Disconnected from authenticating user root 103.31.39.159 port 40590 [preauth]
Jan 29 16:38:49.063539 systemd[1]: sshd@30-142.132.231.50:22-103.31.39.159:40590.service: Deactivated successfully.
Jan 29 16:38:49.080100 systemd[1]: Started sshd@31-142.132.231.50:22-83.222.191.62:47446.service - OpenSSH per-connection server daemon (83.222.191.62:47446).
Jan 29 16:38:50.805713 containerd[1525]: time="2025-01-29T16:38:50.805650599Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:38:50.806744 containerd[1525]: time="2025-01-29T16:38:50.806570097Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jan 29 16:38:50.808605 containerd[1525]: time="2025-01-29T16:38:50.807391753Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:38:50.808605 containerd[1525]: time="2025-01-29T16:38:50.808490322Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1m31.432868844s"
Jan 29 16:38:50.808605 containerd[1525]: time="2025-01-29T16:38:50.808513608Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 29 16:38:50.809684 containerd[1525]: time="2025-01-29T16:38:50.809670179Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 29 16:38:50.812892 containerd[1525]: time="2025-01-29T16:38:50.812866438Z" level=info msg="CreateContainer within sandbox \"4a8042925720d93cdd7eba888f2452bc07fa8872188fc542b42a0e3a8ef44886\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 29 16:38:50.825364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2786512210.mount: Deactivated successfully.
Jan 29 16:38:50.838182 containerd[1525]: time="2025-01-29T16:38:50.838150538Z" level=info msg="CreateContainer within sandbox \"4a8042925720d93cdd7eba888f2452bc07fa8872188fc542b42a0e3a8ef44886\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9e7d307afb507e9def083e852594d2d5bc1040e27d2b3fc66b04ad0e75dd8c35\""
Jan 29 16:38:50.838618 containerd[1525]: time="2025-01-29T16:38:50.838575609Z" level=info msg="StartContainer for \"9e7d307afb507e9def083e852594d2d5bc1040e27d2b3fc66b04ad0e75dd8c35\""
Jan 29 16:38:50.872295 systemd[1]: Started cri-containerd-9e7d307afb507e9def083e852594d2d5bc1040e27d2b3fc66b04ad0e75dd8c35.scope - libcontainer container 9e7d307afb507e9def083e852594d2d5bc1040e27d2b3fc66b04ad0e75dd8c35.
Jan 29 16:38:50.899839 containerd[1525]: time="2025-01-29T16:38:50.899778845Z" level=info msg="StartContainer for \"9e7d307afb507e9def083e852594d2d5bc1040e27d2b3fc66b04ad0e75dd8c35\" returns successfully"
Jan 29 16:38:51.749838 kubelet[2846]: I0129 16:38:51.749758 2846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-d9dnb" podStartSLOduration=2.238782473 podStartE2EDuration="1m34.749726041s" podCreationTimestamp="2025-01-29 16:37:17 +0000 UTC" firstStartedPulling="2025-01-29 16:37:18.298582228 +0000 UTC m=+6.878956577" lastFinishedPulling="2025-01-29 16:38:50.809525796 +0000 UTC m=+99.389900145" observedRunningTime="2025-01-29 16:38:51.748287058 +0000 UTC m=+100.328661417" watchObservedRunningTime="2025-01-29 16:38:51.749726041 +0000 UTC m=+100.330100389"
Jan 29 16:38:51.871235 sshd[3327]: Invalid user rustserver from 83.222.191.62 port 47446
Jan 29 16:38:52.795583 sshd[3327]: Connection closed by invalid user rustserver 83.222.191.62 port 47446 [preauth]
Jan 29 16:38:52.799194 systemd[1]: sshd@31-142.132.231.50:22-83.222.191.62:47446.service: Deactivated successfully.
Jan 29 16:38:52.846303 systemd[1]: Started sshd@32-142.132.231.50:22-83.222.191.62:21920.service - OpenSSH per-connection server daemon (83.222.191.62:21920).
Jan 29 16:38:55.716573 sshd[3377]: Invalid user ruyin from 83.222.191.62 port 21920
Jan 29 16:38:56.451374 sshd[3377]: Connection closed by invalid user ruyin 83.222.191.62 port 21920 [preauth]
Jan 29 16:38:56.454541 systemd[1]: sshd@32-142.132.231.50:22-83.222.191.62:21920.service: Deactivated successfully.
Jan 29 16:38:56.496209 systemd[1]: Started sshd@33-142.132.231.50:22-83.222.191.62:21938.service - OpenSSH per-connection server daemon (83.222.191.62:21938).
Jan 29 16:38:56.544115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1622931635.mount: Deactivated successfully.
Jan 29 16:38:58.030342 containerd[1525]: time="2025-01-29T16:38:58.030271280Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:38:58.031603 containerd[1525]: time="2025-01-29T16:38:58.031568911Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jan 29 16:38:58.032327 containerd[1525]: time="2025-01-29T16:38:58.032088373Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:38:58.033956 containerd[1525]: time="2025-01-29T16:38:58.033935774Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.224174088s"
Jan 29 16:38:58.034035 containerd[1525]: time="2025-01-29T16:38:58.034021332Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 29 16:38:58.037242 containerd[1525]: time="2025-01-29T16:38:58.037217643Z" level=info msg="CreateContainer within sandbox \"e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 16:38:58.088374 containerd[1525]: time="2025-01-29T16:38:58.088341568Z" level=info msg="CreateContainer within sandbox \"e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c718352908ef1f674e7b4036b463fdca4b4628c0b32edff2280eeaa885901cea\""
Jan 29 16:38:58.091114 containerd[1525]: time="2025-01-29T16:38:58.089615793Z" level=info msg="StartContainer for \"c718352908ef1f674e7b4036b463fdca4b4628c0b32edff2280eeaa885901cea\""
Jan 29 16:38:58.228929 systemd[1]: Started cri-containerd-c718352908ef1f674e7b4036b463fdca4b4628c0b32edff2280eeaa885901cea.scope - libcontainer container c718352908ef1f674e7b4036b463fdca4b4628c0b32edff2280eeaa885901cea.
Jan 29 16:38:58.253462 containerd[1525]: time="2025-01-29T16:38:58.253384930Z" level=info msg="StartContainer for \"c718352908ef1f674e7b4036b463fdca4b4628c0b32edff2280eeaa885901cea\" returns successfully"
Jan 29 16:38:58.267974 systemd[1]: cri-containerd-c718352908ef1f674e7b4036b463fdca4b4628c0b32edff2280eeaa885901cea.scope: Deactivated successfully.
Jan 29 16:38:58.327139 containerd[1525]: time="2025-01-29T16:38:58.318074662Z" level=info msg="shim disconnected" id=c718352908ef1f674e7b4036b463fdca4b4628c0b32edff2280eeaa885901cea namespace=k8s.io
Jan 29 16:38:58.327139 containerd[1525]: time="2025-01-29T16:38:58.327068365Z" level=warning msg="cleaning up after shim disconnected" id=c718352908ef1f674e7b4036b463fdca4b4628c0b32edff2280eeaa885901cea namespace=k8s.io
Jan 29 16:38:58.327139 containerd[1525]: time="2025-01-29T16:38:58.327082242Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:38:58.752227 containerd[1525]: time="2025-01-29T16:38:58.752163588Z" level=info msg="CreateContainer within sandbox \"e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 16:38:58.768733 containerd[1525]: time="2025-01-29T16:38:58.768584080Z" level=info msg="CreateContainer within sandbox \"e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dd330769052da4a74307dc96fc7ea491188617ab4e5de493259577a8510fb5ea\""
Jan 29 16:38:58.770160 containerd[1525]: time="2025-01-29T16:38:58.769659466Z" level=info msg="StartContainer for \"dd330769052da4a74307dc96fc7ea491188617ab4e5de493259577a8510fb5ea\""
Jan 29 16:38:58.824952 systemd[1]: Started cri-containerd-dd330769052da4a74307dc96fc7ea491188617ab4e5de493259577a8510fb5ea.scope - libcontainer container dd330769052da4a74307dc96fc7ea491188617ab4e5de493259577a8510fb5ea.
Jan 29 16:38:58.871552 containerd[1525]: time="2025-01-29T16:38:58.871373156Z" level=info msg="StartContainer for \"dd330769052da4a74307dc96fc7ea491188617ab4e5de493259577a8510fb5ea\" returns successfully"
Jan 29 16:38:58.890030 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 16:38:58.890239 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:38:58.890902 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:38:58.899381 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:38:58.901988 systemd[1]: cri-containerd-dd330769052da4a74307dc96fc7ea491188617ab4e5de493259577a8510fb5ea.scope: Deactivated successfully.
Jan 29 16:38:58.903454 sshd[3386]: Invalid user ruyin from 83.222.191.62 port 21938
Jan 29 16:38:58.930562 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:38:58.943174 containerd[1525]: time="2025-01-29T16:38:58.943095380Z" level=info msg="shim disconnected" id=dd330769052da4a74307dc96fc7ea491188617ab4e5de493259577a8510fb5ea namespace=k8s.io
Jan 29 16:38:58.943174 containerd[1525]: time="2025-01-29T16:38:58.943164324Z" level=warning msg="cleaning up after shim disconnected" id=dd330769052da4a74307dc96fc7ea491188617ab4e5de493259577a8510fb5ea namespace=k8s.io
Jan 29 16:38:58.943174 containerd[1525]: time="2025-01-29T16:38:58.943175825Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:38:59.087048 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c718352908ef1f674e7b4036b463fdca4b4628c0b32edff2280eeaa885901cea-rootfs.mount: Deactivated successfully.
Jan 29 16:38:59.498377 sshd[3386]: Connection closed by invalid user ruyin 83.222.191.62 port 21938 [preauth]
Jan 29 16:38:59.500916 systemd[1]: sshd@33-142.132.231.50:22-83.222.191.62:21938.service: Deactivated successfully.
Jan 29 16:38:59.549312 systemd[1]: Started sshd@34-142.132.231.50:22-83.222.191.62:21966.service - OpenSSH per-connection server daemon (83.222.191.62:21966).
Jan 29 16:38:59.758566 containerd[1525]: time="2025-01-29T16:38:59.758450113Z" level=info msg="CreateContainer within sandbox \"e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 16:38:59.788375 containerd[1525]: time="2025-01-29T16:38:59.788319943Z" level=info msg="CreateContainer within sandbox \"e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2b9e0844985a8742766bacd83394a5251542824002d0cd444275bf5b4f66e0b2\""
Jan 29 16:38:59.789391 containerd[1525]: time="2025-01-29T16:38:59.789156394Z" level=info msg="StartContainer for \"2b9e0844985a8742766bacd83394a5251542824002d0cd444275bf5b4f66e0b2\""
Jan 29 16:38:59.836954 systemd[1]: Started cri-containerd-2b9e0844985a8742766bacd83394a5251542824002d0cd444275bf5b4f66e0b2.scope - libcontainer container 2b9e0844985a8742766bacd83394a5251542824002d0cd444275bf5b4f66e0b2.
Jan 29 16:38:59.882844 containerd[1525]: time="2025-01-29T16:38:59.882249024Z" level=info msg="StartContainer for \"2b9e0844985a8742766bacd83394a5251542824002d0cd444275bf5b4f66e0b2\" returns successfully"
Jan 29 16:38:59.886654 systemd[1]: cri-containerd-2b9e0844985a8742766bacd83394a5251542824002d0cd444275bf5b4f66e0b2.scope: Deactivated successfully.
Jan 29 16:38:59.910620 containerd[1525]: time="2025-01-29T16:38:59.910548966Z" level=info msg="shim disconnected" id=2b9e0844985a8742766bacd83394a5251542824002d0cd444275bf5b4f66e0b2 namespace=k8s.io
Jan 29 16:38:59.910620 containerd[1525]: time="2025-01-29T16:38:59.910616648Z" level=warning msg="cleaning up after shim disconnected" id=2b9e0844985a8742766bacd83394a5251542824002d0cd444275bf5b4f66e0b2 namespace=k8s.io
Jan 29 16:38:59.910620 containerd[1525]: time="2025-01-29T16:38:59.910624833Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:39:00.086406 systemd[1]: run-containerd-runc-k8s.io-2b9e0844985a8742766bacd83394a5251542824002d0cd444275bf5b4f66e0b2-runc.fRmeug.mount: Deactivated successfully.
Jan 29 16:39:00.086553 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b9e0844985a8742766bacd83394a5251542824002d0cd444275bf5b4f66e0b2-rootfs.mount: Deactivated successfully.
Jan 29 16:39:00.761843 containerd[1525]: time="2025-01-29T16:39:00.761610834Z" level=info msg="CreateContainer within sandbox \"e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 16:39:00.790544 containerd[1525]: time="2025-01-29T16:39:00.788138353Z" level=info msg="CreateContainer within sandbox \"e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"05643d52ef7e1b8413ce4ea68bad46e825efbfbeb23601670bbad7cd0d2a295f\""
Jan 29 16:39:00.790544 containerd[1525]: time="2025-01-29T16:39:00.789040331Z" level=info msg="StartContainer for \"05643d52ef7e1b8413ce4ea68bad46e825efbfbeb23601670bbad7cd0d2a295f\""
Jan 29 16:39:00.827959 systemd[1]: Started cri-containerd-05643d52ef7e1b8413ce4ea68bad46e825efbfbeb23601670bbad7cd0d2a295f.scope - libcontainer container 05643d52ef7e1b8413ce4ea68bad46e825efbfbeb23601670bbad7cd0d2a295f.
Jan 29 16:39:00.852468 systemd[1]: cri-containerd-05643d52ef7e1b8413ce4ea68bad46e825efbfbeb23601670bbad7cd0d2a295f.scope: Deactivated successfully.
Jan 29 16:39:00.855467 containerd[1525]: time="2025-01-29T16:39:00.855433384Z" level=info msg="StartContainer for \"05643d52ef7e1b8413ce4ea68bad46e825efbfbeb23601670bbad7cd0d2a295f\" returns successfully"
Jan 29 16:39:00.879724 containerd[1525]: time="2025-01-29T16:39:00.879624922Z" level=info msg="shim disconnected" id=05643d52ef7e1b8413ce4ea68bad46e825efbfbeb23601670bbad7cd0d2a295f namespace=k8s.io
Jan 29 16:39:00.879724 containerd[1525]: time="2025-01-29T16:39:00.879716982Z" level=warning msg="cleaning up after shim disconnected" id=05643d52ef7e1b8413ce4ea68bad46e825efbfbeb23601670bbad7cd0d2a295f namespace=k8s.io
Jan 29 16:39:00.879724 containerd[1525]: time="2025-01-29T16:39:00.879725618Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:39:01.086131 systemd[1]: run-containerd-runc-k8s.io-05643d52ef7e1b8413ce4ea68bad46e825efbfbeb23601670bbad7cd0d2a295f-runc.IejTn6.mount: Deactivated successfully.
Jan 29 16:39:01.086248 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05643d52ef7e1b8413ce4ea68bad46e825efbfbeb23601670bbad7cd0d2a295f-rootfs.mount: Deactivated successfully.
Jan 29 16:39:01.765622 containerd[1525]: time="2025-01-29T16:39:01.765005622Z" level=info msg="CreateContainer within sandbox \"e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 16:39:01.802861 containerd[1525]: time="2025-01-29T16:39:01.791767850Z" level=info msg="CreateContainer within sandbox \"e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"94cf39c2b07f96c5220d4a33c6beb2da99e9232ab70993f638ec75495c2619af\""
Jan 29 16:39:01.802861 containerd[1525]: time="2025-01-29T16:39:01.792752418Z" level=info msg="StartContainer for \"94cf39c2b07f96c5220d4a33c6beb2da99e9232ab70993f638ec75495c2619af\""
Jan 29 16:39:01.795392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4281262334.mount: Deactivated successfully.
Jan 29 16:39:01.837944 systemd[1]: Started cri-containerd-94cf39c2b07f96c5220d4a33c6beb2da99e9232ab70993f638ec75495c2619af.scope - libcontainer container 94cf39c2b07f96c5220d4a33c6beb2da99e9232ab70993f638ec75495c2619af.
Jan 29 16:39:01.878491 containerd[1525]: time="2025-01-29T16:39:01.878415254Z" level=info msg="StartContainer for \"94cf39c2b07f96c5220d4a33c6beb2da99e9232ab70993f638ec75495c2619af\" returns successfully"
Jan 29 16:39:02.028637 kubelet[2846]: I0129 16:39:02.026735 2846 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jan 29 16:39:02.077637 systemd[1]: Created slice kubepods-burstable-pode6baf5fc_0773_4e37_84c1_899321f1d3e4.slice - libcontainer container kubepods-burstable-pode6baf5fc_0773_4e37_84c1_899321f1d3e4.slice.
Jan 29 16:39:02.106901 systemd[1]: Created slice kubepods-burstable-pod3da6a131_5c06_45dd_aec9_0ae48ecb8a6c.slice - libcontainer container kubepods-burstable-pod3da6a131_5c06_45dd_aec9_0ae48ecb8a6c.slice.
Jan 29 16:39:02.160668 kubelet[2846]: I0129 16:39:02.160607 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6baf5fc-0773-4e37-84c1-899321f1d3e4-config-volume\") pod \"coredns-6f6b679f8f-wmzfj\" (UID: \"e6baf5fc-0773-4e37-84c1-899321f1d3e4\") " pod="kube-system/coredns-6f6b679f8f-wmzfj"
Jan 29 16:39:02.160668 kubelet[2846]: I0129 16:39:02.160673 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4glgg\" (UniqueName: \"kubernetes.io/projected/e6baf5fc-0773-4e37-84c1-899321f1d3e4-kube-api-access-4glgg\") pod \"coredns-6f6b679f8f-wmzfj\" (UID: \"e6baf5fc-0773-4e37-84c1-899321f1d3e4\") " pod="kube-system/coredns-6f6b679f8f-wmzfj"
Jan 29 16:39:02.262917 kubelet[2846]: I0129 16:39:02.261401 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3da6a131-5c06-45dd-aec9-0ae48ecb8a6c-config-volume\") pod \"coredns-6f6b679f8f-gbmkp\" (UID: \"3da6a131-5c06-45dd-aec9-0ae48ecb8a6c\") " pod="kube-system/coredns-6f6b679f8f-gbmkp"
Jan 29 16:39:02.262917 kubelet[2846]: I0129 16:39:02.261462 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24v8n\" (UniqueName: \"kubernetes.io/projected/3da6a131-5c06-45dd-aec9-0ae48ecb8a6c-kube-api-access-24v8n\") pod \"coredns-6f6b679f8f-gbmkp\" (UID: \"3da6a131-5c06-45dd-aec9-0ae48ecb8a6c\") " pod="kube-system/coredns-6f6b679f8f-gbmkp"
Jan 29 16:39:02.399970 containerd[1525]: time="2025-01-29T16:39:02.399885004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wmzfj,Uid:e6baf5fc-0773-4e37-84c1-899321f1d3e4,Namespace:kube-system,Attempt:0,}"
Jan 29 16:39:02.410963 containerd[1525]: time="2025-01-29T16:39:02.410934335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gbmkp,Uid:3da6a131-5c06-45dd-aec9-0ae48ecb8a6c,Namespace:kube-system,Attempt:0,}"
Jan 29 16:39:02.784835 kubelet[2846]: I0129 16:39:02.784148 2846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qlcdh" podStartSLOduration=5.7092879629999995 podStartE2EDuration="1m45.784129591s" podCreationTimestamp="2025-01-29 16:37:17 +0000 UTC" firstStartedPulling="2025-01-29 16:37:17.959894948 +0000 UTC m=+6.540269297" lastFinishedPulling="2025-01-29 16:38:58.034736575 +0000 UTC m=+106.615110925" observedRunningTime="2025-01-29 16:39:02.783989127 +0000 UTC m=+111.364363486" watchObservedRunningTime="2025-01-29 16:39:02.784129591 +0000 UTC m=+111.364503950"
Jan 29 16:39:03.184540 sshd[3539]: Invalid user ruyin from 83.222.191.62 port 21966
Jan 29 16:39:04.184618 systemd-networkd[1421]: cilium_host: Link UP
Jan 29 16:39:04.187187 systemd-networkd[1421]: cilium_net: Link UP
Jan 29 16:39:04.187643 systemd-networkd[1421]: cilium_net: Gained carrier
Jan 29 16:39:04.189003 systemd-networkd[1421]: cilium_host: Gained carrier
Jan 29 16:39:04.230782 sshd[3539]: Connection closed by invalid user ruyin 83.222.191.62 port 21966 [preauth]
Jan 29 16:39:04.229486 systemd[1]: sshd@34-142.132.231.50:22-83.222.191.62:21966.service: Deactivated successfully.
Jan 29 16:39:04.268775 systemd[1]: Started sshd@35-142.132.231.50:22-83.222.191.62:22102.service - OpenSSH per-connection server daemon (83.222.191.62:22102).
Jan 29 16:39:04.315388 systemd-networkd[1421]: cilium_vxlan: Link UP
Jan 29 16:39:04.315396 systemd-networkd[1421]: cilium_vxlan: Gained carrier
Jan 29 16:39:04.666901 kernel: NET: Registered PF_ALG protocol family
Jan 29 16:39:04.757024 systemd-networkd[1421]: cilium_net: Gained IPv6LL
Jan 29 16:39:05.013700 systemd-networkd[1421]: cilium_host: Gained IPv6LL
Jan 29 16:39:05.369773 systemd-networkd[1421]: lxc_health: Link UP
Jan 29 16:39:05.378106 systemd-networkd[1421]: lxc_health: Gained carrier
Jan 29 16:39:05.981768 kernel: eth0: renamed from tmp7f129
Jan 29 16:39:05.981119 systemd-networkd[1421]: lxc7482c881ff60: Link UP
Jan 29 16:39:05.990300 systemd-networkd[1421]: lxc7482c881ff60: Gained carrier
Jan 29 16:39:06.032366 systemd-networkd[1421]: lxc88545370b91a: Link UP
Jan 29 16:39:06.041966 kernel: eth0: renamed from tmpa692b
Jan 29 16:39:06.054392 systemd-networkd[1421]: lxc88545370b91a: Gained carrier
Jan 29 16:39:06.101006 systemd-networkd[1421]: cilium_vxlan: Gained IPv6LL
Jan 29 16:39:06.869422 systemd-networkd[1421]: lxc_health: Gained IPv6LL
Jan 29 16:39:07.254019 systemd-networkd[1421]: lxc88545370b91a: Gained IPv6LL
Jan 29 16:39:07.893576 systemd-networkd[1421]: lxc7482c881ff60: Gained IPv6LL
Jan 29 16:39:08.032119 sshd[3846]: Invalid user ruyin from 83.222.191.62 port 22102
Jan 29 16:39:08.991792 sshd[3846]: Connection closed by invalid user ruyin 83.222.191.62 port 22102 [preauth]
Jan 29 16:39:08.994749 systemd[1]: sshd@35-142.132.231.50:22-83.222.191.62:22102.service: Deactivated successfully.
Jan 29 16:39:09.040490 systemd[1]: Started sshd@36-142.132.231.50:22-83.222.191.62:22150.service - OpenSSH per-connection server daemon (83.222.191.62:22150).
Jan 29 16:39:09.450420 containerd[1525]: time="2025-01-29T16:39:09.450348803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:39:09.453479 containerd[1525]: time="2025-01-29T16:39:09.452847466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:39:09.453479 containerd[1525]: time="2025-01-29T16:39:09.452865020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:39:09.453479 containerd[1525]: time="2025-01-29T16:39:09.453049909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:39:09.464461 containerd[1525]: time="2025-01-29T16:39:09.461794095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:39:09.464967 containerd[1525]: time="2025-01-29T16:39:09.464919726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:39:09.466295 containerd[1525]: time="2025-01-29T16:39:09.465879381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:39:09.466295 containerd[1525]: time="2025-01-29T16:39:09.465965388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:39:09.500465 systemd[1]: Started cri-containerd-7f1291bd00f4db7eae09cb6c73d67b2555ffc835702fb982945fdd805a5700ae.scope - libcontainer container 7f1291bd00f4db7eae09cb6c73d67b2555ffc835702fb982945fdd805a5700ae.
Jan 29 16:39:09.524974 systemd[1]: Started cri-containerd-a692b38063d309ff1566309e97cbce62fcab86063866fe082e60c3456ecacb23.scope - libcontainer container a692b38063d309ff1566309e97cbce62fcab86063866fe082e60c3456ecacb23.
Jan 29 16:39:09.611636 containerd[1525]: time="2025-01-29T16:39:09.611582454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wmzfj,Uid:e6baf5fc-0773-4e37-84c1-899321f1d3e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f1291bd00f4db7eae09cb6c73d67b2555ffc835702fb982945fdd805a5700ae\""
Jan 29 16:39:09.615382 containerd[1525]: time="2025-01-29T16:39:09.615354361Z" level=info msg="CreateContainer within sandbox \"7f1291bd00f4db7eae09cb6c73d67b2555ffc835702fb982945fdd805a5700ae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 16:39:09.624714 containerd[1525]: time="2025-01-29T16:39:09.624571286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gbmkp,Uid:3da6a131-5c06-45dd-aec9-0ae48ecb8a6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a692b38063d309ff1566309e97cbce62fcab86063866fe082e60c3456ecacb23\""
Jan 29 16:39:09.632449 containerd[1525]: time="2025-01-29T16:39:09.632411946Z" level=info msg="CreateContainer within sandbox \"a692b38063d309ff1566309e97cbce62fcab86063866fe082e60c3456ecacb23\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 16:39:09.646079 containerd[1525]: time="2025-01-29T16:39:09.646023006Z" level=info msg="CreateContainer within sandbox \"7f1291bd00f4db7eae09cb6c73d67b2555ffc835702fb982945fdd805a5700ae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a26a23c8a5cefd51cdbe2311baac49101167f40a7a121e44ee07472dcf8ac1f6\""
Jan 29 16:39:09.647762 containerd[1525]: time="2025-01-29T16:39:09.647736195Z" level=info msg="StartContainer for \"a26a23c8a5cefd51cdbe2311baac49101167f40a7a121e44ee07472dcf8ac1f6\""
Jan 29 16:39:09.654281 containerd[1525]: time="2025-01-29T16:39:09.654193678Z" level=info msg="CreateContainer within sandbox \"a692b38063d309ff1566309e97cbce62fcab86063866fe082e60c3456ecacb23\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"900f6c5caa807c3ffa159084f855a25fa534c11b1477177829e009a8fe45b79e\""
Jan 29 16:39:09.654904 containerd[1525]: time="2025-01-29T16:39:09.654873759Z" level=info msg="StartContainer for \"900f6c5caa807c3ffa159084f855a25fa534c11b1477177829e009a8fe45b79e\""
Jan 29 16:39:09.687747 systemd[1]: Started cri-containerd-a26a23c8a5cefd51cdbe2311baac49101167f40a7a121e44ee07472dcf8ac1f6.scope - libcontainer container a26a23c8a5cefd51cdbe2311baac49101167f40a7a121e44ee07472dcf8ac1f6.
Jan 29 16:39:09.699965 systemd[1]: Started cri-containerd-900f6c5caa807c3ffa159084f855a25fa534c11b1477177829e009a8fe45b79e.scope - libcontainer container 900f6c5caa807c3ffa159084f855a25fa534c11b1477177829e009a8fe45b79e.
Jan 29 16:39:09.734134 containerd[1525]: time="2025-01-29T16:39:09.733348920Z" level=info msg="StartContainer for \"a26a23c8a5cefd51cdbe2311baac49101167f40a7a121e44ee07472dcf8ac1f6\" returns successfully"
Jan 29 16:39:09.738742 containerd[1525]: time="2025-01-29T16:39:09.738704473Z" level=info msg="StartContainer for \"900f6c5caa807c3ffa159084f855a25fa534c11b1477177829e009a8fe45b79e\" returns successfully"
Jan 29 16:39:09.827558 kubelet[2846]: I0129 16:39:09.827486 2846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-gbmkp" podStartSLOduration=112.827467398 podStartE2EDuration="1m52.827467398s" podCreationTimestamp="2025-01-29 16:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:39:09.812471057 +0000 UTC m=+118.392845406" watchObservedRunningTime="2025-01-29 16:39:09.827467398 +0000 UTC m=+118.407841747"
Jan 29 16:39:10.460928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4008563711.mount: Deactivated successfully.
Jan 29 16:39:10.808880 kubelet[2846]: I0129 16:39:10.808581 2846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-wmzfj" podStartSLOduration=113.808562183 podStartE2EDuration="1m53.808562183s" podCreationTimestamp="2025-01-29 16:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:39:09.828579528 +0000 UTC m=+118.408953878" watchObservedRunningTime="2025-01-29 16:39:10.808562183 +0000 UTC m=+119.388936542"
Jan 29 16:39:12.098644 sshd[4175]: Invalid user ryan from 83.222.191.62 port 22150
Jan 29 16:39:12.891930 sshd[4175]: Connection closed by invalid user ryan 83.222.191.62 port 22150 [preauth]
Jan 29 16:39:12.895148 systemd[1]: sshd@36-142.132.231.50:22-83.222.191.62:22150.service: Deactivated successfully.
Jan 29 16:39:12.938091 systemd[1]: Started sshd@37-142.132.231.50:22-83.222.191.62:15908.service - OpenSSH per-connection server daemon (83.222.191.62:15908).
Jan 29 16:39:15.943127 sshd[4352]: Invalid user ryan from 83.222.191.62 port 15908
Jan 29 16:39:16.853239 sshd[4352]: Connection closed by invalid user ryan 83.222.191.62 port 15908 [preauth]
Jan 29 16:39:16.856241 systemd[1]: sshd@37-142.132.231.50:22-83.222.191.62:15908.service: Deactivated successfully.
Jan 29 16:39:16.904185 systemd[1]: Started sshd@38-142.132.231.50:22-83.222.191.62:15932.service - OpenSSH per-connection server daemon (83.222.191.62:15932).
Jan 29 16:39:20.489030 sshd[4357]: Invalid user ry from 83.222.191.62 port 15932
Jan 29 16:39:21.203824 sshd[4357]: Connection closed by invalid user ry 83.222.191.62 port 15932 [preauth]
Jan 29 16:39:21.205745 systemd[1]: sshd@38-142.132.231.50:22-83.222.191.62:15932.service: Deactivated successfully.
Jan 29 16:39:21.254283 systemd[1]: Started sshd@39-142.132.231.50:22-83.222.191.62:29330.service - OpenSSH per-connection server daemon (83.222.191.62:29330).
Jan 29 16:39:24.402701 sshd[4365]: Invalid user sackreuther from 83.222.191.62 port 29330
Jan 29 16:39:25.311985 sshd[4365]: Connection closed by invalid user sackreuther 83.222.191.62 port 29330 [preauth]
Jan 29 16:39:25.315058 systemd[1]: sshd@39-142.132.231.50:22-83.222.191.62:29330.service: Deactivated successfully.
Jan 29 16:39:25.360052 systemd[1]: Started sshd@40-142.132.231.50:22-83.222.191.62:29360.service - OpenSSH per-connection server daemon (83.222.191.62:29360).
Jan 29 16:39:28.655284 sshd[4370]: Invalid user safenet from 83.222.191.62 port 29360
Jan 29 16:39:29.476262 sshd[4370]: Connection closed by invalid user safenet 83.222.191.62 port 29360 [preauth]
Jan 29 16:39:29.479690 systemd[1]: sshd@40-142.132.231.50:22-83.222.191.62:29360.service: Deactivated successfully.
Jan 29 16:39:29.523219 systemd[1]: Started sshd@41-142.132.231.50:22-83.222.191.62:29372.service - OpenSSH per-connection server daemon (83.222.191.62:29372).
Jan 29 16:39:32.816260 sshd[4375]: Invalid user saif from 83.222.191.62 port 29372
Jan 29 16:39:33.819638 sshd[4375]: Connection closed by invalid user saif 83.222.191.62 port 29372 [preauth]
Jan 29 16:39:33.822495 systemd[1]: sshd@41-142.132.231.50:22-83.222.191.62:29372.service: Deactivated successfully.
Jan 29 16:39:33.868407 systemd[1]: Started sshd@42-142.132.231.50:22-83.222.191.62:36496.service - OpenSSH per-connection server daemon (83.222.191.62:36496).
Jan 29 16:39:37.613489 sshd[4380]: Invalid user saif from 83.222.191.62 port 36496
Jan 29 16:39:38.666911 sshd[4380]: Connection closed by invalid user saif 83.222.191.62 port 36496 [preauth]
Jan 29 16:39:38.670245 systemd[1]: sshd@42-142.132.231.50:22-83.222.191.62:36496.service: Deactivated successfully.
Jan 29 16:39:38.716042 systemd[1]: Started sshd@43-142.132.231.50:22-83.222.191.62:36506.service - OpenSSH per-connection server daemon (83.222.191.62:36506).
Jan 29 16:39:42.061209 sshd[4386]: Invalid user sales from 83.222.191.62 port 36506
Jan 29 16:39:42.752259 sshd[4386]: Connection closed by invalid user sales 83.222.191.62 port 36506 [preauth]
Jan 29 16:39:42.755273 systemd[1]: sshd@43-142.132.231.50:22-83.222.191.62:36506.service: Deactivated successfully.
Jan 29 16:39:42.805195 systemd[1]: Started sshd@44-142.132.231.50:22-83.222.191.62:35468.service - OpenSSH per-connection server daemon (83.222.191.62:35468).
Jan 29 16:39:46.684514 sshd[4393]: Invalid user sales from 83.222.191.62 port 35468
Jan 29 16:39:47.472956 sshd[4393]: Connection closed by invalid user sales 83.222.191.62 port 35468 [preauth]
Jan 29 16:39:47.475654 systemd[1]: sshd@44-142.132.231.50:22-83.222.191.62:35468.service: Deactivated successfully.
Jan 29 16:39:47.520116 systemd[1]: Started sshd@45-142.132.231.50:22-83.222.191.62:35476.service - OpenSSH per-connection server daemon (83.222.191.62:35476).
Jan 29 16:39:51.579042 sshd[4399]: Invalid user samara from 83.222.191.62 port 35476
Jan 29 16:39:52.257970 sshd[4399]: Connection closed by invalid user samara 83.222.191.62 port 35476 [preauth]
Jan 29 16:39:52.261009 systemd[1]: sshd@45-142.132.231.50:22-83.222.191.62:35476.service: Deactivated successfully.
Jan 29 16:39:52.305074 systemd[1]: Started sshd@46-142.132.231.50:22-83.222.191.62:26960.service - OpenSSH per-connection server daemon (83.222.191.62:26960).
Jan 29 16:39:53.059097 systemd[1]: Started sshd@47-142.132.231.50:22-72.240.125.133:49300.service - OpenSSH per-connection server daemon (72.240.125.133:49300).
Jan 29 16:39:53.781080 sshd[4409]: Invalid user sammy from 72.240.125.133 port 49300
Jan 29 16:39:53.913910 sshd[4409]: Received disconnect from 72.240.125.133 port 49300:11: Bye Bye [preauth]
Jan 29 16:39:53.913910 sshd[4409]: Disconnected from invalid user sammy 72.240.125.133 port 49300 [preauth]
Jan 29 16:39:53.916796 systemd[1]: sshd@47-142.132.231.50:22-72.240.125.133:49300.service: Deactivated successfully.
Jan 29 16:39:55.052866 sshd[4407]: Invalid user samara from 83.222.191.62 port 26960
Jan 29 16:39:56.003356 sshd[4407]: Connection closed by invalid user samara 83.222.191.62 port 26960 [preauth]
Jan 29 16:39:56.006352 systemd[1]: sshd@46-142.132.231.50:22-83.222.191.62:26960.service: Deactivated successfully.
Jan 29 16:39:56.052054 systemd[1]: Started sshd@48-142.132.231.50:22-83.222.191.62:26974.service - OpenSSH per-connection server daemon (83.222.191.62:26974).
Jan 29 16:39:58.649685 sshd[4417]: Invalid user samba from 83.222.191.62 port 26974
Jan 29 16:39:59.635661 sshd[4417]: Connection closed by invalid user samba 83.222.191.62 port 26974 [preauth]
Jan 29 16:39:59.638608 systemd[1]: sshd@48-142.132.231.50:22-83.222.191.62:26974.service: Deactivated successfully.
Jan 29 16:39:59.692299 systemd[1]: Started sshd@49-142.132.231.50:22-83.222.191.62:26976.service - OpenSSH per-connection server daemon (83.222.191.62:26976).
Jan 29 16:40:03.316970 sshd[4422]: Invalid user samba from 83.222.191.62 port 26976
Jan 29 16:40:04.281053 sshd[4422]: Connection closed by invalid user samba 83.222.191.62 port 26976 [preauth]
Jan 29 16:40:04.284121 systemd[1]: sshd@49-142.132.231.50:22-83.222.191.62:26976.service: Deactivated successfully.
Jan 29 16:40:04.328072 systemd[1]: Started sshd@50-142.132.231.50:22-83.222.191.62:8768.service - OpenSSH per-connection server daemon (83.222.191.62:8768).
Jan 29 16:40:08.075246 sshd[4427]: Invalid user samba from 83.222.191.62 port 8768
Jan 29 16:40:08.856627 sshd[4427]: Connection closed by invalid user samba 83.222.191.62 port 8768 [preauth]
Jan 29 16:40:08.859480 systemd[1]: sshd@50-142.132.231.50:22-83.222.191.62:8768.service: Deactivated successfully.
Jan 29 16:40:08.906181 systemd[1]: Started sshd@51-142.132.231.50:22-83.222.191.62:8772.service - OpenSSH per-connection server daemon (83.222.191.62:8772).
Jan 29 16:40:12.237449 sshd[4432]: Invalid user samba from 83.222.191.62 port 8772
Jan 29 16:40:12.961707 sshd[4432]: Connection closed by invalid user samba 83.222.191.62 port 8772 [preauth]
Jan 29 16:40:12.965110 systemd[1]: sshd@51-142.132.231.50:22-83.222.191.62:8772.service: Deactivated successfully.
Jan 29 16:40:13.012169 systemd[1]: Started sshd@52-142.132.231.50:22-83.222.191.62:6148.service - OpenSSH per-connection server daemon (83.222.191.62:6148).
Jan 29 16:40:16.307401 systemd[1]: Started sshd@53-142.132.231.50:22-152.32.133.149:55978.service - OpenSSH per-connection server daemon (152.32.133.149:55978).
Jan 29 16:40:16.536602 sshd[4439]: Invalid user samba from 83.222.191.62 port 6148
Jan 29 16:40:17.168342 sshd[4439]: Connection closed by invalid user samba 83.222.191.62 port 6148 [preauth]
Jan 29 16:40:17.171477 systemd[1]: sshd@52-142.132.231.50:22-83.222.191.62:6148.service: Deactivated successfully.
Jan 29 16:40:17.215085 systemd[1]: Started sshd@54-142.132.231.50:22-83.222.191.62:6150.service - OpenSSH per-connection server daemon (83.222.191.62:6150).
Jan 29 16:40:17.756178 sshd[4442]: Invalid user test1 from 152.32.133.149 port 55978
Jan 29 16:40:18.036393 sshd[4442]: Received disconnect from 152.32.133.149 port 55978:11: Bye Bye [preauth]
Jan 29 16:40:18.036393 sshd[4442]: Disconnected from invalid user test1 152.32.133.149 port 55978 [preauth]
Jan 29 16:40:18.040967 systemd[1]: sshd@53-142.132.231.50:22-152.32.133.149:55978.service: Deactivated successfully.
Jan 29 16:40:20.033765 sshd[4447]: Invalid user samba from 83.222.191.62 port 6150
Jan 29 16:40:20.644065 sshd[4447]: Connection closed by invalid user samba 83.222.191.62 port 6150 [preauth]
Jan 29 16:40:20.646636 systemd[1]: sshd@54-142.132.231.50:22-83.222.191.62:6150.service: Deactivated successfully.
Jan 29 16:40:20.695051 systemd[1]: Started sshd@55-142.132.231.50:22-83.222.191.62:10976.service - OpenSSH per-connection server daemon (83.222.191.62:10976).
Jan 29 16:40:24.381120 sshd[4458]: Invalid user sambauser from 83.222.191.62 port 10976
Jan 29 16:40:25.524370 sshd[4458]: Connection closed by invalid user sambauser 83.222.191.62 port 10976 [preauth]
Jan 29 16:40:25.527528 systemd[1]: sshd@55-142.132.231.50:22-83.222.191.62:10976.service: Deactivated successfully.
Jan 29 16:40:25.573060 systemd[1]: Started sshd@56-142.132.231.50:22-83.222.191.62:10996.service - OpenSSH per-connection server daemon (83.222.191.62:10996).
Jan 29 16:40:29.326747 sshd[4463]: Invalid user sambauser from 83.222.191.62 port 10996
Jan 29 16:40:30.208096 sshd[4463]: Connection closed by invalid user sambauser 83.222.191.62 port 10996 [preauth]
Jan 29 16:40:30.211024 systemd[1]: sshd@56-142.132.231.50:22-83.222.191.62:10996.service: Deactivated successfully.
Jan 29 16:40:30.254254 systemd[1]: Started sshd@57-142.132.231.50:22-83.222.191.62:11002.service - OpenSSH per-connection server daemon (83.222.191.62:11002).
Jan 29 16:40:33.779531 sshd[4468]: Invalid user sambauser from 83.222.191.62 port 11002
Jan 29 16:40:34.401517 sshd[4468]: Connection closed by invalid user sambauser 83.222.191.62 port 11002 [preauth]
Jan 29 16:40:34.404924 systemd[1]: sshd@57-142.132.231.50:22-83.222.191.62:11002.service: Deactivated successfully.
Jan 29 16:40:34.450146 systemd[1]: Started sshd@58-142.132.231.50:22-83.222.191.62:6964.service - OpenSSH per-connection server daemon (83.222.191.62:6964).
Jan 29 16:40:38.103357 sshd[4473]: Invalid user sambit from 83.222.191.62 port 6964
Jan 29 16:40:38.897444 sshd[4473]: Connection closed by invalid user sambit 83.222.191.62 port 6964 [preauth]
Jan 29 16:40:38.899475 systemd[1]: sshd@58-142.132.231.50:22-83.222.191.62:6964.service: Deactivated successfully.
Jan 29 16:40:38.944118 systemd[1]: Started sshd@59-142.132.231.50:22-83.222.191.62:6992.service - OpenSSH per-connection server daemon (83.222.191.62:6992).
Jan 29 16:40:42.694079 sshd[4478]: Invalid user sammy from 83.222.191.62 port 6992
Jan 29 16:40:43.703724 sshd[4478]: Connection closed by invalid user sammy 83.222.191.62 port 6992 [preauth]
Jan 29 16:40:43.707225 systemd[1]: sshd@59-142.132.231.50:22-83.222.191.62:6992.service: Deactivated successfully.
Jan 29 16:40:43.752151 systemd[1]: Started sshd@60-142.132.231.50:22-83.222.191.62:65218.service - OpenSSH per-connection server daemon (83.222.191.62:65218).
Jan 29 16:40:46.948998 sshd[4483]: Invalid user samp from 83.222.191.62 port 65218
Jan 29 16:40:47.798265 sshd[4483]: Connection closed by invalid user samp 83.222.191.62 port 65218 [preauth]
Jan 29 16:40:47.801526 systemd[1]: sshd@60-142.132.231.50:22-83.222.191.62:65218.service: Deactivated successfully.
Jan 29 16:40:47.845064 systemd[1]: Started sshd@61-142.132.231.50:22-83.222.191.62:65234.service - OpenSSH per-connection server daemon (83.222.191.62:65234).
Jan 29 16:40:49.238085 systemd[1]: Started sshd@62-142.132.231.50:22-103.31.39.159:44596.service - OpenSSH per-connection server daemon (103.31.39.159:44596).
Jan 29 16:40:50.623559 sshd[4493]: Received disconnect from 103.31.39.159 port 44596:11: Bye Bye [preauth]
Jan 29 16:40:50.623559 sshd[4493]: Disconnected from authenticating user root 103.31.39.159 port 44596 [preauth]
Jan 29 16:40:50.625766 systemd[1]: sshd@62-142.132.231.50:22-103.31.39.159:44596.service: Deactivated successfully.
Jan 29 16:40:51.622456 sshd[4488]: Invalid user sam from 83.222.191.62 port 65234
Jan 29 16:40:52.686365 sshd[4488]: Connection closed by invalid user sam 83.222.191.62 port 65234 [preauth]
Jan 29 16:40:52.689492 systemd[1]: sshd@61-142.132.231.50:22-83.222.191.62:65234.service: Deactivated successfully.
Jan 29 16:40:52.738076 systemd[1]: Started sshd@63-142.132.231.50:22-83.222.191.62:33096.service - OpenSSH per-connection server daemon (83.222.191.62:33096).
Jan 29 16:40:57.088287 sshd[4500]: Invalid user samurai from 83.222.191.62 port 33096
Jan 29 16:40:58.021765 sshd[4500]: Connection closed by invalid user samurai 83.222.191.62 port 33096 [preauth]
Jan 29 16:40:58.025265 systemd[1]: sshd@63-142.132.231.50:22-83.222.191.62:33096.service: Deactivated successfully.
Jan 29 16:40:58.073187 systemd[1]: Started sshd@64-142.132.231.50:22-83.222.191.62:33116.service - OpenSSH per-connection server daemon (83.222.191.62:33116).
Jan 29 16:41:01.721867 sshd[4505]: Invalid user san from 83.222.191.62 port 33116
Jan 29 16:41:02.715238 sshd[4505]: Connection closed by invalid user san 83.222.191.62 port 33116 [preauth]
Jan 29 16:41:02.717949 systemd[1]: sshd@64-142.132.231.50:22-83.222.191.62:33116.service: Deactivated successfully.
Jan 29 16:41:02.765224 systemd[1]: Started sshd@65-142.132.231.50:22-83.222.191.62:17502.service - OpenSSH per-connection server daemon (83.222.191.62:17502).
Jan 29 16:41:03.229049 systemd[1]: Started sshd@66-142.132.231.50:22-72.240.125.133:46582.service - OpenSSH per-connection server daemon (72.240.125.133:46582).
Jan 29 16:41:04.059851 sshd[4512]: Invalid user ftpuser from 72.240.125.133 port 46582
Jan 29 16:41:04.205841 sshd[4512]: Received disconnect from 72.240.125.133 port 46582:11: Bye Bye [preauth]
Jan 29 16:41:04.205841 sshd[4512]: Disconnected from invalid user ftpuser 72.240.125.133 port 46582 [preauth]
Jan 29 16:41:04.208847 systemd[1]: sshd@66-142.132.231.50:22-72.240.125.133:46582.service: Deactivated successfully.
Jan 29 16:41:06.160941 sshd[4510]: Invalid user sandra from 83.222.191.62 port 17502
Jan 29 16:41:06.992688 sshd[4510]: Connection closed by invalid user sandra 83.222.191.62 port 17502 [preauth]
Jan 29 16:41:06.995783 systemd[1]: sshd@65-142.132.231.50:22-83.222.191.62:17502.service: Deactivated successfully.
Jan 29 16:41:07.038121 systemd[1]: Started sshd@67-142.132.231.50:22-83.222.191.62:17526.service - OpenSSH per-connection server daemon (83.222.191.62:17526).
Jan 29 16:41:10.768080 sshd[4520]: Invalid user sand from 83.222.191.62 port 17526
Jan 29 16:41:11.390733 sshd[4520]: Connection closed by invalid user sand 83.222.191.62 port 17526 [preauth]
Jan 29 16:41:11.393571 systemd[1]: sshd@67-142.132.231.50:22-83.222.191.62:17526.service: Deactivated successfully.
Jan 29 16:41:11.438194 systemd[1]: Started sshd@68-142.132.231.50:22-83.222.191.62:59618.service - OpenSSH per-connection server daemon (83.222.191.62:59618).
Jan 29 16:41:15.402085 sshd[4525]: Invalid user saned from 83.222.191.62 port 59618
Jan 29 16:41:16.073961 sshd[4525]: Connection closed by invalid user saned 83.222.191.62 port 59618 [preauth]
Jan 29 16:41:16.076912 systemd[1]: sshd@68-142.132.231.50:22-83.222.191.62:59618.service: Deactivated successfully.
Jan 29 16:41:16.123038 systemd[1]: Started sshd@69-142.132.231.50:22-83.222.191.62:59636.service - OpenSSH per-connection server daemon (83.222.191.62:59636).
Jan 29 16:41:19.632318 sshd[4532]: Invalid user saned from 83.222.191.62 port 59636
Jan 29 16:41:20.177415 sshd[4532]: Connection closed by invalid user saned 83.222.191.62 port 59636 [preauth]
Jan 29 16:41:20.180364 systemd[1]: sshd@69-142.132.231.50:22-83.222.191.62:59636.service: Deactivated successfully.
Jan 29 16:41:20.224115 systemd[1]: Started sshd@70-142.132.231.50:22-83.222.191.62:59642.service - OpenSSH per-connection server daemon (83.222.191.62:59642).
Jan 29 16:41:23.805058 sshd[4539]: Invalid user saned from 83.222.191.62 port 59642
Jan 29 16:41:24.933667 sshd[4539]: Connection closed by invalid user saned 83.222.191.62 port 59642 [preauth]
Jan 29 16:41:24.936596 systemd[1]: sshd@70-142.132.231.50:22-83.222.191.62:59642.service: Deactivated successfully.
Jan 29 16:41:58.544153 systemd[1]: Started sshd@71-142.132.231.50:22-147.75.109.163:60600.service - OpenSSH per-connection server daemon (147.75.109.163:60600).
Jan 29 16:41:59.530846 sshd[4546]: Accepted publickey for core from 147.75.109.163 port 60600 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME
Jan 29 16:41:59.532997 sshd-session[4546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:41:59.538737 systemd-logind[1509]: New session 8 of user core.
Jan 29 16:41:59.545984 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 29 16:42:00.614873 sshd[4548]: Connection closed by 147.75.109.163 port 60600
Jan 29 16:42:00.615590 sshd-session[4546]: pam_unix(sshd:session): session closed for user core
Jan 29 16:42:00.620282 systemd-logind[1509]: Session 8 logged out. Waiting for processes to exit.
Jan 29 16:42:00.621070 systemd[1]: sshd@71-142.132.231.50:22-147.75.109.163:60600.service: Deactivated successfully.
Jan 29 16:42:00.623338 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 16:42:00.624364 systemd-logind[1509]: Removed session 8.
Jan 29 16:42:04.459077 systemd[1]: Started sshd@72-142.132.231.50:22-152.32.133.149:27160.service - OpenSSH per-connection server daemon (152.32.133.149:27160).
Jan 29 16:42:05.789067 systemd[1]: Started sshd@73-142.132.231.50:22-147.75.109.163:60616.service - OpenSSH per-connection server daemon (147.75.109.163:60616).
Jan 29 16:42:06.586976 sshd[4560]: Invalid user debian from 152.32.133.149 port 27160
Jan 29 16:42:06.768295 sshd[4563]: Accepted publickey for core from 147.75.109.163 port 60616 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME
Jan 29 16:42:06.770102 sshd-session[4563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:42:06.775876 systemd-logind[1509]: New session 9 of user core.
Jan 29 16:42:06.787090 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 29 16:42:06.864061 sshd[4560]: Received disconnect from 152.32.133.149 port 27160:11: Bye Bye [preauth]
Jan 29 16:42:06.864061 sshd[4560]: Disconnected from invalid user debian 152.32.133.149 port 27160 [preauth]
Jan 29 16:42:06.867059 systemd[1]: sshd@72-142.132.231.50:22-152.32.133.149:27160.service: Deactivated successfully.
Jan 29 16:42:07.535212 sshd[4565]: Connection closed by 147.75.109.163 port 60616
Jan 29 16:42:07.536370 sshd-session[4563]: pam_unix(sshd:session): session closed for user core
Jan 29 16:42:07.540415 systemd[1]: sshd@73-142.132.231.50:22-147.75.109.163:60616.service: Deactivated successfully.
Jan 29 16:42:07.543348 systemd[1]: session-9.scope: Deactivated successfully.
Jan 29 16:42:07.545444 systemd-logind[1509]: Session 9 logged out. Waiting for processes to exit.
Jan 29 16:42:07.547426 systemd-logind[1509]: Removed session 9.
Jan 29 16:42:10.702048 systemd[1]: Started sshd@74-142.132.231.50:22-72.240.125.133:43862.service - OpenSSH per-connection server daemon (72.240.125.133:43862).
Jan 29 16:42:11.473788 sshd[4580]: Invalid user user1 from 72.240.125.133 port 43862
Jan 29 16:42:11.610154 sshd[4580]: Received disconnect from 72.240.125.133 port 43862:11: Bye Bye [preauth]
Jan 29 16:42:11.610154 sshd[4580]: Disconnected from invalid user user1 72.240.125.133 port 43862 [preauth]
Jan 29 16:42:11.612903 systemd[1]: sshd@74-142.132.231.50:22-72.240.125.133:43862.service: Deactivated successfully.
Jan 29 16:42:12.715157 systemd[1]: Started sshd@75-142.132.231.50:22-147.75.109.163:56278.service - OpenSSH per-connection server daemon (147.75.109.163:56278).
Jan 29 16:42:13.694565 sshd[4587]: Accepted publickey for core from 147.75.109.163 port 56278 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME
Jan 29 16:42:13.696167 sshd-session[4587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:42:13.703393 systemd-logind[1509]: New session 10 of user core.
Jan 29 16:42:13.708947 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 29 16:42:14.437271 sshd[4589]: Connection closed by 147.75.109.163 port 56278
Jan 29 16:42:14.438374 sshd-session[4587]: pam_unix(sshd:session): session closed for user core
Jan 29 16:42:14.442752 systemd[1]: sshd@75-142.132.231.50:22-147.75.109.163:56278.service: Deactivated successfully.
Jan 29 16:42:14.444950 systemd[1]: session-10.scope: Deactivated successfully.
Jan 29 16:42:14.445963 systemd-logind[1509]: Session 10 logged out. Waiting for processes to exit.
Jan 29 16:42:14.447372 systemd-logind[1509]: Removed session 10.
Jan 29 16:42:19.615076 systemd[1]: Started sshd@76-142.132.231.50:22-147.75.109.163:43490.service - OpenSSH per-connection server daemon (147.75.109.163:43490).
Jan 29 16:42:20.606266 sshd[4604]: Accepted publickey for core from 147.75.109.163 port 43490 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME
Jan 29 16:42:20.607908 sshd-session[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:42:20.612763 systemd-logind[1509]: New session 11 of user core.
Jan 29 16:42:20.618939 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 29 16:42:21.365922 sshd[4606]: Connection closed by 147.75.109.163 port 43490
Jan 29 16:42:21.366701 sshd-session[4604]: pam_unix(sshd:session): session closed for user core
Jan 29 16:42:21.371297 systemd-logind[1509]: Session 11 logged out. Waiting for processes to exit.
Jan 29 16:42:21.372226 systemd[1]: sshd@76-142.132.231.50:22-147.75.109.163:43490.service: Deactivated successfully.
Jan 29 16:42:21.374620 systemd[1]: session-11.scope: Deactivated successfully.
Jan 29 16:42:21.375902 systemd-logind[1509]: Removed session 11.
Jan 29 16:42:21.540050 systemd[1]: Started sshd@77-142.132.231.50:22-147.75.109.163:43498.service - OpenSSH per-connection server daemon (147.75.109.163:43498).
Jan 29 16:42:22.518635 sshd[4619]: Accepted publickey for core from 147.75.109.163 port 43498 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME
Jan 29 16:42:22.520477 sshd-session[4619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:42:22.525974 systemd-logind[1509]: New session 12 of user core.
Jan 29 16:42:22.533959 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 29 16:42:23.300534 sshd[4621]: Connection closed by 147.75.109.163 port 43498
Jan 29 16:42:23.301501 sshd-session[4619]: pam_unix(sshd:session): session closed for user core
Jan 29 16:42:23.305510 systemd[1]: sshd@77-142.132.231.50:22-147.75.109.163:43498.service: Deactivated successfully.
Jan 29 16:42:23.308188 systemd[1]: session-12.scope: Deactivated successfully.
Jan 29 16:42:23.311677 systemd-logind[1509]: Session 12 logged out. Waiting for processes to exit.
Jan 29 16:42:23.313373 systemd-logind[1509]: Removed session 12.
Jan 29 16:42:23.476909 systemd[1]: Started sshd@78-142.132.231.50:22-147.75.109.163:43508.service - OpenSSH per-connection server daemon (147.75.109.163:43508).
Jan 29 16:42:24.458877 sshd[4631]: Accepted publickey for core from 147.75.109.163 port 43508 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME
Jan 29 16:42:24.460691 sshd-session[4631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:42:24.466602 systemd-logind[1509]: New session 13 of user core.
Jan 29 16:42:24.468029 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 29 16:42:25.210997 sshd[4633]: Connection closed by 147.75.109.163 port 43508
Jan 29 16:42:25.211695 sshd-session[4631]: pam_unix(sshd:session): session closed for user core
Jan 29 16:42:25.214744 systemd[1]: sshd@78-142.132.231.50:22-147.75.109.163:43508.service: Deactivated successfully.
Jan 29 16:42:25.217458 systemd[1]: session-13.scope: Deactivated successfully.
Jan 29 16:42:25.218760 systemd-logind[1509]: Session 13 logged out. Waiting for processes to exit.
Jan 29 16:42:25.219924 systemd-logind[1509]: Removed session 13.
Jan 29 16:42:30.388099 systemd[1]: Started sshd@79-142.132.231.50:22-147.75.109.163:60988.service - OpenSSH per-connection server daemon (147.75.109.163:60988).
Jan 29 16:42:31.377727 sshd[4645]: Accepted publickey for core from 147.75.109.163 port 60988 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME
Jan 29 16:42:31.379910 sshd-session[4645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:42:31.385184 systemd-logind[1509]: New session 14 of user core.
Jan 29 16:42:31.388981 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 29 16:42:32.128825 sshd[4647]: Connection closed by 147.75.109.163 port 60988
Jan 29 16:42:32.129551 sshd-session[4645]: pam_unix(sshd:session): session closed for user core
Jan 29 16:42:32.133921 systemd[1]: sshd@79-142.132.231.50:22-147.75.109.163:60988.service: Deactivated successfully.
Jan 29 16:42:32.136231 systemd[1]: session-14.scope: Deactivated successfully.
Jan 29 16:42:32.137336 systemd-logind[1509]: Session 14 logged out. Waiting for processes to exit.
Jan 29 16:42:32.138405 systemd-logind[1509]: Removed session 14.
Jan 29 16:42:32.310084 systemd[1]: Started sshd@80-142.132.231.50:22-147.75.109.163:32768.service - OpenSSH per-connection server daemon (147.75.109.163:32768).
Jan 29 16:42:33.277498 sshd[4659]: Accepted publickey for core from 147.75.109.163 port 32768 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME
Jan 29 16:42:33.279250 sshd-session[4659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:42:33.283861 systemd-logind[1509]: New session 15 of user core.
Jan 29 16:42:33.286922 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 29 16:42:34.193835 sshd[4661]: Connection closed by 147.75.109.163 port 32768
Jan 29 16:42:34.194916 sshd-session[4659]: pam_unix(sshd:session): session closed for user core
Jan 29 16:42:34.202869 systemd[1]: sshd@80-142.132.231.50:22-147.75.109.163:32768.service: Deactivated successfully.
Jan 29 16:42:34.205551 systemd[1]: session-15.scope: Deactivated successfully.
Jan 29 16:42:34.207309 systemd-logind[1509]: Session 15 logged out. Waiting for processes to exit.
Jan 29 16:42:34.209020 systemd-logind[1509]: Removed session 15.
Jan 29 16:42:34.376134 systemd[1]: Started sshd@81-142.132.231.50:22-147.75.109.163:32782.service - OpenSSH per-connection server daemon (147.75.109.163:32782).
Jan 29 16:42:35.364073 sshd[4670]: Accepted publickey for core from 147.75.109.163 port 32782 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME
Jan 29 16:42:35.366224 sshd-session[4670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:42:35.372145 systemd-logind[1509]: New session 16 of user core.
Jan 29 16:42:35.378970 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 29 16:42:37.535948 sshd[4672]: Connection closed by 147.75.109.163 port 32782
Jan 29 16:42:37.538979 sshd-session[4670]: pam_unix(sshd:session): session closed for user core
Jan 29 16:42:37.543792 systemd-logind[1509]: Session 16 logged out. Waiting for processes to exit.
Jan 29 16:42:37.544625 systemd[1]: sshd@81-142.132.231.50:22-147.75.109.163:32782.service: Deactivated successfully.
Jan 29 16:42:37.547390 systemd[1]: session-16.scope: Deactivated successfully.
Jan 29 16:42:37.548473 systemd-logind[1509]: Removed session 16.
Jan 29 16:42:37.713113 systemd[1]: Started sshd@82-142.132.231.50:22-147.75.109.163:41230.service - OpenSSH per-connection server daemon (147.75.109.163:41230).
Jan 29 16:42:38.697668 sshd[4689]: Accepted publickey for core from 147.75.109.163 port 41230 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME
Jan 29 16:42:38.699661 sshd-session[4689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:42:38.704968 systemd-logind[1509]: New session 17 of user core.
Jan 29 16:42:38.709922 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 29 16:42:39.676144 sshd[4691]: Connection closed by 147.75.109.163 port 41230
Jan 29 16:42:39.676756 sshd-session[4689]: pam_unix(sshd:session): session closed for user core
Jan 29 16:42:39.680947 systemd[1]: sshd@82-142.132.231.50:22-147.75.109.163:41230.service: Deactivated successfully.
Jan 29 16:42:39.683349 systemd[1]: session-17.scope: Deactivated successfully.
Jan 29 16:42:39.684151 systemd-logind[1509]: Session 17 logged out. Waiting for processes to exit.
Jan 29 16:42:39.685188 systemd-logind[1509]: Removed session 17.
Jan 29 16:42:39.850037 systemd[1]: Started sshd@83-142.132.231.50:22-147.75.109.163:41240.service - OpenSSH per-connection server daemon (147.75.109.163:41240).
Jan 29 16:42:40.833898 sshd[4702]: Accepted publickey for core from 147.75.109.163 port 41240 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME
Jan 29 16:42:40.835912 sshd-session[4702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:42:40.841979 systemd-logind[1509]: New session 18 of user core.
Jan 29 16:42:40.852990 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 29 16:42:41.593584 sshd[4704]: Connection closed by 147.75.109.163 port 41240
Jan 29 16:42:41.594299 sshd-session[4702]: pam_unix(sshd:session): session closed for user core
Jan 29 16:42:41.598515 systemd[1]: sshd@83-142.132.231.50:22-147.75.109.163:41240.service: Deactivated successfully.
Jan 29 16:42:41.600798 systemd[1]: session-18.scope: Deactivated successfully.
Jan 29 16:42:41.602023 systemd-logind[1509]: Session 18 logged out. Waiting for processes to exit.
Jan 29 16:42:41.603126 systemd-logind[1509]: Removed session 18.
Jan 29 16:42:46.772098 systemd[1]: Started sshd@84-142.132.231.50:22-147.75.109.163:41242.service - OpenSSH per-connection server daemon (147.75.109.163:41242).
Jan 29 16:42:47.764107 sshd[4719]: Accepted publickey for core from 147.75.109.163 port 41242 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME
Jan 29 16:42:47.767192 sshd-session[4719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:42:47.778934 systemd-logind[1509]: New session 19 of user core.
Jan 29 16:42:47.786096 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 29 16:42:48.519867 sshd[4721]: Connection closed by 147.75.109.163 port 41242
Jan 29 16:42:48.520573 sshd-session[4719]: pam_unix(sshd:session): session closed for user core
Jan 29 16:42:48.524144 systemd[1]: sshd@84-142.132.231.50:22-147.75.109.163:41242.service: Deactivated successfully.
Jan 29 16:42:48.527016 systemd[1]: session-19.scope: Deactivated successfully.
Jan 29 16:42:48.529713 systemd-logind[1509]: Session 19 logged out. Waiting for processes to exit.
Jan 29 16:42:48.530977 systemd-logind[1509]: Removed session 19.
Jan 29 16:42:49.097097 systemd[1]: Started sshd@85-142.132.231.50:22-103.31.39.159:40068.service - OpenSSH per-connection server daemon (103.31.39.159:40068).
Jan 29 16:42:50.244773 sshd[4735]: Invalid user abe from 103.31.39.159 port 40068
Jan 29 16:42:50.463500 sshd[4735]: Received disconnect from 103.31.39.159 port 40068:11: Bye Bye [preauth]
Jan 29 16:42:50.463500 sshd[4735]: Disconnected from invalid user abe 103.31.39.159 port 40068 [preauth]
Jan 29 16:42:50.467779 systemd[1]: sshd@85-142.132.231.50:22-103.31.39.159:40068.service: Deactivated successfully.
Jan 29 16:42:53.694276 systemd[1]: Started sshd@86-142.132.231.50:22-147.75.109.163:55018.service - OpenSSH per-connection server daemon (147.75.109.163:55018).
Jan 29 16:42:54.682775 sshd[4742]: Accepted publickey for core from 147.75.109.163 port 55018 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME
Jan 29 16:42:54.684511 sshd-session[4742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:42:54.689884 systemd-logind[1509]: New session 20 of user core.
Jan 29 16:42:54.697969 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 29 16:42:55.415603 sshd[4744]: Connection closed by 147.75.109.163 port 55018
Jan 29 16:42:55.416409 sshd-session[4742]: pam_unix(sshd:session): session closed for user core
Jan 29 16:42:55.420415 systemd[1]: sshd@86-142.132.231.50:22-147.75.109.163:55018.service: Deactivated successfully.
Jan 29 16:42:55.422649 systemd[1]: session-20.scope: Deactivated successfully.
Jan 29 16:42:55.424123 systemd-logind[1509]: Session 20 logged out. Waiting for processes to exit.
Jan 29 16:42:55.425554 systemd-logind[1509]: Removed session 20.
Jan 29 16:42:55.594149 systemd[1]: Started sshd@87-142.132.231.50:22-147.75.109.163:55022.service - OpenSSH per-connection server daemon (147.75.109.163:55022).
Jan 29 16:42:56.587742 sshd[4756]: Accepted publickey for core from 147.75.109.163 port 55022 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME
Jan 29 16:42:56.590763 sshd-session[4756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:42:56.597331 systemd-logind[1509]: New session 21 of user core.
Jan 29 16:42:56.601957 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 29 16:42:58.465884 systemd[1]: run-containerd-runc-k8s.io-94cf39c2b07f96c5220d4a33c6beb2da99e9232ab70993f638ec75495c2619af-runc.Hxdesn.mount: Deactivated successfully.
Jan 29 16:42:58.481547 containerd[1525]: time="2025-01-29T16:42:58.481477736Z" level=info msg="StopContainer for \"9e7d307afb507e9def083e852594d2d5bc1040e27d2b3fc66b04ad0e75dd8c35\" with timeout 30 (s)"
Jan 29 16:42:58.484884 containerd[1525]: time="2025-01-29T16:42:58.484783752Z" level=info msg="Stop container \"9e7d307afb507e9def083e852594d2d5bc1040e27d2b3fc66b04ad0e75dd8c35\" with signal terminated"
Jan 29 16:42:58.492459 containerd[1525]: time="2025-01-29T16:42:58.492423246Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 16:42:58.502519 systemd[1]: cri-containerd-9e7d307afb507e9def083e852594d2d5bc1040e27d2b3fc66b04ad0e75dd8c35.scope: Deactivated successfully.
Jan 29 16:42:58.503365 systemd[1]: cri-containerd-9e7d307afb507e9def083e852594d2d5bc1040e27d2b3fc66b04ad0e75dd8c35.scope: Consumed 544ms CPU time, 29.7M memory peak, 2.9M read from disk, 4K written to disk.
Jan 29 16:42:58.506283 containerd[1525]: time="2025-01-29T16:42:58.503648631Z" level=info msg="StopContainer for \"94cf39c2b07f96c5220d4a33c6beb2da99e9232ab70993f638ec75495c2619af\" with timeout 2 (s)"
Jan 29 16:42:58.506283 containerd[1525]: time="2025-01-29T16:42:58.504093721Z" level=info msg="Stop container \"94cf39c2b07f96c5220d4a33c6beb2da99e9232ab70993f638ec75495c2619af\" with signal terminated"
Jan 29 16:42:58.513001 systemd-networkd[1421]: lxc_health: Link DOWN
Jan 29 16:42:58.513014 systemd-networkd[1421]: lxc_health: Lost carrier
Jan 29 16:42:58.537523 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e7d307afb507e9def083e852594d2d5bc1040e27d2b3fc66b04ad0e75dd8c35-rootfs.mount: Deactivated successfully.
Jan 29 16:42:58.539759 systemd[1]: cri-containerd-94cf39c2b07f96c5220d4a33c6beb2da99e9232ab70993f638ec75495c2619af.scope: Deactivated successfully.
Jan 29 16:42:58.540664 systemd[1]: cri-containerd-94cf39c2b07f96c5220d4a33c6beb2da99e9232ab70993f638ec75495c2619af.scope: Consumed 7.460s CPU time, 160.2M memory peak, 39.9M read from disk, 13.3M written to disk.
Jan 29 16:42:58.541928 containerd[1525]: time="2025-01-29T16:42:58.541875713Z" level=info msg="shim disconnected" id=9e7d307afb507e9def083e852594d2d5bc1040e27d2b3fc66b04ad0e75dd8c35 namespace=k8s.io
Jan 29 16:42:58.541928 containerd[1525]: time="2025-01-29T16:42:58.541923106Z" level=warning msg="cleaning up after shim disconnected" id=9e7d307afb507e9def083e852594d2d5bc1040e27d2b3fc66b04ad0e75dd8c35 namespace=k8s.io
Jan 29 16:42:58.542105 containerd[1525]: time="2025-01-29T16:42:58.541933868Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:42:58.563415 containerd[1525]: time="2025-01-29T16:42:58.563359320Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:42:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 16:42:58.571127 containerd[1525]: time="2025-01-29T16:42:58.571072740Z" level=info msg="StopContainer for \"9e7d307afb507e9def083e852594d2d5bc1040e27d2b3fc66b04ad0e75dd8c35\" returns successfully"
Jan 29 16:42:58.571631 containerd[1525]: time="2025-01-29T16:42:58.571594753Z" level=info msg="StopPodSandbox for \"4a8042925720d93cdd7eba888f2452bc07fa8872188fc542b42a0e3a8ef44886\""
Jan 29 16:42:58.573946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94cf39c2b07f96c5220d4a33c6beb2da99e9232ab70993f638ec75495c2619af-rootfs.mount: Deactivated successfully.
Jan 29 16:42:58.576690 containerd[1525]: time="2025-01-29T16:42:58.576615460Z" level=info msg="Container to stop \"9e7d307afb507e9def083e852594d2d5bc1040e27d2b3fc66b04ad0e75dd8c35\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 16:42:58.580604 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4a8042925720d93cdd7eba888f2452bc07fa8872188fc542b42a0e3a8ef44886-shm.mount: Deactivated successfully.
Jan 29 16:42:58.586613 systemd[1]: cri-containerd-4a8042925720d93cdd7eba888f2452bc07fa8872188fc542b42a0e3a8ef44886.scope: Deactivated successfully.
Jan 29 16:42:58.588369 containerd[1525]: time="2025-01-29T16:42:58.586893741Z" level=info msg="shim disconnected" id=94cf39c2b07f96c5220d4a33c6beb2da99e9232ab70993f638ec75495c2619af namespace=k8s.io
Jan 29 16:42:58.588369 containerd[1525]: time="2025-01-29T16:42:58.586940424Z" level=warning msg="cleaning up after shim disconnected" id=94cf39c2b07f96c5220d4a33c6beb2da99e9232ab70993f638ec75495c2619af namespace=k8s.io
Jan 29 16:42:58.588369 containerd[1525]: time="2025-01-29T16:42:58.586947898Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:42:58.613699 containerd[1525]: time="2025-01-29T16:42:58.613522982Z" level=info msg="StopContainer for \"94cf39c2b07f96c5220d4a33c6beb2da99e9232ab70993f638ec75495c2619af\" returns successfully"
Jan 29 16:42:58.614363 containerd[1525]: time="2025-01-29T16:42:58.614235381Z" level=info msg="StopPodSandbox for \"e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624\""
Jan 29 16:42:58.614363 containerd[1525]: time="2025-01-29T16:42:58.614261382Z" level=info msg="Container to stop \"dd330769052da4a74307dc96fc7ea491188617ab4e5de493259577a8510fb5ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 16:42:58.614363 containerd[1525]: time="2025-01-29T16:42:58.614308326Z" level=info msg="Container to stop \"2b9e0844985a8742766bacd83394a5251542824002d0cd444275bf5b4f66e0b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 16:42:58.614363 containerd[1525]: time="2025-01-29T16:42:58.614318476Z" level=info msg="Container to stop \"05643d52ef7e1b8413ce4ea68bad46e825efbfbeb23601670bbad7cd0d2a295f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 16:42:58.614363 containerd[1525]: time="2025-01-29T16:42:58.614326350Z" level=info msg="Container to stop \"c718352908ef1f674e7b4036b463fdca4b4628c0b32edff2280eeaa885901cea\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 16:42:58.614363 containerd[1525]: time="2025-01-29T16:42:58.614334337Z" level=info msg="Container to stop \"94cf39c2b07f96c5220d4a33c6beb2da99e9232ab70993f638ec75495c2619af\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 16:42:58.616458 containerd[1525]: time="2025-01-29T16:42:58.616400953Z" level=info msg="shim disconnected" id=4a8042925720d93cdd7eba888f2452bc07fa8872188fc542b42a0e3a8ef44886 namespace=k8s.io
Jan 29 16:42:58.616458 containerd[1525]: time="2025-01-29T16:42:58.616439629Z" level=warning msg="cleaning up after shim disconnected" id=4a8042925720d93cdd7eba888f2452bc07fa8872188fc542b42a0e3a8ef44886 namespace=k8s.io
Jan 29 16:42:58.616458 containerd[1525]: time="2025-01-29T16:42:58.616448707Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:42:58.625050 systemd[1]: cri-containerd-e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624.scope: Deactivated successfully.
Jan 29 16:42:58.633502 containerd[1525]: time="2025-01-29T16:42:58.633470342Z" level=info msg="TearDown network for sandbox \"4a8042925720d93cdd7eba888f2452bc07fa8872188fc542b42a0e3a8ef44886\" successfully"
Jan 29 16:42:58.633502 containerd[1525]: time="2025-01-29T16:42:58.633497265Z" level=info msg="StopPodSandbox for \"4a8042925720d93cdd7eba888f2452bc07fa8872188fc542b42a0e3a8ef44886\" returns successfully"
Jan 29 16:42:58.655136 containerd[1525]: time="2025-01-29T16:42:58.655032855Z" level=info msg="shim disconnected" id=e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624 namespace=k8s.io
Jan 29 16:42:58.655136 containerd[1525]: time="2025-01-29T16:42:58.655081711Z" level=warning msg="cleaning up after shim disconnected" id=e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624 namespace=k8s.io
Jan 29 16:42:58.655136 containerd[1525]: time="2025-01-29T16:42:58.655089697Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:42:58.669656 containerd[1525]: time="2025-01-29T16:42:58.669606289Z" level=info msg="TearDown network for sandbox \"e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624\" successfully"
Jan 29 16:42:58.669656 containerd[1525]: time="2025-01-29T16:42:58.669645567Z" level=info msg="StopPodSandbox for \"e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624\" returns successfully"
Jan 29 16:42:58.785502 kubelet[2846]: I0129 16:42:58.784968 2846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-host-proc-sys-net\") pod \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\" (UID: \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\") "
Jan 29 16:42:58.785502 kubelet[2846]: I0129 16:42:58.785025 2846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcfj5\" (UniqueName: \"kubernetes.io/projected/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-kube-api-access-fcfj5\") pod \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\" (UID: \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\") "
Jan 29 16:42:58.785502 kubelet[2846]: I0129 16:42:58.785044 2846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-cilium-run\") pod \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\" (UID: \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\") "
Jan 29 16:42:58.785502 kubelet[2846]: I0129 16:42:58.785056 2846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-bpf-maps\") pod \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\" (UID: \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\") "
Jan 29 16:42:58.785502 kubelet[2846]: I0129 16:42:58.785070 2846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-cni-path\") pod \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\" (UID: \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\") "
Jan 29 16:42:58.785502 kubelet[2846]: I0129 16:42:58.785086 2846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-clustermesh-secrets\") pod \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\" (UID: \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\") "
Jan 29 16:42:58.786294 kubelet[2846]: I0129 16:42:58.785099 2846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-hubble-tls\") pod \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\" (UID: \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\") "
Jan 29 16:42:58.786294 kubelet[2846]: I0129 16:42:58.785112 2846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xzps\" (UniqueName: \"kubernetes.io/projected/f8a1c440-7300-40bc-9ce1-4c0c6cabc043-kube-api-access-2xzps\") pod \"f8a1c440-7300-40bc-9ce1-4c0c6cabc043\" (UID: \"f8a1c440-7300-40bc-9ce1-4c0c6cabc043\") "
Jan 29 16:42:58.786294 kubelet[2846]: I0129 16:42:58.785124 2846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-cilium-cgroup\") pod \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\" (UID: \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\") "
Jan 29 16:42:58.786294 kubelet[2846]: I0129 16:42:58.785137 2846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-hostproc\") pod \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\" (UID: \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\") "
Jan 29 16:42:58.786294 kubelet[2846]: I0129 16:42:58.785150 2846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-lib-modules\") pod \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\" (UID: \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\") "
Jan 29 16:42:58.786294 kubelet[2846]: I0129 16:42:58.785161 2846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-xtables-lock\") pod \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\" (UID: \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\") "
Jan 29 16:42:58.786527 kubelet[2846]: I0129 16:42:58.785176 2846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f8a1c440-7300-40bc-9ce1-4c0c6cabc043-cilium-config-path\") pod \"f8a1c440-7300-40bc-9ce1-4c0c6cabc043\" (UID: \"f8a1c440-7300-40bc-9ce1-4c0c6cabc043\") "
Jan 29 16:42:58.786527 kubelet[2846]: I0129 16:42:58.785219 2846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-etc-cni-netd\") pod \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\" (UID: \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\") "
Jan 29 16:42:58.786527 kubelet[2846]: I0129 16:42:58.785232 2846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-host-proc-sys-kernel\") pod \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\" (UID: \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\") "
Jan 29 16:42:58.786527 kubelet[2846]: I0129 16:42:58.785245 2846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-cilium-config-path\") pod \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\" (UID: \"d5b196d2-5245-42fc-b1bc-8b384cc3fae1\") "
Jan 29 16:42:58.790517 kubelet[2846]: I0129 16:42:58.788186 2846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d5b196d2-5245-42fc-b1bc-8b384cc3fae1" (UID: "d5b196d2-5245-42fc-b1bc-8b384cc3fae1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:42:58.794678 kubelet[2846]: I0129 16:42:58.794643 2846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d5b196d2-5245-42fc-b1bc-8b384cc3fae1" (UID: "d5b196d2-5245-42fc-b1bc-8b384cc3fae1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:42:58.794842 kubelet[2846]: I0129 16:42:58.794795 2846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d5b196d2-5245-42fc-b1bc-8b384cc3fae1" (UID: "d5b196d2-5245-42fc-b1bc-8b384cc3fae1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:42:58.794930 kubelet[2846]: I0129 16:42:58.794913 2846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-hostproc" (OuterVolumeSpecName: "hostproc") pod "d5b196d2-5245-42fc-b1bc-8b384cc3fae1" (UID: "d5b196d2-5245-42fc-b1bc-8b384cc3fae1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:42:58.795054 kubelet[2846]: I0129 16:42:58.795031 2846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d5b196d2-5245-42fc-b1bc-8b384cc3fae1" (UID: "d5b196d2-5245-42fc-b1bc-8b384cc3fae1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:42:58.795140 kubelet[2846]: I0129 16:42:58.795123 2846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d5b196d2-5245-42fc-b1bc-8b384cc3fae1" (UID: "d5b196d2-5245-42fc-b1bc-8b384cc3fae1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:42:58.795501 kubelet[2846]: I0129 16:42:58.795469 2846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-kube-api-access-fcfj5" (OuterVolumeSpecName: "kube-api-access-fcfj5") pod "d5b196d2-5245-42fc-b1bc-8b384cc3fae1" (UID: "d5b196d2-5245-42fc-b1bc-8b384cc3fae1"). InnerVolumeSpecName "kube-api-access-fcfj5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:42:58.795553 kubelet[2846]: I0129 16:42:58.795512 2846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d5b196d2-5245-42fc-b1bc-8b384cc3fae1" (UID: "d5b196d2-5245-42fc-b1bc-8b384cc3fae1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:42:58.795553 kubelet[2846]: I0129 16:42:58.795528 2846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d5b196d2-5245-42fc-b1bc-8b384cc3fae1" (UID: "d5b196d2-5245-42fc-b1bc-8b384cc3fae1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:42:58.795622 kubelet[2846]: I0129 16:42:58.795578 2846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-cni-path" (OuterVolumeSpecName: "cni-path") pod "d5b196d2-5245-42fc-b1bc-8b384cc3fae1" (UID: "d5b196d2-5245-42fc-b1bc-8b384cc3fae1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:42:58.798038 kubelet[2846]: I0129 16:42:58.798007 2846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d5b196d2-5245-42fc-b1bc-8b384cc3fae1" (UID: "d5b196d2-5245-42fc-b1bc-8b384cc3fae1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:42:58.799563 kubelet[2846]: I0129 16:42:58.799530 2846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8a1c440-7300-40bc-9ce1-4c0c6cabc043-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f8a1c440-7300-40bc-9ce1-4c0c6cabc043" (UID: "f8a1c440-7300-40bc-9ce1-4c0c6cabc043"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:42:58.799722 kubelet[2846]: I0129 16:42:58.799702 2846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d5b196d2-5245-42fc-b1bc-8b384cc3fae1" (UID: "d5b196d2-5245-42fc-b1bc-8b384cc3fae1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:42:58.799852 kubelet[2846]: I0129 16:42:58.799832 2846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d5b196d2-5245-42fc-b1bc-8b384cc3fae1" (UID: "d5b196d2-5245-42fc-b1bc-8b384cc3fae1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:42:58.800081 kubelet[2846]: I0129 16:42:58.800060 2846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8a1c440-7300-40bc-9ce1-4c0c6cabc043-kube-api-access-2xzps" (OuterVolumeSpecName: "kube-api-access-2xzps") pod "f8a1c440-7300-40bc-9ce1-4c0c6cabc043" (UID: "f8a1c440-7300-40bc-9ce1-4c0c6cabc043"). InnerVolumeSpecName "kube-api-access-2xzps". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:42:58.800377 kubelet[2846]: I0129 16:42:58.800336 2846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d5b196d2-5245-42fc-b1bc-8b384cc3fae1" (UID: "d5b196d2-5245-42fc-b1bc-8b384cc3fae1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:42:58.887938 kubelet[2846]: I0129 16:42:58.887882 2846 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-hostproc\") on node \"ci-4230-0-0-6-6baf09a0d0\" DevicePath \"\""
Jan 29 16:42:58.887938 kubelet[2846]: I0129 16:42:58.887925 2846 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-lib-modules\") on node \"ci-4230-0-0-6-6baf09a0d0\" DevicePath \"\""
Jan 29 16:42:58.887938 kubelet[2846]: I0129 16:42:58.887944 2846 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-xtables-lock\") on node \"ci-4230-0-0-6-6baf09a0d0\" DevicePath \"\""
Jan 29 16:42:58.888129 kubelet[2846]: I0129 16:42:58.887961 2846 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f8a1c440-7300-40bc-9ce1-4c0c6cabc043-cilium-config-path\") on node \"ci-4230-0-0-6-6baf09a0d0\" DevicePath \"\""
Jan 29 16:42:58.888129 kubelet[2846]: I0129 16:42:58.887977 2846 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-cilium-cgroup\") on node \"ci-4230-0-0-6-6baf09a0d0\" DevicePath \"\""
Jan 29 16:42:58.888129 kubelet[2846]: I0129 16:42:58.887993 2846 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-cilium-config-path\") on node \"ci-4230-0-0-6-6baf09a0d0\" DevicePath \"\""
Jan 29 16:42:58.888129 kubelet[2846]: I0129 16:42:58.888007 2846 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-etc-cni-netd\") on node \"ci-4230-0-0-6-6baf09a0d0\" DevicePath \"\""
Jan 29 16:42:58.888129 kubelet[2846]: I0129 16:42:58.888022 2846 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-host-proc-sys-kernel\") on node \"ci-4230-0-0-6-6baf09a0d0\" DevicePath \"\""
Jan 29 16:42:58.888129 kubelet[2846]: I0129 16:42:58.888036 2846 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-host-proc-sys-net\") on node \"ci-4230-0-0-6-6baf09a0d0\" DevicePath \"\""
Jan 29 16:42:58.888129 kubelet[2846]: I0129 16:42:58.888052 2846 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fcfj5\" (UniqueName: \"kubernetes.io/projected/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-kube-api-access-fcfj5\") on node \"ci-4230-0-0-6-6baf09a0d0\" DevicePath \"\""
Jan 29 16:42:58.888129 kubelet[2846]: I0129 16:42:58.888067 2846 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-cilium-run\") on node \"ci-4230-0-0-6-6baf09a0d0\" DevicePath \"\""
Jan 29 16:42:58.888295 kubelet[2846]: I0129 16:42:58.888081 2846 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-bpf-maps\") on node \"ci-4230-0-0-6-6baf09a0d0\" DevicePath \"\""
Jan 29 16:42:58.888295 kubelet[2846]: I0129 16:42:58.888094 2846 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-cni-path\") on node \"ci-4230-0-0-6-6baf09a0d0\" DevicePath \"\""
Jan 29 16:42:58.888295 kubelet[2846]: I0129 16:42:58.888108 2846 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-clustermesh-secrets\") on node \"ci-4230-0-0-6-6baf09a0d0\" DevicePath \"\""
Jan 29 16:42:58.888295 kubelet[2846]: I0129 16:42:58.888121 2846 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d5b196d2-5245-42fc-b1bc-8b384cc3fae1-hubble-tls\") on node \"ci-4230-0-0-6-6baf09a0d0\" DevicePath \"\""
Jan 29 16:42:58.888295 kubelet[2846]: I0129 16:42:58.888135 2846 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2xzps\" (UniqueName: \"kubernetes.io/projected/f8a1c440-7300-40bc-9ce1-4c0c6cabc043-kube-api-access-2xzps\") on node \"ci-4230-0-0-6-6baf09a0d0\" DevicePath \"\""
Jan 29 16:42:59.253522 systemd[1]: Removed slice kubepods-besteffort-podf8a1c440_7300_40bc_9ce1_4c0c6cabc043.slice - libcontainer container kubepods-besteffort-podf8a1c440_7300_40bc_9ce1_4c0c6cabc043.slice.
Jan 29 16:42:59.253662 systemd[1]: kubepods-besteffort-podf8a1c440_7300_40bc_9ce1_4c0c6cabc043.slice: Consumed 577ms CPU time, 30M memory peak, 2.9M read from disk, 4K written to disk.
Jan 29 16:42:59.258009 kubelet[2846]: I0129 16:42:59.257966 2846 scope.go:117] "RemoveContainer" containerID="9e7d307afb507e9def083e852594d2d5bc1040e27d2b3fc66b04ad0e75dd8c35"
Jan 29 16:42:59.269724 containerd[1525]: time="2025-01-29T16:42:59.269654411Z" level=info msg="RemoveContainer for \"9e7d307afb507e9def083e852594d2d5bc1040e27d2b3fc66b04ad0e75dd8c35\""
Jan 29 16:42:59.281514 containerd[1525]: time="2025-01-29T16:42:59.279840585Z" level=info msg="RemoveContainer for \"9e7d307afb507e9def083e852594d2d5bc1040e27d2b3fc66b04ad0e75dd8c35\" returns successfully"
Jan 29 16:42:59.281644 kubelet[2846]: I0129 16:42:59.280120 2846 scope.go:117] "RemoveContainer" containerID="9e7d307afb507e9def083e852594d2d5bc1040e27d2b3fc66b04ad0e75dd8c35"
Jan 29 16:42:59.281973 containerd[1525]: time="2025-01-29T16:42:59.281933252Z" level=error msg="ContainerStatus for \"9e7d307afb507e9def083e852594d2d5bc1040e27d2b3fc66b04ad0e75dd8c35\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9e7d307afb507e9def083e852594d2d5bc1040e27d2b3fc66b04ad0e75dd8c35\": not found"
Jan 29 16:42:59.283355 kubelet[2846]: E0129 16:42:59.283108 2846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9e7d307afb507e9def083e852594d2d5bc1040e27d2b3fc66b04ad0e75dd8c35\": not found" containerID="9e7d307afb507e9def083e852594d2d5bc1040e27d2b3fc66b04ad0e75dd8c35"
Jan 29 16:42:59.283368 systemd[1]: Removed slice kubepods-burstable-podd5b196d2_5245_42fc_b1bc_8b384cc3fae1.slice - libcontainer container kubepods-burstable-podd5b196d2_5245_42fc_b1bc_8b384cc3fae1.slice.
Jan 29 16:42:59.283476 systemd[1]: kubepods-burstable-podd5b196d2_5245_42fc_b1bc_8b384cc3fae1.slice: Consumed 7.559s CPU time, 160.5M memory peak, 39.9M read from disk, 13.3M written to disk.
Jan 29 16:42:59.284945 kubelet[2846]: I0129 16:42:59.283509 2846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9e7d307afb507e9def083e852594d2d5bc1040e27d2b3fc66b04ad0e75dd8c35"} err="failed to get container status \"9e7d307afb507e9def083e852594d2d5bc1040e27d2b3fc66b04ad0e75dd8c35\": rpc error: code = NotFound desc = an error occurred when try to find container \"9e7d307afb507e9def083e852594d2d5bc1040e27d2b3fc66b04ad0e75dd8c35\": not found"
Jan 29 16:42:59.284945 kubelet[2846]: I0129 16:42:59.283694 2846 scope.go:117] "RemoveContainer" containerID="94cf39c2b07f96c5220d4a33c6beb2da99e9232ab70993f638ec75495c2619af"
Jan 29 16:42:59.291308 containerd[1525]: time="2025-01-29T16:42:59.291259086Z" level=info msg="RemoveContainer for \"94cf39c2b07f96c5220d4a33c6beb2da99e9232ab70993f638ec75495c2619af\""
Jan 29 16:42:59.295127 containerd[1525]: time="2025-01-29T16:42:59.295084387Z" level=info msg="RemoveContainer for \"94cf39c2b07f96c5220d4a33c6beb2da99e9232ab70993f638ec75495c2619af\" returns successfully"
Jan 29 16:42:59.295424 kubelet[2846]: I0129 16:42:59.295394 2846 scope.go:117] "RemoveContainer" containerID="05643d52ef7e1b8413ce4ea68bad46e825efbfbeb23601670bbad7cd0d2a295f"
Jan 29 16:42:59.299332 containerd[1525]: time="2025-01-29T16:42:59.299004296Z" level=info msg="RemoveContainer for \"05643d52ef7e1b8413ce4ea68bad46e825efbfbeb23601670bbad7cd0d2a295f\""
Jan 29 16:42:59.306658 containerd[1525]: time="2025-01-29T16:42:59.306623117Z" level=info msg="RemoveContainer for \"05643d52ef7e1b8413ce4ea68bad46e825efbfbeb23601670bbad7cd0d2a295f\" returns successfully"
Jan 29 16:42:59.307169 kubelet[2846]: I0129 16:42:59.307077 2846 scope.go:117] "RemoveContainer" containerID="2b9e0844985a8742766bacd83394a5251542824002d0cd444275bf5b4f66e0b2"
Jan 29 16:42:59.308011 containerd[1525]: time="2025-01-29T16:42:59.307933567Z" level=info msg="RemoveContainer for \"2b9e0844985a8742766bacd83394a5251542824002d0cd444275bf5b4f66e0b2\""
Jan 29 16:42:59.312070 containerd[1525]: time="2025-01-29T16:42:59.312050424Z" level=info msg="RemoveContainer for \"2b9e0844985a8742766bacd83394a5251542824002d0cd444275bf5b4f66e0b2\" returns successfully"
Jan 29 16:42:59.312307 kubelet[2846]: I0129 16:42:59.312279 2846 scope.go:117] "RemoveContainer" containerID="dd330769052da4a74307dc96fc7ea491188617ab4e5de493259577a8510fb5ea"
Jan 29 16:42:59.313306 containerd[1525]: time="2025-01-29T16:42:59.313240898Z" level=info msg="RemoveContainer for \"dd330769052da4a74307dc96fc7ea491188617ab4e5de493259577a8510fb5ea\""
Jan 29 16:42:59.316208 containerd[1525]: time="2025-01-29T16:42:59.316183345Z" level=info msg="RemoveContainer for \"dd330769052da4a74307dc96fc7ea491188617ab4e5de493259577a8510fb5ea\" returns successfully"
Jan 29 16:42:59.316647 kubelet[2846]: I0129 16:42:59.316417 2846 scope.go:117] "RemoveContainer" containerID="c718352908ef1f674e7b4036b463fdca4b4628c0b32edff2280eeaa885901cea"
Jan 29 16:42:59.317464 containerd[1525]: time="2025-01-29T16:42:59.317395169Z" level=info msg="RemoveContainer for \"c718352908ef1f674e7b4036b463fdca4b4628c0b32edff2280eeaa885901cea\""
Jan 29 16:42:59.320339 containerd[1525]: time="2025-01-29T16:42:59.320307136Z" level=info msg="RemoveContainer for \"c718352908ef1f674e7b4036b463fdca4b4628c0b32edff2280eeaa885901cea\" returns successfully"
Jan 29 16:42:59.320573 kubelet[2846]: I0129 16:42:59.320509 2846 scope.go:117] "RemoveContainer" containerID="94cf39c2b07f96c5220d4a33c6beb2da99e9232ab70993f638ec75495c2619af"
Jan 29 16:42:59.321041 containerd[1525]: time="2025-01-29T16:42:59.320937932Z" level=error msg="ContainerStatus for \"94cf39c2b07f96c5220d4a33c6beb2da99e9232ab70993f638ec75495c2619af\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"94cf39c2b07f96c5220d4a33c6beb2da99e9232ab70993f638ec75495c2619af\": not found"
Jan 29 16:42:59.321278 kubelet[2846]: E0129 16:42:59.321261 2846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"94cf39c2b07f96c5220d4a33c6beb2da99e9232ab70993f638ec75495c2619af\": not found" containerID="94cf39c2b07f96c5220d4a33c6beb2da99e9232ab70993f638ec75495c2619af"
Jan 29 16:42:59.321582 kubelet[2846]: I0129 16:42:59.321382 2846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"94cf39c2b07f96c5220d4a33c6beb2da99e9232ab70993f638ec75495c2619af"} err="failed to get container status \"94cf39c2b07f96c5220d4a33c6beb2da99e9232ab70993f638ec75495c2619af\": rpc error: code = NotFound desc = an error occurred when try to find container \"94cf39c2b07f96c5220d4a33c6beb2da99e9232ab70993f638ec75495c2619af\": not found"
Jan 29 16:42:59.321582 kubelet[2846]: I0129 16:42:59.321406 2846 scope.go:117] "RemoveContainer" containerID="05643d52ef7e1b8413ce4ea68bad46e825efbfbeb23601670bbad7cd0d2a295f"
Jan 29 16:42:59.321644 containerd[1525]: time="2025-01-29T16:42:59.321540313Z" level=error msg="ContainerStatus for \"05643d52ef7e1b8413ce4ea68bad46e825efbfbeb23601670bbad7cd0d2a295f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"05643d52ef7e1b8413ce4ea68bad46e825efbfbeb23601670bbad7cd0d2a295f\": not found"
Jan 29 16:42:59.321785 kubelet[2846]: E0129 16:42:59.321712 2846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"05643d52ef7e1b8413ce4ea68bad46e825efbfbeb23601670bbad7cd0d2a295f\": not found" containerID="05643d52ef7e1b8413ce4ea68bad46e825efbfbeb23601670bbad7cd0d2a295f"
Jan 29 16:42:59.321785 kubelet[2846]: I0129 16:42:59.321730 2846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"05643d52ef7e1b8413ce4ea68bad46e825efbfbeb23601670bbad7cd0d2a295f"} err="failed to get container status \"05643d52ef7e1b8413ce4ea68bad46e825efbfbeb23601670bbad7cd0d2a295f\": rpc error: code = NotFound desc = an error occurred when try to find container \"05643d52ef7e1b8413ce4ea68bad46e825efbfbeb23601670bbad7cd0d2a295f\": not found"
Jan 29 16:42:59.321785 kubelet[2846]: I0129 16:42:59.321743 2846 scope.go:117] "RemoveContainer" containerID="2b9e0844985a8742766bacd83394a5251542824002d0cd444275bf5b4f66e0b2"
Jan 29 16:42:59.322078 containerd[1525]: time="2025-01-29T16:42:59.322021925Z" level=error msg="ContainerStatus for \"2b9e0844985a8742766bacd83394a5251542824002d0cd444275bf5b4f66e0b2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b9e0844985a8742766bacd83394a5251542824002d0cd444275bf5b4f66e0b2\": not found"
Jan 29 16:42:59.322183 kubelet[2846]: E0129 16:42:59.322128 2846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2b9e0844985a8742766bacd83394a5251542824002d0cd444275bf5b4f66e0b2\": not found" containerID="2b9e0844985a8742766bacd83394a5251542824002d0cd444275bf5b4f66e0b2"
Jan 29 16:42:59.322183 kubelet[2846]: I0129 16:42:59.322172 2846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2b9e0844985a8742766bacd83394a5251542824002d0cd444275bf5b4f66e0b2"} err="failed to get container status \"2b9e0844985a8742766bacd83394a5251542824002d0cd444275bf5b4f66e0b2\": rpc error: code = NotFound desc = an error occurred when try to find container \"2b9e0844985a8742766bacd83394a5251542824002d0cd444275bf5b4f66e0b2\": not found"
Jan 29 16:42:59.322239 kubelet[2846]: I0129 16:42:59.322190 2846 scope.go:117] "RemoveContainer" containerID="dd330769052da4a74307dc96fc7ea491188617ab4e5de493259577a8510fb5ea"
Jan 29 16:42:59.322438 containerd[1525]: time="2025-01-29T16:42:59.322395554Z" level=error msg="ContainerStatus for \"dd330769052da4a74307dc96fc7ea491188617ab4e5de493259577a8510fb5ea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dd330769052da4a74307dc96fc7ea491188617ab4e5de493259577a8510fb5ea\": not found"
Jan 29 16:42:59.322584 kubelet[2846]: E0129 16:42:59.322559 2846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dd330769052da4a74307dc96fc7ea491188617ab4e5de493259577a8510fb5ea\": not found" containerID="dd330769052da4a74307dc96fc7ea491188617ab4e5de493259577a8510fb5ea"
Jan 29 16:42:59.322660 kubelet[2846]: I0129 16:42:59.322632 2846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dd330769052da4a74307dc96fc7ea491188617ab4e5de493259577a8510fb5ea"} err="failed to get container status \"dd330769052da4a74307dc96fc7ea491188617ab4e5de493259577a8510fb5ea\": rpc error: code = NotFound desc = an error occurred when try to find container \"dd330769052da4a74307dc96fc7ea491188617ab4e5de493259577a8510fb5ea\": not found"
Jan 29 16:42:59.322660 kubelet[2846]: I0129 16:42:59.322656 2846 scope.go:117] "RemoveContainer" containerID="c718352908ef1f674e7b4036b463fdca4b4628c0b32edff2280eeaa885901cea"
Jan 29 16:42:59.323041 containerd[1525]: time="2025-01-29T16:42:59.322864219Z" level=error msg="ContainerStatus for \"c718352908ef1f674e7b4036b463fdca4b4628c0b32edff2280eeaa885901cea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c718352908ef1f674e7b4036b463fdca4b4628c0b32edff2280eeaa885901cea\": not found"
Jan 29 16:42:59.323240 kubelet[2846]: E0129 16:42:59.322998 2846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c718352908ef1f674e7b4036b463fdca4b4628c0b32edff2280eeaa885901cea\": not found" containerID="c718352908ef1f674e7b4036b463fdca4b4628c0b32edff2280eeaa885901cea"
Jan 29 16:42:59.323240 kubelet[2846]: I0129 16:42:59.323014 2846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c718352908ef1f674e7b4036b463fdca4b4628c0b32edff2280eeaa885901cea"} err="failed to get container status \"c718352908ef1f674e7b4036b463fdca4b4628c0b32edff2280eeaa885901cea\": rpc error: code = NotFound desc = an error occurred when try to find container \"c718352908ef1f674e7b4036b463fdca4b4628c0b32edff2280eeaa885901cea\": not found"
Jan 29 16:42:59.459230 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a8042925720d93cdd7eba888f2452bc07fa8872188fc542b42a0e3a8ef44886-rootfs.mount: Deactivated successfully.
Jan 29 16:42:59.459412 systemd[1]: var-lib-kubelet-pods-f8a1c440\x2d7300\x2d40bc\x2d9ce1\x2d4c0c6cabc043-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2xzps.mount: Deactivated successfully. Jan 29 16:42:59.459512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624-rootfs.mount: Deactivated successfully. Jan 29 16:42:59.459617 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624-shm.mount: Deactivated successfully. Jan 29 16:42:59.459716 systemd[1]: var-lib-kubelet-pods-d5b196d2\x2d5245\x2d42fc\x2db1bc\x2d8b384cc3fae1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfcfj5.mount: Deactivated successfully. Jan 29 16:42:59.459829 systemd[1]: var-lib-kubelet-pods-d5b196d2\x2d5245\x2d42fc\x2db1bc\x2d8b384cc3fae1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 16:42:59.459956 systemd[1]: var-lib-kubelet-pods-d5b196d2\x2d5245\x2d42fc\x2db1bc\x2d8b384cc3fae1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 16:42:59.529874 kubelet[2846]: I0129 16:42:59.529715 2846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5b196d2-5245-42fc-b1bc-8b384cc3fae1" path="/var/lib/kubelet/pods/d5b196d2-5245-42fc-b1bc-8b384cc3fae1/volumes" Jan 29 16:42:59.530904 kubelet[2846]: I0129 16:42:59.530869 2846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8a1c440-7300-40bc-9ce1-4c0c6cabc043" path="/var/lib/kubelet/pods/f8a1c440-7300-40bc-9ce1-4c0c6cabc043/volumes" Jan 29 16:43:00.532731 sshd[4758]: Connection closed by 147.75.109.163 port 55022 Jan 29 16:43:00.533707 sshd-session[4756]: pam_unix(sshd:session): session closed for user core Jan 29 16:43:00.537694 systemd[1]: sshd@87-142.132.231.50:22-147.75.109.163:55022.service: Deactivated successfully. Jan 29 16:43:00.539986 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 16:43:00.542345 systemd-logind[1509]: Session 21 logged out. Waiting for processes to exit. Jan 29 16:43:00.543883 systemd-logind[1509]: Removed session 21. Jan 29 16:43:00.701998 systemd[1]: Started sshd@88-142.132.231.50:22-147.75.109.163:50510.service - OpenSSH per-connection server daemon (147.75.109.163:50510). Jan 29 16:43:01.673339 kubelet[2846]: E0129 16:43:01.673274 2846 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 16:43:01.684870 sshd[4917]: Accepted publickey for core from 147.75.109.163 port 50510 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME Jan 29 16:43:01.686634 sshd-session[4917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:43:01.691922 systemd-logind[1509]: New session 22 of user core. Jan 29 16:43:01.696962 systemd[1]: Started session-22.scope - Session 22 of User core. 
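The var-lib-kubelet-pods-... mount units above are systemd's escaped form of the pod volume paths: "/" becomes the unit separator "-", and bytes outside [a-zA-Z0-9:_.] are hex-escaped, so the "-" in the pod UID appears as \x2d and the "~" in kubernetes.io~projected as \x7e. systemd-escape(1) does this canonically; below is a small reimplementation for illustration only (leading-dot handling omitted):

    package main

    import (
        "fmt"
        "strings"
    )

    // escapeUnitPath approximates systemd's path escaping for unit names.
    func escapeUnitPath(path string) string {
        var b strings.Builder
        for i, part := range strings.Split(strings.Trim(path, "/"), "/") {
            if i > 0 {
                b.WriteByte('-') // "/" maps to the unit separator
            }
            for j := 0; j < len(part); j++ {
                c := part[j]
                switch {
                case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                    c >= '0' && c <= '9', c == ':', c == '_', c == '.':
                    b.WriteByte(c)
                default:
                    fmt.Fprintf(&b, `\x%02x`, c) // e.g. '-' -> \x2d, '~' -> \x7e
                }
            }
        }
        return b.String()
    }

    func main() {
        p := "/var/lib/kubelet/pods/d5b196d2-5245-42fc-b1bc-8b384cc3fae1/volumes/kubernetes.io~projected/hubble-tls"
        fmt.Println(escapeUnitPath(p) + ".mount")
        // var-lib-kubelet-pods-d5b196d2\x2d5245\x2d42fc\x2db1bc\x2d8b384cc3fae1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount
    }

The output matches the hubble-tls mount unit deactivated in the lines above.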
Jan 29 16:43:02.776203 kubelet[2846]: E0129 16:43:02.776159 2846 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5b196d2-5245-42fc-b1bc-8b384cc3fae1" containerName="apply-sysctl-overwrites" Jan 29 16:43:02.776203 kubelet[2846]: E0129 16:43:02.776191 2846 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5b196d2-5245-42fc-b1bc-8b384cc3fae1" containerName="clean-cilium-state" Jan 29 16:43:02.776203 kubelet[2846]: E0129 16:43:02.776199 2846 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f8a1c440-7300-40bc-9ce1-4c0c6cabc043" containerName="cilium-operator" Jan 29 16:43:02.776203 kubelet[2846]: E0129 16:43:02.776205 2846 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5b196d2-5245-42fc-b1bc-8b384cc3fae1" containerName="mount-cgroup" Jan 29 16:43:02.776203 kubelet[2846]: E0129 16:43:02.776211 2846 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5b196d2-5245-42fc-b1bc-8b384cc3fae1" containerName="mount-bpf-fs" Jan 29 16:43:02.776203 kubelet[2846]: E0129 16:43:02.776216 2846 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5b196d2-5245-42fc-b1bc-8b384cc3fae1" containerName="cilium-agent" Jan 29 16:43:02.777648 kubelet[2846]: I0129 16:43:02.776275 2846 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8a1c440-7300-40bc-9ce1-4c0c6cabc043" containerName="cilium-operator" Jan 29 16:43:02.777648 kubelet[2846]: I0129 16:43:02.776292 2846 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5b196d2-5245-42fc-b1bc-8b384cc3fae1" containerName="cilium-agent" Jan 29 16:43:02.816036 systemd[1]: Created slice kubepods-burstable-podf3f7b615_1a21_4e77_aea9_ada16addb5db.slice - libcontainer container kubepods-burstable-podf3f7b615_1a21_4e77_aea9_ada16addb5db.slice. Jan 29 16:43:02.899356 sshd[4919]: Connection closed by 147.75.109.163 port 50510 Jan 29 16:43:02.900073 sshd-session[4917]: pam_unix(sshd:session): session closed for user core Jan 29 16:43:02.903425 systemd[1]: sshd@88-142.132.231.50:22-147.75.109.163:50510.service: Deactivated successfully. Jan 29 16:43:02.906071 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 16:43:02.907725 systemd-logind[1509]: Session 22 logged out. Waiting for processes to exit. Jan 29 16:43:02.909354 systemd-logind[1509]: Removed session 22. 
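The slice created above encodes the new pod's QoS class and UID: UID f3f7b615-1a21-4e77-aea9-ada16addb5db, with its dashes mapped to underscores, lands under kubepods-burstable-pod....slice in the systemd cgroup driver's naming scheme, because "-" is systemd's nesting separator inside slice names. A sketch of the derivation as read directly off the log line:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice reproduces the naming visible in the "Created slice" line:
    // QoS class plus the pod UID with "-" flattened to "_".
    func podSlice(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSlice("burstable", "f3f7b615-1a21-4e77-aea9-ada16addb5db"))
        // kubepods-burstable-podf3f7b615_1a21_4e77_aea9_ada16addb5db.slice
    }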
Jan 29 16:43:02.915640 kubelet[2846]: I0129 16:43:02.915283 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f3f7b615-1a21-4e77-aea9-ada16addb5db-etc-cni-netd\") pod \"cilium-9v589\" (UID: \"f3f7b615-1a21-4e77-aea9-ada16addb5db\") " pod="kube-system/cilium-9v589" Jan 29 16:43:02.915640 kubelet[2846]: I0129 16:43:02.915320 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3f7b615-1a21-4e77-aea9-ada16addb5db-xtables-lock\") pod \"cilium-9v589\" (UID: \"f3f7b615-1a21-4e77-aea9-ada16addb5db\") " pod="kube-system/cilium-9v589" Jan 29 16:43:02.915640 kubelet[2846]: I0129 16:43:02.915339 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f3f7b615-1a21-4e77-aea9-ada16addb5db-hubble-tls\") pod \"cilium-9v589\" (UID: \"f3f7b615-1a21-4e77-aea9-ada16addb5db\") " pod="kube-system/cilium-9v589" Jan 29 16:43:02.915640 kubelet[2846]: I0129 16:43:02.915355 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmsdp\" (UniqueName: \"kubernetes.io/projected/f3f7b615-1a21-4e77-aea9-ada16addb5db-kube-api-access-tmsdp\") pod \"cilium-9v589\" (UID: \"f3f7b615-1a21-4e77-aea9-ada16addb5db\") " pod="kube-system/cilium-9v589" Jan 29 16:43:02.915640 kubelet[2846]: I0129 16:43:02.915369 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f3f7b615-1a21-4e77-aea9-ada16addb5db-cilium-config-path\") pod \"cilium-9v589\" (UID: \"f3f7b615-1a21-4e77-aea9-ada16addb5db\") " pod="kube-system/cilium-9v589" Jan 29 16:43:02.915640 kubelet[2846]: I0129 16:43:02.915382 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f3f7b615-1a21-4e77-aea9-ada16addb5db-host-proc-sys-net\") pod \"cilium-9v589\" (UID: \"f3f7b615-1a21-4e77-aea9-ada16addb5db\") " pod="kube-system/cilium-9v589" Jan 29 16:43:02.915912 kubelet[2846]: I0129 16:43:02.915396 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f3f7b615-1a21-4e77-aea9-ada16addb5db-clustermesh-secrets\") pod \"cilium-9v589\" (UID: \"f3f7b615-1a21-4e77-aea9-ada16addb5db\") " pod="kube-system/cilium-9v589" Jan 29 16:43:02.915912 kubelet[2846]: I0129 16:43:02.915416 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f3f7b615-1a21-4e77-aea9-ada16addb5db-cilium-run\") pod \"cilium-9v589\" (UID: \"f3f7b615-1a21-4e77-aea9-ada16addb5db\") " pod="kube-system/cilium-9v589" Jan 29 16:43:02.915912 kubelet[2846]: I0129 16:43:02.915437 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f3f7b615-1a21-4e77-aea9-ada16addb5db-hostproc\") pod \"cilium-9v589\" (UID: \"f3f7b615-1a21-4e77-aea9-ada16addb5db\") " pod="kube-system/cilium-9v589" Jan 29 16:43:02.915912 kubelet[2846]: I0129 16:43:02.915466 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" 
(UniqueName: \"kubernetes.io/host-path/f3f7b615-1a21-4e77-aea9-ada16addb5db-cilium-cgroup\") pod \"cilium-9v589\" (UID: \"f3f7b615-1a21-4e77-aea9-ada16addb5db\") " pod="kube-system/cilium-9v589" Jan 29 16:43:02.915912 kubelet[2846]: I0129 16:43:02.915482 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f3f7b615-1a21-4e77-aea9-ada16addb5db-cni-path\") pod \"cilium-9v589\" (UID: \"f3f7b615-1a21-4e77-aea9-ada16addb5db\") " pod="kube-system/cilium-9v589" Jan 29 16:43:02.915912 kubelet[2846]: I0129 16:43:02.915494 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3f7b615-1a21-4e77-aea9-ada16addb5db-lib-modules\") pod \"cilium-9v589\" (UID: \"f3f7b615-1a21-4e77-aea9-ada16addb5db\") " pod="kube-system/cilium-9v589" Jan 29 16:43:02.916041 kubelet[2846]: I0129 16:43:02.915507 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f3f7b615-1a21-4e77-aea9-ada16addb5db-bpf-maps\") pod \"cilium-9v589\" (UID: \"f3f7b615-1a21-4e77-aea9-ada16addb5db\") " pod="kube-system/cilium-9v589" Jan 29 16:43:02.916041 kubelet[2846]: I0129 16:43:02.915519 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f3f7b615-1a21-4e77-aea9-ada16addb5db-cilium-ipsec-secrets\") pod \"cilium-9v589\" (UID: \"f3f7b615-1a21-4e77-aea9-ada16addb5db\") " pod="kube-system/cilium-9v589" Jan 29 16:43:02.916041 kubelet[2846]: I0129 16:43:02.915538 2846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f3f7b615-1a21-4e77-aea9-ada16addb5db-host-proc-sys-kernel\") pod \"cilium-9v589\" (UID: \"f3f7b615-1a21-4e77-aea9-ada16addb5db\") " pod="kube-system/cilium-9v589" Jan 29 16:43:03.072067 systemd[1]: Started sshd@89-142.132.231.50:22-147.75.109.163:50514.service - OpenSSH per-connection server daemon (147.75.109.163:50514). Jan 29 16:43:03.119341 containerd[1525]: time="2025-01-29T16:43:03.119237250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9v589,Uid:f3f7b615-1a21-4e77-aea9-ada16addb5db,Namespace:kube-system,Attempt:0,}" Jan 29 16:43:03.139534 containerd[1525]: time="2025-01-29T16:43:03.139439044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:43:03.139735 containerd[1525]: time="2025-01-29T16:43:03.139684388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:43:03.139735 containerd[1525]: time="2025-01-29T16:43:03.139704348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:43:03.140389 containerd[1525]: time="2025-01-29T16:43:03.139975814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:43:03.162988 systemd[1]: Started cri-containerd-f7049605c7b07f224c878a57bfdd91af0f3bae1077724a97918167b6f8aff13b.scope - libcontainer container f7049605c7b07f224c878a57bfdd91af0f3bae1077724a97918167b6f8aff13b. 
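The reconciler lines above attach fifteen volumes to cilium-9v589 before the sandbox can start. Collected into one table (names and plugin types transcribed from the log; the host paths themselves are not logged), ten of the fifteen are host paths, which fits a CNI agent that needs the host's BPF, cgroup, CNI, and proc directories:

    package main

    import "fmt"

    // Volume-to-plugin mapping transcribed from the
    // VerifyControllerAttachedVolume lines above.
    var cilium9v589Volumes = map[string]string{
        "etc-cni-netd":          "host-path",
        "xtables-lock":          "host-path",
        "hubble-tls":            "projected",
        "kube-api-access-tmsdp": "projected",
        "cilium-config-path":    "configmap",
        "host-proc-sys-net":     "host-path",
        "clustermesh-secrets":   "secret",
        "cilium-run":            "host-path",
        "hostproc":              "host-path",
        "cilium-cgroup":         "host-path",
        "cni-path":              "host-path",
        "lib-modules":           "host-path",
        "bpf-maps":              "host-path",
        "cilium-ipsec-secrets":  "secret",
        "host-proc-sys-kernel":  "host-path",
    }

    func main() {
        for name, plugin := range cilium9v589Volumes {
            fmt.Printf("%-22s %s\n", name, plugin)
        }
    }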
Jan 29 16:43:03.186956 containerd[1525]: time="2025-01-29T16:43:03.186921301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9v589,Uid:f3f7b615-1a21-4e77-aea9-ada16addb5db,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7049605c7b07f224c878a57bfdd91af0f3bae1077724a97918167b6f8aff13b\"" Jan 29 16:43:03.192358 containerd[1525]: time="2025-01-29T16:43:03.192294912Z" level=info msg="CreateContainer within sandbox \"f7049605c7b07f224c878a57bfdd91af0f3bae1077724a97918167b6f8aff13b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 16:43:03.204172 containerd[1525]: time="2025-01-29T16:43:03.204002342Z" level=info msg="CreateContainer within sandbox \"f7049605c7b07f224c878a57bfdd91af0f3bae1077724a97918167b6f8aff13b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3561c6e99eb7c92b3cf843f005069d8eda7b7468524a52fd5dad3f0196be9662\"" Jan 29 16:43:03.205028 containerd[1525]: time="2025-01-29T16:43:03.204791330Z" level=info msg="StartContainer for \"3561c6e99eb7c92b3cf843f005069d8eda7b7468524a52fd5dad3f0196be9662\"" Jan 29 16:43:03.231940 systemd[1]: Started cri-containerd-3561c6e99eb7c92b3cf843f005069d8eda7b7468524a52fd5dad3f0196be9662.scope - libcontainer container 3561c6e99eb7c92b3cf843f005069d8eda7b7468524a52fd5dad3f0196be9662. Jan 29 16:43:03.258463 containerd[1525]: time="2025-01-29T16:43:03.258416718Z" level=info msg="StartContainer for \"3561c6e99eb7c92b3cf843f005069d8eda7b7468524a52fd5dad3f0196be9662\" returns successfully" Jan 29 16:43:03.270434 systemd[1]: cri-containerd-3561c6e99eb7c92b3cf843f005069d8eda7b7468524a52fd5dad3f0196be9662.scope: Deactivated successfully. Jan 29 16:43:03.310260 containerd[1525]: time="2025-01-29T16:43:03.310199027Z" level=info msg="shim disconnected" id=3561c6e99eb7c92b3cf843f005069d8eda7b7468524a52fd5dad3f0196be9662 namespace=k8s.io Jan 29 16:43:03.310260 containerd[1525]: time="2025-01-29T16:43:03.310246229Z" level=warning msg="cleaning up after shim disconnected" id=3561c6e99eb7c92b3cf843f005069d8eda7b7468524a52fd5dad3f0196be9662 namespace=k8s.io Jan 29 16:43:03.310260 containerd[1525]: time="2025-01-29T16:43:03.310253725Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:43:04.051958 sshd[4934]: Accepted publickey for core from 147.75.109.163 port 50514 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME Jan 29 16:43:04.053602 sshd-session[4934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:43:04.058988 systemd-logind[1509]: New session 23 of user core. Jan 29 16:43:04.061938 systemd[1]: Started session-23.scope - Session 23 of User core. 
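mount-cgroup ran only briefly: StartContainer returned at 16:43:03.258 and the scope was deactivated twelve milliseconds later, after which the runc v2 shim exited, producing the "shim disconnected" cleanup above. A sketch of inspecting such a task through containerd's own Go client rather than CRI; CRI-managed containers live in the "k8s.io" namespace, as the log shows, and once cleanup finishes LoadContainer simply returns NotFound:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed containers live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // ID of the mount-cgroup init container from the log.
        c, err := client.LoadContainer(ctx, "3561c6e99eb7c92b3cf843f005069d8eda7b7468524a52fd5dad3f0196be9662")
        if err != nil {
            log.Fatal(err) // NotFound once the init container is cleaned up
        }
        task, err := c.Task(ctx, nil)
        if err != nil {
            log.Fatal(err)
        }
        status, err := task.Status(ctx)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("task status:", status.Status)
    }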
Jan 29 16:43:04.282950 containerd[1525]: time="2025-01-29T16:43:04.282879190Z" level=info msg="CreateContainer within sandbox \"f7049605c7b07f224c878a57bfdd91af0f3bae1077724a97918167b6f8aff13b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 16:43:04.304405 containerd[1525]: time="2025-01-29T16:43:04.300119909Z" level=info msg="CreateContainer within sandbox \"f7049605c7b07f224c878a57bfdd91af0f3bae1077724a97918167b6f8aff13b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ef812e27351dd448c7d00b9506a1295af931fa0644c8636f92a5c0f9d2b8dcf9\"" Jan 29 16:43:04.304405 containerd[1525]: time="2025-01-29T16:43:04.302983472Z" level=info msg="StartContainer for \"ef812e27351dd448c7d00b9506a1295af931fa0644c8636f92a5c0f9d2b8dcf9\"" Jan 29 16:43:04.338971 systemd[1]: Started cri-containerd-ef812e27351dd448c7d00b9506a1295af931fa0644c8636f92a5c0f9d2b8dcf9.scope - libcontainer container ef812e27351dd448c7d00b9506a1295af931fa0644c8636f92a5c0f9d2b8dcf9. Jan 29 16:43:04.366439 containerd[1525]: time="2025-01-29T16:43:04.366381409Z" level=info msg="StartContainer for \"ef812e27351dd448c7d00b9506a1295af931fa0644c8636f92a5c0f9d2b8dcf9\" returns successfully" Jan 29 16:43:04.375553 systemd[1]: cri-containerd-ef812e27351dd448c7d00b9506a1295af931fa0644c8636f92a5c0f9d2b8dcf9.scope: Deactivated successfully. Jan 29 16:43:04.395276 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef812e27351dd448c7d00b9506a1295af931fa0644c8636f92a5c0f9d2b8dcf9-rootfs.mount: Deactivated successfully. Jan 29 16:43:04.404259 containerd[1525]: time="2025-01-29T16:43:04.404193963Z" level=info msg="shim disconnected" id=ef812e27351dd448c7d00b9506a1295af931fa0644c8636f92a5c0f9d2b8dcf9 namespace=k8s.io Jan 29 16:43:04.404259 containerd[1525]: time="2025-01-29T16:43:04.404251817Z" level=warning msg="cleaning up after shim disconnected" id=ef812e27351dd448c7d00b9506a1295af931fa0644c8636f92a5c0f9d2b8dcf9 namespace=k8s.io Jan 29 16:43:04.404526 containerd[1525]: time="2025-01-29T16:43:04.404263862Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:43:04.734090 sshd[5035]: Connection closed by 147.75.109.163 port 50514 Jan 29 16:43:04.735085 sshd-session[4934]: pam_unix(sshd:session): session closed for user core Jan 29 16:43:04.738793 systemd[1]: sshd@89-142.132.231.50:22-147.75.109.163:50514.service: Deactivated successfully. Jan 29 16:43:04.741310 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 16:43:04.743272 systemd-logind[1509]: Session 23 logged out. Waiting for processes to exit. Jan 29 16:43:04.744392 systemd-logind[1509]: Removed session 23. Jan 29 16:43:04.915079 systemd[1]: Started sshd@90-142.132.231.50:22-147.75.109.163:50530.service - OpenSSH per-connection server daemon (147.75.109.163:50530). 
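Each init container in this pod repeats the same pattern: CreateContainer, StartContainer, scope deactivated, shim disconnected. Those exits surface on containerd's event bus as /tasks/exit followed by /tasks/delete before the cleanup lines are written; a sketch subscribing to that stream, assuming the same default socket as above:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Every init-container exit arrives as /tasks/exit, followed by
        // /tasks/delete when the shim tears down.
        ch, errs := client.Subscribe(ctx)
        for {
            select {
            case e := <-ch:
                fmt.Println(e.Timestamp.Format("15:04:05"), e.Topic)
            case err := <-errs:
                log.Fatal(err)
            }
        }
    }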
Jan 29 16:43:05.283181 containerd[1525]: time="2025-01-29T16:43:05.283146715Z" level=info msg="CreateContainer within sandbox \"f7049605c7b07f224c878a57bfdd91af0f3bae1077724a97918167b6f8aff13b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 16:43:05.312111 containerd[1525]: time="2025-01-29T16:43:05.312058982Z" level=info msg="CreateContainer within sandbox \"f7049605c7b07f224c878a57bfdd91af0f3bae1077724a97918167b6f8aff13b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"281b022aadcec2ba7e0acbd492707e5065aa9a4b9a26ced81c8a64876f96a9d0\"" Jan 29 16:43:05.312714 containerd[1525]: time="2025-01-29T16:43:05.312475124Z" level=info msg="StartContainer for \"281b022aadcec2ba7e0acbd492707e5065aa9a4b9a26ced81c8a64876f96a9d0\"" Jan 29 16:43:05.343997 systemd[1]: Started cri-containerd-281b022aadcec2ba7e0acbd492707e5065aa9a4b9a26ced81c8a64876f96a9d0.scope - libcontainer container 281b022aadcec2ba7e0acbd492707e5065aa9a4b9a26ced81c8a64876f96a9d0. Jan 29 16:43:05.372511 containerd[1525]: time="2025-01-29T16:43:05.372379575Z" level=info msg="StartContainer for \"281b022aadcec2ba7e0acbd492707e5065aa9a4b9a26ced81c8a64876f96a9d0\" returns successfully" Jan 29 16:43:05.380049 systemd[1]: cri-containerd-281b022aadcec2ba7e0acbd492707e5065aa9a4b9a26ced81c8a64876f96a9d0.scope: Deactivated successfully. Jan 29 16:43:05.401389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-281b022aadcec2ba7e0acbd492707e5065aa9a4b9a26ced81c8a64876f96a9d0-rootfs.mount: Deactivated successfully. Jan 29 16:43:05.406132 containerd[1525]: time="2025-01-29T16:43:05.406071308Z" level=info msg="shim disconnected" id=281b022aadcec2ba7e0acbd492707e5065aa9a4b9a26ced81c8a64876f96a9d0 namespace=k8s.io Jan 29 16:43:05.406132 containerd[1525]: time="2025-01-29T16:43:05.406121818Z" level=warning msg="cleaning up after shim disconnected" id=281b022aadcec2ba7e0acbd492707e5065aa9a4b9a26ced81c8a64876f96a9d0 namespace=k8s.io Jan 29 16:43:05.406132 containerd[1525]: time="2025-01-29T16:43:05.406131075Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:43:05.892868 sshd[5103]: Accepted publickey for core from 147.75.109.163 port 50530 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME Jan 29 16:43:05.895954 sshd-session[5103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:43:05.905912 systemd-logind[1509]: New session 24 of user core. Jan 29 16:43:05.917056 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 16:43:06.288323 containerd[1525]: time="2025-01-29T16:43:06.288285946Z" level=info msg="CreateContainer within sandbox \"f7049605c7b07f224c878a57bfdd91af0f3bae1077724a97918167b6f8aff13b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 16:43:06.300060 containerd[1525]: time="2025-01-29T16:43:06.300016145Z" level=info msg="CreateContainer within sandbox \"f7049605c7b07f224c878a57bfdd91af0f3bae1077724a97918167b6f8aff13b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f8cb81b35c1f212a5603a1517420834303738e1ff1a9166bb237f1ac68745901\"" Jan 29 16:43:06.300492 containerd[1525]: time="2025-01-29T16:43:06.300460071Z" level=info msg="StartContainer for \"f8cb81b35c1f212a5603a1517420834303738e1ff1a9166bb237f1ac68745901\"" Jan 29 16:43:06.346956 systemd[1]: Started cri-containerd-f8cb81b35c1f212a5603a1517420834303738e1ff1a9166bb237f1ac68745901.scope - libcontainer container f8cb81b35c1f212a5603a1517420834303738e1ff1a9166bb237f1ac68745901. 
Jan 29 16:43:06.370854 systemd[1]: cri-containerd-f8cb81b35c1f212a5603a1517420834303738e1ff1a9166bb237f1ac68745901.scope: Deactivated successfully. Jan 29 16:43:06.373181 containerd[1525]: time="2025-01-29T16:43:06.373146770Z" level=info msg="StartContainer for \"f8cb81b35c1f212a5603a1517420834303738e1ff1a9166bb237f1ac68745901\" returns successfully" Jan 29 16:43:06.392766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8cb81b35c1f212a5603a1517420834303738e1ff1a9166bb237f1ac68745901-rootfs.mount: Deactivated successfully. Jan 29 16:43:06.396839 containerd[1525]: time="2025-01-29T16:43:06.396761740Z" level=info msg="shim disconnected" id=f8cb81b35c1f212a5603a1517420834303738e1ff1a9166bb237f1ac68745901 namespace=k8s.io Jan 29 16:43:06.396839 containerd[1525]: time="2025-01-29T16:43:06.396830496Z" level=warning msg="cleaning up after shim disconnected" id=f8cb81b35c1f212a5603a1517420834303738e1ff1a9166bb237f1ac68745901 namespace=k8s.io Jan 29 16:43:06.396839 containerd[1525]: time="2025-01-29T16:43:06.396840226Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:43:06.674664 kubelet[2846]: E0129 16:43:06.674490 2846 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 16:43:06.874568 kubelet[2846]: I0129 16:43:06.874478 2846 setters.go:600] "Node became not ready" node="ci-4230-0-0-6-6baf09a0d0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T16:43:06Z","lastTransitionTime":"2025-01-29T16:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 29 16:43:07.301725 containerd[1525]: time="2025-01-29T16:43:07.301665532Z" level=info msg="CreateContainer within sandbox \"f7049605c7b07f224c878a57bfdd91af0f3bae1077724a97918167b6f8aff13b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 16:43:07.326025 containerd[1525]: time="2025-01-29T16:43:07.325950383Z" level=info msg="CreateContainer within sandbox \"f7049605c7b07f224c878a57bfdd91af0f3bae1077724a97918167b6f8aff13b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"141ca19dbc3cb668fa7904cd21d996c4d75b2348e9e7a631662fe56f9c51e936\"" Jan 29 16:43:07.328856 containerd[1525]: time="2025-01-29T16:43:07.327994946Z" level=info msg="StartContainer for \"141ca19dbc3cb668fa7904cd21d996c4d75b2348e9e7a631662fe56f9c51e936\"" Jan 29 16:43:07.357991 systemd[1]: Started cri-containerd-141ca19dbc3cb668fa7904cd21d996c4d75b2348e9e7a631662fe56f9c51e936.scope - libcontainer container 141ca19dbc3cb668fa7904cd21d996c4d75b2348e9e7a631662fe56f9c51e936. 
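The setters.go line above is the kubelet flipping the node's Ready condition to False because the CNI plugin is not initialized yet; cilium-agent, the container that will initialize it, is only now being started. The same condition is readable through the API. A sketch with client-go, assuming it runs in-cluster with RBAC to get nodes; the node name is taken from the log:

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig() // assumes a pod with node read access
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(),
            "ci-4230-0-0-6-6baf09a0d0", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                fmt.Printf("Ready=%s reason=%s: %s\n", cond.Status, cond.Reason, cond.Message)
            }
        }
    }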
Jan 29 16:43:07.390960 containerd[1525]: time="2025-01-29T16:43:07.390916185Z" level=info msg="StartContainer for \"141ca19dbc3cb668fa7904cd21d996c4d75b2348e9e7a631662fe56f9c51e936\" returns successfully" Jan 29 16:43:07.864141 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 29 16:43:08.527610 kubelet[2846]: E0129 16:43:08.527550 2846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-wmzfj" podUID="e6baf5fc-0773-4e37-84c1-899321f1d3e4" Jan 29 16:43:10.527408 kubelet[2846]: E0129 16:43:10.527330 2846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-wmzfj" podUID="e6baf5fc-0773-4e37-84c1-899321f1d3e4" Jan 29 16:43:10.884751 systemd-networkd[1421]: lxc_health: Link UP Jan 29 16:43:10.889479 systemd-networkd[1421]: lxc_health: Gained carrier Jan 29 16:43:11.145303 kubelet[2846]: I0129 16:43:11.144747 2846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9v589" podStartSLOduration=9.144693579 podStartE2EDuration="9.144693579s" podCreationTimestamp="2025-01-29 16:43:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:43:08.318401557 +0000 UTC m=+356.898775906" watchObservedRunningTime="2025-01-29 16:43:11.144693579 +0000 UTC m=+359.725067928" Jan 29 16:43:11.541858 containerd[1525]: time="2025-01-29T16:43:11.540250872Z" level=info msg="StopPodSandbox for \"e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624\"" Jan 29 16:43:11.541858 containerd[1525]: time="2025-01-29T16:43:11.540368363Z" level=info msg="TearDown network for sandbox \"e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624\" successfully" Jan 29 16:43:11.541858 containerd[1525]: time="2025-01-29T16:43:11.540379666Z" level=info msg="StopPodSandbox for \"e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624\" returns successfully" Jan 29 16:43:11.555528 containerd[1525]: time="2025-01-29T16:43:11.555477765Z" level=info msg="RemovePodSandbox for \"e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624\"" Jan 29 16:43:11.555528 containerd[1525]: time="2025-01-29T16:43:11.555531060Z" level=info msg="Forcibly stopping sandbox \"e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624\"" Jan 29 16:43:11.555708 containerd[1525]: time="2025-01-29T16:43:11.555593263Z" level=info msg="TearDown network for sandbox \"e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624\" successfully" Jan 29 16:43:11.561527 containerd[1525]: time="2025-01-29T16:43:11.561483043Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:43:11.561613 containerd[1525]: time="2025-01-29T16:43:11.561540626Z" level=info msg="RemovePodSandbox \"e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624\" returns successfully" Jan 29 16:43:11.562094 containerd[1525]: time="2025-01-29T16:43:11.562063657Z" level=info msg="StopPodSandbox for \"4a8042925720d93cdd7eba888f2452bc07fa8872188fc542b42a0e3a8ef44886\"" Jan 29 16:43:11.562227 containerd[1525]: time="2025-01-29T16:43:11.562180588Z" level=info msg="TearDown network for sandbox \"4a8042925720d93cdd7eba888f2452bc07fa8872188fc542b42a0e3a8ef44886\" successfully" Jan 29 16:43:11.562227 containerd[1525]: time="2025-01-29T16:43:11.562218843Z" level=info msg="StopPodSandbox for \"4a8042925720d93cdd7eba888f2452bc07fa8872188fc542b42a0e3a8ef44886\" returns successfully" Jan 29 16:43:11.562551 containerd[1525]: time="2025-01-29T16:43:11.562522752Z" level=info msg="RemovePodSandbox for \"4a8042925720d93cdd7eba888f2452bc07fa8872188fc542b42a0e3a8ef44886\"" Jan 29 16:43:11.562551 containerd[1525]: time="2025-01-29T16:43:11.562546218Z" level=info msg="Forcibly stopping sandbox \"4a8042925720d93cdd7eba888f2452bc07fa8872188fc542b42a0e3a8ef44886\"" Jan 29 16:43:11.562670 containerd[1525]: time="2025-01-29T16:43:11.562587570Z" level=info msg="TearDown network for sandbox \"4a8042925720d93cdd7eba888f2452bc07fa8872188fc542b42a0e3a8ef44886\" successfully" Jan 29 16:43:11.568030 containerd[1525]: time="2025-01-29T16:43:11.567954558Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4a8042925720d93cdd7eba888f2452bc07fa8872188fc542b42a0e3a8ef44886\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:43:11.568092 containerd[1525]: time="2025-01-29T16:43:11.568045899Z" level=info msg="RemovePodSandbox \"4a8042925720d93cdd7eba888f2452bc07fa8872188fc542b42a0e3a8ef44886\" returns successfully" Jan 29 16:43:12.629719 systemd-networkd[1421]: lxc_health: Gained IPv6LL Jan 29 16:43:12.934531 systemd[1]: run-containerd-runc-k8s.io-141ca19dbc3cb668fa7904cd21d996c4d75b2348e9e7a631662fe56f9c51e936-runc.eT8aIQ.mount: Deactivated successfully. Jan 29 16:43:17.192960 kubelet[2846]: E0129 16:43:17.192682 2846 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:52400->127.0.0.1:39827: write tcp 127.0.0.1:52400->127.0.0.1:39827: write: broken pipe Jan 29 16:43:19.260546 systemd[1]: run-containerd-runc-k8s.io-141ca19dbc3cb668fa7904cd21d996c4d75b2348e9e7a631662fe56f9c51e936-runc.eCmhl3.mount: Deactivated successfully. Jan 29 16:43:19.307983 kubelet[2846]: E0129 16:43:19.307896 2846 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:52408->127.0.0.1:39827: write tcp 127.0.0.1:52408->127.0.0.1:39827: write: broken pipe Jan 29 16:43:21.373774 systemd[1]: run-containerd-runc-k8s.io-141ca19dbc3cb668fa7904cd21d996c4d75b2348e9e7a631662fe56f9c51e936-runc.20BUjH.mount: Deactivated successfully. Jan 29 16:43:21.877027 systemd[1]: Started sshd@91-142.132.231.50:22-72.240.125.133:41172.service - OpenSSH per-connection server daemon (72.240.125.133:41172). 
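The StopPodSandbox / TearDown / RemovePodSandbox sequences above are evidently the kubelet's sandbox garbage collection discarding the two sandboxes left over from the pods deleted at 16:42; the "Forcibly stopping" pass and the nil-podSandboxStatus warning show the remove being retried against already-gone state. A sketch of the same two RPCs against the CRI socket, using one of the sandbox IDs from the log:

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        id := "e3c92556af6d010215b343770f1f5690d99ae3aa517ae30db0b4426d357ee624"
        // Stop tears down networking; Remove deletes the sandbox itself.
        if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
            log.Println("stop:", err)
        }
        if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id}); err != nil {
            log.Println("remove:", err)
        }
    }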
Jan 29 16:43:22.658926 sshd[5969]: Invalid user ftpuser from 72.240.125.133 port 41172 Jan 29 16:43:22.802879 sshd[5969]: Received disconnect from 72.240.125.133 port 41172:11: Bye Bye [preauth] Jan 29 16:43:22.802879 sshd[5969]: Disconnected from invalid user ftpuser 72.240.125.133 port 41172 [preauth] Jan 29 16:43:22.805890 systemd[1]: sshd@91-142.132.231.50:22-72.240.125.133:41172.service: Deactivated successfully. Jan 29 16:43:27.679371 systemd[1]: run-containerd-runc-k8s.io-141ca19dbc3cb668fa7904cd21d996c4d75b2348e9e7a631662fe56f9c51e936-runc.XIR6xF.mount: Deactivated successfully. Jan 29 16:43:29.852013 systemd[1]: run-containerd-runc-k8s.io-141ca19dbc3cb668fa7904cd21d996c4d75b2348e9e7a631662fe56f9c51e936-runc.qwlo3S.mount: Deactivated successfully. Jan 29 16:43:34.090102 systemd[1]: run-containerd-runc-k8s.io-141ca19dbc3cb668fa7904cd21d996c4d75b2348e9e7a631662fe56f9c51e936-runc.EgyJQo.mount: Deactivated successfully. Jan 29 16:43:46.831302 systemd[1]: run-containerd-runc-k8s.io-141ca19dbc3cb668fa7904cd21d996c4d75b2348e9e7a631662fe56f9c51e936-runc.tWGtwP.mount: Deactivated successfully. Jan 29 16:43:52.653059 systemd[1]: Started sshd@92-142.132.231.50:22-152.32.133.149:53344.service - OpenSSH per-connection server daemon (152.32.133.149:53344). Jan 29 16:43:54.150422 sshd[6271]: Invalid user server from 152.32.133.149 port 53344 Jan 29 16:43:54.443422 sshd[6271]: Received disconnect from 152.32.133.149 port 53344:11: Bye Bye [preauth] Jan 29 16:43:54.443422 sshd[6271]: Disconnected from invalid user server 152.32.133.149 port 53344 [preauth] Jan 29 16:43:54.447558 systemd[1]: sshd@92-142.132.231.50:22-152.32.133.149:53344.service: Deactivated successfully. Jan 29 16:43:55.254232 systemd[1]: run-containerd-runc-k8s.io-141ca19dbc3cb668fa7904cd21d996c4d75b2348e9e7a631662fe56f9c51e936-runc.Mnw62C.mount: Deactivated successfully. Jan 29 16:43:59.512792 kubelet[2846]: E0129 16:43:59.512704 2846 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:36884->127.0.0.1:39827: write tcp 127.0.0.1:36884->127.0.0.1:39827: write: broken pipe Jan 29 16:44:01.584574 systemd[1]: run-containerd-runc-k8s.io-141ca19dbc3cb668fa7904cd21d996c4d75b2348e9e7a631662fe56f9c51e936-runc.xxSmCy.mount: Deactivated successfully. Jan 29 16:44:07.992018 sshd[5164]: Connection closed by 147.75.109.163 port 50530 Jan 29 16:44:07.993216 sshd-session[5103]: pam_unix(sshd:session): session closed for user core Jan 29 16:44:07.998510 systemd[1]: sshd@90-142.132.231.50:22-147.75.109.163:50530.service: Deactivated successfully. Jan 29 16:44:08.001418 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 16:44:08.003110 systemd-logind[1509]: Session 24 logged out. Waiting for processes to exit. Jan 29 16:44:08.004768 systemd-logind[1509]: Removed session 24. Jan 29 16:44:26.841088 systemd[1]: Started sshd@93-142.132.231.50:22-72.240.125.133:38480.service - OpenSSH per-connection server daemon (72.240.125.133:38480). Jan 29 16:44:27.222653 systemd[1]: cri-containerd-7a240faf10c2fa5b7a14e137e8f8e85d3efd0509206b066434a0c3c50c219395.scope: Deactivated successfully. Jan 29 16:44:27.223088 systemd[1]: cri-containerd-7a240faf10c2fa5b7a14e137e8f8e85d3efd0509206b066434a0c3c50c219395.scope: Consumed 6.593s CPU time, 70.4M memory peak, 20.3M read from disk. Jan 29 16:44:27.248924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a240faf10c2fa5b7a14e137e8f8e85d3efd0509206b066434a0c3c50c219395-rootfs.mount: Deactivated successfully. 
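The sshd@91 and sshd@92 lines are unrelated background noise: Internet scanners probing the common "ftpuser" and "server" accounts and disconnecting pre-auth. A sketch of extracting such probes from a journal dump like this one; the pattern is transcribed from the sshd lines themselves:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Matches lines like:
    //   sshd[5969]: Invalid user ftpuser from 72.240.125.133 port 41172
    var probe = regexp.MustCompile(`sshd\[\d+\]: Invalid user (\S+) from (\S+) port (\d+)`)

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
        for sc.Scan() {
            if m := probe.FindStringSubmatch(sc.Text()); m != nil {
                fmt.Printf("user=%s ip=%s port=%s\n", m[1], m[2], m[3])
            }
        }
    }

Fed this section, it would report ftpuser from 72.240.125.133 and server from 152.32.133.149 and 72.240.125.133.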
Jan 29 16:44:27.252999 containerd[1525]: time="2025-01-29T16:44:27.252912863Z" level=info msg="shim disconnected" id=7a240faf10c2fa5b7a14e137e8f8e85d3efd0509206b066434a0c3c50c219395 namespace=k8s.io Jan 29 16:44:27.253551 containerd[1525]: time="2025-01-29T16:44:27.253405721Z" level=warning msg="cleaning up after shim disconnected" id=7a240faf10c2fa5b7a14e137e8f8e85d3efd0509206b066434a0c3c50c219395 namespace=k8s.io Jan 29 16:44:27.253551 containerd[1525]: time="2025-01-29T16:44:27.253428797Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:44:27.467083 kubelet[2846]: I0129 16:44:27.466848 2846 scope.go:117] "RemoveContainer" containerID="7a240faf10c2fa5b7a14e137e8f8e85d3efd0509206b066434a0c3c50c219395" Jan 29 16:44:27.469482 containerd[1525]: time="2025-01-29T16:44:27.469434628Z" level=info msg="CreateContainer within sandbox \"c1362370f34165baba74077f086d241c4fb7e0cd8eb9b2d7bf43ea25f5e84c4c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 29 16:44:27.485273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1018058824.mount: Deactivated successfully. Jan 29 16:44:27.487738 containerd[1525]: time="2025-01-29T16:44:27.487700220Z" level=info msg="CreateContainer within sandbox \"c1362370f34165baba74077f086d241c4fb7e0cd8eb9b2d7bf43ea25f5e84c4c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e2bff64a59ba25283961032e37199b75617bfb25a0eeead9603ce41e70ccac16\"" Jan 29 16:44:27.488309 containerd[1525]: time="2025-01-29T16:44:27.488277853Z" level=info msg="StartContainer for \"e2bff64a59ba25283961032e37199b75617bfb25a0eeead9603ce41e70ccac16\"" Jan 29 16:44:27.516727 systemd[1]: Started cri-containerd-e2bff64a59ba25283961032e37199b75617bfb25a0eeead9603ce41e70ccac16.scope - libcontainer container e2bff64a59ba25283961032e37199b75617bfb25a0eeead9603ce41e70ccac16. Jan 29 16:44:27.557036 containerd[1525]: time="2025-01-29T16:44:27.556980044Z" level=info msg="StartContainer for \"e2bff64a59ba25283961032e37199b75617bfb25a0eeead9603ce41e70ccac16\" returns successfully" Jan 29 16:44:27.575478 kubelet[2846]: E0129 16:44:27.575436 2846 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:49336->10.0.0.2:2379: read: connection timed out" Jan 29 16:44:27.582658 systemd[1]: cri-containerd-f52daf43dec4c88f524936309bd5881573d623febd610a560b9d43d5d79173f4.scope: Deactivated successfully. Jan 29 16:44:27.583153 systemd[1]: cri-containerd-f52daf43dec4c88f524936309bd5881573d623febd610a560b9d43d5d79173f4.scope: Consumed 2.142s CPU time, 28.1M memory peak, 8.5M read from disk. 
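Here the kube-controller-manager container crashed after consuming 6.593s of CPU, and the kubelet recreated it inside the same sandbox with the Attempt counter bumped to 1; at the same moment lease renewal started failing on an etcd read timeout (10.0.0.3:49336->10.0.0.2:2379). The lease in question is a coordination.k8s.io object named after the node, renewed roughly every 10s. A sketch reading it with client-go, under an assumed admin kubeconfig path:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; adjust for the cluster at hand.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // Node leases live in kube-node-lease and are named after the node,
        // matching the URL in the "Failed to update lease" line below.
        lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(
            context.Background(), "ci-4230-0-0-6-6baf09a0d0", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        holder := ""
        if lease.Spec.HolderIdentity != nil {
            holder = *lease.Spec.HolderIdentity
        }
        fmt.Println("holder:", holder, "renewed:", lease.Spec.RenewTime)
    }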
Jan 29 16:44:27.609622 containerd[1525]: time="2025-01-29T16:44:27.609536658Z" level=info msg="shim disconnected" id=f52daf43dec4c88f524936309bd5881573d623febd610a560b9d43d5d79173f4 namespace=k8s.io Jan 29 16:44:27.609622 containerd[1525]: time="2025-01-29T16:44:27.609609810Z" level=warning msg="cleaning up after shim disconnected" id=f52daf43dec4c88f524936309bd5881573d623febd610a560b9d43d5d79173f4 namespace=k8s.io Jan 29 16:44:27.609622 containerd[1525]: time="2025-01-29T16:44:27.609617635Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:44:27.656719 sshd[6429]: Invalid user server from 72.240.125.133 port 38480 Jan 29 16:44:27.798711 sshd[6429]: Received disconnect from 72.240.125.133 port 38480:11: Bye Bye [preauth] Jan 29 16:44:27.798711 sshd[6429]: Disconnected from invalid user server 72.240.125.133 port 38480 [preauth] Jan 29 16:44:27.799366 systemd[1]: sshd@93-142.132.231.50:22-72.240.125.133:38480.service: Deactivated successfully. Jan 29 16:44:28.250764 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f52daf43dec4c88f524936309bd5881573d623febd610a560b9d43d5d79173f4-rootfs.mount: Deactivated successfully. Jan 29 16:44:28.471652 kubelet[2846]: I0129 16:44:28.471607 2846 scope.go:117] "RemoveContainer" containerID="f52daf43dec4c88f524936309bd5881573d623febd610a560b9d43d5d79173f4" Jan 29 16:44:28.473623 containerd[1525]: time="2025-01-29T16:44:28.473548921Z" level=info msg="CreateContainer within sandbox \"6d598aa82e805f03b265c89079ecdaf51385841005a5be994a234a6fecf4877e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 29 16:44:28.487833 containerd[1525]: time="2025-01-29T16:44:28.487749629Z" level=info msg="CreateContainer within sandbox \"6d598aa82e805f03b265c89079ecdaf51385841005a5be994a234a6fecf4877e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"564603c5fd4e8aacdf9fdd267dd93e20b02d5724177430a3383b2b6732ce55bc\"" Jan 29 16:44:28.488278 containerd[1525]: time="2025-01-29T16:44:28.488239632Z" level=info msg="StartContainer for \"564603c5fd4e8aacdf9fdd267dd93e20b02d5724177430a3383b2b6732ce55bc\"" Jan 29 16:44:28.531043 systemd[1]: Started cri-containerd-564603c5fd4e8aacdf9fdd267dd93e20b02d5724177430a3383b2b6732ce55bc.scope - libcontainer container 564603c5fd4e8aacdf9fdd267dd93e20b02d5724177430a3383b2b6732ce55bc. 
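kube-scheduler followed the same crash-and-recreate path as the controller-manager, and the Attempt field in ContainerMetadata is what distinguishes the replacement from the removed original. A sketch listing the containers of the scheduler's sandbox (ID from the CreateContainer line above); after the RemoveContainer above, only the Attempt:1 container remains:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // kube-scheduler's sandbox, from the CreateContainer line above.
        resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
            Filter: &runtimeapi.ContainerFilter{
                PodSandboxId: "6d598aa82e805f03b265c89079ecdaf51385841005a5be994a234a6fecf4877e",
            },
        })
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%s attempt=%d id=%s\n", c.Metadata.Name, c.Metadata.Attempt, c.Id[:12])
        }
    }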
Jan 29 16:44:28.571933 containerd[1525]: time="2025-01-29T16:44:28.571884477Z" level=info msg="StartContainer for \"564603c5fd4e8aacdf9fdd267dd93e20b02d5724177430a3383b2b6732ce55bc\" returns successfully" Jan 29 16:44:31.878742 kubelet[2846]: E0129 16:44:31.873325 2846 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout" event="&Event{ObjectMeta:{kube-apiserver-ci-4230-0-0-6-6baf09a0d0.181f378d744f2f9f kube-system 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4230-0-0-6-6baf09a0d0,UID:074ef48a87697e5d3bc73b4c972f1e05,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4230-0-0-6-6baf09a0d0,},FirstTimestamp:2025-01-29 16:44:21.869563807 +0000 UTC m=+430.449938156,LastTimestamp:2025-01-29 16:44:21.869563807 +0000 UTC m=+430.449938156,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-0-6-6baf09a0d0,}" Jan 29 16:44:37.576291 kubelet[2846]: E0129 16:44:37.575955 2846 controller.go:195] "Failed to update lease" err="Put \"https://142.132.231.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-6-6baf09a0d0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:44:37.943155 kubelet[2846]: I0129 16:44:37.943008 2846 status_manager.go:851] "Failed to get status for pod" podUID="918c54d808a697f1e5909038f67127de" pod="kube-system/kube-controller-manager-ci-4230-0-0-6-6baf09a0d0" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:49232->10.0.0.2:2379: read: connection timed out"
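The rejected event, the failed lease update, and the failed pod-status write all trace back to one symptom: the API server's reads from etcd at 10.0.0.2:2379 are timing out, and the failures propagate back to the kubelet as Unavailable RPC errors. A rough connectivity probe with the etcd v3 client, runnable from a host that can reach the etcd peer; certificate handling here is an assumption (real clusters require client certs, so treat this purely as a sketch):

    package main

    import (
        "context"
        "crypto/tls"
        "fmt"
        "log"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    func main() {
        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"https://10.0.0.2:2379"}, // address from the log
            DialTimeout: 5 * time.Second,
            // Illustration only: a real probe needs the cluster's client cert
            // and key instead of skipping verification.
            TLS: &tls.Config{InsecureSkipVerify: true},
        })
        if err != nil {
            log.Fatal(err)
        }
        defer cli.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        st, err := cli.Status(ctx, "https://10.0.0.2:2379")
        if err != nil {
            log.Fatal(err) // a hung peer surfaces here as the same class of timeout
        }
        fmt.Printf("etcd %s, dbSize=%d, leader=%x\n", st.Version, st.DbSize, st.Leader)
    }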