Jan 29 16:54:38.125786 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 14:51:22 -00 2025
Jan 29 16:54:38.125809 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:54:38.125820 kernel: BIOS-provided physical RAM map:
Jan 29 16:54:38.125827 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 16:54:38.125833 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 16:54:38.125840 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 16:54:38.125847 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Jan 29 16:54:38.125854 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Jan 29 16:54:38.125862 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 29 16:54:38.125869 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 29 16:54:38.125875 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 16:54:38.125882 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 16:54:38.125888 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 29 16:54:38.125895 kernel: NX (Execute Disable) protection: active
Jan 29 16:54:38.125905 kernel: APIC: Static calls initialized
Jan 29 16:54:38.125912 kernel: SMBIOS 3.0.0 present.
Jan 29 16:54:38.125919 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Jan 29 16:54:38.126106 kernel: Hypervisor detected: KVM
Jan 29 16:54:38.126113 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 16:54:38.126120 kernel: kvm-clock: using sched offset of 3596916295 cycles
Jan 29 16:54:38.126129 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 16:54:38.126139 kernel: tsc: Detected 2495.310 MHz processor
Jan 29 16:54:38.126149 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 16:54:38.126159 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 16:54:38.126172 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Jan 29 16:54:38.126181 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 16:54:38.126188 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 16:54:38.126195 kernel: Using GB pages for direct mapping
Jan 29 16:54:38.126202 kernel: ACPI: Early table checksum verification disabled
Jan 29 16:54:38.126209 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS )
Jan 29 16:54:38.126216 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:54:38.126224 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:54:38.126231 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:54:38.126240 kernel: ACPI: FACS 0x000000007CFE0000 000040
Jan 29 16:54:38.126247 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:54:38.126254 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:54:38.126261 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:54:38.126269 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:54:38.126276 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Jan 29 16:54:38.126283 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Jan 29 16:54:38.126295 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Jan 29 16:54:38.126303 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Jan 29 16:54:38.126310 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Jan 29 16:54:38.126317 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Jan 29 16:54:38.126324 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Jan 29 16:54:38.126331 kernel: No NUMA configuration found
Jan 29 16:54:38.126339 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Jan 29 16:54:38.126348 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Jan 29 16:54:38.126356 kernel: Zone ranges:
Jan 29 16:54:38.126363 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 16:54:38.126370 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Jan 29 16:54:38.126377 kernel: Normal empty
Jan 29 16:54:38.126385 kernel: Movable zone start for each node
Jan 29 16:54:38.126392 kernel: Early memory node ranges
Jan 29 16:54:38.126399 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 16:54:38.126407 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Jan 29 16:54:38.126416 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Jan 29 16:54:38.126423 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 16:54:38.126430 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 16:54:38.126438 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 29 16:54:38.126445 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 16:54:38.126452 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 16:54:38.126460 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 16:54:38.126467 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 16:54:38.126474 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 16:54:38.126484 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 16:54:38.126491 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 16:54:38.126498 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 16:54:38.126506 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 16:54:38.126513 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 29 16:54:38.126520 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 29 16:54:38.126527 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 16:54:38.126535 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 29 16:54:38.126542 kernel: Booting paravirtualized kernel on KVM
Jan 29 16:54:38.126550 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 16:54:38.126559 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 29 16:54:38.126567 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 29 16:54:38.126574 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 29 16:54:38.126582 kernel: pcpu-alloc: [0] 0 1
Jan 29 16:54:38.126589 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 29 16:54:38.126597 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:54:38.126605 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 16:54:38.126613 kernel: random: crng init done
Jan 29 16:54:38.126624 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 16:54:38.126632 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 29 16:54:38.126641 kernel: Fallback order for Node 0: 0
Jan 29 16:54:38.126649 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Jan 29 16:54:38.126656 kernel: Policy zone: DMA32
Jan 29 16:54:38.126664 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 16:54:38.126671 kernel: Memory: 1920004K/2047464K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43472K init, 1600K bss, 127200K reserved, 0K cma-reserved)
Jan 29 16:54:38.126679 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 16:54:38.126686 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 29 16:54:38.126695 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 16:54:38.126702 kernel: Dynamic Preempt: voluntary
Jan 29 16:54:38.126710 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 16:54:38.126718 kernel: rcu: RCU event tracing is enabled.
Jan 29 16:54:38.126725 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 16:54:38.126733 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 16:54:38.126740 kernel: Rude variant of Tasks RCU enabled.
Jan 29 16:54:38.126747 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 16:54:38.126755 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 16:54:38.126765 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 16:54:38.126772 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 29 16:54:38.126779 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 16:54:38.126787 kernel: Console: colour VGA+ 80x25
Jan 29 16:54:38.126794 kernel: printk: console [tty0] enabled
Jan 29 16:54:38.126801 kernel: printk: console [ttyS0] enabled
Jan 29 16:54:38.126808 kernel: ACPI: Core revision 20230628
Jan 29 16:54:38.126816 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 29 16:54:38.126824 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 16:54:38.126833 kernel: x2apic enabled
Jan 29 16:54:38.126840 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 16:54:38.126848 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 29 16:54:38.126855 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 29 16:54:38.126863 kernel: Calibrating delay loop (skipped) preset value.. 4990.62 BogoMIPS (lpj=2495310)
Jan 29 16:54:38.126870 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 29 16:54:38.126877 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 29 16:54:38.126885 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 29 16:54:38.126902 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 16:54:38.126909 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 16:54:38.126917 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 16:54:38.129192 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 16:54:38.129208 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 29 16:54:38.129215 kernel: RETBleed: Mitigation: untrained return thunk
Jan 29 16:54:38.129223 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 16:54:38.129231 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 16:54:38.129239 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 29 16:54:38.129250 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 29 16:54:38.129258 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 29 16:54:38.129265 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 16:54:38.129273 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 16:54:38.129281 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 16:54:38.129288 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 16:54:38.129296 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 29 16:54:38.129304 kernel: Freeing SMP alternatives memory: 32K
Jan 29 16:54:38.129314 kernel: pid_max: default: 32768 minimum: 301
Jan 29 16:54:38.129321 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 16:54:38.129329 kernel: landlock: Up and running.
Jan 29 16:54:38.129337 kernel: SELinux: Initializing.
Jan 29 16:54:38.129344 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 16:54:38.129352 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 16:54:38.129360 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 29 16:54:38.129367 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:54:38.129375 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:54:38.129385 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:54:38.129393 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 29 16:54:38.129400 kernel: ... version: 0
Jan 29 16:54:38.129408 kernel: ... bit width: 48
Jan 29 16:54:38.129415 kernel: ... generic registers: 6
Jan 29 16:54:38.129423 kernel: ... value mask: 0000ffffffffffff
Jan 29 16:54:38.129430 kernel: ... max period: 00007fffffffffff
Jan 29 16:54:38.129438 kernel: ... fixed-purpose events: 0
Jan 29 16:54:38.129445 kernel: ... event mask: 000000000000003f
Jan 29 16:54:38.129455 kernel: signal: max sigframe size: 1776
Jan 29 16:54:38.129462 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 16:54:38.129470 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 16:54:38.129478 kernel: smp: Bringing up secondary CPUs ...
Jan 29 16:54:38.129486 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 16:54:38.129493 kernel: .... node #0, CPUs: #1
Jan 29 16:54:38.129501 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 16:54:38.129508 kernel: smpboot: Max logical packages: 1
Jan 29 16:54:38.129516 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS)
Jan 29 16:54:38.129526 kernel: devtmpfs: initialized
Jan 29 16:54:38.129533 kernel: x86/mm: Memory block size: 128MB
Jan 29 16:54:38.129541 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 16:54:38.129549 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 16:54:38.129556 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 16:54:38.129564 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 16:54:38.129572 kernel: audit: initializing netlink subsys (disabled)
Jan 29 16:54:38.129579 kernel: audit: type=2000 audit(1738169676.548:1): state=initialized audit_enabled=0 res=1
Jan 29 16:54:38.129587 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 16:54:38.129597 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 16:54:38.129604 kernel: cpuidle: using governor menu
Jan 29 16:54:38.129612 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 16:54:38.129619 kernel: dca service started, version 1.12.1
Jan 29 16:54:38.129627 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 29 16:54:38.129635 kernel: PCI: Using configuration type 1 for base access
Jan 29 16:54:38.129642 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 16:54:38.129650 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 16:54:38.129658 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 16:54:38.129668 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 16:54:38.129675 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 16:54:38.129683 kernel: ACPI: Added _OSI(Module Device)
Jan 29 16:54:38.129690 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 16:54:38.129698 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 16:54:38.129705 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 16:54:38.129713 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 16:54:38.129721 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 16:54:38.129728 kernel: ACPI: Interpreter enabled
Jan 29 16:54:38.129738 kernel: ACPI: PM: (supports S0 S5)
Jan 29 16:54:38.129745 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 16:54:38.129753 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 16:54:38.129761 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 16:54:38.129768 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 29 16:54:38.129776 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 16:54:38.130003 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 16:54:38.130139 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 29 16:54:38.130267 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 29 16:54:38.130277 kernel: PCI host bridge to bus 0000:00
Jan 29 16:54:38.130411 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 16:54:38.130524 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 16:54:38.130636 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 16:54:38.130746 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Jan 29 16:54:38.130857 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 16:54:38.131778 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 29 16:54:38.131900 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 16:54:38.132061 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 29 16:54:38.132197 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Jan 29 16:54:38.132320 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Jan 29 16:54:38.132441 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Jan 29 16:54:38.132570 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Jan 29 16:54:38.132706 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Jan 29 16:54:38.132848 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 16:54:38.132996 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 29 16:54:38.133121 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Jan 29 16:54:38.133252 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 29 16:54:38.133374 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Jan 29 16:54:38.133515 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 29 16:54:38.133638 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Jan 29 16:54:38.133767 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 29 16:54:38.133890 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Jan 29 16:54:38.135017 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 29 16:54:38.135153 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Jan 29 16:54:38.135295 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 29 16:54:38.135419 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Jan 29 16:54:38.135555 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 29 16:54:38.135683 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Jan 29 16:54:38.135813 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 29 16:54:38.135963 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Jan 29 16:54:38.136101 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 29 16:54:38.136224 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Jan 29 16:54:38.136359 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 29 16:54:38.136483 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 29 16:54:38.136615 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 29 16:54:38.136739 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Jan 29 16:54:38.136911 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Jan 29 16:54:38.137188 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 29 16:54:38.137319 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 29 16:54:38.137460 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 29 16:54:38.137588 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Jan 29 16:54:38.137713 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 29 16:54:38.137838 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Jan 29 16:54:38.142009 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 29 16:54:38.142141 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 29 16:54:38.142260 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Jan 29 16:54:38.142403 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 29 16:54:38.142531 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Jan 29 16:54:38.142659 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 29 16:54:38.142793 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 29 16:54:38.142913 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 16:54:38.143107 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 29 16:54:38.143233 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Jan 29 16:54:38.143359 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Jan 29 16:54:38.143481 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 29 16:54:38.143602 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 29 16:54:38.143728 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 16:54:38.143862 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 29 16:54:38.144018 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 29 16:54:38.144144 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 29 16:54:38.144268 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 29 16:54:38.144389 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 16:54:38.144526 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 29 16:54:38.144658 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
Jan 29 16:54:38.144784 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Jan 29 16:54:38.146542 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 29 16:54:38.146686 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 29 16:54:38.146808 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 16:54:38.147000 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 29 16:54:38.147137 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Jan 29 16:54:38.147265 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Jan 29 16:54:38.147474 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 29 16:54:38.147600 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Jan 29 16:54:38.147729 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 16:54:38.147740 kernel: acpiphp: Slot [0] registered
Jan 29 16:54:38.147873 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 29 16:54:38.148096 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Jan 29 16:54:38.148223 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Jan 29 16:54:38.148352 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Jan 29 16:54:38.148473 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 29 16:54:38.148591 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 29 16:54:38.148707 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 16:54:38.148717 kernel: acpiphp: Slot [0-2] registered
Jan 29 16:54:38.148848 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 29 16:54:38.148982 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Jan 29 16:54:38.149102 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 16:54:38.149112 kernel: acpiphp: Slot [0-3] registered
Jan 29 16:54:38.149236 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 29 16:54:38.149354 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 29 16:54:38.149473 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 16:54:38.149483 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 16:54:38.149491 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 16:54:38.149500 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 16:54:38.149508 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 16:54:38.149516 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 29 16:54:38.149527 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 29 16:54:38.149534 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 29 16:54:38.149542 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 29 16:54:38.149550 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 29 16:54:38.149558 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 29 16:54:38.149566 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 29 16:54:38.149574 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 29 16:54:38.149582 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 29 16:54:38.149590 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 29 16:54:38.149600 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 29 16:54:38.149608 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 29 16:54:38.149616 kernel: iommu: Default domain type: Translated
Jan 29 16:54:38.149624 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 16:54:38.149632 kernel: PCI: Using ACPI for IRQ routing
Jan 29 16:54:38.149640 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 16:54:38.149649 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 16:54:38.149657 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Jan 29 16:54:38.149780 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 29 16:54:38.149904 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 29 16:54:38.150044 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 16:54:38.150056 kernel: vgaarb: loaded
Jan 29 16:54:38.150065 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 29 16:54:38.150073 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 29 16:54:38.150081 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 16:54:38.150088 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 16:54:38.150097 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 16:54:38.150105 kernel: pnp: PnP ACPI init
Jan 29 16:54:38.150238 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 29 16:54:38.150249 kernel: pnp: PnP ACPI: found 5 devices
Jan 29 16:54:38.150257 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 16:54:38.150265 kernel: NET: Registered PF_INET protocol family
Jan 29 16:54:38.150273 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 16:54:38.150281 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 29 16:54:38.150289 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 16:54:38.150297 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 16:54:38.150308 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 29 16:54:38.150316 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 29 16:54:38.150324 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 16:54:38.150332 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 16:54:38.150340 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 16:54:38.150348 kernel: NET: Registered PF_XDP protocol family
Jan 29 16:54:38.150468 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 29 16:54:38.150588 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 29 16:54:38.150711 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 29 16:54:38.150831 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Jan 29 16:54:38.150974 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Jan 29 16:54:38.151095 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Jan 29 16:54:38.151213 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 29 16:54:38.151332 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 29 16:54:38.151451 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Jan 29 16:54:38.151576 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 29 16:54:38.151701 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 29 16:54:38.151820 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 16:54:38.151980 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 29 16:54:38.152103 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 29 16:54:38.152228 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 16:54:38.152348 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 29 16:54:38.152473 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 29 16:54:38.152610 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 16:54:38.152732 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 29 16:54:38.152867 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 29 16:54:38.153217 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 16:54:38.153336 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 29 16:54:38.153455 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Jan 29 16:54:38.153574 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 16:54:38.153691 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 29 16:54:38.153810 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Jan 29 16:54:38.153963 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 29 16:54:38.154091 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 16:54:38.154222 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 29 16:54:38.154342 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Jan 29 16:54:38.154461 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Jan 29 16:54:38.154585 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 16:54:38.154712 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 29 16:54:38.154831 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Jan 29 16:54:38.155043 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 29 16:54:38.155164 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 16:54:38.155280 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 16:54:38.155394 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 16:54:38.155503 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 16:54:38.155614 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Jan 29 16:54:38.155723 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 29 16:54:38.155832 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 29 16:54:38.156191 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 29 16:54:38.156312 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Jan 29 16:54:38.156440 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 29 16:54:38.156556 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 16:54:38.156684 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 29 16:54:38.156799 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 16:54:38.156990 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 29 16:54:38.157285 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 16:54:38.157621 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 29 16:54:38.157965 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 16:54:38.158095 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 29 16:54:38.158268 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 16:54:38.158411 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Jan 29 16:54:38.158535 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 29 16:54:38.158660 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 16:54:38.158800 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Jan 29 16:54:38.159014 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Jan 29 16:54:38.159154 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 16:54:38.159291 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Jan 29 16:54:38.159416 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 29 16:54:38.159539 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 16:54:38.159557 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 29 16:54:38.159569 kernel: PCI: CLS 0 bytes, default 64
Jan 29 16:54:38.159580 kernel: Initialise system trusted keyrings
Jan 29 16:54:38.159592 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 29 16:54:38.159600 kernel: Key type asymmetric registered
Jan 29 16:54:38.159608 kernel: Asymmetric key parser 'x509' registered
Jan 29 16:54:38.159617 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 16:54:38.159625 kernel: io scheduler mq-deadline registered
Jan 29 16:54:38.159633 kernel: io scheduler kyber registered
Jan 29 16:54:38.159641 kernel: io scheduler bfq registered
Jan 29 16:54:38.159783 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Jan 29 16:54:38.159911 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Jan 29 16:54:38.160075 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Jan 29 16:54:38.160265 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Jan 29 16:54:38.160410 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jan 29 16:54:38.160534 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jan 29 16:54:38.160659 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jan 29 16:54:38.160787 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jan 29 16:54:38.161048 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jan 29 16:54:38.161185 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jan 29 16:54:38.161316 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jan 29 16:54:38.161437 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jan 29 16:54:38.161560 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jan 29 16:54:38.161681 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jan 29 16:54:38.161804 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jan 29 16:54:38.162115 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jan 29 16:54:38.162136 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 29 16:54:38.162263 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Jan 29 16:54:38.162385 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Jan 29 16:54:38.162396 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 16:54:38.162404 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Jan 29 16:54:38.162413 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 16:54:38.162421 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 16:54:38.162430 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 16:54:38.162439 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 16:54:38.162451 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 16:54:38.162459 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 16:54:38.162600 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 29 16:54:38.162716 kernel: rtc_cmos 00:03: registered as rtc0
Jan 29 16:54:38.162831 kernel: rtc_cmos 00:03: setting system clock to 2025-01-29T16:54:37 UTC (1738169677)
Jan 29 16:54:38.163046 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 29 16:54:38.163059 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 29 16:54:38.163071 kernel: NET: Registered PF_INET6 protocol family
Jan 29 16:54:38.163082 kernel: Segment Routing with IPv6
Jan 29 16:54:38.163090 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 16:54:38.163098 kernel: NET: Registered PF_PACKET protocol family
Jan 29 16:54:38.163107 kernel: Key type dns_resolver registered
Jan 29 16:54:38.163116 kernel: IPI shorthand broadcast: enabled
Jan 29 16:54:38.163125 kernel: sched_clock: Marking stable (1370011298, 148482898)->(1608611031, -90116835)
Jan 29 16:54:38.163133 kernel: registered taskstats version 1
Jan 29 16:54:38.163141 kernel: Loading compiled-in X.509 certificates
Jan 29 16:54:38.163149 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 68134fdf6dac3690da6e3bc9c22b042a5c364340'
Jan 29 16:54:38.163160 kernel: Key type .fscrypt registered
Jan 29 16:54:38.163168 kernel: Key type fscrypt-provisioning registered
Jan 29 16:54:38.163177 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 16:54:38.163185 kernel: ima: Allocated hash algorithm: sha1
Jan 29 16:54:38.163194 kernel: ima: No architecture policies found
Jan 29 16:54:38.163202 kernel: clk: Disabling unused clocks
Jan 29 16:54:38.163214 kernel: Freeing unused kernel image (initmem) memory: 43472K
Jan 29 16:54:38.163226 kernel: Write protecting the kernel read-only data: 38912k
Jan 29 16:54:38.163239 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K
Jan 29 16:54:38.163249 kernel: Run /init as init process
Jan 29 16:54:38.163257 kernel: with arguments:
Jan 29 16:54:38.163266 kernel: /init
Jan 29 16:54:38.163273 kernel: with environment:
Jan 29 16:54:38.163281 kernel: HOME=/
Jan 29 16:54:38.163289 kernel: TERM=linux
Jan 29 16:54:38.163297 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 16:54:38.163307 systemd[1]: Successfully made /usr/ read-only.
Jan 29 16:54:38.163321 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:54:38.163330 systemd[1]: Detected virtualization kvm.
Jan 29 16:54:38.163339 systemd[1]: Detected architecture x86-64.
Jan 29 16:54:38.163347 systemd[1]: Running in initrd.
Jan 29 16:54:38.163356 systemd[1]: No hostname configured, using default hostname.
Jan 29 16:54:38.163364 systemd[1]: Hostname set to .
Jan 29 16:54:38.163373 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 16:54:38.163382 systemd[1]: Queued start job for default target initrd.target.
Jan 29 16:54:38.163393 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:54:38.163402 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:54:38.163411 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 16:54:38.163420 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:54:38.163429 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 16:54:38.163439 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 16:54:38.163451 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 16:54:38.163460 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 16:54:38.163470 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:54:38.163478 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:54:38.163487 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:54:38.163496 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:54:38.163504 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:54:38.163513 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:54:38.163521 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:54:38.163532 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:54:38.163541 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 16:54:38.163550 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 29 16:54:38.163558 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:54:38.163567 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:54:38.163576 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:54:38.163584 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:54:38.163593 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 16:54:38.163604 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:54:38.163613 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 16:54:38.163621 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 16:54:38.163630 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:54:38.163638 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:54:38.163647 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:54:38.163655 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 16:54:38.163664 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:54:38.163675 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 16:54:38.163723 systemd-journald[188]: Collecting audit messages is disabled.
Jan 29 16:54:38.163748 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 16:54:38.163758 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:54:38.163767 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 16:54:38.163777 systemd-journald[188]: Journal started
Jan 29 16:54:38.163796 systemd-journald[188]: Runtime Journal (/run/log/journal/4ae408f726a641859cf3e09c7eabacbe) is 4.8M, max 38.3M, 33.5M free.
Jan 29 16:54:38.111447 systemd-modules-load[189]: Inserted module 'overlay'
Jan 29 16:54:38.192871 kernel: Bridge firewalling registered
Jan 29 16:54:38.192895 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:54:38.164677 systemd-modules-load[189]: Inserted module 'br_netfilter'
Jan 29 16:54:38.193573 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:54:38.194396 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:54:38.203099 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:54:38.205415 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:54:38.209053 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:54:38.210250 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:54:38.225983 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:54:38.235350 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:54:38.236803 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:54:38.237580 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:54:38.243113 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 16:54:38.247079 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:54:38.257403 dracut-cmdline[223]: dracut-dracut-053
Jan 29 16:54:38.260794 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:54:38.295470 systemd-resolved[225]: Positive Trust Anchors:
Jan 29 16:54:38.296267 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:54:38.296300 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:54:38.302426 systemd-resolved[225]: Defaulting to hostname 'linux'.
Jan 29 16:54:38.304075 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:54:38.305190 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:54:38.340029 kernel: SCSI subsystem initialized
Jan 29 16:54:38.350973 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 16:54:38.372970 kernel: iscsi: registered transport (tcp)
Jan 29 16:54:38.396004 kernel: iscsi: registered transport (qla4xxx)
Jan 29 16:54:38.396145 kernel: QLogic iSCSI HBA Driver
Jan 29 16:54:38.498913 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:54:38.507148 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 16:54:38.573188 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 16:54:38.573306 kernel: device-mapper: uevent: version 1.0.3
Jan 29 16:54:38.575315 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 16:54:38.648043 kernel: raid6: avx2x4 gen() 11758 MB/s
Jan 29 16:54:38.665993 kernel: raid6: avx2x2 gen() 16062 MB/s
Jan 29 16:54:38.684045 kernel: raid6: avx2x1 gen() 14269 MB/s
Jan 29 16:54:38.684175 kernel: raid6: using algorithm avx2x2 gen() 16062 MB/s
Jan 29 16:54:38.703048 kernel: raid6: .... xor() 19856 MB/s, rmw enabled
Jan 29 16:54:38.703177 kernel: raid6: using avx2x2 recovery algorithm
Jan 29 16:54:38.724975 kernel: xor: automatically using best checksumming function avx
Jan 29 16:54:38.880008 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 16:54:38.902576 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:54:38.909250 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:54:38.952185 systemd-udevd[408]: Using default interface naming scheme 'v255'.
Jan 29 16:54:38.959086 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:54:38.970171 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 16:54:38.987819 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
Jan 29 16:54:39.036715 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:54:39.044195 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:54:39.145868 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:54:39.158286 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 16:54:39.176870 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:54:39.179826 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:54:39.180306 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:54:39.180749 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:54:39.189383 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 16:54:39.204828 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:54:39.251984 kernel: ACPI: bus type USB registered
Jan 29 16:54:39.255134 kernel: usbcore: registered new interface driver usbfs
Jan 29 16:54:39.255154 kernel: usbcore: registered new interface driver hub
Jan 29 16:54:39.255164 kernel: usbcore: registered new device driver usb
Jan 29 16:54:39.302965 kernel: scsi host0: Virtio SCSI HBA
Jan 29 16:54:39.304942 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 16:54:39.321024 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 29 16:54:39.338089 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:54:39.339036 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:54:39.340293 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:54:39.341537 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:54:39.341658 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:54:39.342654 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:54:39.351333 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:54:39.357014 kernel: libata version 3.00 loaded.
Jan 29 16:54:39.401977 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 29 16:54:39.412071 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 29 16:54:39.412236 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 29 16:54:39.412386 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 29 16:54:39.412529 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 29 16:54:39.412677 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 29 16:54:39.412901 kernel: hub 1-0:1.0: USB hub found
Jan 29 16:54:39.413151 kernel: hub 1-0:1.0: 4 ports detected
Jan 29 16:54:39.413317 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 29 16:54:39.413504 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 16:54:39.413516 kernel: AES CTR mode by8 optimization enabled
Jan 29 16:54:39.413527 kernel: hub 2-0:1.0: USB hub found
Jan 29 16:54:39.413706 kernel: hub 2-0:1.0: 4 ports detected
Jan 29 16:54:39.413864 kernel: ahci 0000:00:1f.2: version 3.0
Jan 29 16:54:39.429379 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 29 16:54:39.429400 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 29 16:54:39.429557 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 29 16:54:39.432334 kernel: scsi host1: ahci
Jan 29 16:54:39.432505 kernel: scsi host2: ahci
Jan 29 16:54:39.432661 kernel: scsi host3: ahci
Jan 29 16:54:39.432816 kernel: scsi host4: ahci
Jan 29 16:54:39.433077 kernel: scsi host5: ahci
Jan 29 16:54:39.433246 kernel: scsi host6: ahci
Jan 29 16:54:39.433401 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 51
Jan 29 16:54:39.433413 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 51
Jan 29 16:54:39.433423 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 51
Jan 29 16:54:39.433433 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 51
Jan 29 16:54:39.433443 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 51
Jan 29 16:54:39.433458 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 51
Jan 29 16:54:39.435018 kernel: sd 0:0:0:0: Power-on or device reset occurred
Jan 29 16:54:39.444009 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jan 29 16:54:39.444177 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 29 16:54:39.444329 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Jan 29 16:54:39.444513 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 29 16:54:39.444669 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 16:54:39.444680 kernel: GPT:17805311 != 80003071
Jan 29 16:54:39.444697 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 16:54:39.444707 kernel: GPT:17805311 != 80003071
Jan 29 16:54:39.444716 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 16:54:39.444727 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 16:54:39.444737 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 29 16:54:39.497019 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:54:39.503148 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:54:39.543090 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:54:39.653181 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 29 16:54:39.738674 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 29 16:54:39.738836 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 29 16:54:39.738892 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 29 16:54:39.750001 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 29 16:54:39.752463 kernel: ata1.00: applying bridge limits
Jan 29 16:54:39.756509 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 29 16:54:39.759151 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 29 16:54:39.759185 kernel: ata1.00: configured for UDMA/100
Jan 29 16:54:39.764003 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 29 16:54:39.769027 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 29 16:54:39.831993 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 16:54:39.845233 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 29 16:54:39.870226 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 29 16:54:39.870253 kernel: usbcore: registered new interface driver usbhid
Jan 29 16:54:39.870272 kernel: usbhid: USB HID core driver
Jan 29 16:54:39.870289 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Jan 29 16:54:39.880954 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Jan 29 16:54:39.880993 kernel: BTRFS: device fsid b756ea5d-2d08-456f-8231-a684aa2555c3 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (466)
Jan 29 16:54:39.896952 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (461)
Jan 29 16:54:39.901965 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Jan 29 16:54:39.920159 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 29 16:54:39.937773 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 29 16:54:39.946814 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 29 16:54:39.947998 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 29 16:54:39.960135 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 29 16:54:39.968150 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 16:54:39.987167 disk-uuid[576]: Primary Header is updated.
Jan 29 16:54:39.987167 disk-uuid[576]: Secondary Entries is updated.
Jan 29 16:54:39.987167 disk-uuid[576]: Secondary Header is updated.
Jan 29 16:54:40.004003 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 16:54:41.025043 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 16:54:41.027395 disk-uuid[577]: The operation has completed successfully.
Jan 29 16:54:41.139038 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 16:54:41.139210 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 16:54:41.164151 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 16:54:41.170862 sh[593]: Success
Jan 29 16:54:41.187010 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 29 16:54:41.259842 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 16:54:41.275057 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 16:54:41.276589 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 16:54:41.315288 kernel: BTRFS info (device dm-0): first mount of filesystem b756ea5d-2d08-456f-8231-a684aa2555c3 Jan 29 16:54:41.315387 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:54:41.318132 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 16:54:41.322436 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 16:54:41.322472 kernel: BTRFS info (device dm-0): using free space tree Jan 29 16:54:41.338001 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 16:54:41.342323 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 16:54:41.344877 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 16:54:41.352349 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 16:54:41.367305 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 16:54:41.398460 kernel: BTRFS info (device sda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:54:41.398539 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:54:41.402280 kernel: BTRFS info (device sda6): using free space tree Jan 29 16:54:41.411106 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 16:54:41.411157 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 16:54:41.430228 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 16:54:41.434654 kernel: BTRFS info (device sda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:54:41.442510 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 16:54:41.453226 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 16:54:41.558975 ignition[704]: Ignition 2.20.0 Jan 29 16:54:41.558997 ignition[704]: Stage: fetch-offline Jan 29 16:54:41.559039 ignition[704]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:54:41.559049 ignition[704]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 16:54:41.559169 ignition[704]: parsed url from cmdline: "" Jan 29 16:54:41.561619 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 16:54:41.559173 ignition[704]: no config URL provided Jan 29 16:54:41.562683 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 16:54:41.559179 ignition[704]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 16:54:41.559188 ignition[704]: no config at "/usr/lib/ignition/user.ign" Jan 29 16:54:41.559196 ignition[704]: failed to fetch config: resource requires networking Jan 29 16:54:41.559368 ignition[704]: Ignition finished successfully Jan 29 16:54:41.572285 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 16:54:41.601866 systemd-networkd[780]: lo: Link UP Jan 29 16:54:41.601876 systemd-networkd[780]: lo: Gained carrier Jan 29 16:54:41.604745 systemd-networkd[780]: Enumeration completed Jan 29 16:54:41.604912 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 29 16:54:41.605253 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:54:41.605257 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:54:41.606300 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:54:41.606304 systemd-networkd[780]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:54:41.606489 systemd[1]: Reached target network.target - Network. Jan 29 16:54:41.607415 systemd-networkd[780]: eth0: Link UP Jan 29 16:54:41.607419 systemd-networkd[780]: eth0: Gained carrier Jan 29 16:54:41.607425 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:54:41.612134 systemd-networkd[780]: eth1: Link UP Jan 29 16:54:41.612138 systemd-networkd[780]: eth1: Gained carrier Jan 29 16:54:41.612145 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:54:41.613198 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 29 16:54:41.627841 ignition[783]: Ignition 2.20.0 Jan 29 16:54:41.627854 ignition[783]: Stage: fetch Jan 29 16:54:41.628052 ignition[783]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:54:41.628063 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 16:54:41.628152 ignition[783]: parsed url from cmdline: "" Jan 29 16:54:41.628157 ignition[783]: no config URL provided Jan 29 16:54:41.628164 ignition[783]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 16:54:41.628178 ignition[783]: no config at "/usr/lib/ignition/user.ign" Jan 29 16:54:41.628206 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 29 16:54:41.628376 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 29 16:54:41.655997 systemd-networkd[780]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 16:54:41.739026 systemd-networkd[780]: eth0: DHCPv4 address 168.119.110.78/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 29 16:54:41.828646 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 29 16:54:41.842005 ignition[783]: GET result: OK Jan 29 16:54:41.842672 ignition[783]: parsing config with SHA512: 444b5858c0127fab72fef300d20413a84d0dcdfe8cfaf01aacdf6de3ed663df7a164485e133d3cbecff3f701eceb4a1a8a67330db01a27bf964f9584b1852bbe Jan 29 16:54:41.851603 unknown[783]: fetched base config from "system" Jan 29 16:54:41.853183 ignition[783]: fetch: fetch complete Jan 29 16:54:41.851636 unknown[783]: fetched base config from "system" Jan 29 16:54:41.853199 ignition[783]: fetch: fetch passed Jan 29 16:54:41.851653 unknown[783]: fetched user config from "hetzner" Jan 29 16:54:41.853318 ignition[783]: Ignition finished successfully Jan 29 16:54:41.859850 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 16:54:41.868306 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 29 16:54:41.911877 ignition[791]: Ignition 2.20.0 Jan 29 16:54:41.911904 ignition[791]: Stage: kargs Jan 29 16:54:41.912365 ignition[791]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:54:41.912399 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 16:54:41.915013 ignition[791]: kargs: kargs passed Jan 29 16:54:41.919385 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 16:54:41.915122 ignition[791]: Ignition finished successfully Jan 29 16:54:41.934185 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 16:54:41.970740 ignition[797]: Ignition 2.20.0 Jan 29 16:54:41.970755 ignition[797]: Stage: disks Jan 29 16:54:41.971082 ignition[797]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:54:41.971105 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 16:54:41.973070 ignition[797]: disks: disks passed Jan 29 16:54:41.976225 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 16:54:41.973163 ignition[797]: Ignition finished successfully Jan 29 16:54:41.978739 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 16:54:41.980038 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 16:54:41.982091 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 16:54:41.984446 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 16:54:41.986855 systemd[1]: Reached target basic.target - Basic System. Jan 29 16:54:42.000224 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 16:54:42.029084 systemd-fsck[805]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 29 16:54:42.033456 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 16:54:42.318164 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 16:54:42.466998 kernel: EXT4-fs (sda9): mounted filesystem 93ea9bb6-d6ba-4a18-a828-f0002683a7b4 r/w with ordered data mode. Quota mode: none. Jan 29 16:54:42.468027 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 16:54:42.469059 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 16:54:42.480101 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 16:54:42.483322 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 16:54:42.489181 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 29 16:54:42.491332 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 16:54:42.492307 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 16:54:42.499947 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (813) Jan 29 16:54:42.502959 kernel: BTRFS info (device sda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:54:42.502553 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 16:54:42.507592 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:54:42.507615 kernel: BTRFS info (device sda6): using free space tree Jan 29 16:54:42.511136 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 29 16:54:42.516974 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 16:54:42.517011 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 16:54:42.528146 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 16:54:42.580398 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 16:54:42.587566 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Jan 29 16:54:42.595968 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 16:54:42.597045 coreos-metadata[815]: Jan 29 16:54:42.596 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 29 16:54:42.598737 coreos-metadata[815]: Jan 29 16:54:42.597 INFO Fetch successful Jan 29 16:54:42.599259 coreos-metadata[815]: Jan 29 16:54:42.599 INFO wrote hostname ci-4230-0-0-c-e7d65f4211 to /sysroot/etc/hostname Jan 29 16:54:42.601542 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 16:54:42.603669 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 16:54:42.706301 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 16:54:42.713025 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 16:54:42.718084 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 16:54:42.724534 kernel: BTRFS info (device sda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:54:42.747978 ignition[930]: INFO : Ignition 2.20.0 Jan 29 16:54:42.747978 ignition[930]: INFO : Stage: mount Jan 29 16:54:42.747978 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:54:42.747978 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 16:54:42.753011 ignition[930]: INFO : mount: mount passed Jan 29 16:54:42.753011 ignition[930]: INFO : Ignition finished successfully Jan 29 16:54:42.751457 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 16:54:42.759042 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 16:54:42.760641 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 16:54:43.087158 systemd-networkd[780]: eth0: Gained IPv6LL Jan 29 16:54:43.313471 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 16:54:43.321206 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 16:54:43.353981 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (942) Jan 29 16:54:43.359897 kernel: BTRFS info (device sda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:54:43.359981 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:54:43.364642 kernel: BTRFS info (device sda6): using free space tree Jan 29 16:54:43.374913 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 16:54:43.375062 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 16:54:43.380597 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 16:54:43.425004 ignition[959]: INFO : Ignition 2.20.0 Jan 29 16:54:43.425004 ignition[959]: INFO : Stage: files Jan 29 16:54:43.428321 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:54:43.428321 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 16:54:43.428321 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Jan 29 16:54:43.434031 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 16:54:43.434031 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 16:54:43.437521 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 16:54:43.437521 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 16:54:43.437521 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 16:54:43.437088 unknown[959]: wrote ssh authorized keys file for user: core Jan 29 16:54:43.443501 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 29 16:54:43.443501 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 29 16:54:43.535660 systemd-networkd[780]: eth1: Gained IPv6LL Jan 29 16:54:43.558337 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 16:54:44.068964 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 29 16:54:44.068964 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 16:54:44.073263 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 29 16:54:44.631755 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 16:54:44.820646 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 16:54:44.822352 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 16:54:44.822352 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 16:54:44.822352 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 16:54:44.822352 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 16:54:44.822352 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 16:54:44.822352 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 16:54:44.822352 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 16:54:44.822352 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Jan 29 16:54:44.839450 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:54:44.839450 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:54:44.839450 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 16:54:44.839450 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 16:54:44.839450 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 16:54:44.839450 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Jan 29 16:54:45.383646 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 16:54:45.972945 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 16:54:45.972945 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 29 16:54:45.978212 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 16:54:45.978212 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 16:54:45.978212 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 29 16:54:45.978212 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 29 16:54:45.978212 ignition[959]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 29 16:54:45.978212 ignition[959]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 29 16:54:45.978212 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 29 16:54:45.978212 ignition[959]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 29 16:54:45.978212 ignition[959]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 16:54:45.978212 ignition[959]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:54:45.978212 ignition[959]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:54:45.978212 ignition[959]: INFO : files: files passed Jan 29 16:54:45.978212 ignition[959]: INFO : Ignition finished successfully Jan 29 16:54:45.979033 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 16:54:45.987090 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Jan 29 16:54:45.994123 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 16:54:45.997308 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 16:54:45.997412 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 16:54:46.013670 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:54:46.014484 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:54:46.014484 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:54:46.017350 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:54:46.019529 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 16:54:46.027061 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 16:54:46.062887 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 16:54:46.063081 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 16:54:46.064744 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 16:54:46.065879 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 16:54:46.067259 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 16:54:46.075045 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 16:54:46.092716 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:54:46.098115 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 16:54:46.112409 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:54:46.113437 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:54:46.114843 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 16:54:46.116307 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 16:54:46.116543 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:54:46.118408 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 16:54:46.119416 systemd[1]: Stopped target basic.target - Basic System. Jan 29 16:54:46.120539 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 16:54:46.121784 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 16:54:46.123238 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 16:54:46.124539 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 16:54:46.125905 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 16:54:46.127269 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 16:54:46.128575 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 16:54:46.129891 systemd[1]: Stopped target swap.target - Swaps. Jan 29 16:54:46.130895 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 16:54:46.131172 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 16:54:46.132974 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 29 16:54:46.134020 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:54:46.135260 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 16:54:46.135439 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:54:46.136695 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 16:54:46.137077 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 16:54:46.139143 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 16:54:46.139402 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:54:46.140581 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 16:54:46.140750 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 16:54:46.141877 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 29 16:54:46.142109 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 16:54:46.152086 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 16:54:46.156118 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 16:54:46.157076 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 16:54:46.157205 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:54:46.158395 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 16:54:46.158499 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 16:54:46.172806 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 16:54:46.172957 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 16:54:46.180893 ignition[1012]: INFO : Ignition 2.20.0 Jan 29 16:54:46.180893 ignition[1012]: INFO : Stage: umount Jan 29 16:54:46.183652 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:54:46.183652 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 16:54:46.183652 ignition[1012]: INFO : umount: umount passed Jan 29 16:54:46.183652 ignition[1012]: INFO : Ignition finished successfully Jan 29 16:54:46.184132 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 16:54:46.184272 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 16:54:46.186450 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 16:54:46.186503 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 16:54:46.188226 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 16:54:46.188278 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 16:54:46.188709 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 16:54:46.188755 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 16:54:46.189239 systemd[1]: Stopped target network.target - Network. Jan 29 16:54:46.189886 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 16:54:46.191952 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 16:54:46.193426 systemd[1]: Stopped target paths.target - Path Units. Jan 29 16:54:46.193830 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 29 16:54:46.193877 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:54:46.195023 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 16:54:46.195467 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 16:54:46.195893 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 16:54:46.195953 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 16:54:46.196385 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 16:54:46.196425 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 16:54:46.196850 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 16:54:46.197481 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 16:54:46.201078 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 16:54:46.201130 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 16:54:46.202173 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 16:54:46.204186 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 16:54:46.209002 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 16:54:46.209555 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 16:54:46.209675 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 16:54:46.217793 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 29 16:54:46.218102 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 16:54:46.218227 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 16:54:46.220483 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 29 16:54:46.223376 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 16:54:46.223421 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:54:46.234317 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 16:54:46.234799 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 16:54:46.234874 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 16:54:46.235818 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:54:46.235870 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:54:46.237399 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 16:54:46.237450 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 16:54:46.238257 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 16:54:46.238310 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:54:46.242368 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:54:46.254157 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 29 16:54:46.254256 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:54:46.261130 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 16:54:46.261307 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:54:46.265861 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jan 29 16:54:46.266009 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 16:54:46.268627 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 16:54:46.268687 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 16:54:46.270845 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 16:54:46.270887 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:54:46.271874 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 16:54:46.271958 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 16:54:46.278150 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 16:54:46.278221 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 16:54:46.279163 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 16:54:46.279397 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:54:46.288122 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 16:54:46.288730 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 16:54:46.288799 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:54:46.290652 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 16:54:46.290706 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 16:54:46.291878 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 16:54:46.291947 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:54:46.292429 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:54:46.292476 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:54:46.294737 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 29 16:54:46.294798 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:54:46.295215 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 16:54:46.295325 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 16:54:46.296041 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 16:54:46.296141 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 16:54:46.298661 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 16:54:46.299352 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 16:54:46.299412 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 16:54:46.306105 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 16:54:46.315655 systemd[1]: Switching root. Jan 29 16:54:46.351615 systemd-journald[188]: Journal stopped Jan 29 16:54:47.770581 systemd-journald[188]: Received SIGTERM from PID 1 (systemd). 
Jan 29 16:54:47.770640 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 16:54:47.770654 kernel: SELinux: policy capability open_perms=1 Jan 29 16:54:47.770666 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 16:54:47.770685 kernel: SELinux: policy capability always_check_network=0 Jan 29 16:54:47.770696 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 16:54:47.770707 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 16:54:47.770719 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 16:54:47.770730 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 16:54:47.770743 kernel: audit: type=1403 audit(1738169686.553:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 16:54:47.770759 systemd[1]: Successfully loaded SELinux policy in 78.078ms. Jan 29 16:54:47.770781 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 25.720ms. Jan 29 16:54:47.770794 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 16:54:47.770811 systemd[1]: Detected virtualization kvm. Jan 29 16:54:47.770824 systemd[1]: Detected architecture x86-64. Jan 29 16:54:47.770837 systemd[1]: Detected first boot. Jan 29 16:54:47.770849 systemd[1]: Hostname set to <ci-4230-0-0-c-e7d65f4211>. Jan 29 16:54:47.770861 systemd[1]: Initializing machine ID from VM UUID. Jan 29 16:54:47.770874 zram_generator::config[1056]: No configuration found. Jan 29 16:54:47.770887 kernel: Guest personality initialized and is inactive Jan 29 16:54:47.770898 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jan 29 16:54:47.770913 kernel: Initialized host personality Jan 29 16:54:47.770942 kernel: NET: Registered PF_VSOCK protocol family Jan 29 16:54:47.770954 systemd[1]: Populated /etc with preset unit settings. Jan 29 16:54:47.770967 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 29 16:54:47.770979 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 16:54:47.770992 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 16:54:47.771004 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 16:54:47.771020 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 16:54:47.771033 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 16:54:47.771047 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 16:54:47.771060 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 16:54:47.771072 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 16:54:47.771084 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 16:54:47.771096 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 16:54:47.771108 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 16:54:47.771120 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:54:47.771132 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 29 16:54:47.771151 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 16:54:47.771163 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 16:54:47.771176 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 16:54:47.771188 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 16:54:47.771200 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 16:54:47.771212 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:54:47.771226 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 16:54:47.771239 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 16:54:47.771256 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 16:54:47.771270 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 16:54:47.771282 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:54:47.771294 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 16:54:47.771309 systemd[1]: Reached target slices.target - Slice Units. Jan 29 16:54:47.771321 systemd[1]: Reached target swap.target - Swaps. Jan 29 16:54:47.771333 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 16:54:47.771347 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 16:54:47.771359 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 29 16:54:47.771371 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:54:47.771384 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 16:54:47.771396 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:54:47.771408 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 16:54:47.771422 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 16:54:47.771434 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 16:54:47.771446 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 16:54:47.771460 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:54:47.771472 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 16:54:47.771484 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 16:54:47.771496 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 16:54:47.771510 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 16:54:47.771523 systemd[1]: Reached target machines.target - Containers. Jan 29 16:54:47.771537 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 16:54:47.771550 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:54:47.771562 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Jan 29 16:54:47.771574 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 16:54:47.771586 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:54:47.771599 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:54:47.771611 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:54:47.771624 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 16:54:47.771638 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:54:47.771651 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 16:54:47.771663 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 16:54:47.771675 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 16:54:47.771687 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 16:54:47.771700 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 16:54:47.771712 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:54:47.771724 kernel: fuse: init (API version 7.39) Jan 29 16:54:47.771736 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 16:54:47.771750 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 16:54:47.771763 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 16:54:47.771775 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 16:54:47.771787 kernel: ACPI: bus type drm_connector registered Jan 29 16:54:47.771803 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 29 16:54:47.771815 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 16:54:47.771830 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 16:54:47.771842 systemd[1]: Stopped verity-setup.service. Jan 29 16:54:47.771854 kernel: loop: module loaded Jan 29 16:54:47.771868 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:54:47.771885 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 16:54:47.771897 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 16:54:47.771910 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 16:54:47.771922 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 16:54:47.771950 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 16:54:47.771962 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 16:54:47.771975 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 16:54:47.771987 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:54:47.772016 systemd-journald[1141]: Collecting audit messages is disabled. Jan 29 16:54:47.772043 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jan 29 16:54:47.772056 systemd-journald[1141]: Journal started Jan 29 16:54:47.772078 systemd-journald[1141]: Runtime Journal (/run/log/journal/4ae408f726a641859cf3e09c7eabacbe) is 4.8M, max 38.3M, 33.5M free. Jan 29 16:54:47.382456 systemd[1]: Queued start job for default target multi-user.target. Jan 29 16:54:47.398643 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 29 16:54:47.399762 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 16:54:47.777380 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 16:54:47.777460 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 16:54:47.779473 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:54:47.779709 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:54:47.780513 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:54:47.780723 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:54:47.781680 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:54:47.781890 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:54:47.782755 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 16:54:47.783366 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 16:54:47.784160 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:54:47.784433 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:54:47.785307 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 16:54:47.786162 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 16:54:47.787051 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 16:54:47.787908 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 29 16:54:47.804809 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 16:54:47.816023 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 16:54:47.821842 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 16:54:47.822369 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 16:54:47.822398 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 16:54:47.824812 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 29 16:54:47.834391 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 16:54:47.842184 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 16:54:47.843978 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:54:47.847191 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 16:54:47.851240 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 16:54:47.853121 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 29 16:54:47.856154 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 16:54:47.857910 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:54:47.861069 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:54:47.872760 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 16:54:47.879245 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 16:54:47.891144 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 16:54:47.894137 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 16:54:47.895738 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 16:54:47.921509 systemd-journald[1141]: Time spent on flushing to /var/log/journal/4ae408f726a641859cf3e09c7eabacbe is 57.483ms for 1154 entries. Jan 29 16:54:47.921509 systemd-journald[1141]: System Journal (/var/log/journal/4ae408f726a641859cf3e09c7eabacbe) is 8M, max 584.8M, 576.8M free. Jan 29 16:54:48.015170 systemd-journald[1141]: Received client request to flush runtime journal. Jan 29 16:54:48.015220 kernel: loop0: detected capacity change from 0 to 138176 Jan 29 16:54:48.015241 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 16:54:48.015259 kernel: loop1: detected capacity change from 0 to 147912 Jan 29 16:54:47.932059 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:54:47.943228 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 16:54:47.946544 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 16:54:47.948112 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 16:54:47.958575 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 29 16:54:47.969718 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:54:47.987085 udevadm[1191]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 16:54:47.989334 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Jan 29 16:54:47.989347 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Jan 29 16:54:48.005336 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 16:54:48.017179 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 16:54:48.020426 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 16:54:48.039635 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 29 16:54:48.068474 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 16:54:48.075298 kernel: loop2: detected capacity change from 0 to 218376 Jan 29 16:54:48.076841 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 16:54:48.092640 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. Jan 29 16:54:48.092664 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. Jan 29 16:54:48.100421 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 29 16:54:48.128235 kernel: loop3: detected capacity change from 0 to 8 Jan 29 16:54:48.156136 kernel: loop4: detected capacity change from 0 to 138176 Jan 29 16:54:48.187971 kernel: loop5: detected capacity change from 0 to 147912 Jan 29 16:54:48.222072 kernel: loop6: detected capacity change from 0 to 218376 Jan 29 16:54:48.248338 kernel: loop7: detected capacity change from 0 to 8 Jan 29 16:54:48.250478 (sd-merge)[1212]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 29 16:54:48.252077 (sd-merge)[1212]: Merged extensions into '/usr'. Jan 29 16:54:48.258041 systemd[1]: Reload requested from client PID 1183 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 16:54:48.258171 systemd[1]: Reloading... Jan 29 16:54:48.377663 zram_generator::config[1243]: No configuration found. Jan 29 16:54:48.543318 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:54:48.576886 ldconfig[1178]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 16:54:48.614747 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 16:54:48.614989 systemd[1]: Reloading finished in 356 ms. Jan 29 16:54:48.630421 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 16:54:48.631349 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 16:54:48.644098 systemd[1]: Starting ensure-sysext.service... Jan 29 16:54:48.650468 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 16:54:48.671961 systemd[1]: Reload requested from client PID 1283 ('systemctl') (unit ensure-sysext.service)... Jan 29 16:54:48.671983 systemd[1]: Reloading... Jan 29 16:54:48.713030 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 16:54:48.713376 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 16:54:48.717150 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 16:54:48.717435 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. Jan 29 16:54:48.717510 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. Jan 29 16:54:48.723656 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:54:48.723664 systemd-tmpfiles[1284]: Skipping /boot Jan 29 16:54:48.756233 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:54:48.756256 systemd-tmpfiles[1284]: Skipping /boot Jan 29 16:54:48.800955 zram_generator::config[1310]: No configuration found. Jan 29 16:54:48.938512 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:54:49.021043 systemd[1]: Reloading finished in 348 ms. Jan 29 16:54:49.037834 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 16:54:49.038761 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:54:49.069234 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jan 29 16:54:49.074162 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 16:54:49.078058 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 16:54:49.087097 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 16:54:49.092048 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:54:49.099107 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 16:54:49.107116 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:54:49.107413 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:54:49.115030 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:54:49.118805 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:54:49.123349 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:54:49.124427 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:54:49.124740 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:54:49.124887 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:54:49.148250 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 16:54:49.150966 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:54:49.153351 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:54:49.162632 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 16:54:49.170842 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:54:49.171113 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:54:49.181646 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:54:49.182654 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:54:49.183111 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:54:49.183217 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:54:49.185626 systemd-udevd[1367]: Using default interface naming scheme 'v255'. Jan 29 16:54:49.187389 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:54:49.188053 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:54:49.189853 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 29 16:54:49.190995 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:54:49.197640 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:54:49.197870 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:54:49.204895 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:54:49.206357 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:54:49.217039 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:54:49.229121 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:54:49.231250 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:54:49.241179 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:54:49.243049 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:54:49.243161 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:54:49.243301 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:54:49.245792 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 16:54:49.249366 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:54:49.249601 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:54:49.250698 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:54:49.251320 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:54:49.252745 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 16:54:49.256449 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:54:49.258050 augenrules[1396]: No rules Jan 29 16:54:49.257378 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:54:49.259267 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:54:49.260103 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:54:49.266780 systemd[1]: Finished ensure-sysext.service. Jan 29 16:54:49.268544 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:54:49.268763 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:54:49.280752 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:54:49.280843 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:54:49.291104 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 16:54:49.294073 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 16:54:49.296517 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:54:49.297422 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jan 29 16:54:49.307077 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 16:54:49.307573 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 16:54:49.336278 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 16:54:49.420024 systemd-resolved[1361]: Positive Trust Anchors: Jan 29 16:54:49.420042 systemd-resolved[1361]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 16:54:49.420074 systemd-resolved[1361]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 16:54:49.429491 systemd-resolved[1361]: Using system hostname 'ci-4230-0-0-c-e7d65f4211'. Jan 29 16:54:49.434150 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 16:54:49.434824 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:54:49.437563 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 16:54:49.438172 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 16:54:49.448045 systemd-networkd[1425]: lo: Link UP Jan 29 16:54:49.448338 systemd-networkd[1425]: lo: Gained carrier Jan 29 16:54:49.454389 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 16:54:49.479830 systemd-networkd[1425]: Enumeration completed Jan 29 16:54:49.479999 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 16:54:49.481202 systemd[1]: Reached target network.target - Network. Jan 29 16:54:49.487093 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 29 16:54:49.497057 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 16:54:49.501123 systemd-networkd[1425]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:54:49.501200 systemd-networkd[1425]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:54:49.501831 systemd-networkd[1425]: eth1: Link UP Jan 29 16:54:49.501891 systemd-networkd[1425]: eth1: Gained carrier Jan 29 16:54:49.501975 systemd-networkd[1425]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:54:49.510056 systemd-networkd[1425]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:54:49.510125 systemd-networkd[1425]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
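The positive trust anchor that systemd-resolved logs above is the root zone's DS record for KSK-2017, the starting point for DNSSEC validation; the long list of negative anchors that follows covers private and special-use domains (RFC 1918 reverse zones, .local, home.arpa, and so on) that are exempted from validation. Pulling the DS fields apart makes the record readable; the algorithm and digest-type names follow the IANA DNSSEC registries:

```python
# The root trust anchor from the log, parsed into its named fields.
ANCHOR = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"

def parse_ds(record: str) -> dict:
    owner, _cls, _typ, key_tag, alg, digest_type, digest = record.split()
    return {
        "owner": owner,                       # "." = the DNS root zone
        "key_tag": int(key_tag),              # 20326 identifies KSK-2017
        "algorithm": {8: "RSASHA256"}[int(alg)],
        "digest_type": {2: "SHA-256"}[int(digest_type)],
        "digest": digest,
    }

print(parse_ds(ANCHOR))
```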
Jan 29 16:54:49.510653 systemd-networkd[1425]: eth0: Link UP Jan 29 16:54:49.510706 systemd-networkd[1425]: eth0: Gained carrier Jan 29 16:54:49.510765 systemd-networkd[1425]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:54:49.527386 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 29 16:54:49.539012 systemd-networkd[1425]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 16:54:49.540266 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Jan 29 16:54:49.550964 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1433) Jan 29 16:54:49.579003 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 16:54:49.590018 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 29 16:54:49.603967 kernel: ACPI: button: Power Button [PWRF] Jan 29 16:54:49.623082 systemd-networkd[1425]: eth0: DHCPv4 address 168.119.110.78/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 29 16:54:49.624037 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Jan 29 16:54:49.624812 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Jan 29 16:54:49.637852 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 29 16:54:49.646069 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 16:54:49.648700 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 29 16:54:49.648792 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:54:49.648915 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:54:49.653057 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:54:49.655759 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:54:49.660501 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:54:49.661236 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:54:49.661265 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:54:49.661291 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 16:54:49.661305 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:54:49.665342 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:54:49.665562 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
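Both NICs above receive a /32 host address whose gateway lies outside the prefix (10.0.0.1 for eth1, 172.31.1.1 for eth0), a layout common with hosting providers. Traffic only flows because the DHCP client installs an explicit on-link host route to the gateway; the check below just demonstrates why that extra route is needed:

```python
# A /32 contains only the host itself, so the gateway can never be
# on-link by prefix alone; networkd must add an explicit host route.
import ipaddress

addr = ipaddress.ip_interface("168.119.110.78/32")
gateway = ipaddress.ip_address("172.31.1.1")

if gateway not in addr.network:
    print(f"{gateway} is off-link for {addr}; an on-link host route is required")
```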
Jan 29 16:54:49.677033 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jan 29 16:54:49.677078 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jan 29 16:54:49.683279 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 16:54:49.688320 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:54:49.689006 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:54:49.690682 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:54:49.691177 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:54:49.694407 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:54:49.694465 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:54:49.707956 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 29 16:54:49.708236 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 29 16:54:49.708423 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 29 16:54:49.716362 kernel: Console: switching to colour dummy device 80x25 Jan 29 16:54:49.722254 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 29 16:54:49.722291 kernel: [drm] features: -context_init Jan 29 16:54:49.724966 kernel: EDAC MC: Ver: 3.0.0 Jan 29 16:54:49.733959 kernel: [drm] number of scanouts: 1 Jan 29 16:54:49.737039 kernel: [drm] number of cap sets: 0 Jan 29 16:54:49.738736 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:54:49.741996 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 29 16:54:49.746944 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 29 16:54:49.753828 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 29 16:54:49.760370 kernel: Console: switching to colour frame buffer device 160x50 Jan 29 16:54:49.757627 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:54:49.757884 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:54:49.766754 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 29 16:54:49.774200 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:54:49.779167 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:54:49.779419 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:54:49.790092 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:54:49.874516 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:54:49.890188 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 16:54:49.897196 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 16:54:49.934371 lvm[1482]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:54:49.986639 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 16:54:49.989161 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
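Unit names like systemd-fsck@dev-disk-by\x2dlabel-OEM.service above embed a device path in systemd's escaping scheme: '/' becomes '-', and a literal '-' (among other special bytes) becomes \x2d. A minimal decoder mirroring what `systemd-escape --unescape --path` does:

```python
# Decode a systemd path-escaped name back into a filesystem path.
def unescape_path(escaped: str) -> str:
    out, i = [], 0
    while i < len(escaped):
        if escaped.startswith("\\x", i):       # \x2d -> "-"
            out.append(chr(int(escaped[i + 2:i + 4], 16)))
            i += 4
        elif escaped[i] == "-":                # "-" -> "/"
            out.append("/")
            i += 1
        else:
            out.append(escaped[i])
            i += 1
    return "/" + "".join(out)                  # path escaping drops the leading "/"

print(unescape_path(r"dev-disk-by\x2dlabel-OEM"))  # /dev/disk/by-label/OEM
```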
Jan 29 16:54:49.989671 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 16:54:49.990102 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 16:54:49.990335 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 16:54:49.990972 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 16:54:49.993072 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 16:54:49.993249 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 16:54:49.993392 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 16:54:49.993451 systemd[1]: Reached target paths.target - Path Units. Jan 29 16:54:49.993577 systemd[1]: Reached target timers.target - Timer Units. Jan 29 16:54:49.996520 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 16:54:50.000831 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 16:54:50.006762 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 29 16:54:50.008074 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 29 16:54:50.008265 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 29 16:54:50.026204 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 16:54:50.031087 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 29 16:54:50.040182 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 16:54:50.043473 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 16:54:50.045769 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 16:54:50.046968 systemd[1]: Reached target basic.target - Basic System. Jan 29 16:54:50.047900 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:54:50.048054 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:54:50.053748 lvm[1486]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:54:50.054181 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 16:54:50.069867 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 16:54:50.083132 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 16:54:50.093140 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 16:54:50.103232 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 16:54:50.106298 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 16:54:50.115131 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 16:54:50.123002 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 16:54:50.131328 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. 
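The "Listening on ..." lines above are socket units: systemd binds each socket itself and starts the owning service only on demand, handing over the already-bound file descriptor. Per sd_listen_fds(3), passed descriptors start at fd 3 and are announced via the LISTEN_FDS/LISTEN_PID environment variables; a service can adopt them like this:

```python
# Minimal sketch of the receiving side of systemd socket activation.
import os, socket

SD_LISTEN_FDS_START = 3  # first fd systemd passes, per sd_listen_fds(3)

def activated_sockets():
    if os.environ.get("LISTEN_PID") != str(os.getpid()):
        return []                                   # not socket-activated
    nfds = int(os.environ.get("LISTEN_FDS", "0"))
    return [socket.socket(fileno=SD_LISTEN_FDS_START + i) for i in range(nfds)]

for sock in activated_sockets():
    conn, peer = sock.accept()                      # serve the first client
    conn.close()
```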
Jan 29 16:54:50.140501 jq[1492]: false Jan 29 16:54:50.140749 coreos-metadata[1488]: Jan 29 16:54:50.140 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 29 16:54:50.146118 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 16:54:50.146232 coreos-metadata[1488]: Jan 29 16:54:50.145 INFO Fetch successful Jan 29 16:54:50.146331 coreos-metadata[1488]: Jan 29 16:54:50.146 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 29 16:54:50.159343 dbus-daemon[1489]: [system] SELinux support is enabled Jan 29 16:54:50.160613 coreos-metadata[1488]: Jan 29 16:54:50.160 INFO Fetch successful Jan 29 16:54:50.164947 extend-filesystems[1493]: Found loop4 Jan 29 16:54:50.164947 extend-filesystems[1493]: Found loop5 Jan 29 16:54:50.164947 extend-filesystems[1493]: Found loop6 Jan 29 16:54:50.164947 extend-filesystems[1493]: Found loop7 Jan 29 16:54:50.164947 extend-filesystems[1493]: Found sda Jan 29 16:54:50.164947 extend-filesystems[1493]: Found sda1 Jan 29 16:54:50.164947 extend-filesystems[1493]: Found sda2 Jan 29 16:54:50.164947 extend-filesystems[1493]: Found sda3 Jan 29 16:54:50.164947 extend-filesystems[1493]: Found usr Jan 29 16:54:50.164947 extend-filesystems[1493]: Found sda4 Jan 29 16:54:50.164947 extend-filesystems[1493]: Found sda6 Jan 29 16:54:50.164947 extend-filesystems[1493]: Found sda7 Jan 29 16:54:50.164947 extend-filesystems[1493]: Found sda9 Jan 29 16:54:50.164947 extend-filesystems[1493]: Checking size of /dev/sda9 Jan 29 16:54:50.273864 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 29 16:54:50.164373 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 16:54:50.280250 extend-filesystems[1493]: Resized partition /dev/sda9 Jan 29 16:54:50.186058 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 16:54:50.301737 extend-filesystems[1517]: resize2fs 1.47.1 (20-May-2024) Jan 29 16:54:50.322728 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1430) Jan 29 16:54:50.198437 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 16:54:50.323572 update_engine[1513]: I20250129 16:54:50.250397 1513 main.cc:92] Flatcar Update Engine starting Jan 29 16:54:50.323572 update_engine[1513]: I20250129 16:54:50.263235 1513 update_check_scheduler.cc:74] Next update check in 5m15s Jan 29 16:54:50.199059 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 16:54:50.204511 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 16:54:50.331389 jq[1516]: true Jan 29 16:54:50.224775 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 16:54:50.232303 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 16:54:50.245120 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 16:54:50.267353 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 16:54:50.267608 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 16:54:50.267944 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 16:54:50.268163 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
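The resize pass above grows the root filesystem online from 1617920 to 9393147 blocks; with the 4 KiB block size resize2fs reports further down, that is roughly a 6 GiB image expanded to fill a ~36 GiB disk:

```python
# Convert the ext4 block counts from the log into sizes (4 KiB blocks,
# as reported by resize2fs later in the log).
BLOCK = 4096
before, after = 1_617_920, 9_393_147

gib = lambda blocks: blocks * BLOCK / 2**30
print(f"{gib(before):.2f} GiB -> {gib(after):.2f} GiB")  # 6.17 GiB -> 35.83 GiB
```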
Jan 29 16:54:50.292234 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 16:54:50.294005 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 16:54:50.343664 (ntainerd)[1524]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 16:54:50.361969 jq[1523]: true Jan 29 16:54:50.385987 tar[1522]: linux-amd64/LICENSE Jan 29 16:54:50.390882 systemd[1]: Started update-engine.service - Update Engine. Jan 29 16:54:50.398791 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 16:54:50.398820 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 16:54:50.399360 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 16:54:50.399375 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 16:54:50.412775 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 16:54:50.425263 tar[1522]: linux-amd64/helm Jan 29 16:54:50.463959 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 16:54:50.476608 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 16:54:50.494071 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 16:54:50.536961 systemd-logind[1509]: New seat seat0. Jan 29 16:54:50.558133 systemd-logind[1509]: Watching system buttons on /dev/input/event2 (Power Button) Jan 29 16:54:50.558156 systemd-logind[1509]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 16:54:50.558464 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 16:54:50.606766 locksmithd[1541]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 16:54:50.619793 sshd_keygen[1511]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 16:54:50.639089 systemd-networkd[1425]: eth1: Gained IPv6LL Jan 29 16:54:50.642035 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Jan 29 16:54:50.646319 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 16:54:50.650070 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 16:54:50.654875 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 16:54:50.668762 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 16:54:50.679094 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:54:50.682998 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 16:54:50.692076 bash[1560]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:54:50.695284 systemd[1]: Started sshd@0-168.119.110.78:22-194.0.234.37:45354.service - OpenSSH per-connection server daemon (194.0.234.37:45354). Jan 29 16:54:50.700971 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 16:54:50.704790 systemd[1]: issuegen.service: Deactivated successfully. 
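update_engine, logged just above, schedules its next update check "in 5m15s", and locksmithd (strategy "reboot") will coordinate the reboot once an update lands. The interval uses Go's duration notation; a small parser for it:

```python
# Parse Go-style durations like "5m15s" or "4h0m0s" into seconds.
import re

def parse_duration(s: str) -> float:
    units = {"h": 3600, "m": 60, "s": 1}
    return sum(float(n) * units[u] for n, u in re.findall(r"([\d.]+)([hms])", s))

print(parse_duration("5m15s"))  # 315.0
```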
Jan 29 16:54:50.710949 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 16:54:50.733208 systemd[1]: Starting sshkeys.service... Jan 29 16:54:50.746230 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 16:54:50.771465 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 29 16:54:50.773784 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 16:54:50.785440 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 16:54:50.793230 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 16:54:50.804671 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 16:54:50.818770 containerd[1524]: time="2025-01-29T16:54:50.815732206Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 16:54:50.833380 coreos-metadata[1595]: Jan 29 16:54:50.821 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 29 16:54:50.833380 coreos-metadata[1595]: Jan 29 16:54:50.833 INFO Fetch successful Jan 29 16:54:50.833613 extend-filesystems[1517]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 29 16:54:50.833613 extend-filesystems[1517]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 29 16:54:50.833613 extend-filesystems[1517]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 29 16:54:50.822389 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 16:54:50.851895 extend-filesystems[1493]: Resized filesystem in /dev/sda9 Jan 29 16:54:50.851895 extend-filesystems[1493]: Found sr0 Jan 29 16:54:50.834771 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 16:54:50.840521 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 16:54:50.841750 unknown[1595]: wrote ssh authorized keys file for user: core Jan 29 16:54:50.842990 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 16:54:50.843614 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 16:54:50.883020 containerd[1524]: time="2025-01-29T16:54:50.882982712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:54:50.886535 containerd[1524]: time="2025-01-29T16:54:50.885539056Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:54:50.886535 containerd[1524]: time="2025-01-29T16:54:50.885563041Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 16:54:50.886535 containerd[1524]: time="2025-01-29T16:54:50.885577779Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 16:54:50.886535 containerd[1524]: time="2025-01-29T16:54:50.885721529Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 16:54:50.886535 containerd[1524]: time="2025-01-29T16:54:50.885735324Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 29 16:54:50.886535 containerd[1524]: time="2025-01-29T16:54:50.885803062Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:54:50.886535 containerd[1524]: time="2025-01-29T16:54:50.885814283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:54:50.886681 containerd[1524]: time="2025-01-29T16:54:50.886610717Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:54:50.886681 containerd[1524]: time="2025-01-29T16:54:50.886627437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 16:54:50.886681 containerd[1524]: time="2025-01-29T16:54:50.886642757Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:54:50.886681 containerd[1524]: time="2025-01-29T16:54:50.886651513Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 16:54:50.886754 containerd[1524]: time="2025-01-29T16:54:50.886744908Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:54:50.887035 containerd[1524]: time="2025-01-29T16:54:50.887015195Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:54:50.887204 containerd[1524]: time="2025-01-29T16:54:50.887182258Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:54:50.887231 containerd[1524]: time="2025-01-29T16:54:50.887203698Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 16:54:50.887329 containerd[1524]: time="2025-01-29T16:54:50.887310168Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 16:54:50.887398 containerd[1524]: time="2025-01-29T16:54:50.887384598Z" level=info msg="metadata content store policy set" policy=shared Jan 29 16:54:50.898273 containerd[1524]: time="2025-01-29T16:54:50.898250869Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 16:54:50.899936 containerd[1524]: time="2025-01-29T16:54:50.899834339Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 16:54:50.899936 containerd[1524]: time="2025-01-29T16:54:50.899857242Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 16:54:50.899936 containerd[1524]: time="2025-01-29T16:54:50.899872680Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 16:54:50.899936 containerd[1524]: time="2025-01-29T16:54:50.899893249Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jan 29 16:54:50.900125 containerd[1524]: time="2025-01-29T16:54:50.900029935Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 16:54:50.900266 containerd[1524]: time="2025-01-29T16:54:50.900246521Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 16:54:50.900406 containerd[1524]: time="2025-01-29T16:54:50.900353202Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 16:54:50.900406 containerd[1524]: time="2025-01-29T16:54:50.900369422Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 16:54:50.900406 containerd[1524]: time="2025-01-29T16:54:50.900381795Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 16:54:50.900406 containerd[1524]: time="2025-01-29T16:54:50.900394118Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 16:54:50.900406 containerd[1524]: time="2025-01-29T16:54:50.900405119Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 16:54:50.900499 containerd[1524]: time="2025-01-29T16:54:50.900416330Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 16:54:50.900499 containerd[1524]: time="2025-01-29T16:54:50.900427821Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 16:54:50.900499 containerd[1524]: time="2025-01-29T16:54:50.900440736Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 16:54:50.900499 containerd[1524]: time="2025-01-29T16:54:50.900452347Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 16:54:50.900499 containerd[1524]: time="2025-01-29T16:54:50.900462947Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 16:54:50.900499 containerd[1524]: time="2025-01-29T16:54:50.900472225Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 16:54:50.900499 containerd[1524]: time="2025-01-29T16:54:50.900490940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 16:54:50.900618 containerd[1524]: time="2025-01-29T16:54:50.900502842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 16:54:50.900618 containerd[1524]: time="2025-01-29T16:54:50.900514123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 16:54:50.900618 containerd[1524]: time="2025-01-29T16:54:50.900525315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 16:54:50.900618 containerd[1524]: time="2025-01-29T16:54:50.900536175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 16:54:50.900618 containerd[1524]: time="2025-01-29T16:54:50.900551854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jan 29 16:54:50.900618 containerd[1524]: time="2025-01-29T16:54:50.900562785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 16:54:50.900618 containerd[1524]: time="2025-01-29T16:54:50.900573926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 16:54:50.900618 containerd[1524]: time="2025-01-29T16:54:50.900585969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 16:54:50.900618 containerd[1524]: time="2025-01-29T16:54:50.900599784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 16:54:50.900618 containerd[1524]: time="2025-01-29T16:54:50.900610224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 16:54:50.900618 containerd[1524]: time="2025-01-29T16:54:50.900621545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 16:54:50.900811 containerd[1524]: time="2025-01-29T16:54:50.900632445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 16:54:50.900811 containerd[1524]: time="2025-01-29T16:54:50.900645670Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 16:54:50.900811 containerd[1524]: time="2025-01-29T16:54:50.900662181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 16:54:50.900811 containerd[1524]: time="2025-01-29T16:54:50.900673713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 16:54:50.900811 containerd[1524]: time="2025-01-29T16:54:50.900683772Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 16:54:50.900811 containerd[1524]: time="2025-01-29T16:54:50.900715411Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 16:54:50.900811 containerd[1524]: time="2025-01-29T16:54:50.900729337Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 16:54:50.900811 containerd[1524]: time="2025-01-29T16:54:50.900740118Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 16:54:50.900811 containerd[1524]: time="2025-01-29T16:54:50.900751428Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 16:54:50.900811 containerd[1524]: time="2025-01-29T16:54:50.900760496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 16:54:50.900811 containerd[1524]: time="2025-01-29T16:54:50.900771145Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 16:54:50.900811 containerd[1524]: time="2025-01-29T16:54:50.900780363Z" level=info msg="NRI interface is disabled by configuration." Jan 29 16:54:50.900811 containerd[1524]: time="2025-01-29T16:54:50.900789420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 16:54:50.901719 containerd[1524]: time="2025-01-29T16:54:50.901075937Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 16:54:50.901719 containerd[1524]: time="2025-01-29T16:54:50.901117355Z" level=info msg="Connect containerd service" Jan 29 16:54:50.901719 containerd[1524]: time="2025-01-29T16:54:50.901149054Z" level=info msg="using legacy CRI server" Jan 29 16:54:50.901719 containerd[1524]: time="2025-01-29T16:54:50.901155356Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 16:54:50.901719 containerd[1524]: time="2025-01-29T16:54:50.901255374Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 16:54:50.901918 update-ssh-keys[1609]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:54:50.903766 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 16:54:50.912325 systemd[1]: Finished sshkeys.service. 
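In the plugin walk above, containerd keeps only the snapshotters whose backing filesystem qualifies: btrfs and zfs are skipped because /var/lib/containerd sits on ext4, leaving overlayfs in place (the CRI config's Snapshotter:overlayfs confirms it). A simplified version of that kind of probe, matching the longest mount prefix in /proc/mounts; note the naive string-prefix match is an approximation, not containerd's actual check:

```python
# Find the filesystem type backing a path by scanning /proc/mounts for
# the longest matching mount point (simplified: prefix match only).
def fs_type(path: str) -> str:
    best, best_fs = "", "unknown"
    with open("/proc/mounts") as f:
        for line in f:
            _dev, mnt, fs, *_ = line.split()
            if path.startswith(mnt) and len(mnt) > len(best):
                best, best_fs = mnt, fs
    return best_fs

print(fs_type("/var/lib/containerd"))  # "ext4" on this host
```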
Jan 29 16:54:50.915324 containerd[1524]: time="2025-01-29T16:54:50.915288554Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:54:50.919661 containerd[1524]: time="2025-01-29T16:54:50.919469386Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 16:54:50.919661 containerd[1524]: time="2025-01-29T16:54:50.919528527Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 16:54:50.919661 containerd[1524]: time="2025-01-29T16:54:50.919572680Z" level=info msg="Start subscribing containerd event" Jan 29 16:54:50.919661 containerd[1524]: time="2025-01-29T16:54:50.919610631Z" level=info msg="Start recovering state" Jan 29 16:54:50.919766 containerd[1524]: time="2025-01-29T16:54:50.919675042Z" level=info msg="Start event monitor" Jan 29 16:54:50.919766 containerd[1524]: time="2025-01-29T16:54:50.919685552Z" level=info msg="Start snapshots syncer" Jan 29 16:54:50.919766 containerd[1524]: time="2025-01-29T16:54:50.919694508Z" level=info msg="Start cni network conf syncer for default" Jan 29 16:54:50.919766 containerd[1524]: time="2025-01-29T16:54:50.919701622Z" level=info msg="Start streaming server" Jan 29 16:54:50.919766 containerd[1524]: time="2025-01-29T16:54:50.919756465Z" level=info msg="containerd successfully booted in 0.152461s" Jan 29 16:54:50.921432 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 16:54:51.215390 systemd-networkd[1425]: eth0: Gained IPv6LL Jan 29 16:54:51.216978 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Jan 29 16:54:51.234121 tar[1522]: linux-amd64/README.md Jan 29 16:54:51.245632 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 16:54:51.998232 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:54:52.002235 (kubelet)[1624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:54:52.006092 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 16:54:52.008321 systemd[1]: Startup finished in 1.602s (kernel) + 8.696s (initrd) + 5.528s (userspace) = 15.827s. Jan 29 16:54:52.025869 sshd[1585]: Connection closed by authenticating user root 194.0.234.37 port 45354 [preauth] Jan 29 16:54:52.040057 systemd[1]: sshd@0-168.119.110.78:22-194.0.234.37:45354.service: Deactivated successfully. Jan 29 16:54:52.867622 kubelet[1624]: E0129 16:54:52.867484 1624 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:54:52.872193 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:54:52.872697 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:54:52.873695 systemd[1]: kubelet.service: Consumed 1.380s CPU time, 251M memory peak. Jan 29 16:55:03.123871 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 16:55:03.131287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
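The CRI plugin's only complaint above is the empty /etc/cni/net.d: pod networking stays unconfigured until a CNI config appears, which a cluster network addon normally installs after the node joins. Purely as an illustration (the network name and the 10.244.0.0/24 subnet are placeholders, not values from this host), a minimal bridge conflist could be generated like so:

```python
# Write a minimal CNI .conflist of the shape containerd looks for in
# /etc/cni/net.d. All names and the subnet below are placeholders.
import json

conflist = {
    "cniVersion": "0.4.0",
    "name": "example-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {"type": "host-local", "subnet": "10.244.0.0/24"},
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

with open("/etc/cni/net.d/10-example.conflist", "w") as f:
    json.dump(conflist, f, indent=2)
```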
Jan 29 16:55:03.328415 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:55:03.333178 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:55:03.384981 kubelet[1645]: E0129 16:55:03.384824 1645 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:55:03.391631 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:55:03.391866 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:55:03.392349 systemd[1]: kubelet.service: Consumed 233ms CPU time, 104.3M memory peak. Jan 29 16:55:13.464773 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 16:55:13.474206 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:55:13.635110 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:55:13.638809 (kubelet)[1661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:55:13.695611 kubelet[1661]: E0129 16:55:13.695524 1661 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:55:13.703032 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:55:13.703352 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:55:13.703897 systemd[1]: kubelet.service: Consumed 219ms CPU time, 105.8M memory peak. Jan 29 16:55:21.691154 systemd-timesyncd[1414]: Contacted time server 49.13.14.46:123 (2.flatcar.pool.ntp.org). Jan 29 16:55:21.691281 systemd-timesyncd[1414]: Initial clock synchronization to Wed 2025-01-29 16:55:22.072178 UTC. Jan 29 16:55:23.715726 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 16:55:23.724235 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:55:23.915530 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:55:23.920165 (kubelet)[1676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:55:23.966108 kubelet[1676]: E0129 16:55:23.965924 1676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:55:23.970887 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:55:23.971122 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:55:23.971516 systemd[1]: kubelet.service: Consumed 195ms CPU time, 106.2M memory peak. Jan 29 16:55:34.215694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
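The kubelet crash loop running through this stretch of the log is expected on a machine that has not yet joined a cluster: /var/lib/kubelet/config.yaml is written by kubeadm during init/join, and without it the kubelet exits immediately. For illustration only, a minimal KubeletConfiguration of the kind the error refers to; the cgroupDriver value matches the SystemdCgroup:true seen in containerd's runc options earlier, but the file as a whole is a placeholder, not what kubeadm would generate:

```python
# Placeholder kubelet config of the kind expected at the failing path.
# apiVersion/kind are the real KubeletConfiguration identifiers; the
# rest is a minimal illustrative stub.
minimal_config = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
"""

with open("/var/lib/kubelet/config.yaml", "w") as f:
    f.write(minimal_config)
```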
Jan 29 16:55:34.222335 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:55:34.472330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:55:34.474352 (kubelet)[1692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:55:34.526234 kubelet[1692]: E0129 16:55:34.526147 1692 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:55:34.531281 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:55:34.531682 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:55:34.532449 systemd[1]: kubelet.service: Consumed 262ms CPU time, 103.5M memory peak. Jan 29 16:55:35.200095 update_engine[1513]: I20250129 16:55:35.199967 1513 update_attempter.cc:509] Updating boot flags... Jan 29 16:55:35.301052 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1709) Jan 29 16:55:35.387364 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1708) Jan 29 16:55:35.449049 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1708) Jan 29 16:55:44.714738 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 29 16:55:44.723722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:55:44.938238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:55:44.949235 (kubelet)[1729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:55:44.997506 kubelet[1729]: E0129 16:55:44.997318 1729 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:55:45.003720 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:55:45.004197 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:55:45.004995 systemd[1]: kubelet.service: Consumed 235ms CPU time, 101.8M memory peak. Jan 29 16:55:55.214863 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 29 16:55:55.221303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:55:55.421434 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
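Each failed start is followed by "Scheduled restart job" roughly 10.3 s later, consistent with a Restart= policy and RestartSec around 10 s in the unit file (an assumption; the unit file itself is not shown in the log). Measuring the cadence directly from the logged timestamps:

```python
# Gaps between consecutive "Scheduled restart job" timestamps above.
from datetime import datetime

stamps = ["16:55:03.123871", "16:55:13.464773", "16:55:23.715726"]
times = [datetime.strptime(t, "%H:%M:%S.%f") for t in stamps]
gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
print(gaps)  # ~[10.34, 10.25]
```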
Jan 29 16:55:55.425055 (kubelet)[1743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:55:55.478072 kubelet[1743]: E0129 16:55:55.477894 1743 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:55:55.484528 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:55:55.484998 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:55:55.485888 systemd[1]: kubelet.service: Consumed 234ms CPU time, 105M memory peak. Jan 29 16:56:05.715586 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 29 16:56:05.729191 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:56:05.930241 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:56:05.942397 (kubelet)[1759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:56:05.979961 kubelet[1759]: E0129 16:56:05.979702 1759 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:56:05.982691 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:56:05.983148 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:56:05.983838 systemd[1]: kubelet.service: Consumed 230ms CPU time, 103.8M memory peak. Jan 29 16:56:16.214799 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 29 16:56:16.221245 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:56:16.432854 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:56:16.438167 (kubelet)[1775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:56:16.473648 kubelet[1775]: E0129 16:56:16.473517 1775 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:56:16.477077 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:56:16.477312 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:56:16.477712 systemd[1]: kubelet.service: Consumed 227ms CPU time, 101.8M memory peak. Jan 29 16:56:26.715429 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 29 16:56:26.724261 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:56:26.954222 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:56:26.955902 (kubelet)[1790]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:56:27.008313 kubelet[1790]: E0129 16:56:27.008141 1790 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:56:27.015392 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:56:27.015596 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:56:27.016066 systemd[1]: kubelet.service: Consumed 250ms CPU time, 101.2M memory peak. Jan 29 16:56:37.215527 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 29 16:56:37.223320 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:56:37.435026 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:56:37.439250 (kubelet)[1806]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:56:37.475628 kubelet[1806]: E0129 16:56:37.475489 1806 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:56:37.479524 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:56:37.479759 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:56:37.480292 systemd[1]: kubelet.service: Consumed 233ms CPU time, 103.3M memory peak. Jan 29 16:56:38.336333 systemd[1]: Started sshd@1-168.119.110.78:22-147.75.109.163:53996.service - OpenSSH per-connection server daemon (147.75.109.163:53996). Jan 29 16:56:39.355215 sshd[1814]: Accepted publickey for core from 147.75.109.163 port 53996 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 16:56:39.358094 sshd-session[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:56:39.371086 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 16:56:39.378324 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 16:56:39.395275 systemd-logind[1509]: New session 1 of user core. Jan 29 16:56:39.411887 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 16:56:39.424584 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 16:56:39.442152 (systemd)[1818]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 16:56:39.447119 systemd-logind[1509]: New session c1 of user core. Jan 29 16:56:39.658200 systemd[1818]: Queued start job for default target default.target. Jan 29 16:56:39.669387 systemd[1818]: Created slice app.slice - User Application Slice. Jan 29 16:56:39.669414 systemd[1818]: Reached target paths.target - Paths. Jan 29 16:56:39.669455 systemd[1818]: Reached target timers.target - Timers. Jan 29 16:56:39.671129 systemd[1818]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Jan 29 16:56:39.691725 systemd[1818]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 29 16:56:39.692269 systemd[1818]: Reached target sockets.target - Sockets.
Jan 29 16:56:39.692501 systemd[1818]: Reached target basic.target - Basic System.
Jan 29 16:56:39.692595 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 29 16:56:39.692599 systemd[1818]: Reached target default.target - Main User Target.
Jan 29 16:56:39.692663 systemd[1818]: Startup finished in 229ms.
Jan 29 16:56:39.703092 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 29 16:56:40.408557 systemd[1]: Started sshd@2-168.119.110.78:22-147.75.109.163:54000.service - OpenSSH per-connection server daemon (147.75.109.163:54000).
Jan 29 16:56:41.418176 sshd[1829]: Accepted publickey for core from 147.75.109.163 port 54000 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw
Jan 29 16:56:41.421909 sshd-session[1829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:56:41.431548 systemd-logind[1509]: New session 2 of user core.
Jan 29 16:56:41.439133 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 29 16:56:42.107308 sshd[1831]: Connection closed by 147.75.109.163 port 54000
Jan 29 16:56:42.108756 sshd-session[1829]: pam_unix(sshd:session): session closed for user core
Jan 29 16:56:42.115392 systemd[1]: sshd@2-168.119.110.78:22-147.75.109.163:54000.service: Deactivated successfully.
Jan 29 16:56:42.119463 systemd[1]: session-2.scope: Deactivated successfully.
Jan 29 16:56:42.123639 systemd-logind[1509]: Session 2 logged out. Waiting for processes to exit.
Jan 29 16:56:42.126370 systemd-logind[1509]: Removed session 2.
Jan 29 16:56:42.292476 systemd[1]: Started sshd@3-168.119.110.78:22-147.75.109.163:54006.service - OpenSSH per-connection server daemon (147.75.109.163:54006).
Jan 29 16:56:43.302913 sshd[1837]: Accepted publickey for core from 147.75.109.163 port 54006 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw
Jan 29 16:56:43.306911 sshd-session[1837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:56:43.318204 systemd-logind[1509]: New session 3 of user core.
Jan 29 16:56:43.331190 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 29 16:56:43.988228 sshd[1839]: Connection closed by 147.75.109.163 port 54006
Jan 29 16:56:43.989666 sshd-session[1837]: pam_unix(sshd:session): session closed for user core
Jan 29 16:56:43.998344 systemd[1]: sshd@3-168.119.110.78:22-147.75.109.163:54006.service: Deactivated successfully.
Jan 29 16:56:44.003649 systemd[1]: session-3.scope: Deactivated successfully.
Jan 29 16:56:44.005323 systemd-logind[1509]: Session 3 logged out. Waiting for processes to exit.
Jan 29 16:56:44.007643 systemd-logind[1509]: Removed session 3.
Jan 29 16:56:44.174418 systemd[1]: Started sshd@4-168.119.110.78:22-147.75.109.163:54018.service - OpenSSH per-connection server daemon (147.75.109.163:54018).
Jan 29 16:56:45.191967 sshd[1845]: Accepted publickey for core from 147.75.109.163 port 54018 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw
Jan 29 16:56:45.195484 sshd-session[1845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:56:45.206704 systemd-logind[1509]: New session 4 of user core.
Jan 29 16:56:45.214340 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 29 16:56:45.881765 sshd[1847]: Connection closed by 147.75.109.163 port 54018
Jan 29 16:56:45.883350 sshd-session[1845]: pam_unix(sshd:session): session closed for user core
Jan 29 16:56:45.889817 systemd[1]: sshd@4-168.119.110.78:22-147.75.109.163:54018.service: Deactivated successfully.
Jan 29 16:56:45.894276 systemd[1]: session-4.scope: Deactivated successfully.
Jan 29 16:56:45.898557 systemd-logind[1509]: Session 4 logged out. Waiting for processes to exit.
Jan 29 16:56:45.900760 systemd-logind[1509]: Removed session 4.
Jan 29 16:56:46.068270 systemd[1]: Started sshd@5-168.119.110.78:22-147.75.109.163:54026.service - OpenSSH per-connection server daemon (147.75.109.163:54026).
Jan 29 16:56:47.099801 sshd[1853]: Accepted publickey for core from 147.75.109.163 port 54026 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw
Jan 29 16:56:47.102901 sshd-session[1853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:56:47.113729 systemd-logind[1509]: New session 5 of user core.
Jan 29 16:56:47.128172 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 29 16:56:47.644862 sudo[1856]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 29 16:56:47.645645 sudo[1856]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:56:47.647897 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Jan 29 16:56:47.664212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:56:47.666277 sudo[1856]: pam_unix(sudo:session): session closed for user root
Jan 29 16:56:47.827294 sshd[1855]: Connection closed by 147.75.109.163 port 54026
Jan 29 16:56:47.831465 sshd-session[1853]: pam_unix(sshd:session): session closed for user core
Jan 29 16:56:47.844238 systemd[1]: sshd@5-168.119.110.78:22-147.75.109.163:54026.service: Deactivated successfully.
Jan 29 16:56:47.854294 systemd[1]: session-5.scope: Deactivated successfully.
Jan 29 16:56:47.860016 systemd-logind[1509]: Session 5 logged out. Waiting for processes to exit.
Jan 29 16:56:47.869703 systemd-logind[1509]: Removed session 5.
Jan 29 16:56:47.898126 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:56:47.908240 (kubelet)[1869]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:56:47.949755 kubelet[1869]: E0129 16:56:47.949649 1869 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:56:47.955961 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:56:47.956186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:56:47.956618 systemd[1]: kubelet.service: Consumed 252ms CPU time, 102M memory peak.
Jan 29 16:56:48.006191 systemd[1]: Started sshd@6-168.119.110.78:22-147.75.109.163:39738.service - OpenSSH per-connection server daemon (147.75.109.163:39738).
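
Session 5's only action is sudo setenforce 1, which switches SELinux to enforcing mode in the running kernel without touching the persistent configuration. Assuming the SELinux userland tools are installed (they ship with Flatcar), the change can be verified with:

    getenforce    # prints Enforcing once the setenforce 1 above has taken effect
    sestatus      # also shows the mode configured on disk, which is unchanged
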
Jan 29 16:56:48.989126 sshd[1878]: Accepted publickey for core from 147.75.109.163 port 39738 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw
Jan 29 16:56:48.992339 sshd-session[1878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:56:49.001111 systemd-logind[1509]: New session 6 of user core.
Jan 29 16:56:49.016346 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 29 16:56:49.519746 sudo[1882]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 29 16:56:49.520512 sudo[1882]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:56:49.526723 sudo[1882]: pam_unix(sudo:session): session closed for user root
Jan 29 16:56:49.534341 sudo[1881]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 29 16:56:49.534668 sudo[1881]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:56:49.553538 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 16:56:49.611304 augenrules[1904]: No rules
Jan 29 16:56:49.613159 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 16:56:49.613734 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 16:56:49.616396 sudo[1881]: pam_unix(sudo:session): session closed for user root
Jan 29 16:56:49.775558 sshd[1880]: Connection closed by 147.75.109.163 port 39738
Jan 29 16:56:49.777262 sshd-session[1878]: pam_unix(sshd:session): session closed for user core
Jan 29 16:56:49.785729 systemd[1]: sshd@6-168.119.110.78:22-147.75.109.163:39738.service: Deactivated successfully.
Jan 29 16:56:49.790606 systemd[1]: session-6.scope: Deactivated successfully.
Jan 29 16:56:49.792457 systemd-logind[1509]: Session 6 logged out. Waiting for processes to exit.
Jan 29 16:56:49.794507 systemd-logind[1509]: Removed session 6.
Jan 29 16:56:49.963549 systemd[1]: Started sshd@7-168.119.110.78:22-147.75.109.163:39748.service - OpenSSH per-connection server daemon (147.75.109.163:39748).
Jan 29 16:56:50.988145 sshd[1913]: Accepted publickey for core from 147.75.109.163 port 39748 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw
Jan 29 16:56:50.991035 sshd-session[1913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:56:51.001397 systemd-logind[1509]: New session 7 of user core.
Jan 29 16:56:51.013292 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 29 16:56:51.523063 sudo[1916]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 29 16:56:51.523717 sudo[1916]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:56:52.091426 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 29 16:56:52.092628 (dockerd)[1934]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 29 16:56:52.880804 dockerd[1934]: time="2025-01-29T16:56:52.880362157Z" level=info msg="Starting up"
Jan 29 16:56:53.116975 dockerd[1934]: time="2025-01-29T16:56:53.116610391Z" level=info msg="Loading containers: start."
Jan 29 16:56:53.393968 kernel: Initializing XFRM netlink socket
Jan 29 16:56:53.528661 systemd-networkd[1425]: docker0: Link UP
Jan 29 16:56:53.565050 dockerd[1934]: time="2025-01-29T16:56:53.564974269Z" level=info msg="Loading containers: done."
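
The two sudo commands in session 6 remove the shipped audit rule fragments and reload audit-rules.service, after which augenrules correctly reports "No rules". The equivalent interactive sequence, with a check of the loaded ruleset (auditctl and augenrules are part of the standard audit userland):

    sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo systemctl restart audit-rules
    sudo auditctl -l    # expect "No rules" once the fragments are gone
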
Jan 29 16:56:53.588858 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck950395719-merged.mount: Deactivated successfully.
Jan 29 16:56:53.592441 dockerd[1934]: time="2025-01-29T16:56:53.592368779Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 29 16:56:53.592592 dockerd[1934]: time="2025-01-29T16:56:53.592515473Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jan 29 16:56:53.592711 dockerd[1934]: time="2025-01-29T16:56:53.592667348Z" level=info msg="Daemon has completed initialization"
Jan 29 16:56:53.648138 dockerd[1934]: time="2025-01-29T16:56:53.647714518Z" level=info msg="API listen on /run/docker.sock"
Jan 29 16:56:53.647867 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 29 16:56:54.716864 containerd[1524]: time="2025-01-29T16:56:54.716760611Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\""
Jan 29 16:56:55.395861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3575601762.mount: Deactivated successfully.
Jan 29 16:56:56.367231 containerd[1524]: time="2025-01-29T16:56:56.367176920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:56:56.368542 containerd[1524]: time="2025-01-29T16:56:56.368509192Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=28674916"
Jan 29 16:56:56.369771 containerd[1524]: time="2025-01-29T16:56:56.369733778Z" level=info msg="ImageCreate event name:\"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:56:56.372497 containerd[1524]: time="2025-01-29T16:56:56.372454936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:56:56.373446 containerd[1524]: time="2025-01-29T16:56:56.373311603Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"28671624\" in 1.656500402s"
Jan 29 16:56:56.373446 containerd[1524]: time="2025-01-29T16:56:56.373350703Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\""
Jan 29 16:56:56.374078 containerd[1524]: time="2025-01-29T16:56:56.374023832Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\""
Jan 29 16:56:57.601406 containerd[1524]: time="2025-01-29T16:56:57.601337085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:56:57.602483 containerd[1524]: time="2025-01-29T16:56:57.602448063Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=24770731"
Jan 29 16:56:57.603439 containerd[1524]: time="2025-01-29T16:56:57.603405482Z" level=info msg="ImageCreate event name:\"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
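
dockerd's "Not using native diff" warning above is expected on kernels built with CONFIG_OVERLAY_FS_REDIRECT_DIR: the overlay2 driver then falls back to the slower naive diff path when building images. A quick way to confirm both sides of that statement (assuming the kernel exposes /proc/config.gz, as Flatcar kernels do):

    docker info --format '{{.Driver}}'                           # expect overlay2
    zcat /proc/config.gz | grep CONFIG_OVERLAY_FS_REDIRECT_DIR
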
time="2025-01-29T16:56:57.603405482Z" level=info msg="ImageCreate event name:\"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:56:57.605779 containerd[1524]: time="2025-01-29T16:56:57.605738198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:56:57.607022 containerd[1524]: time="2025-01-29T16:56:57.606712742Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"26258470\" in 1.232547685s" Jan 29 16:56:57.607022 containerd[1524]: time="2025-01-29T16:56:57.606737050Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\"" Jan 29 16:56:57.607428 containerd[1524]: time="2025-01-29T16:56:57.607341843Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 29 16:56:57.964740 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 29 16:56:57.972256 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:56:58.181726 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:56:58.194340 (kubelet)[2187]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:56:58.240543 kubelet[2187]: E0129 16:56:58.240351 2187 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:56:58.245345 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:56:58.245579 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:56:58.246258 systemd[1]: kubelet.service: Consumed 243ms CPU time, 103.6M memory peak. 
Jan 29 16:56:58.675373 containerd[1524]: time="2025-01-29T16:56:58.675089341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:56:58.676719 containerd[1524]: time="2025-01-29T16:56:58.676666028Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=19169779"
Jan 29 16:56:58.678019 containerd[1524]: time="2025-01-29T16:56:58.677980564Z" level=info msg="ImageCreate event name:\"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:56:58.681015 containerd[1524]: time="2025-01-29T16:56:58.680972792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:56:58.682124 containerd[1524]: time="2025-01-29T16:56:58.681888089Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"20657536\" in 1.074365206s"
Jan 29 16:56:58.682124 containerd[1524]: time="2025-01-29T16:56:58.681913743Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\""
Jan 29 16:56:58.682426 containerd[1524]: time="2025-01-29T16:56:58.682410360Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\""
Jan 29 16:56:59.905378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1249747325.mount: Deactivated successfully.
Jan 29 16:57:00.253514 containerd[1524]: time="2025-01-29T16:57:00.253438006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:57:00.254961 containerd[1524]: time="2025-01-29T16:57:00.254872289Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909492"
Jan 29 16:57:00.256170 containerd[1524]: time="2025-01-29T16:57:00.256120619Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:57:00.258264 containerd[1524]: time="2025-01-29T16:57:00.258212605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:57:00.258779 containerd[1524]: time="2025-01-29T16:57:00.258735154Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 1.576255955s"
Jan 29 16:57:00.258820 containerd[1524]: time="2025-01-29T16:57:00.258779561Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\""
Jan 29 16:57:00.260102 containerd[1524]: time="2025-01-29T16:57:00.260069083Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jan 29 16:57:00.853614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount964020697.mount: Deactivated successfully.
Jan 29 16:57:01.742894 containerd[1524]: time="2025-01-29T16:57:01.742804953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:57:01.744251 containerd[1524]: time="2025-01-29T16:57:01.744196762Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565333"
Jan 29 16:57:01.745432 containerd[1524]: time="2025-01-29T16:57:01.745392078Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:57:01.748377 containerd[1524]: time="2025-01-29T16:57:01.748320047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:57:01.751312 containerd[1524]: time="2025-01-29T16:57:01.749913419Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.489729676s"
Jan 29 16:57:01.751312 containerd[1524]: time="2025-01-29T16:57:01.749960451Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jan 29 16:57:01.752300 containerd[1524]: time="2025-01-29T16:57:01.752280759Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 29 16:57:02.280460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3726119755.mount: Deactivated successfully.
Jan 29 16:57:02.290247 containerd[1524]: time="2025-01-29T16:57:02.290169285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:57:02.291727 containerd[1524]: time="2025-01-29T16:57:02.291651691Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321158"
Jan 29 16:57:02.292958 containerd[1524]: time="2025-01-29T16:57:02.292839543Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:57:02.297673 containerd[1524]: time="2025-01-29T16:57:02.297561604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:57:02.299305 containerd[1524]: time="2025-01-29T16:57:02.299126737Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 546.753546ms"
Jan 29 16:57:02.299305 containerd[1524]: time="2025-01-29T16:57:02.299174200Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 29 16:57:02.300914 containerd[1524]: time="2025-01-29T16:57:02.300554077Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jan 29 16:57:02.908113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount763426379.mount: Deactivated successfully.
Jan 29 16:57:04.391722 containerd[1524]: time="2025-01-29T16:57:04.391635754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:57:04.392995 containerd[1524]: time="2025-01-29T16:57:04.392948540Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551382"
Jan 29 16:57:04.394403 containerd[1524]: time="2025-01-29T16:57:04.394354071Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:57:04.397341 containerd[1524]: time="2025-01-29T16:57:04.397302103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:57:04.398399 containerd[1524]: time="2025-01-29T16:57:04.398354991Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.097762137s"
Jan 29 16:57:04.398449 containerd[1524]: time="2025-01-29T16:57:04.398400311Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Jan 29 16:57:06.798810 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
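
With the etcd pull finished, install.sh has staged the complete control-plane image set (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause and etcd) through containerd's CRI plugin. The same pre-pull can be inspected or reproduced by hand with the CRI client, or in one step with kubeadm if it is present; the versions below match the log:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
    # or, to fetch the whole set at once:
    kubeadm config images pull --kubernetes-version v1.32.1
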
Jan 29 16:57:06.799454 systemd[1]: kubelet.service: Consumed 243ms CPU time, 103.6M memory peak.
Jan 29 16:57:06.812149 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:57:06.844511 systemd[1]: Reload requested from client PID 2344 ('systemctl') (unit session-7.scope)...
Jan 29 16:57:06.844681 systemd[1]: Reloading...
Jan 29 16:57:07.011948 zram_generator::config[2389]: No configuration found.
Jan 29 16:57:07.125897 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:57:07.238152 systemd[1]: Reloading finished in 392 ms.
Jan 29 16:57:07.289155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:57:07.294125 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:57:07.298508 systemd[1]: kubelet.service: Deactivated successfully.
Jan 29 16:57:07.298753 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:57:07.298801 systemd[1]: kubelet.service: Consumed 146ms CPU time, 91.3M memory peak.
Jan 29 16:57:07.304217 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:57:07.481133 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:57:07.490561 (kubelet)[2445]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 16:57:07.559519 kubelet[2445]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:57:07.561257 kubelet[2445]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 29 16:57:07.561257 kubelet[2445]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
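
Compared with the earlier crash loop, only KUBELET_EXTRA_ARGS is still reported unset: the reload picked up the installer's new unit configuration, which presumably now supplies KUBELET_KUBEADM_ARGS. If extra flags were wanted, the unit's drop-in names the EnvironmentFile it reads them from; a hedged sketch, where the file path and the --node-ip value are illustrative rather than taken from this system:

    systemctl cat kubelet.service | grep EnvironmentFile
    echo 'KUBELET_EXTRA_ARGS=--node-ip=168.119.110.78' | sudo tee /etc/default/kubelet   # hypothetical path
    sudo systemctl daemon-reload && sudo systemctl restart kubelet
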
Jan 29 16:57:07.561257 kubelet[2445]: I0129 16:57:07.560447 2445 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 16:57:07.922789 kubelet[2445]: I0129 16:57:07.922643 2445 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Jan 29 16:57:07.922789 kubelet[2445]: I0129 16:57:07.922708 2445 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 16:57:07.923778 kubelet[2445]: I0129 16:57:07.923735 2445 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 29 16:57:07.977673 kubelet[2445]: I0129 16:57:07.976976 2445 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 16:57:07.981919 kubelet[2445]: E0129 16:57:07.981104 2445 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://168.119.110.78:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 168.119.110.78:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:57:07.997214 kubelet[2445]: E0129 16:57:07.997134 2445 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 29 16:57:07.997214 kubelet[2445]: I0129 16:57:07.997188 2445 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 29 16:57:08.005059 kubelet[2445]: I0129 16:57:08.004981 2445 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
defaulting to /" Jan 29 16:57:08.011515 kubelet[2445]: I0129 16:57:08.011425 2445 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:57:08.011800 kubelet[2445]: I0129 16:57:08.011491 2445 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-0-c-e7d65f4211","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 16:57:08.011800 kubelet[2445]: I0129 16:57:08.011791 2445 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:57:08.012043 kubelet[2445]: I0129 16:57:08.011811 2445 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 16:57:08.012096 kubelet[2445]: I0129 16:57:08.012066 2445 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:57:08.019894 kubelet[2445]: I0129 16:57:08.019842 2445 kubelet.go:446] "Attempting to sync node with API server" Jan 29 16:57:08.019894 kubelet[2445]: I0129 16:57:08.019883 2445 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:57:08.021053 kubelet[2445]: I0129 16:57:08.019915 2445 kubelet.go:352] "Adding apiserver pod source" Jan 29 16:57:08.021053 kubelet[2445]: I0129 16:57:08.019955 2445 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:57:08.029377 kubelet[2445]: W0129 16:57:08.027790 2445 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://168.119.110.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 168.119.110.78:6443: connect: connection refused Jan 29 16:57:08.029377 kubelet[2445]: E0129 16:57:08.029186 2445 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://168.119.110.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 168.119.110.78:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:57:08.029580 kubelet[2445]: 
Jan 29 16:57:08.029580 kubelet[2445]: E0129 16:57:08.029454 2445 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://168.119.110.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-c-e7d65f4211&limit=500&resourceVersion=0\": dial tcp 168.119.110.78:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:57:08.029659 kubelet[2445]: I0129 16:57:08.029606 2445 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 16:57:08.035692 kubelet[2445]: I0129 16:57:08.035471 2445 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 16:57:08.037985 kubelet[2445]: W0129 16:57:08.036731 2445 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 29 16:57:08.038673 kubelet[2445]: I0129 16:57:08.038634 2445 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 29 16:57:08.038737 kubelet[2445]: I0129 16:57:08.038692 2445 server.go:1287] "Started kubelet"
Jan 29 16:57:08.042335 kubelet[2445]: I0129 16:57:08.042282 2445 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 16:57:08.046235 kubelet[2445]: I0129 16:57:08.045481 2445 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 16:57:08.046235 kubelet[2445]: I0129 16:57:08.046086 2445 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 16:57:08.049299 kubelet[2445]: I0129 16:57:08.049273 2445 server.go:490] "Adding debug handlers to kubelet server"
Jan 29 16:57:08.054015 kubelet[2445]: I0129 16:57:08.052778 2445 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 16:57:08.063587 kubelet[2445]: E0129 16:57:08.057107 2445 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://168.119.110.78:6443/api/v1/namespaces/default/events\": dial tcp 168.119.110.78:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-0-0-c-e7d65f4211.181f383fd78be564 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-0-c-e7d65f4211,UID:ci-4230-0-0-c-e7d65f4211,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-0-0-c-e7d65f4211,},FirstTimestamp:2025-01-29 16:57:08.038665572 +0000 UTC m=+0.541343606,LastTimestamp:2025-01-29 16:57:08.038665572 +0000 UTC m=+0.541343606,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-0-c-e7d65f4211,}"
Jan 29 16:57:08.066942 kubelet[2445]: I0129 16:57:08.066139 2445 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 29 16:57:08.067469 kubelet[2445]: E0129 16:57:08.067428 2445 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-0-0-c-e7d65f4211\" not found"
\"ci-4230-0-0-c-e7d65f4211\" not found" Jan 29 16:57:08.067515 kubelet[2445]: I0129 16:57:08.067505 2445 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 16:57:08.067809 kubelet[2445]: I0129 16:57:08.067774 2445 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 16:57:08.071621 kubelet[2445]: I0129 16:57:08.071549 2445 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:57:08.072064 kubelet[2445]: W0129 16:57:08.072013 2445 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://168.119.110.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 168.119.110.78:6443: connect: connection refused Jan 29 16:57:08.072114 kubelet[2445]: E0129 16:57:08.072075 2445 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://168.119.110.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 168.119.110.78:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:57:08.072191 kubelet[2445]: E0129 16:57:08.072157 2445 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.110.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-c-e7d65f4211?timeout=10s\": dial tcp 168.119.110.78:6443: connect: connection refused" interval="200ms" Jan 29 16:57:08.075612 kubelet[2445]: I0129 16:57:08.075591 2445 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:57:08.075752 kubelet[2445]: I0129 16:57:08.075736 2445 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:57:08.080351 kubelet[2445]: E0129 16:57:08.080336 2445 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:57:08.080522 kubelet[2445]: I0129 16:57:08.080511 2445 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:57:08.089140 kubelet[2445]: I0129 16:57:08.089092 2445 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:57:08.104345 kubelet[2445]: I0129 16:57:08.104319 2445 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 16:57:08.104464 kubelet[2445]: I0129 16:57:08.104454 2445 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 16:57:08.104523 kubelet[2445]: I0129 16:57:08.104514 2445 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 29 16:57:08.109650 kubelet[2445]: I0129 16:57:08.109634 2445 kubelet.go:2388] "Starting kubelet main sync loop"
Jan 29 16:57:08.109764 kubelet[2445]: E0129 16:57:08.109747 2445 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 16:57:08.110531 kubelet[2445]: W0129 16:57:08.110498 2445 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://168.119.110.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 168.119.110.78:6443: connect: connection refused
Jan 29 16:57:08.110617 kubelet[2445]: E0129 16:57:08.110600 2445 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://168.119.110.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 168.119.110.78:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:57:08.115598 kubelet[2445]: I0129 16:57:08.115561 2445 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 29 16:57:08.115979 kubelet[2445]: I0129 16:57:08.115574 2445 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 29 16:57:08.115979 kubelet[2445]: I0129 16:57:08.115686 2445 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:57:08.118055 kubelet[2445]: I0129 16:57:08.118043 2445 policy_none.go:49] "None policy: Start"
Jan 29 16:57:08.118122 kubelet[2445]: I0129 16:57:08.118113 2445 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 29 16:57:08.118342 kubelet[2445]: I0129 16:57:08.118163 2445 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 16:57:08.125110 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 29 16:57:08.140746 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 29 16:57:08.147542 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 29 16:57:08.160781 kubelet[2445]: I0129 16:57:08.160078 2445 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 16:57:08.160781 kubelet[2445]: I0129 16:57:08.160284 2445 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 29 16:57:08.160781 kubelet[2445]: I0129 16:57:08.160295 2445 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 16:57:08.160781 kubelet[2445]: I0129 16:57:08.160713 2445 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 16:57:08.162184 kubelet[2445]: E0129 16:57:08.162171 2445 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 29 16:57:08.162288 kubelet[2445]: E0129 16:57:08.162277 2445 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-0-0-c-e7d65f4211\" not found"
Jan 29 16:57:08.233037 systemd[1]: Created slice kubepods-burstable-pod89c87bf03b17009a2599b1e5ba4fd5f4.slice - libcontainer container kubepods-burstable-pod89c87bf03b17009a2599b1e5ba4fd5f4.slice.
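
Because the kubelet runs with the systemd cgroup driver, pod cgroups materialize as systemd slices: kubepods.slice holds the QoS tiers (kubepods-burstable.slice, kubepods-besteffort.slice), and each pod gets a child slice with its UID embedded in the name, as in kubepods-burstable-pod89c87bf03b17009a2599b1e5ba4fd5f4.slice above. The hierarchy can be inspected live:

    systemd-cgls /kubepods.slice                # QoS tiers and per-pod slices as a tree
    systemctl status kubepods-burstable.slice
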
Jan 29 16:57:08.246013 kubelet[2445]: E0129 16:57:08.245581 2445 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-0-c-e7d65f4211\" not found" node="ci-4230-0-0-c-e7d65f4211"
Jan 29 16:57:08.253053 systemd[1]: Created slice kubepods-burstable-pod9c25746c358f5b8910a11c5979ae51c5.slice - libcontainer container kubepods-burstable-pod9c25746c358f5b8910a11c5979ae51c5.slice.
Jan 29 16:57:08.264483 kubelet[2445]: I0129 16:57:08.263277 2445 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-0-0-c-e7d65f4211"
Jan 29 16:57:08.264483 kubelet[2445]: E0129 16:57:08.263831 2445 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://168.119.110.78:6443/api/v1/nodes\": dial tcp 168.119.110.78:6443: connect: connection refused" node="ci-4230-0-0-c-e7d65f4211"
Jan 29 16:57:08.265152 kubelet[2445]: E0129 16:57:08.265102 2445 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-0-c-e7d65f4211\" not found" node="ci-4230-0-0-c-e7d65f4211"
Jan 29 16:57:08.271784 systemd[1]: Created slice kubepods-burstable-pod9ff8515355a17c1009301eef32a1abd6.slice - libcontainer container kubepods-burstable-pod9ff8515355a17c1009301eef32a1abd6.slice.
Jan 29 16:57:08.273876 kubelet[2445]: I0129 16:57:08.273103 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9c25746c358f5b8910a11c5979ae51c5-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-0-c-e7d65f4211\" (UID: \"9c25746c358f5b8910a11c5979ae51c5\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-c-e7d65f4211"
Jan 29 16:57:08.273876 kubelet[2445]: I0129 16:57:08.273154 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c25746c358f5b8910a11c5979ae51c5-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-0-c-e7d65f4211\" (UID: \"9c25746c358f5b8910a11c5979ae51c5\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-c-e7d65f4211"
Jan 29 16:57:08.273876 kubelet[2445]: I0129 16:57:08.273195 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c25746c358f5b8910a11c5979ae51c5-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-0-c-e7d65f4211\" (UID: \"9c25746c358f5b8910a11c5979ae51c5\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-c-e7d65f4211"
Jan 29 16:57:08.273876 kubelet[2445]: I0129 16:57:08.273224 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9ff8515355a17c1009301eef32a1abd6-kubeconfig\") pod \"kube-scheduler-ci-4230-0-0-c-e7d65f4211\" (UID: \"9ff8515355a17c1009301eef32a1abd6\") " pod="kube-system/kube-scheduler-ci-4230-0-0-c-e7d65f4211"
Jan 29 16:57:08.273876 kubelet[2445]: I0129 16:57:08.273254 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89c87bf03b17009a2599b1e5ba4fd5f4-k8s-certs\") pod \"kube-apiserver-ci-4230-0-0-c-e7d65f4211\" (UID: \"89c87bf03b17009a2599b1e5ba4fd5f4\") " pod="kube-system/kube-apiserver-ci-4230-0-0-c-e7d65f4211"
Jan 29 16:57:08.274234 kubelet[2445]: I0129 16:57:08.273287 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c25746c358f5b8910a11c5979ae51c5-ca-certs\") pod \"kube-controller-manager-ci-4230-0-0-c-e7d65f4211\" (UID: \"9c25746c358f5b8910a11c5979ae51c5\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-c-e7d65f4211"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c25746c358f5b8910a11c5979ae51c5-ca-certs\") pod \"kube-controller-manager-ci-4230-0-0-c-e7d65f4211\" (UID: \"9c25746c358f5b8910a11c5979ae51c5\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:08.274234 kubelet[2445]: I0129 16:57:08.273318 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c25746c358f5b8910a11c5979ae51c5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-0-c-e7d65f4211\" (UID: \"9c25746c358f5b8910a11c5979ae51c5\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:08.274234 kubelet[2445]: I0129 16:57:08.273350 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89c87bf03b17009a2599b1e5ba4fd5f4-ca-certs\") pod \"kube-apiserver-ci-4230-0-0-c-e7d65f4211\" (UID: \"89c87bf03b17009a2599b1e5ba4fd5f4\") " pod="kube-system/kube-apiserver-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:08.274234 kubelet[2445]: I0129 16:57:08.273380 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89c87bf03b17009a2599b1e5ba4fd5f4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-0-c-e7d65f4211\" (UID: \"89c87bf03b17009a2599b1e5ba4fd5f4\") " pod="kube-system/kube-apiserver-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:08.274560 kubelet[2445]: E0129 16:57:08.274401 2445 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.110.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-c-e7d65f4211?timeout=10s\": dial tcp 168.119.110.78:6443: connect: connection refused" interval="400ms" Jan 29 16:57:08.276448 kubelet[2445]: E0129 16:57:08.276389 2445 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-0-c-e7d65f4211\" not found" node="ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:08.467283 kubelet[2445]: I0129 16:57:08.467192 2445 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:08.467717 kubelet[2445]: E0129 16:57:08.467669 2445 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://168.119.110.78:6443/api/v1/nodes\": dial tcp 168.119.110.78:6443: connect: connection refused" node="ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:08.548809 containerd[1524]: time="2025-01-29T16:57:08.548584293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-0-c-e7d65f4211,Uid:89c87bf03b17009a2599b1e5ba4fd5f4,Namespace:kube-system,Attempt:0,}" Jan 29 16:57:08.566832 containerd[1524]: time="2025-01-29T16:57:08.566727491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-0-c-e7d65f4211,Uid:9c25746c358f5b8910a11c5979ae51c5,Namespace:kube-system,Attempt:0,}" Jan 29 16:57:08.578072 containerd[1524]: time="2025-01-29T16:57:08.577761407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-0-c-e7d65f4211,Uid:9ff8515355a17c1009301eef32a1abd6,Namespace:kube-system,Attempt:0,}" Jan 29 16:57:08.675915 kubelet[2445]: E0129 16:57:08.675830 2445 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://168.119.110.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-c-e7d65f4211?timeout=10s\": dial tcp 168.119.110.78:6443: connect: connection refused" interval="800ms" Jan 29 16:57:08.872107 kubelet[2445]: I0129 16:57:08.871878 2445 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:08.872908 kubelet[2445]: E0129 16:57:08.872462 2445 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://168.119.110.78:6443/api/v1/nodes\": dial tcp 168.119.110.78:6443: connect: connection refused" node="ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:09.088567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2840850177.mount: Deactivated successfully. Jan 29 16:57:09.102564 containerd[1524]: time="2025-01-29T16:57:09.100981606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:57:09.105662 containerd[1524]: time="2025-01-29T16:57:09.105558053Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076" Jan 29 16:57:09.108252 containerd[1524]: time="2025-01-29T16:57:09.108133772Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:57:09.111825 containerd[1524]: time="2025-01-29T16:57:09.111720314Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:57:09.114433 containerd[1524]: time="2025-01-29T16:57:09.114349398Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:57:09.116084 containerd[1524]: time="2025-01-29T16:57:09.115998956Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:57:09.117571 containerd[1524]: time="2025-01-29T16:57:09.117332907Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:57:09.118899 containerd[1524]: time="2025-01-29T16:57:09.118772309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:57:09.122701 containerd[1524]: time="2025-01-29T16:57:09.122038306Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 555.148906ms" Jan 29 16:57:09.125541 containerd[1524]: time="2025-01-29T16:57:09.125088265Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 576.371485ms" Jan 29 16:57:09.143199 containerd[1524]: time="2025-01-29T16:57:09.143019810Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 565.123682ms" Jan 29 16:57:09.320183 containerd[1524]: time="2025-01-29T16:57:09.319626476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:57:09.320183 containerd[1524]: time="2025-01-29T16:57:09.320010384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:57:09.320183 containerd[1524]: time="2025-01-29T16:57:09.320024960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:09.320183 containerd[1524]: time="2025-01-29T16:57:09.320106156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:09.321133 containerd[1524]: time="2025-01-29T16:57:09.320776739Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:57:09.321133 containerd[1524]: time="2025-01-29T16:57:09.320837116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:57:09.321133 containerd[1524]: time="2025-01-29T16:57:09.320848788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:09.321133 containerd[1524]: time="2025-01-29T16:57:09.320956721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:09.321286 containerd[1524]: time="2025-01-29T16:57:09.317077625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:57:09.321286 containerd[1524]: time="2025-01-29T16:57:09.321220744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:57:09.321356 containerd[1524]: time="2025-01-29T16:57:09.321240289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:09.322983 containerd[1524]: time="2025-01-29T16:57:09.322615616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:09.344070 systemd[1]: Started cri-containerd-af7be32aec566629f499dbf589c82d0d58dead606ce3043be43af1cb3188d7a4.scope - libcontainer container af7be32aec566629f499dbf589c82d0d58dead606ce3043be43af1cb3188d7a4. Jan 29 16:57:09.350402 systemd[1]: Started cri-containerd-7751b8bdf9658985225445b73d636b31247bea86ffe199f227e51f70d68007ca.scope - libcontainer container 7751b8bdf9658985225445b73d636b31247bea86ffe199f227e51f70d68007ca. 
Jan 29 16:57:09.357120 systemd[1]: Started cri-containerd-76b3343d7139f0e3c3242310319b19793cc6edb0736361de9d933665b29f6f19.scope - libcontainer container 76b3343d7139f0e3c3242310319b19793cc6edb0736361de9d933665b29f6f19. Jan 29 16:57:09.419163 containerd[1524]: time="2025-01-29T16:57:09.418983524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-0-c-e7d65f4211,Uid:89c87bf03b17009a2599b1e5ba4fd5f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"af7be32aec566629f499dbf589c82d0d58dead606ce3043be43af1cb3188d7a4\"" Jan 29 16:57:09.425516 containerd[1524]: time="2025-01-29T16:57:09.425357214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-0-c-e7d65f4211,Uid:9ff8515355a17c1009301eef32a1abd6,Namespace:kube-system,Attempt:0,} returns sandbox id \"7751b8bdf9658985225445b73d636b31247bea86ffe199f227e51f70d68007ca\"" Jan 29 16:57:09.425516 containerd[1524]: time="2025-01-29T16:57:09.425479914Z" level=info msg="CreateContainer within sandbox \"af7be32aec566629f499dbf589c82d0d58dead606ce3043be43af1cb3188d7a4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 16:57:09.428074 containerd[1524]: time="2025-01-29T16:57:09.428030658Z" level=info msg="CreateContainer within sandbox \"7751b8bdf9658985225445b73d636b31247bea86ffe199f227e51f70d68007ca\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 16:57:09.434440 containerd[1524]: time="2025-01-29T16:57:09.434346944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-0-c-e7d65f4211,Uid:9c25746c358f5b8910a11c5979ae51c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"76b3343d7139f0e3c3242310319b19793cc6edb0736361de9d933665b29f6f19\"" Jan 29 16:57:09.437458 containerd[1524]: time="2025-01-29T16:57:09.437415217Z" level=info msg="CreateContainer within sandbox \"76b3343d7139f0e3c3242310319b19793cc6edb0736361de9d933665b29f6f19\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 16:57:09.450118 kubelet[2445]: W0129 16:57:09.449991 2445 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://168.119.110.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 168.119.110.78:6443: connect: connection refused Jan 29 16:57:09.450118 kubelet[2445]: E0129 16:57:09.450067 2445 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://168.119.110.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 168.119.110.78:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:57:09.456696 containerd[1524]: time="2025-01-29T16:57:09.456561731Z" level=info msg="CreateContainer within sandbox \"7751b8bdf9658985225445b73d636b31247bea86ffe199f227e51f70d68007ca\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d66cecb75b1cd2abced8b4b74ebf0537086443767a8892e29b3c8514863479b4\"" Jan 29 16:57:09.457976 containerd[1524]: time="2025-01-29T16:57:09.457556435Z" level=info msg="StartContainer for \"d66cecb75b1cd2abced8b4b74ebf0537086443767a8892e29b3c8514863479b4\"" Jan 29 16:57:09.462044 containerd[1524]: time="2025-01-29T16:57:09.461947289Z" level=info msg="CreateContainer within sandbox \"76b3343d7139f0e3c3242310319b19793cc6edb0736361de9d933665b29f6f19\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"76988c597e20b554cef21829ef4bb3e41378ec4d000912ef8d6bd420b32d5db9\"" Jan 29 16:57:09.462543 containerd[1524]: time="2025-01-29T16:57:09.462425395Z" level=info msg="CreateContainer within sandbox \"af7be32aec566629f499dbf589c82d0d58dead606ce3043be43af1cb3188d7a4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d3edd3d7f8bd9dd9d6c7140b901763301851bb781391bbbeda1cf987cfee1ae8\"" Jan 29 16:57:09.463190 containerd[1524]: time="2025-01-29T16:57:09.463174709Z" level=info msg="StartContainer for \"76988c597e20b554cef21829ef4bb3e41378ec4d000912ef8d6bd420b32d5db9\"" Jan 29 16:57:09.463783 kubelet[2445]: W0129 16:57:09.463593 2445 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://168.119.110.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 168.119.110.78:6443: connect: connection refused Jan 29 16:57:09.463783 kubelet[2445]: E0129 16:57:09.463641 2445 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://168.119.110.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 168.119.110.78:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:57:09.463959 containerd[1524]: time="2025-01-29T16:57:09.463675468Z" level=info msg="StartContainer for \"d3edd3d7f8bd9dd9d6c7140b901763301851bb781391bbbeda1cf987cfee1ae8\"" Jan 29 16:57:09.477334 kubelet[2445]: E0129 16:57:09.477286 2445 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.110.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-c-e7d65f4211?timeout=10s\": dial tcp 168.119.110.78:6443: connect: connection refused" interval="1.6s" Jan 29 16:57:09.491474 systemd[1]: Started cri-containerd-d66cecb75b1cd2abced8b4b74ebf0537086443767a8892e29b3c8514863479b4.scope - libcontainer container d66cecb75b1cd2abced8b4b74ebf0537086443767a8892e29b3c8514863479b4. Jan 29 16:57:09.505266 systemd[1]: Started cri-containerd-d3edd3d7f8bd9dd9d6c7140b901763301851bb781391bbbeda1cf987cfee1ae8.scope - libcontainer container d3edd3d7f8bd9dd9d6c7140b901763301851bb781391bbbeda1cf987cfee1ae8. Jan 29 16:57:09.516037 systemd[1]: Started cri-containerd-76988c597e20b554cef21829ef4bb3e41378ec4d000912ef8d6bd420b32d5db9.scope - libcontainer container 76988c597e20b554cef21829ef4bb3e41378ec4d000912ef8d6bd420b32d5db9. 
Jan 29 16:57:09.561325 containerd[1524]: time="2025-01-29T16:57:09.561284803Z" level=info msg="StartContainer for \"d66cecb75b1cd2abced8b4b74ebf0537086443767a8892e29b3c8514863479b4\" returns successfully" Jan 29 16:57:09.593016 containerd[1524]: time="2025-01-29T16:57:09.592965112Z" level=info msg="StartContainer for \"d3edd3d7f8bd9dd9d6c7140b901763301851bb781391bbbeda1cf987cfee1ae8\" returns successfully" Jan 29 16:57:09.604209 containerd[1524]: time="2025-01-29T16:57:09.604160999Z" level=info msg="StartContainer for \"76988c597e20b554cef21829ef4bb3e41378ec4d000912ef8d6bd420b32d5db9\" returns successfully" Jan 29 16:57:09.609720 kubelet[2445]: W0129 16:57:09.609416 2445 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://168.119.110.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-c-e7d65f4211&limit=500&resourceVersion=0": dial tcp 168.119.110.78:6443: connect: connection refused Jan 29 16:57:09.609720 kubelet[2445]: E0129 16:57:09.609494 2445 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://168.119.110.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-c-e7d65f4211&limit=500&resourceVersion=0\": dial tcp 168.119.110.78:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:57:09.612530 kubelet[2445]: W0129 16:57:09.612477 2445 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://168.119.110.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 168.119.110.78:6443: connect: connection refused Jan 29 16:57:09.612685 kubelet[2445]: E0129 16:57:09.612654 2445 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://168.119.110.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 168.119.110.78:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:57:09.676855 kubelet[2445]: I0129 16:57:09.676763 2445 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:09.677689 kubelet[2445]: E0129 16:57:09.677654 2445 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://168.119.110.78:6443/api/v1/nodes\": dial tcp 168.119.110.78:6443: connect: connection refused" node="ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:10.125187 kubelet[2445]: E0129 16:57:10.124825 2445 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-0-c-e7d65f4211\" not found" node="ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:10.125580 kubelet[2445]: E0129 16:57:10.125456 2445 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-0-c-e7d65f4211\" not found" node="ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:10.129844 kubelet[2445]: E0129 16:57:10.129711 2445 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-0-c-e7d65f4211\" not found" node="ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:11.134183 kubelet[2445]: E0129 16:57:11.133871 2445 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-0-c-e7d65f4211\" not found" node="ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:11.134183 kubelet[2445]: E0129 
16:57:11.134019 2445 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-0-c-e7d65f4211\" not found" node="ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:11.187157 kubelet[2445]: E0129 16:57:11.187063 2445 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-0-0-c-e7d65f4211\" not found" node="ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:11.280400 kubelet[2445]: I0129 16:57:11.280318 2445 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:11.296628 kubelet[2445]: I0129 16:57:11.296569 2445 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:11.296628 kubelet[2445]: E0129 16:57:11.296643 2445 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4230-0-0-c-e7d65f4211\": node \"ci-4230-0-0-c-e7d65f4211\" not found" Jan 29 16:57:11.300753 kubelet[2445]: E0129 16:57:11.300693 2445 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-0-0-c-e7d65f4211\" not found" Jan 29 16:57:11.401909 kubelet[2445]: E0129 16:57:11.401700 2445 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-0-0-c-e7d65f4211\" not found" Jan 29 16:57:11.502909 kubelet[2445]: E0129 16:57:11.502835 2445 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-0-0-c-e7d65f4211\" not found" Jan 29 16:57:11.603752 kubelet[2445]: E0129 16:57:11.603655 2445 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-0-0-c-e7d65f4211\" not found" Jan 29 16:57:11.704476 kubelet[2445]: E0129 16:57:11.704347 2445 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-0-0-c-e7d65f4211\" not found" Jan 29 16:57:11.769886 kubelet[2445]: I0129 16:57:11.769788 2445 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:11.778352 kubelet[2445]: E0129 16:57:11.778273 2445 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-0-0-c-e7d65f4211\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:11.778352 kubelet[2445]: I0129 16:57:11.778325 2445 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:11.780578 kubelet[2445]: E0129 16:57:11.780519 2445 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-0-0-c-e7d65f4211\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:11.780578 kubelet[2445]: I0129 16:57:11.780556 2445 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:11.783740 kubelet[2445]: E0129 16:57:11.783680 2445 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-0-0-c-e7d65f4211\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:12.026316 kubelet[2445]: I0129 16:57:12.026267 2445 apiserver.go:52] "Watching apiserver" Jan 29 16:57:12.068917 kubelet[2445]: I0129 16:57:12.068835 2445 
desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 16:57:12.142294 kubelet[2445]: I0129 16:57:12.142250 2445 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:12.915338 kubelet[2445]: I0129 16:57:12.915276 2445 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:13.174351 kubelet[2445]: I0129 16:57:13.174130 2445 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:13.456142 systemd[1]: Reload requested from client PID 2717 ('systemctl') (unit session-7.scope)... Jan 29 16:57:13.456173 systemd[1]: Reloading... Jan 29 16:57:13.599993 zram_generator::config[2763]: No configuration found. Jan 29 16:57:13.720866 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:57:13.850645 systemd[1]: Reloading finished in 393 ms. Jan 29 16:57:13.893021 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:57:13.920199 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:57:13.920698 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:57:13.920784 systemd[1]: kubelet.service: Consumed 1.077s CPU time, 121.3M memory peak. Jan 29 16:57:13.926273 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:57:14.178142 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:57:14.183367 (kubelet)[2813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:57:14.256518 kubelet[2813]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:57:14.256518 kubelet[2813]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 16:57:14.256518 kubelet[2813]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:57:14.258573 kubelet[2813]: I0129 16:57:14.257003 2813 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:57:14.266634 kubelet[2813]: I0129 16:57:14.266604 2813 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 16:57:14.268264 kubelet[2813]: I0129 16:57:14.268249 2813 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:57:14.268985 kubelet[2813]: I0129 16:57:14.268970 2813 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 16:57:14.273660 kubelet[2813]: I0129 16:57:14.273626 2813 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
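The restarted kubelet reports that client certificate rotation is on and loads its current cert/key pair from /var/lib/kubelet/pki/kubelet-client-current.pem, typically a symlink that rotation repoints at each renewal. A minimal sketch of inspecting that bundle's leaf certificate and its expiry with Go's crypto/x509; the PEM path is taken from the log line above:

    // A minimal sketch of inspecting the rotated kubelet client
    // certificate; the PEM path is taken from the log line above.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
        if err != nil {
            log.Fatal(err)
        }
        // The bundle holds the client certificate and key; report the
        // first CERTIFICATE block's subject and expiry.
        for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
            if block.Type != "CERTIFICATE" {
                continue
            }
            cert, err := x509.ParseCertificate(block.Bytes)
            if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
            return
        }
    }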
Jan 29 16:57:14.284570 kubelet[2813]: I0129 16:57:14.284529 2813 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:57:14.309433 kubelet[2813]: E0129 16:57:14.309260 2813 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 16:57:14.309433 kubelet[2813]: I0129 16:57:14.309299 2813 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 16:57:14.313509 kubelet[2813]: I0129 16:57:14.313472 2813 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 16:57:14.313753 kubelet[2813]: I0129 16:57:14.313711 2813 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:57:14.313988 kubelet[2813]: I0129 16:57:14.313746 2813 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-0-c-e7d65f4211","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 16:57:14.313988 kubelet[2813]: I0129 16:57:14.313984 2813 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:57:14.314108 kubelet[2813]: I0129 16:57:14.313995 2813 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 16:57:14.314108 kubelet[2813]: I0129 16:57:14.314041 2813 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:57:14.314238 kubelet[2813]: I0129 16:57:14.314221 2813 kubelet.go:446] "Attempting to sync node with API server" Jan 29 16:57:14.314269 kubelet[2813]: I0129 16:57:14.314238 2813 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:57:14.314269 kubelet[2813]: I0129 16:57:14.314257 2813 kubelet.go:352] "Adding apiserver pod source" Jan 29 16:57:14.314269 kubelet[2813]: I0129 16:57:14.314268 2813 apiserver.go:42] "Waiting for node sync 
before watching apiserver pods" Jan 29 16:57:14.317417 kubelet[2813]: I0129 16:57:14.317349 2813 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:57:14.320527 kubelet[2813]: I0129 16:57:14.320000 2813 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:57:14.323115 kubelet[2813]: I0129 16:57:14.322172 2813 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 16:57:14.323115 kubelet[2813]: I0129 16:57:14.322257 2813 server.go:1287] "Started kubelet" Jan 29 16:57:14.330371 kubelet[2813]: I0129 16:57:14.329486 2813 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:57:14.337576 kubelet[2813]: I0129 16:57:14.337553 2813 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 16:57:14.338386 kubelet[2813]: I0129 16:57:14.338172 2813 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:57:14.338813 kubelet[2813]: I0129 16:57:14.338794 2813 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 16:57:14.338979 kubelet[2813]: I0129 16:57:14.338964 2813 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:57:14.347032 kubelet[2813]: I0129 16:57:14.345304 2813 server.go:490] "Adding debug handlers to kubelet server" Jan 29 16:57:14.347524 kubelet[2813]: I0129 16:57:14.347379 2813 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:57:14.347880 kubelet[2813]: I0129 16:57:14.347868 2813 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:57:14.350961 kubelet[2813]: I0129 16:57:14.349307 2813 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 16:57:14.351878 kubelet[2813]: I0129 16:57:14.350986 2813 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:57:14.355285 kubelet[2813]: E0129 16:57:14.354612 2813 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:57:14.355642 kubelet[2813]: I0129 16:57:14.355495 2813 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:57:14.355642 kubelet[2813]: I0129 16:57:14.355506 2813 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:57:14.355642 kubelet[2813]: I0129 16:57:14.355603 2813 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:57:14.360543 kubelet[2813]: I0129 16:57:14.360467 2813 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 16:57:14.362105 kubelet[2813]: I0129 16:57:14.362028 2813 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 16:57:14.362105 kubelet[2813]: I0129 16:57:14.362054 2813 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
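The nodeConfig dump above shows CgroupDriver "systemd" and "CgroupVersion":2, taken from the kubelet config because this containerd (v1.7.23) does not implement the RuntimeConfig RPC. A minimal sketch of confirming the unified cgroup v2 hierarchy such a configuration expects on the host:

    // A minimal sketch confirming the unified cgroup v2 hierarchy that
    // the nodeConfig above reports ("CgroupVersion":2, driver "systemd").
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // On cgroup v2 the unified hierarchy exposes the controller list
        // at the root of /sys/fs/cgroup.
        data, err := os.ReadFile("/sys/fs/cgroup/cgroup.controllers")
        if err != nil {
            fmt.Println("cgroup v2 not mounted at /sys/fs/cgroup:", err)
            return
        }
        fmt.Print("cgroup v2 controllers: ", string(data))
    }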
Jan 29 16:57:14.362105 kubelet[2813]: I0129 16:57:14.362061 2813 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 16:57:14.362385 kubelet[2813]: E0129 16:57:14.362227 2813 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:57:14.405802 kubelet[2813]: I0129 16:57:14.405773 2813 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 16:57:14.405802 kubelet[2813]: I0129 16:57:14.405789 2813 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 16:57:14.405802 kubelet[2813]: I0129 16:57:14.405805 2813 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:57:14.406017 kubelet[2813]: I0129 16:57:14.405981 2813 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 16:57:14.406017 kubelet[2813]: I0129 16:57:14.405991 2813 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 16:57:14.406017 kubelet[2813]: I0129 16:57:14.406007 2813 policy_none.go:49] "None policy: Start" Jan 29 16:57:14.406017 kubelet[2813]: I0129 16:57:14.406015 2813 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 16:57:14.406152 kubelet[2813]: I0129 16:57:14.406024 2813 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:57:14.406152 kubelet[2813]: I0129 16:57:14.406110 2813 state_mem.go:75] "Updated machine memory state" Jan 29 16:57:14.409702 kubelet[2813]: I0129 16:57:14.409684 2813 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:57:14.410134 kubelet[2813]: I0129 16:57:14.409825 2813 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 16:57:14.410134 kubelet[2813]: I0129 16:57:14.409837 2813 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:57:14.410134 kubelet[2813]: I0129 16:57:14.410072 2813 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:57:14.412370 kubelet[2813]: E0129 16:57:14.412350 2813 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 29 16:57:14.462455 sudo[2846]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 16:57:14.462827 sudo[2846]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 16:57:14.465614 kubelet[2813]: I0129 16:57:14.465229 2813 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:14.468250 kubelet[2813]: I0129 16:57:14.467366 2813 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:14.468701 kubelet[2813]: I0129 16:57:14.468424 2813 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:14.473608 kubelet[2813]: E0129 16:57:14.473565 2813 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-0-0-c-e7d65f4211\" already exists" pod="kube-system/kube-scheduler-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:14.475698 kubelet[2813]: E0129 16:57:14.475529 2813 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-0-0-c-e7d65f4211\" already exists" pod="kube-system/kube-apiserver-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:14.475698 kubelet[2813]: E0129 16:57:14.475612 2813 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-0-0-c-e7d65f4211\" already exists" pod="kube-system/kube-controller-manager-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:14.515437 kubelet[2813]: I0129 16:57:14.515400 2813 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:14.525778 kubelet[2813]: I0129 16:57:14.525439 2813 kubelet_node_status.go:125] "Node was previously registered" node="ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:14.525778 kubelet[2813]: I0129 16:57:14.525513 2813 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:14.541086 kubelet[2813]: I0129 16:57:14.541043 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89c87bf03b17009a2599b1e5ba4fd5f4-k8s-certs\") pod \"kube-apiserver-ci-4230-0-0-c-e7d65f4211\" (UID: \"89c87bf03b17009a2599b1e5ba4fd5f4\") " pod="kube-system/kube-apiserver-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:14.541300 kubelet[2813]: I0129 16:57:14.541285 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c25746c358f5b8910a11c5979ae51c5-ca-certs\") pod \"kube-controller-manager-ci-4230-0-0-c-e7d65f4211\" (UID: \"9c25746c358f5b8910a11c5979ae51c5\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:14.541429 kubelet[2813]: I0129 16:57:14.541387 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9c25746c358f5b8910a11c5979ae51c5-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-0-c-e7d65f4211\" (UID: \"9c25746c358f5b8910a11c5979ae51c5\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:14.541429 kubelet[2813]: I0129 16:57:14.541407 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/9c25746c358f5b8910a11c5979ae51c5-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-0-c-e7d65f4211\" (UID: \"9c25746c358f5b8910a11c5979ae51c5\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:14.541596 kubelet[2813]: I0129 16:57:14.541527 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c25746c358f5b8910a11c5979ae51c5-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-0-c-e7d65f4211\" (UID: \"9c25746c358f5b8910a11c5979ae51c5\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:14.541596 kubelet[2813]: I0129 16:57:14.541547 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9ff8515355a17c1009301eef32a1abd6-kubeconfig\") pod \"kube-scheduler-ci-4230-0-0-c-e7d65f4211\" (UID: \"9ff8515355a17c1009301eef32a1abd6\") " pod="kube-system/kube-scheduler-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:14.541764 kubelet[2813]: I0129 16:57:14.541563 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89c87bf03b17009a2599b1e5ba4fd5f4-ca-certs\") pod \"kube-apiserver-ci-4230-0-0-c-e7d65f4211\" (UID: \"89c87bf03b17009a2599b1e5ba4fd5f4\") " pod="kube-system/kube-apiserver-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:14.541764 kubelet[2813]: I0129 16:57:14.541692 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89c87bf03b17009a2599b1e5ba4fd5f4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-0-c-e7d65f4211\" (UID: \"89c87bf03b17009a2599b1e5ba4fd5f4\") " pod="kube-system/kube-apiserver-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:14.541764 kubelet[2813]: I0129 16:57:14.541710 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c25746c358f5b8910a11c5979ae51c5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-0-c-e7d65f4211\" (UID: \"9c25746c358f5b8910a11c5979ae51c5\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:15.059662 sudo[2846]: pam_unix(sudo:session): session closed for user root Jan 29 16:57:15.314879 kubelet[2813]: I0129 16:57:15.314722 2813 apiserver.go:52] "Watching apiserver" Jan 29 16:57:15.339269 kubelet[2813]: I0129 16:57:15.339210 2813 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 16:57:15.388941 kubelet[2813]: I0129 16:57:15.388300 2813 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:15.389237 kubelet[2813]: I0129 16:57:15.389209 2813 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:15.390907 kubelet[2813]: I0129 16:57:15.390881 2813 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:15.406610 kubelet[2813]: E0129 16:57:15.406573 2813 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-0-0-c-e7d65f4211\" already exists" pod="kube-system/kube-controller-manager-ci-4230-0-0-c-e7d65f4211" 
Jan 29 16:57:15.407030 kubelet[2813]: E0129 16:57:15.407007 2813 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-0-0-c-e7d65f4211\" already exists" pod="kube-system/kube-apiserver-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:15.412247 kubelet[2813]: E0129 16:57:15.412221 2813 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-0-0-c-e7d65f4211\" already exists" pod="kube-system/kube-scheduler-ci-4230-0-0-c-e7d65f4211" Jan 29 16:57:15.463308 kubelet[2813]: I0129 16:57:15.463211 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-0-0-c-e7d65f4211" podStartSLOduration=3.463139778 podStartE2EDuration="3.463139778s" podCreationTimestamp="2025-01-29 16:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:57:15.462791775 +0000 UTC m=+1.272014855" watchObservedRunningTime="2025-01-29 16:57:15.463139778 +0000 UTC m=+1.272362869" Jan 29 16:57:15.463506 kubelet[2813]: I0129 16:57:15.463351 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-0-0-c-e7d65f4211" podStartSLOduration=2.4633427980000002 podStartE2EDuration="2.463342798s" podCreationTimestamp="2025-01-29 16:57:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:57:15.452666776 +0000 UTC m=+1.261889857" watchObservedRunningTime="2025-01-29 16:57:15.463342798 +0000 UTC m=+1.272565879" Jan 29 16:57:17.182412 sudo[1916]: pam_unix(sudo:session): session closed for user root Jan 29 16:57:17.343712 sshd[1915]: Connection closed by 147.75.109.163 port 39748 Jan 29 16:57:17.346482 sshd-session[1913]: pam_unix(sshd:session): session closed for user core Jan 29 16:57:17.351804 systemd-logind[1509]: Session 7 logged out. Waiting for processes to exit. Jan 29 16:57:17.352817 systemd[1]: sshd@7-168.119.110.78:22-147.75.109.163:39748.service: Deactivated successfully. Jan 29 16:57:17.356089 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 16:57:17.356371 systemd[1]: session-7.scope: Consumed 5.538s CPU time, 214.2M memory peak. Jan 29 16:57:17.358654 systemd-logind[1509]: Removed session 7. Jan 29 16:57:19.588066 kubelet[2813]: I0129 16:57:19.588030 2813 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 16:57:19.588827 kubelet[2813]: I0129 16:57:19.588520 2813 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 16:57:19.588889 containerd[1524]: time="2025-01-29T16:57:19.588327898Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
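With the node registered, the kubelet receives its pod CIDR (192.168.0.0/24), pushes it to the runtime, and then waits for a CNI plugin to drop its config. A minimal sketch of parsing that CIDR with Go's net package:

    // A minimal sketch of parsing the pod CIDR reported above.
    package main

    import (
        "fmt"
        "log"
        "net"
    )

    func main() {
        _, ipnet, err := net.ParseCIDR("192.168.0.0/24")
        if err != nil {
            log.Fatal(err)
        }
        ones, bits := ipnet.Mask.Size()
        // /24 of 32 bits leaves 2^8 = 256 addresses, 254 usable hosts.
        fmt.Printf("network %s: %d usable pod addresses\n", ipnet, (1<<(bits-ones))-2)
    }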
Jan 29 16:57:20.553491 kubelet[2813]: I0129 16:57:20.553056 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-0-0-c-e7d65f4211" podStartSLOduration=8.550765252 podStartE2EDuration="8.550765252s" podCreationTimestamp="2025-01-29 16:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:57:15.477810709 +0000 UTC m=+1.287033791" watchObservedRunningTime="2025-01-29 16:57:20.550765252 +0000 UTC m=+6.359988343" Jan 29 16:57:20.582602 kubelet[2813]: I0129 16:57:20.582554 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-hostproc\") pod \"cilium-8sv6h\" (UID: \"a42266af-4520-4d6a-b43a-dee7a81ab497\") " pod="kube-system/cilium-8sv6h" Jan 29 16:57:20.582602 kubelet[2813]: I0129 16:57:20.582600 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-cilium-run\") pod \"cilium-8sv6h\" (UID: \"a42266af-4520-4d6a-b43a-dee7a81ab497\") " pod="kube-system/cilium-8sv6h" Jan 29 16:57:20.582801 kubelet[2813]: I0129 16:57:20.582623 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-etc-cni-netd\") pod \"cilium-8sv6h\" (UID: \"a42266af-4520-4d6a-b43a-dee7a81ab497\") " pod="kube-system/cilium-8sv6h" Jan 29 16:57:20.582801 kubelet[2813]: I0129 16:57:20.582662 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-lib-modules\") pod \"cilium-8sv6h\" (UID: \"a42266af-4520-4d6a-b43a-dee7a81ab497\") " pod="kube-system/cilium-8sv6h" Jan 29 16:57:20.582801 kubelet[2813]: I0129 16:57:20.582686 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a42266af-4520-4d6a-b43a-dee7a81ab497-cilium-config-path\") pod \"cilium-8sv6h\" (UID: \"a42266af-4520-4d6a-b43a-dee7a81ab497\") " pod="kube-system/cilium-8sv6h" Jan 29 16:57:20.582801 kubelet[2813]: I0129 16:57:20.582706 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8006f3f1-4b03-4229-909b-3c75edf39ab4-kube-proxy\") pod \"kube-proxy-68pmd\" (UID: \"8006f3f1-4b03-4229-909b-3c75edf39ab4\") " pod="kube-system/kube-proxy-68pmd" Jan 29 16:57:20.582801 kubelet[2813]: I0129 16:57:20.582726 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-bpf-maps\") pod \"cilium-8sv6h\" (UID: \"a42266af-4520-4d6a-b43a-dee7a81ab497\") " pod="kube-system/cilium-8sv6h" Jan 29 16:57:20.582801 kubelet[2813]: I0129 16:57:20.582744 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8006f3f1-4b03-4229-909b-3c75edf39ab4-lib-modules\") pod \"kube-proxy-68pmd\" (UID: \"8006f3f1-4b03-4229-909b-3c75edf39ab4\") " pod="kube-system/kube-proxy-68pmd" Jan 29 
16:57:20.583043 kubelet[2813]: I0129 16:57:20.582763 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a42266af-4520-4d6a-b43a-dee7a81ab497-clustermesh-secrets\") pod \"cilium-8sv6h\" (UID: \"a42266af-4520-4d6a-b43a-dee7a81ab497\") " pod="kube-system/cilium-8sv6h" Jan 29 16:57:20.583043 kubelet[2813]: I0129 16:57:20.582781 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-host-proc-sys-net\") pod \"cilium-8sv6h\" (UID: \"a42266af-4520-4d6a-b43a-dee7a81ab497\") " pod="kube-system/cilium-8sv6h" Jan 29 16:57:20.583043 kubelet[2813]: I0129 16:57:20.582806 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-cilium-cgroup\") pod \"cilium-8sv6h\" (UID: \"a42266af-4520-4d6a-b43a-dee7a81ab497\") " pod="kube-system/cilium-8sv6h" Jan 29 16:57:20.583043 kubelet[2813]: I0129 16:57:20.582834 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-xtables-lock\") pod \"cilium-8sv6h\" (UID: \"a42266af-4520-4d6a-b43a-dee7a81ab497\") " pod="kube-system/cilium-8sv6h" Jan 29 16:57:20.583043 kubelet[2813]: I0129 16:57:20.582852 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-host-proc-sys-kernel\") pod \"cilium-8sv6h\" (UID: \"a42266af-4520-4d6a-b43a-dee7a81ab497\") " pod="kube-system/cilium-8sv6h" Jan 29 16:57:20.583043 kubelet[2813]: I0129 16:57:20.582873 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a42266af-4520-4d6a-b43a-dee7a81ab497-hubble-tls\") pod \"cilium-8sv6h\" (UID: \"a42266af-4520-4d6a-b43a-dee7a81ab497\") " pod="kube-system/cilium-8sv6h" Jan 29 16:57:20.583266 kubelet[2813]: I0129 16:57:20.582896 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8006f3f1-4b03-4229-909b-3c75edf39ab4-xtables-lock\") pod \"kube-proxy-68pmd\" (UID: \"8006f3f1-4b03-4229-909b-3c75edf39ab4\") " pod="kube-system/kube-proxy-68pmd" Jan 29 16:57:20.583266 kubelet[2813]: I0129 16:57:20.582916 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-cni-path\") pod \"cilium-8sv6h\" (UID: \"a42266af-4520-4d6a-b43a-dee7a81ab497\") " pod="kube-system/cilium-8sv6h" Jan 29 16:57:20.583266 kubelet[2813]: I0129 16:57:20.582968 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fznvt\" (UniqueName: \"kubernetes.io/projected/a42266af-4520-4d6a-b43a-dee7a81ab497-kube-api-access-fznvt\") pod \"cilium-8sv6h\" (UID: \"a42266af-4520-4d6a-b43a-dee7a81ab497\") " pod="kube-system/cilium-8sv6h" Jan 29 16:57:20.583266 kubelet[2813]: I0129 16:57:20.582994 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5hzqk\" (UniqueName: \"kubernetes.io/projected/8006f3f1-4b03-4229-909b-3c75edf39ab4-kube-api-access-5hzqk\") pod \"kube-proxy-68pmd\" (UID: \"8006f3f1-4b03-4229-909b-3c75edf39ab4\") " pod="kube-system/kube-proxy-68pmd" Jan 29 16:57:20.583458 systemd[1]: Created slice kubepods-besteffort-pod8006f3f1_4b03_4229_909b_3c75edf39ab4.slice - libcontainer container kubepods-besteffort-pod8006f3f1_4b03_4229_909b_3c75edf39ab4.slice. Jan 29 16:57:20.598618 systemd[1]: Created slice kubepods-burstable-poda42266af_4520_4d6a_b43a_dee7a81ab497.slice - libcontainer container kubepods-burstable-poda42266af_4520_4d6a_b43a_dee7a81ab497.slice. Jan 29 16:57:20.740591 systemd[1]: Created slice kubepods-besteffort-pod0b34e1d8_a11a_4e2c_bc0e_971716ea9b60.slice - libcontainer container kubepods-besteffort-pod0b34e1d8_a11a_4e2c_bc0e_971716ea9b60.slice. Jan 29 16:57:20.788113 kubelet[2813]: I0129 16:57:20.788060 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b34e1d8-a11a-4e2c-bc0e-971716ea9b60-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-r7x2k\" (UID: \"0b34e1d8-a11a-4e2c-bc0e-971716ea9b60\") " pod="kube-system/cilium-operator-6c4d7847fc-r7x2k" Jan 29 16:57:20.788113 kubelet[2813]: I0129 16:57:20.788104 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvmnl\" (UniqueName: \"kubernetes.io/projected/0b34e1d8-a11a-4e2c-bc0e-971716ea9b60-kube-api-access-lvmnl\") pod \"cilium-operator-6c4d7847fc-r7x2k\" (UID: \"0b34e1d8-a11a-4e2c-bc0e-971716ea9b60\") " pod="kube-system/cilium-operator-6c4d7847fc-r7x2k" Jan 29 16:57:20.906767 containerd[1524]: time="2025-01-29T16:57:20.904952205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-68pmd,Uid:8006f3f1-4b03-4229-909b-3c75edf39ab4,Namespace:kube-system,Attempt:0,}" Jan 29 16:57:20.910207 containerd[1524]: time="2025-01-29T16:57:20.908822916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8sv6h,Uid:a42266af-4520-4d6a-b43a-dee7a81ab497,Namespace:kube-system,Attempt:0,}" Jan 29 16:57:20.974485 containerd[1524]: time="2025-01-29T16:57:20.974388124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:57:20.974848 containerd[1524]: time="2025-01-29T16:57:20.974745162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:57:20.975046 containerd[1524]: time="2025-01-29T16:57:20.974961491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:20.976921 containerd[1524]: time="2025-01-29T16:57:20.976862073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:20.977138 containerd[1524]: time="2025-01-29T16:57:20.977035592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:57:20.977504 containerd[1524]: time="2025-01-29T16:57:20.977114807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:57:20.978210 containerd[1524]: time="2025-01-29T16:57:20.977814126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:20.978345 containerd[1524]: time="2025-01-29T16:57:20.978290694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:21.007092 systemd[1]: Started cri-containerd-ec4967c37363fc492e82b4ef86ea0968c0dddb27693abed6d781f27fc895c15a.scope - libcontainer container ec4967c37363fc492e82b4ef86ea0968c0dddb27693abed6d781f27fc895c15a. Jan 29 16:57:21.009255 systemd[1]: Started cri-containerd-fe9c589cbcc977ccce29350b309d61f61ffdf435d7f7ab9772394cd7421feba1.scope - libcontainer container fe9c589cbcc977ccce29350b309d61f61ffdf435d7f7ab9772394cd7421feba1. Jan 29 16:57:21.039430 containerd[1524]: time="2025-01-29T16:57:21.039387200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8sv6h,Uid:a42266af-4520-4d6a-b43a-dee7a81ab497,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec4967c37363fc492e82b4ef86ea0968c0dddb27693abed6d781f27fc895c15a\"" Jan 29 16:57:21.043812 containerd[1524]: time="2025-01-29T16:57:21.043686346Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 16:57:21.045108 containerd[1524]: time="2025-01-29T16:57:21.045078255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-r7x2k,Uid:0b34e1d8-a11a-4e2c-bc0e-971716ea9b60,Namespace:kube-system,Attempt:0,}" Jan 29 16:57:21.053137 containerd[1524]: time="2025-01-29T16:57:21.052558553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-68pmd,Uid:8006f3f1-4b03-4229-909b-3c75edf39ab4,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe9c589cbcc977ccce29350b309d61f61ffdf435d7f7ab9772394cd7421feba1\"" Jan 29 16:57:21.056516 containerd[1524]: time="2025-01-29T16:57:21.056470915Z" level=info msg="CreateContainer within sandbox \"fe9c589cbcc977ccce29350b309d61f61ffdf435d7f7ab9772394cd7421feba1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 16:57:21.080607 containerd[1524]: time="2025-01-29T16:57:21.080485024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:57:21.080607 containerd[1524]: time="2025-01-29T16:57:21.080551015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:57:21.080607 containerd[1524]: time="2025-01-29T16:57:21.080564511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:21.081109 containerd[1524]: time="2025-01-29T16:57:21.080881826Z" level=info msg="CreateContainer within sandbox \"fe9c589cbcc977ccce29350b309d61f61ffdf435d7f7ab9772394cd7421feba1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6c0a95d9bc0fa6ae8cdf66320cbd1ed05db64475b1dffca22f18b40dcbaf99fc\"" Jan 29 16:57:21.082676 containerd[1524]: time="2025-01-29T16:57:21.081840556Z" level=info msg="StartContainer for \"6c0a95d9bc0fa6ae8cdf66320cbd1ed05db64475b1dffca22f18b40dcbaf99fc\"" Jan 29 16:57:21.083680 containerd[1524]: time="2025-01-29T16:57:21.083367665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:21.109820 systemd[1]: Started cri-containerd-1a5d08e622168bdb282f7e5dd7b99c19f5d53b10933e743cddc603e27d184c0c.scope - libcontainer container 1a5d08e622168bdb282f7e5dd7b99c19f5d53b10933e743cddc603e27d184c0c. Jan 29 16:57:21.135076 systemd[1]: Started cri-containerd-6c0a95d9bc0fa6ae8cdf66320cbd1ed05db64475b1dffca22f18b40dcbaf99fc.scope - libcontainer container 6c0a95d9bc0fa6ae8cdf66320cbd1ed05db64475b1dffca22f18b40dcbaf99fc. Jan 29 16:57:21.163339 containerd[1524]: time="2025-01-29T16:57:21.162486933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-r7x2k,Uid:0b34e1d8-a11a-4e2c-bc0e-971716ea9b60,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a5d08e622168bdb282f7e5dd7b99c19f5d53b10933e743cddc603e27d184c0c\"" Jan 29 16:57:21.184484 containerd[1524]: time="2025-01-29T16:57:21.184375988Z" level=info msg="StartContainer for \"6c0a95d9bc0fa6ae8cdf66320cbd1ed05db64475b1dffca22f18b40dcbaf99fc\" returns successfully" Jan 29 16:57:27.038732 kubelet[2813]: I0129 16:57:27.038620 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-68pmd" podStartSLOduration=7.038595401 podStartE2EDuration="7.038595401s" podCreationTimestamp="2025-01-29 16:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:57:21.421898046 +0000 UTC m=+7.231121147" watchObservedRunningTime="2025-01-29 16:57:27.038595401 +0000 UTC m=+12.847818481" Jan 29 16:57:29.102565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2254838055.mount: Deactivated successfully. Jan 29 16:57:30.811317 containerd[1524]: time="2025-01-29T16:57:30.811228038Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:57:30.812643 containerd[1524]: time="2025-01-29T16:57:30.812572799Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 29 16:57:30.815944 containerd[1524]: time="2025-01-29T16:57:30.814627619Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:57:30.816028 containerd[1524]: time="2025-01-29T16:57:30.815902629Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.772185064s" Jan 29 16:57:30.816114 containerd[1524]: time="2025-01-29T16:57:30.816082125Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 29 16:57:30.818733 containerd[1524]: time="2025-01-29T16:57:30.818703027Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 16:57:30.822346 containerd[1524]: 
time="2025-01-29T16:57:30.822310077Z" level=info msg="CreateContainer within sandbox \"ec4967c37363fc492e82b4ef86ea0968c0dddb27693abed6d781f27fc895c15a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 16:57:30.909561 containerd[1524]: time="2025-01-29T16:57:30.909489627Z" level=info msg="CreateContainer within sandbox \"ec4967c37363fc492e82b4ef86ea0968c0dddb27693abed6d781f27fc895c15a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a97cf2e776f91d6515f0e66139a80617650da10649420e9bf7305981d0de03e2\"" Jan 29 16:57:30.911584 containerd[1524]: time="2025-01-29T16:57:30.911488292Z" level=info msg="StartContainer for \"a97cf2e776f91d6515f0e66139a80617650da10649420e9bf7305981d0de03e2\"" Jan 29 16:57:31.056847 systemd[1]: run-containerd-runc-k8s.io-a97cf2e776f91d6515f0e66139a80617650da10649420e9bf7305981d0de03e2-runc.QMs8Aw.mount: Deactivated successfully. Jan 29 16:57:31.067239 systemd[1]: Started cri-containerd-a97cf2e776f91d6515f0e66139a80617650da10649420e9bf7305981d0de03e2.scope - libcontainer container a97cf2e776f91d6515f0e66139a80617650da10649420e9bf7305981d0de03e2. Jan 29 16:57:31.116321 containerd[1524]: time="2025-01-29T16:57:31.116225222Z" level=info msg="StartContainer for \"a97cf2e776f91d6515f0e66139a80617650da10649420e9bf7305981d0de03e2\" returns successfully" Jan 29 16:57:31.137629 systemd[1]: cri-containerd-a97cf2e776f91d6515f0e66139a80617650da10649420e9bf7305981d0de03e2.scope: Deactivated successfully. Jan 29 16:57:31.328969 containerd[1524]: time="2025-01-29T16:57:31.328553126Z" level=info msg="shim disconnected" id=a97cf2e776f91d6515f0e66139a80617650da10649420e9bf7305981d0de03e2 namespace=k8s.io Jan 29 16:57:31.328969 containerd[1524]: time="2025-01-29T16:57:31.328647933Z" level=warning msg="cleaning up after shim disconnected" id=a97cf2e776f91d6515f0e66139a80617650da10649420e9bf7305981d0de03e2 namespace=k8s.io Jan 29 16:57:31.328969 containerd[1524]: time="2025-01-29T16:57:31.328657402Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:57:31.345383 containerd[1524]: time="2025-01-29T16:57:31.345291972Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:57:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 16:57:31.456169 containerd[1524]: time="2025-01-29T16:57:31.456110019Z" level=info msg="CreateContainer within sandbox \"ec4967c37363fc492e82b4ef86ea0968c0dddb27693abed6d781f27fc895c15a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 16:57:31.475419 containerd[1524]: time="2025-01-29T16:57:31.475349308Z" level=info msg="CreateContainer within sandbox \"ec4967c37363fc492e82b4ef86ea0968c0dddb27693abed6d781f27fc895c15a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"eaef95b16325e8b847f2dcab79bcbbdb4cfe0e1077d0250f7389cd0cd83903f4\"" Jan 29 16:57:31.476644 containerd[1524]: time="2025-01-29T16:57:31.476499398Z" level=info msg="StartContainer for \"eaef95b16325e8b847f2dcab79bcbbdb4cfe0e1077d0250f7389cd0cd83903f4\"" Jan 29 16:57:31.510112 systemd[1]: Started cri-containerd-eaef95b16325e8b847f2dcab79bcbbdb4cfe0e1077d0250f7389cd0cd83903f4.scope - libcontainer container eaef95b16325e8b847f2dcab79bcbbdb4cfe0e1077d0250f7389cd0cd83903f4. 
Jan 29 16:57:31.548993 containerd[1524]: time="2025-01-29T16:57:31.548831272Z" level=info msg="StartContainer for \"eaef95b16325e8b847f2dcab79bcbbdb4cfe0e1077d0250f7389cd0cd83903f4\" returns successfully" Jan 29 16:57:31.565191 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:57:31.566456 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:57:31.566792 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:57:31.576339 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:57:31.576630 systemd[1]: cri-containerd-eaef95b16325e8b847f2dcab79bcbbdb4cfe0e1077d0250f7389cd0cd83903f4.scope: Deactivated successfully. Jan 29 16:57:31.629299 containerd[1524]: time="2025-01-29T16:57:31.629205662Z" level=info msg="shim disconnected" id=eaef95b16325e8b847f2dcab79bcbbdb4cfe0e1077d0250f7389cd0cd83903f4 namespace=k8s.io Jan 29 16:57:31.629882 containerd[1524]: time="2025-01-29T16:57:31.629404364Z" level=warning msg="cleaning up after shim disconnected" id=eaef95b16325e8b847f2dcab79bcbbdb4cfe0e1077d0250f7389cd0cd83903f4 namespace=k8s.io Jan 29 16:57:31.629882 containerd[1524]: time="2025-01-29T16:57:31.629415075Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:57:31.629694 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:57:31.899656 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a97cf2e776f91d6515f0e66139a80617650da10649420e9bf7305981d0de03e2-rootfs.mount: Deactivated successfully. Jan 29 16:57:32.466537 containerd[1524]: time="2025-01-29T16:57:32.466467966Z" level=info msg="CreateContainer within sandbox \"ec4967c37363fc492e82b4ef86ea0968c0dddb27693abed6d781f27fc895c15a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 16:57:32.511896 containerd[1524]: time="2025-01-29T16:57:32.511424488Z" level=info msg="CreateContainer within sandbox \"ec4967c37363fc492e82b4ef86ea0968c0dddb27693abed6d781f27fc895c15a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"34cecb6d7d4ce4e80a5bde3ed6a9f0a599d7bc6fb8e999ca5c2d89ad67282148\"" Jan 29 16:57:32.517318 containerd[1524]: time="2025-01-29T16:57:32.517066010Z" level=info msg="StartContainer for \"34cecb6d7d4ce4e80a5bde3ed6a9f0a599d7bc6fb8e999ca5c2d89ad67282148\"" Jan 29 16:57:32.577165 systemd[1]: Started cri-containerd-34cecb6d7d4ce4e80a5bde3ed6a9f0a599d7bc6fb8e999ca5c2d89ad67282148.scope - libcontainer container 34cecb6d7d4ce4e80a5bde3ed6a9f0a599d7bc6fb8e999ca5c2d89ad67282148. Jan 29 16:57:32.640963 containerd[1524]: time="2025-01-29T16:57:32.640872735Z" level=info msg="StartContainer for \"34cecb6d7d4ce4e80a5bde3ed6a9f0a599d7bc6fb8e999ca5c2d89ad67282148\" returns successfully" Jan 29 16:57:32.642967 systemd[1]: cri-containerd-34cecb6d7d4ce4e80a5bde3ed6a9f0a599d7bc6fb8e999ca5c2d89ad67282148.scope: Deactivated successfully. 
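The mount-cgroup, apply-sysctl-overwrites, and mount-bpf-fs scopes above each start, run to completion, and are deactivated in turn: they are Cilium init containers, and the systemd-sysctl restart interleaved with them suggests kernel variables being reapplied around the overwrite step. A minimal sketch of reading one kernel parameter of the kind such a step touches; net.ipv4.ip_forward is an illustrative choice, not taken from this log:

    // A minimal sketch of reading one kernel parameter of the kind an
    // overwrite step touches; net.ipv4.ip_forward is illustrative only,
    // not taken from this log.
    package main

    import (
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/proc/sys/net/ipv4/ip_forward")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("net.ipv4.ip_forward =", strings.TrimSpace(string(data)))
    }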
Jan 29 16:57:32.686623 containerd[1524]: time="2025-01-29T16:57:32.686370695Z" level=info msg="shim disconnected" id=34cecb6d7d4ce4e80a5bde3ed6a9f0a599d7bc6fb8e999ca5c2d89ad67282148 namespace=k8s.io Jan 29 16:57:32.686623 containerd[1524]: time="2025-01-29T16:57:32.686448451Z" level=warning msg="cleaning up after shim disconnected" id=34cecb6d7d4ce4e80a5bde3ed6a9f0a599d7bc6fb8e999ca5c2d89ad67282148 namespace=k8s.io Jan 29 16:57:32.686623 containerd[1524]: time="2025-01-29T16:57:32.686456466Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:57:32.898520 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34cecb6d7d4ce4e80a5bde3ed6a9f0a599d7bc6fb8e999ca5c2d89ad67282148-rootfs.mount: Deactivated successfully. Jan 29 16:57:33.471681 containerd[1524]: time="2025-01-29T16:57:33.471383011Z" level=info msg="CreateContainer within sandbox \"ec4967c37363fc492e82b4ef86ea0968c0dddb27693abed6d781f27fc895c15a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 16:57:33.506060 containerd[1524]: time="2025-01-29T16:57:33.504918532Z" level=info msg="CreateContainer within sandbox \"ec4967c37363fc492e82b4ef86ea0968c0dddb27693abed6d781f27fc895c15a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0068963738251ae83872a12de70e80cb6685b68d8520e8a01a7c1bb98d0eaaa5\"" Jan 29 16:57:33.509078 containerd[1524]: time="2025-01-29T16:57:33.508887592Z" level=info msg="StartContainer for \"0068963738251ae83872a12de70e80cb6685b68d8520e8a01a7c1bb98d0eaaa5\"" Jan 29 16:57:33.571223 systemd[1]: Started cri-containerd-0068963738251ae83872a12de70e80cb6685b68d8520e8a01a7c1bb98d0eaaa5.scope - libcontainer container 0068963738251ae83872a12de70e80cb6685b68d8520e8a01a7c1bb98d0eaaa5. Jan 29 16:57:33.619911 systemd[1]: cri-containerd-0068963738251ae83872a12de70e80cb6685b68d8520e8a01a7c1bb98d0eaaa5.scope: Deactivated successfully. Jan 29 16:57:33.622332 containerd[1524]: time="2025-01-29T16:57:33.621294423Z" level=info msg="StartContainer for \"0068963738251ae83872a12de70e80cb6685b68d8520e8a01a7c1bb98d0eaaa5\" returns successfully" Jan 29 16:57:33.665569 containerd[1524]: time="2025-01-29T16:57:33.665365461Z" level=info msg="shim disconnected" id=0068963738251ae83872a12de70e80cb6685b68d8520e8a01a7c1bb98d0eaaa5 namespace=k8s.io Jan 29 16:57:33.665569 containerd[1524]: time="2025-01-29T16:57:33.665449730Z" level=warning msg="cleaning up after shim disconnected" id=0068963738251ae83872a12de70e80cb6685b68d8520e8a01a7c1bb98d0eaaa5 namespace=k8s.io Jan 29 16:57:33.665569 containerd[1524]: time="2025-01-29T16:57:33.665466902Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:57:33.690826 containerd[1524]: time="2025-01-29T16:57:33.690765643Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:57:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 16:57:33.900010 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0068963738251ae83872a12de70e80cb6685b68d8520e8a01a7c1bb98d0eaaa5-rootfs.mount: Deactivated successfully. Jan 29 16:57:34.440268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount75798321.mount: Deactivated successfully. 
Jan 29 16:57:34.515668 containerd[1524]: time="2025-01-29T16:57:34.512440594Z" level=info msg="CreateContainer within sandbox \"ec4967c37363fc492e82b4ef86ea0968c0dddb27693abed6d781f27fc895c15a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 16:57:34.562770 containerd[1524]: time="2025-01-29T16:57:34.562627584Z" level=info msg="CreateContainer within sandbox \"ec4967c37363fc492e82b4ef86ea0968c0dddb27693abed6d781f27fc895c15a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3e291ef040c3600b1b7ab06dc3aa3bbe9ad51f4ed3a644e6155915c054ca5413\"" Jan 29 16:57:34.564337 containerd[1524]: time="2025-01-29T16:57:34.563687872Z" level=info msg="StartContainer for \"3e291ef040c3600b1b7ab06dc3aa3bbe9ad51f4ed3a644e6155915c054ca5413\"" Jan 29 16:57:34.603225 systemd[1]: Started cri-containerd-3e291ef040c3600b1b7ab06dc3aa3bbe9ad51f4ed3a644e6155915c054ca5413.scope - libcontainer container 3e291ef040c3600b1b7ab06dc3aa3bbe9ad51f4ed3a644e6155915c054ca5413. Jan 29 16:57:34.655689 containerd[1524]: time="2025-01-29T16:57:34.655633660Z" level=info msg="StartContainer for \"3e291ef040c3600b1b7ab06dc3aa3bbe9ad51f4ed3a644e6155915c054ca5413\" returns successfully" Jan 29 16:57:34.955836 kubelet[2813]: I0129 16:57:34.955670 2813 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 29 16:57:35.020580 systemd[1]: Created slice kubepods-burstable-podff16cd6e_6404_4f6a_82d7_440f765e54ec.slice - libcontainer container kubepods-burstable-podff16cd6e_6404_4f6a_82d7_440f765e54ec.slice. Jan 29 16:57:35.040013 systemd[1]: Created slice kubepods-burstable-pod0d82d8be_ad23_4db6_84e0_c5d29f30acd3.slice - libcontainer container kubepods-burstable-pod0d82d8be_ad23_4db6_84e0_c5d29f30acd3.slice. Jan 29 16:57:35.099817 kubelet[2813]: I0129 16:57:35.099495 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff16cd6e-6404-4f6a-82d7-440f765e54ec-config-volume\") pod \"coredns-668d6bf9bc-nkgp5\" (UID: \"ff16cd6e-6404-4f6a-82d7-440f765e54ec\") " pod="kube-system/coredns-668d6bf9bc-nkgp5" Jan 29 16:57:35.099817 kubelet[2813]: I0129 16:57:35.099772 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s59g\" (UniqueName: \"kubernetes.io/projected/ff16cd6e-6404-4f6a-82d7-440f765e54ec-kube-api-access-8s59g\") pod \"coredns-668d6bf9bc-nkgp5\" (UID: \"ff16cd6e-6404-4f6a-82d7-440f765e54ec\") " pod="kube-system/coredns-668d6bf9bc-nkgp5" Jan 29 16:57:35.099817 kubelet[2813]: I0129 16:57:35.099797 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdvkw\" (UniqueName: \"kubernetes.io/projected/0d82d8be-ad23-4db6-84e0-c5d29f30acd3-kube-api-access-rdvkw\") pod \"coredns-668d6bf9bc-bmz7h\" (UID: \"0d82d8be-ad23-4db6-84e0-c5d29f30acd3\") " pod="kube-system/coredns-668d6bf9bc-bmz7h" Jan 29 16:57:35.099817 kubelet[2813]: I0129 16:57:35.099814 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d82d8be-ad23-4db6-84e0-c5d29f30acd3-config-volume\") pod \"coredns-668d6bf9bc-bmz7h\" (UID: \"0d82d8be-ad23-4db6-84e0-c5d29f30acd3\") " pod="kube-system/coredns-668d6bf9bc-bmz7h" Jan 29 16:57:35.269452 containerd[1524]: time="2025-01-29T16:57:35.269371038Z" level=info msg="ImageCreate event 
name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:57:35.272795 containerd[1524]: time="2025-01-29T16:57:35.272721519Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 29 16:57:35.274216 containerd[1524]: time="2025-01-29T16:57:35.274175452Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:57:35.276558 containerd[1524]: time="2025-01-29T16:57:35.276354202Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.45746732s" Jan 29 16:57:35.276558 containerd[1524]: time="2025-01-29T16:57:35.276544332Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 29 16:57:35.281899 containerd[1524]: time="2025-01-29T16:57:35.281775802Z" level=info msg="CreateContainer within sandbox \"1a5d08e622168bdb282f7e5dd7b99c19f5d53b10933e743cddc603e27d184c0c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 16:57:35.311732 containerd[1524]: time="2025-01-29T16:57:35.311679260Z" level=info msg="CreateContainer within sandbox \"1a5d08e622168bdb282f7e5dd7b99c19f5d53b10933e743cddc603e27d184c0c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9df1cf1d6ed21f6caef0ae9fb2e0807830f8d691c80881ac5de8b60ebbc526d8\"" Jan 29 16:57:35.313081 containerd[1524]: time="2025-01-29T16:57:35.313035138Z" level=info msg="StartContainer for \"9df1cf1d6ed21f6caef0ae9fb2e0807830f8d691c80881ac5de8b60ebbc526d8\"" Jan 29 16:57:35.332542 containerd[1524]: time="2025-01-29T16:57:35.332459626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nkgp5,Uid:ff16cd6e-6404-4f6a-82d7-440f765e54ec,Namespace:kube-system,Attempt:0,}" Jan 29 16:57:35.350365 containerd[1524]: time="2025-01-29T16:57:35.350217862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bmz7h,Uid:0d82d8be-ad23-4db6-84e0-c5d29f30acd3,Namespace:kube-system,Attempt:0,}" Jan 29 16:57:35.365454 systemd[1]: Started cri-containerd-9df1cf1d6ed21f6caef0ae9fb2e0807830f8d691c80881ac5de8b60ebbc526d8.scope - libcontainer container 9df1cf1d6ed21f6caef0ae9fb2e0807830f8d691c80881ac5de8b60ebbc526d8. 
Jan 29 16:57:35.481768 containerd[1524]: time="2025-01-29T16:57:35.481508377Z" level=info msg="StartContainer for \"9df1cf1d6ed21f6caef0ae9fb2e0807830f8d691c80881ac5de8b60ebbc526d8\" returns successfully" Jan 29 16:57:35.553782 kubelet[2813]: I0129 16:57:35.553599 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-r7x2k" podStartSLOduration=1.4387496419999999 podStartE2EDuration="15.553574996s" podCreationTimestamp="2025-01-29 16:57:20 +0000 UTC" firstStartedPulling="2025-01-29 16:57:21.164366382 +0000 UTC m=+6.973589443" lastFinishedPulling="2025-01-29 16:57:35.279191736 +0000 UTC m=+21.088414797" observedRunningTime="2025-01-29 16:57:35.517900097 +0000 UTC m=+21.327123159" watchObservedRunningTime="2025-01-29 16:57:35.553574996 +0000 UTC m=+21.362798057" Jan 29 16:57:39.379065 systemd-networkd[1425]: cilium_host: Link UP Jan 29 16:57:39.379261 systemd-networkd[1425]: cilium_net: Link UP Jan 29 16:57:39.379456 systemd-networkd[1425]: cilium_net: Gained carrier Jan 29 16:57:39.379644 systemd-networkd[1425]: cilium_host: Gained carrier Jan 29 16:57:39.559552 systemd-networkd[1425]: cilium_vxlan: Link UP Jan 29 16:57:39.559561 systemd-networkd[1425]: cilium_vxlan: Gained carrier Jan 29 16:57:39.832831 systemd-networkd[1425]: cilium_host: Gained IPv6LL Jan 29 16:57:39.983488 systemd-networkd[1425]: cilium_net: Gained IPv6LL Jan 29 16:57:40.114962 kernel: NET: Registered PF_ALG protocol family Jan 29 16:57:40.816186 systemd-networkd[1425]: cilium_vxlan: Gained IPv6LL Jan 29 16:57:41.146460 systemd-networkd[1425]: lxc_health: Link UP Jan 29 16:57:41.153486 systemd-networkd[1425]: lxc_health: Gained carrier Jan 29 16:57:41.478121 systemd-networkd[1425]: lxc6d57d2cbe374: Link UP Jan 29 16:57:41.486346 kernel: eth0: renamed from tmp54ade Jan 29 16:57:41.487865 systemd-networkd[1425]: lxc6d57d2cbe374: Gained carrier Jan 29 16:57:41.511575 systemd-networkd[1425]: lxce1ded4b9a279: Link UP Jan 29 16:57:41.513312 kernel: eth0: renamed from tmpbcbbb Jan 29 16:57:41.520459 systemd-networkd[1425]: lxce1ded4b9a279: Gained carrier Jan 29 16:57:42.353171 systemd-networkd[1425]: lxc_health: Gained IPv6LL Jan 29 16:57:42.543218 systemd-networkd[1425]: lxc6d57d2cbe374: Gained IPv6LL Jan 29 16:57:42.927217 systemd-networkd[1425]: lxce1ded4b9a279: Gained IPv6LL Jan 29 16:57:42.947099 kubelet[2813]: I0129 16:57:42.947009 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8sv6h" podStartSLOduration=13.171912599 podStartE2EDuration="22.946981511s" podCreationTimestamp="2025-01-29 16:57:20 +0000 UTC" firstStartedPulling="2025-01-29 16:57:21.042722286 +0000 UTC m=+6.851945348" lastFinishedPulling="2025-01-29 16:57:30.817791179 +0000 UTC m=+16.627014260" observedRunningTime="2025-01-29 16:57:35.5540919 +0000 UTC m=+21.363314962" watchObservedRunningTime="2025-01-29 16:57:42.946981511 +0000 UTC m=+28.756204612" Jan 29 16:57:45.206303 containerd[1524]: time="2025-01-29T16:57:45.205449069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:57:45.206303 containerd[1524]: time="2025-01-29T16:57:45.205515164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:57:45.206303 containerd[1524]: time="2025-01-29T16:57:45.205525584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:45.206303 containerd[1524]: time="2025-01-29T16:57:45.205617840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:45.252783 systemd[1]: Started cri-containerd-bcbbbf059914eb95744f638af3eb4d92eb9ce12cf1d6d64aef6a63085673af4b.scope - libcontainer container bcbbbf059914eb95744f638af3eb4d92eb9ce12cf1d6d64aef6a63085673af4b. Jan 29 16:57:45.292200 containerd[1524]: time="2025-01-29T16:57:45.292061246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:57:45.292200 containerd[1524]: time="2025-01-29T16:57:45.292141839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:57:45.292200 containerd[1524]: time="2025-01-29T16:57:45.292161407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:45.292731 containerd[1524]: time="2025-01-29T16:57:45.292648075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:57:45.336174 systemd[1]: Started cri-containerd-54ade3f0a16fb2b2e63be3f7e350526e34fc92ca52e2684220478c938cfee0c9.scope - libcontainer container 54ade3f0a16fb2b2e63be3f7e350526e34fc92ca52e2684220478c938cfee0c9. Jan 29 16:57:45.355738 containerd[1524]: time="2025-01-29T16:57:45.355692651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nkgp5,Uid:ff16cd6e-6404-4f6a-82d7-440f765e54ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcbbbf059914eb95744f638af3eb4d92eb9ce12cf1d6d64aef6a63085673af4b\"" Jan 29 16:57:45.359323 containerd[1524]: time="2025-01-29T16:57:45.359273868Z" level=info msg="CreateContainer within sandbox \"bcbbbf059914eb95744f638af3eb4d92eb9ce12cf1d6d64aef6a63085673af4b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:57:45.388034 containerd[1524]: time="2025-01-29T16:57:45.387992261Z" level=info msg="CreateContainer within sandbox \"bcbbbf059914eb95744f638af3eb4d92eb9ce12cf1d6d64aef6a63085673af4b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7059bd3e877e43a35e7847d43abacbe6a5167c152d1bf219e2fc0324e11b4f72\"" Jan 29 16:57:45.388680 containerd[1524]: time="2025-01-29T16:57:45.388655014Z" level=info msg="StartContainer for \"7059bd3e877e43a35e7847d43abacbe6a5167c152d1bf219e2fc0324e11b4f72\"" Jan 29 16:57:45.420858 containerd[1524]: time="2025-01-29T16:57:45.420782045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bmz7h,Uid:0d82d8be-ad23-4db6-84e0-c5d29f30acd3,Namespace:kube-system,Attempt:0,} returns sandbox id \"54ade3f0a16fb2b2e63be3f7e350526e34fc92ca52e2684220478c938cfee0c9\"" Jan 29 16:57:45.426461 containerd[1524]: time="2025-01-29T16:57:45.425985656Z" level=info msg="CreateContainer within sandbox \"54ade3f0a16fb2b2e63be3f7e350526e34fc92ca52e2684220478c938cfee0c9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:57:45.441180 systemd[1]: Started cri-containerd-7059bd3e877e43a35e7847d43abacbe6a5167c152d1bf219e2fc0324e11b4f72.scope - libcontainer container 7059bd3e877e43a35e7847d43abacbe6a5167c152d1bf219e2fc0324e11b4f72. 
Jan 29 16:57:45.454604 containerd[1524]: time="2025-01-29T16:57:45.454350844Z" level=info msg="CreateContainer within sandbox \"54ade3f0a16fb2b2e63be3f7e350526e34fc92ca52e2684220478c938cfee0c9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fd3af655c9560a474b562acc01a067e9f0e8f9ac4b616dc715a4de1f988a4aa2\"" Jan 29 16:57:45.455731 containerd[1524]: time="2025-01-29T16:57:45.455671111Z" level=info msg="StartContainer for \"fd3af655c9560a474b562acc01a067e9f0e8f9ac4b616dc715a4de1f988a4aa2\"" Jan 29 16:57:45.490252 containerd[1524]: time="2025-01-29T16:57:45.490217655Z" level=info msg="StartContainer for \"7059bd3e877e43a35e7847d43abacbe6a5167c152d1bf219e2fc0324e11b4f72\" returns successfully" Jan 29 16:57:45.508098 systemd[1]: Started cri-containerd-fd3af655c9560a474b562acc01a067e9f0e8f9ac4b616dc715a4de1f988a4aa2.scope - libcontainer container fd3af655c9560a474b562acc01a067e9f0e8f9ac4b616dc715a4de1f988a4aa2. Jan 29 16:57:45.541625 containerd[1524]: time="2025-01-29T16:57:45.541323128Z" level=info msg="StartContainer for \"fd3af655c9560a474b562acc01a067e9f0e8f9ac4b616dc715a4de1f988a4aa2\" returns successfully" Jan 29 16:57:45.559911 kubelet[2813]: I0129 16:57:45.559815 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bmz7h" podStartSLOduration=25.559726154 podStartE2EDuration="25.559726154s" podCreationTimestamp="2025-01-29 16:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:57:45.558764782 +0000 UTC m=+31.367987843" watchObservedRunningTime="2025-01-29 16:57:45.559726154 +0000 UTC m=+31.368949215" Jan 29 16:57:45.576732 kubelet[2813]: I0129 16:57:45.576661 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nkgp5" podStartSLOduration=25.57664455 podStartE2EDuration="25.57664455s" podCreationTimestamp="2025-01-29 16:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:57:45.574486967 +0000 UTC m=+31.383710027" watchObservedRunningTime="2025-01-29 16:57:45.57664455 +0000 UTC m=+31.385867611" Jan 29 16:57:46.220920 systemd[1]: run-containerd-runc-k8s.io-54ade3f0a16fb2b2e63be3f7e350526e34fc92ca52e2684220478c938cfee0c9-runc.HSYEVz.mount: Deactivated successfully. Jan 29 16:59:55.292364 systemd[1]: Started sshd@8-168.119.110.78:22-147.75.109.163:36354.service - OpenSSH per-connection server daemon (147.75.109.163:36354). Jan 29 16:59:56.354020 sshd[4218]: Accepted publickey for core from 147.75.109.163 port 36354 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 16:59:56.358444 sshd-session[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:59:56.389006 systemd-logind[1509]: New session 8 of user core. Jan 29 16:59:56.392117 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 16:59:57.678816 sshd[4223]: Connection closed by 147.75.109.163 port 36354 Jan 29 16:59:57.680300 sshd-session[4218]: pam_unix(sshd:session): session closed for user core Jan 29 16:59:57.689709 systemd[1]: sshd@8-168.119.110.78:22-147.75.109.163:36354.service: Deactivated successfully. Jan 29 16:59:57.694493 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 16:59:57.696624 systemd-logind[1509]: Session 8 logged out. Waiting for processes to exit. 
Jan 29 16:59:57.699184 systemd-logind[1509]: Removed session 8. Jan 29 17:00:02.866717 systemd[1]: Started sshd@9-168.119.110.78:22-147.75.109.163:51176.service - OpenSSH per-connection server daemon (147.75.109.163:51176). Jan 29 17:00:03.856385 sshd[4236]: Accepted publickey for core from 147.75.109.163 port 51176 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:03.860643 sshd-session[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:03.872521 systemd-logind[1509]: New session 9 of user core. Jan 29 17:00:03.886429 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 17:00:04.653079 sshd[4238]: Connection closed by 147.75.109.163 port 51176 Jan 29 17:00:04.654353 sshd-session[4236]: pam_unix(sshd:session): session closed for user core Jan 29 17:00:04.661237 systemd[1]: sshd@9-168.119.110.78:22-147.75.109.163:51176.service: Deactivated successfully. Jan 29 17:00:04.664720 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 17:00:04.666542 systemd-logind[1509]: Session 9 logged out. Waiting for processes to exit. Jan 29 17:00:04.669303 systemd-logind[1509]: Removed session 9. Jan 29 17:00:05.218231 update_engine[1513]: I20250129 17:00:05.218119 1513 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 29 17:00:05.218231 update_engine[1513]: I20250129 17:00:05.218200 1513 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 29 17:00:05.222273 update_engine[1513]: I20250129 17:00:05.222205 1513 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 29 17:00:05.223282 update_engine[1513]: I20250129 17:00:05.223222 1513 omaha_request_params.cc:62] Current group set to alpha Jan 29 17:00:05.223690 update_engine[1513]: I20250129 17:00:05.223439 1513 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 29 17:00:05.223690 update_engine[1513]: I20250129 17:00:05.223467 1513 update_attempter.cc:643] Scheduling an action processor start. Jan 29 17:00:05.223690 update_engine[1513]: I20250129 17:00:05.223500 1513 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 29 17:00:05.223690 update_engine[1513]: I20250129 17:00:05.223569 1513 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 29 17:00:05.225282 update_engine[1513]: I20250129 17:00:05.224040 1513 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 29 17:00:05.225282 update_engine[1513]: I20250129 17:00:05.224063 1513 omaha_request_action.cc:272] Request: Jan 29 17:00:05.225282 update_engine[1513]: I20250129 17:00:05.224079 1513 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 17:00:05.242834 update_engine[1513]: I20250129 17:00:05.242785 1513 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 17:00:05.243495 locksmithd[1541]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 29 17:00:05.244647 update_engine[1513]: I20250129 17:00:05.244222 1513 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 29 17:00:05.244734 update_engine[1513]: E20250129 17:00:05.244661 1513 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 17:00:05.244819 update_engine[1513]: I20250129 17:00:05.244776 1513 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 29 17:00:09.843614 systemd[1]: Started sshd@10-168.119.110.78:22-147.75.109.163:52958.service - OpenSSH per-connection server daemon (147.75.109.163:52958). Jan 29 17:00:10.855173 sshd[4251]: Accepted publickey for core from 147.75.109.163 port 52958 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:10.858749 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:10.868156 systemd-logind[1509]: New session 10 of user core. Jan 29 17:00:10.873161 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 17:00:11.696747 sshd[4253]: Connection closed by 147.75.109.163 port 52958 Jan 29 17:00:11.698040 sshd-session[4251]: pam_unix(sshd:session): session closed for user core Jan 29 17:00:11.710193 systemd[1]: sshd@10-168.119.110.78:22-147.75.109.163:52958.service: Deactivated successfully. Jan 29 17:00:11.715197 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 17:00:11.721796 systemd-logind[1509]: Session 10 logged out. Waiting for processes to exit. Jan 29 17:00:11.723092 systemd-logind[1509]: Removed session 10. Jan 29 17:00:11.883500 systemd[1]: Started sshd@11-168.119.110.78:22-147.75.109.163:52968.service - OpenSSH per-connection server daemon (147.75.109.163:52968). Jan 29 17:00:12.930089 sshd[4266]: Accepted publickey for core from 147.75.109.163 port 52968 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:12.932907 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:12.941773 systemd-logind[1509]: New session 11 of user core. Jan 29 17:00:12.948244 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 17:00:13.844701 sshd[4268]: Connection closed by 147.75.109.163 port 52968 Jan 29 17:00:13.847905 sshd-session[4266]: pam_unix(sshd:session): session closed for user core Jan 29 17:00:13.857311 systemd[1]: sshd@11-168.119.110.78:22-147.75.109.163:52968.service: Deactivated successfully. Jan 29 17:00:13.863959 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 17:00:13.866250 systemd-logind[1509]: Session 11 logged out. Waiting for processes to exit. Jan 29 17:00:13.868784 systemd-logind[1509]: Removed session 11. Jan 29 17:00:14.030075 systemd[1]: Started sshd@12-168.119.110.78:22-147.75.109.163:52982.service - OpenSSH per-connection server daemon (147.75.109.163:52982). Jan 29 17:00:15.064264 sshd[4278]: Accepted publickey for core from 147.75.109.163 port 52982 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:15.067168 sshd-session[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:15.075360 systemd-logind[1509]: New session 12 of user core. Jan 29 17:00:15.086277 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 29 17:00:15.181494 update_engine[1513]: I20250129 17:00:15.181363 1513 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 17:00:15.182290 update_engine[1513]: I20250129 17:00:15.181694 1513 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 17:00:15.182290 update_engine[1513]: I20250129 17:00:15.182032 1513 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 17:00:15.183106 update_engine[1513]: E20250129 17:00:15.182444 1513 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 17:00:15.183106 update_engine[1513]: I20250129 17:00:15.182484 1513 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 29 17:00:15.877623 sshd[4282]: Connection closed by 147.75.109.163 port 52982 Jan 29 17:00:15.878805 sshd-session[4278]: pam_unix(sshd:session): session closed for user core Jan 29 17:00:15.886318 systemd[1]: sshd@12-168.119.110.78:22-147.75.109.163:52982.service: Deactivated successfully. Jan 29 17:00:15.890805 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 17:00:15.892658 systemd-logind[1509]: Session 12 logged out. Waiting for processes to exit. Jan 29 17:00:15.894763 systemd-logind[1509]: Removed session 12. Jan 29 17:00:21.059327 systemd[1]: Started sshd@13-168.119.110.78:22-147.75.109.163:60088.service - OpenSSH per-connection server daemon (147.75.109.163:60088). Jan 29 17:00:22.063259 sshd[4295]: Accepted publickey for core from 147.75.109.163 port 60088 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:22.065952 sshd-session[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:22.072880 systemd-logind[1509]: New session 13 of user core. Jan 29 17:00:22.076162 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 17:00:22.871898 sshd[4299]: Connection closed by 147.75.109.163 port 60088 Jan 29 17:00:22.873040 sshd-session[4295]: pam_unix(sshd:session): session closed for user core Jan 29 17:00:22.878888 systemd[1]: sshd@13-168.119.110.78:22-147.75.109.163:60088.service: Deactivated successfully. Jan 29 17:00:22.883797 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 17:00:22.887777 systemd-logind[1509]: Session 13 logged out. Waiting for processes to exit. Jan 29 17:00:22.890505 systemd-logind[1509]: Removed session 13. Jan 29 17:00:23.059416 systemd[1]: Started sshd@14-168.119.110.78:22-147.75.109.163:60102.service - OpenSSH per-connection server daemon (147.75.109.163:60102). Jan 29 17:00:24.048676 sshd[4311]: Accepted publickey for core from 147.75.109.163 port 60102 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:24.051890 sshd-session[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:24.061376 systemd-logind[1509]: New session 14 of user core. Jan 29 17:00:24.066323 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 17:00:25.102211 sshd[4313]: Connection closed by 147.75.109.163 port 60102 Jan 29 17:00:25.104816 sshd-session[4311]: pam_unix(sshd:session): session closed for user core Jan 29 17:00:25.116694 systemd[1]: sshd@14-168.119.110.78:22-147.75.109.163:60102.service: Deactivated successfully. Jan 29 17:00:25.120402 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 17:00:25.123515 systemd-logind[1509]: Session 14 logged out. Waiting for processes to exit. Jan 29 17:00:25.125979 systemd-logind[1509]: Removed session 14. 
Jan 29 17:00:25.181397 update_engine[1513]: I20250129 17:00:25.181298 1513 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 17:00:25.181815 update_engine[1513]: I20250129 17:00:25.181649 1513 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 17:00:25.182005 update_engine[1513]: I20250129 17:00:25.181973 1513 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 17:00:25.182442 update_engine[1513]: E20250129 17:00:25.182407 1513 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 17:00:25.182507 update_engine[1513]: I20250129 17:00:25.182467 1513 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 29 17:00:25.277493 systemd[1]: Started sshd@15-168.119.110.78:22-147.75.109.163:60106.service - OpenSSH per-connection server daemon (147.75.109.163:60106). Jan 29 17:00:26.298544 sshd[4323]: Accepted publickey for core from 147.75.109.163 port 60106 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:26.302558 sshd-session[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:26.314345 systemd-logind[1509]: New session 15 of user core. Jan 29 17:00:26.324209 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 17:00:28.018234 sshd[4325]: Connection closed by 147.75.109.163 port 60106 Jan 29 17:00:28.019393 sshd-session[4323]: pam_unix(sshd:session): session closed for user core Jan 29 17:00:28.027392 systemd[1]: sshd@15-168.119.110.78:22-147.75.109.163:60106.service: Deactivated successfully. Jan 29 17:00:28.033571 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 17:00:28.035093 systemd-logind[1509]: Session 15 logged out. Waiting for processes to exit. Jan 29 17:00:28.037064 systemd-logind[1509]: Removed session 15. Jan 29 17:00:28.203037 systemd[1]: Started sshd@16-168.119.110.78:22-147.75.109.163:50604.service - OpenSSH per-connection server daemon (147.75.109.163:50604). Jan 29 17:00:29.212094 sshd[4342]: Accepted publickey for core from 147.75.109.163 port 50604 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:29.215413 sshd-session[4342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:29.223904 systemd-logind[1509]: New session 16 of user core. Jan 29 17:00:29.237271 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 17:00:30.192766 sshd[4344]: Connection closed by 147.75.109.163 port 50604 Jan 29 17:00:30.193882 sshd-session[4342]: pam_unix(sshd:session): session closed for user core Jan 29 17:00:30.203217 systemd[1]: sshd@16-168.119.110.78:22-147.75.109.163:50604.service: Deactivated successfully. Jan 29 17:00:30.209382 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 17:00:30.213238 systemd-logind[1509]: Session 16 logged out. Waiting for processes to exit. Jan 29 17:00:30.216220 systemd-logind[1509]: Removed session 16. Jan 29 17:00:30.379447 systemd[1]: Started sshd@17-168.119.110.78:22-147.75.109.163:50612.service - OpenSSH per-connection server daemon (147.75.109.163:50612). Jan 29 17:00:31.406957 sshd[4354]: Accepted publickey for core from 147.75.109.163 port 50612 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:31.410106 sshd-session[4354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:31.419424 systemd-logind[1509]: New session 17 of user core. 
Jan 29 17:00:31.425162 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 17:00:32.208373 sshd[4356]: Connection closed by 147.75.109.163 port 50612 Jan 29 17:00:32.209434 sshd-session[4354]: pam_unix(sshd:session): session closed for user core Jan 29 17:00:32.217390 systemd-logind[1509]: Session 17 logged out. Waiting for processes to exit. Jan 29 17:00:32.218201 systemd[1]: sshd@17-168.119.110.78:22-147.75.109.163:50612.service: Deactivated successfully. Jan 29 17:00:32.222991 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 17:00:32.225328 systemd-logind[1509]: Removed session 17. Jan 29 17:00:35.182765 update_engine[1513]: I20250129 17:00:35.182652 1513 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 17:00:35.183278 update_engine[1513]: I20250129 17:00:35.183133 1513 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 17:00:35.183558 update_engine[1513]: I20250129 17:00:35.183513 1513 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 17:00:35.184149 update_engine[1513]: E20250129 17:00:35.184108 1513 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 17:00:35.184217 update_engine[1513]: I20250129 17:00:35.184173 1513 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 29 17:00:35.184217 update_engine[1513]: I20250129 17:00:35.184191 1513 omaha_request_action.cc:617] Omaha request response: Jan 29 17:00:35.184360 update_engine[1513]: E20250129 17:00:35.184325 1513 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 29 17:00:35.184393 update_engine[1513]: I20250129 17:00:35.184367 1513 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 29 17:00:35.184424 update_engine[1513]: I20250129 17:00:35.184382 1513 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 17:00:35.184424 update_engine[1513]: I20250129 17:00:35.184396 1513 update_attempter.cc:306] Processing Done. Jan 29 17:00:35.184479 update_engine[1513]: E20250129 17:00:35.184419 1513 update_attempter.cc:619] Update failed. Jan 29 17:00:35.186738 update_engine[1513]: I20250129 17:00:35.186681 1513 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 29 17:00:35.186738 update_engine[1513]: I20250129 17:00:35.186726 1513 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 29 17:00:35.186830 update_engine[1513]: I20250129 17:00:35.186743 1513 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 29 17:00:35.187213 update_engine[1513]: I20250129 17:00:35.186858 1513 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 29 17:00:35.187213 update_engine[1513]: I20250129 17:00:35.186903 1513 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 29 17:00:35.187213 update_engine[1513]: I20250129 17:00:35.186918 1513 omaha_request_action.cc:272] Request: Jan 29 17:00:35.187213 update_engine[1513]: I20250129 17:00:35.186966 1513 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 17:00:35.187493 locksmithd[1541]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 29 17:00:35.187865 update_engine[1513]: I20250129 17:00:35.187264 1513 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 17:00:35.187865 update_engine[1513]: I20250129 17:00:35.187575 1513 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 17:00:35.188158 update_engine[1513]: E20250129 17:00:35.188115 1513 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 17:00:35.188203 update_engine[1513]: I20250129 17:00:35.188184 1513 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 29 17:00:35.188235 update_engine[1513]: I20250129 17:00:35.188199 1513 omaha_request_action.cc:617] Omaha request response: Jan 29 17:00:35.188235 update_engine[1513]: I20250129 17:00:35.188215 1513 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 17:00:35.188293 update_engine[1513]: I20250129 17:00:35.188228 1513 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 17:00:35.188293 update_engine[1513]: I20250129 17:00:35.188242 1513 update_attempter.cc:306] Processing Done. Jan 29 17:00:35.188293 update_engine[1513]: I20250129 17:00:35.188257 1513 update_attempter.cc:310] Error event sent. Jan 29 17:00:35.188293 update_engine[1513]: I20250129 17:00:35.188275 1513 update_check_scheduler.cc:74] Next update check in 40m1s Jan 29 17:00:35.188752 locksmithd[1541]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 29 17:00:37.384229 systemd[1]: Started sshd@18-168.119.110.78:22-147.75.109.163:50614.service - OpenSSH per-connection server daemon (147.75.109.163:50614). Jan 29 17:00:38.399240 sshd[4371]: Accepted publickey for core from 147.75.109.163 port 50614 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:38.402286 sshd-session[4371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:38.412664 systemd-logind[1509]: New session 18 of user core. Jan 29 17:00:38.420190 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 17:00:39.201897 sshd[4373]: Connection closed by 147.75.109.163 port 50614 Jan 29 17:00:39.203835 sshd-session[4371]: pam_unix(sshd:session): session closed for user core Jan 29 17:00:39.209993 systemd[1]: sshd@18-168.119.110.78:22-147.75.109.163:50614.service: Deactivated successfully. Jan 29 17:00:39.214801 systemd[1]: session-18.scope: Deactivated successfully. 
Jan 29 17:00:39.218600 systemd-logind[1509]: Session 18 logged out. Waiting for processes to exit. Jan 29 17:00:39.221268 systemd-logind[1509]: Removed session 18. Jan 29 17:00:44.383292 systemd[1]: Started sshd@19-168.119.110.78:22-147.75.109.163:55620.service - OpenSSH per-connection server daemon (147.75.109.163:55620). Jan 29 17:00:45.384117 sshd[4385]: Accepted publickey for core from 147.75.109.163 port 55620 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:45.387361 sshd-session[4385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:45.397071 systemd-logind[1509]: New session 19 of user core. Jan 29 17:00:45.404108 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 17:00:46.205414 sshd[4387]: Connection closed by 147.75.109.163 port 55620 Jan 29 17:00:46.206721 sshd-session[4385]: pam_unix(sshd:session): session closed for user core Jan 29 17:00:46.214212 systemd[1]: sshd@19-168.119.110.78:22-147.75.109.163:55620.service: Deactivated successfully. Jan 29 17:00:46.219792 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 17:00:46.226422 systemd-logind[1509]: Session 19 logged out. Waiting for processes to exit. Jan 29 17:00:46.228426 systemd-logind[1509]: Removed session 19. Jan 29 17:00:46.392761 systemd[1]: Started sshd@20-168.119.110.78:22-147.75.109.163:55636.service - OpenSSH per-connection server daemon (147.75.109.163:55636). Jan 29 17:00:47.383513 sshd[4399]: Accepted publickey for core from 147.75.109.163 port 55636 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:47.386627 sshd-session[4399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:47.394910 systemd-logind[1509]: New session 20 of user core. Jan 29 17:00:47.406201 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 17:00:49.348502 systemd[1]: run-containerd-runc-k8s.io-3e291ef040c3600b1b7ab06dc3aa3bbe9ad51f4ed3a644e6155915c054ca5413-runc.ZwxkW5.mount: Deactivated successfully. Jan 29 17:00:49.369495 containerd[1524]: time="2025-01-29T17:00:49.369428323Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 17:00:49.375050 containerd[1524]: time="2025-01-29T17:00:49.374917766Z" level=info msg="StopContainer for \"9df1cf1d6ed21f6caef0ae9fb2e0807830f8d691c80881ac5de8b60ebbc526d8\" with timeout 30 (s)" Jan 29 17:00:49.376829 containerd[1524]: time="2025-01-29T17:00:49.376522903Z" level=info msg="Stop container \"9df1cf1d6ed21f6caef0ae9fb2e0807830f8d691c80881ac5de8b60ebbc526d8\" with signal terminated" Jan 29 17:00:49.388996 containerd[1524]: time="2025-01-29T17:00:49.388962912Z" level=info msg="StopContainer for \"3e291ef040c3600b1b7ab06dc3aa3bbe9ad51f4ed3a644e6155915c054ca5413\" with timeout 2 (s)" Jan 29 17:00:49.390388 containerd[1524]: time="2025-01-29T17:00:49.390351779Z" level=info msg="Stop container \"3e291ef040c3600b1b7ab06dc3aa3bbe9ad51f4ed3a644e6155915c054ca5413\" with signal terminated" Jan 29 17:00:49.399411 systemd[1]: cri-containerd-9df1cf1d6ed21f6caef0ae9fb2e0807830f8d691c80881ac5de8b60ebbc526d8.scope: Deactivated successfully. 
Jan 29 17:00:49.404305 systemd-networkd[1425]: lxc_health: Link DOWN Jan 29 17:00:49.404311 systemd-networkd[1425]: lxc_health: Lost carrier Jan 29 17:00:49.446228 systemd[1]: cri-containerd-3e291ef040c3600b1b7ab06dc3aa3bbe9ad51f4ed3a644e6155915c054ca5413.scope: Deactivated successfully. Jan 29 17:00:49.446555 systemd[1]: cri-containerd-3e291ef040c3600b1b7ab06dc3aa3bbe9ad51f4ed3a644e6155915c054ca5413.scope: Consumed 8.487s CPU time, 194.6M memory peak, 70.5M read from disk, 13.3M written to disk. Jan 29 17:00:49.453460 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9df1cf1d6ed21f6caef0ae9fb2e0807830f8d691c80881ac5de8b60ebbc526d8-rootfs.mount: Deactivated successfully. Jan 29 17:00:49.465649 containerd[1524]: time="2025-01-29T17:00:49.465516303Z" level=info msg="shim disconnected" id=9df1cf1d6ed21f6caef0ae9fb2e0807830f8d691c80881ac5de8b60ebbc526d8 namespace=k8s.io Jan 29 17:00:49.465649 containerd[1524]: time="2025-01-29T17:00:49.465610462Z" level=warning msg="cleaning up after shim disconnected" id=9df1cf1d6ed21f6caef0ae9fb2e0807830f8d691c80881ac5de8b60ebbc526d8 namespace=k8s.io Jan 29 17:00:49.465649 containerd[1524]: time="2025-01-29T17:00:49.465644607Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 17:00:49.477968 kubelet[2813]: E0129 17:00:49.474523 2813 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 17:00:49.477887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e291ef040c3600b1b7ab06dc3aa3bbe9ad51f4ed3a644e6155915c054ca5413-rootfs.mount: Deactivated successfully. Jan 29 17:00:49.484264 containerd[1524]: time="2025-01-29T17:00:49.484162777Z" level=info msg="shim disconnected" id=3e291ef040c3600b1b7ab06dc3aa3bbe9ad51f4ed3a644e6155915c054ca5413 namespace=k8s.io Jan 29 17:00:49.484264 containerd[1524]: time="2025-01-29T17:00:49.484213443Z" level=warning msg="cleaning up after shim disconnected" id=3e291ef040c3600b1b7ab06dc3aa3bbe9ad51f4ed3a644e6155915c054ca5413 namespace=k8s.io Jan 29 17:00:49.484264 containerd[1524]: time="2025-01-29T17:00:49.484221037Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 17:00:49.490775 containerd[1524]: time="2025-01-29T17:00:49.490740896Z" level=info msg="StopContainer for \"9df1cf1d6ed21f6caef0ae9fb2e0807830f8d691c80881ac5de8b60ebbc526d8\" returns successfully" Jan 29 17:00:49.499157 containerd[1524]: time="2025-01-29T17:00:49.499059000Z" level=info msg="StopPodSandbox for \"1a5d08e622168bdb282f7e5dd7b99c19f5d53b10933e743cddc603e27d184c0c\"" Jan 29 17:00:49.502188 containerd[1524]: time="2025-01-29T17:00:49.502004503Z" level=info msg="Container to stop \"9df1cf1d6ed21f6caef0ae9fb2e0807830f8d691c80881ac5de8b60ebbc526d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 17:00:49.508316 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1a5d08e622168bdb282f7e5dd7b99c19f5d53b10933e743cddc603e27d184c0c-shm.mount: Deactivated successfully. 
Jan 29 17:00:49.513474 containerd[1524]: time="2025-01-29T17:00:49.513300861Z" level=info msg="StopContainer for \"3e291ef040c3600b1b7ab06dc3aa3bbe9ad51f4ed3a644e6155915c054ca5413\" returns successfully" Jan 29 17:00:49.514035 containerd[1524]: time="2025-01-29T17:00:49.513864371Z" level=info msg="StopPodSandbox for \"ec4967c37363fc492e82b4ef86ea0968c0dddb27693abed6d781f27fc895c15a\"" Jan 29 17:00:49.514035 containerd[1524]: time="2025-01-29T17:00:49.513898265Z" level=info msg="Container to stop \"eaef95b16325e8b847f2dcab79bcbbdb4cfe0e1077d0250f7389cd0cd83903f4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 17:00:49.514035 containerd[1524]: time="2025-01-29T17:00:49.513956135Z" level=info msg="Container to stop \"0068963738251ae83872a12de70e80cb6685b68d8520e8a01a7c1bb98d0eaaa5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 17:00:49.514035 containerd[1524]: time="2025-01-29T17:00:49.513967596Z" level=info msg="Container to stop \"3e291ef040c3600b1b7ab06dc3aa3bbe9ad51f4ed3a644e6155915c054ca5413\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 17:00:49.514035 containerd[1524]: time="2025-01-29T17:00:49.513981443Z" level=info msg="Container to stop \"a97cf2e776f91d6515f0e66139a80617650da10649420e9bf7305981d0de03e2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 17:00:49.514035 containerd[1524]: time="2025-01-29T17:00:49.513993987Z" level=info msg="Container to stop \"34cecb6d7d4ce4e80a5bde3ed6a9f0a599d7bc6fb8e999ca5c2d89ad67282148\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 17:00:49.525922 systemd[1]: cri-containerd-1a5d08e622168bdb282f7e5dd7b99c19f5d53b10933e743cddc603e27d184c0c.scope: Deactivated successfully. Jan 29 17:00:49.528117 systemd[1]: cri-containerd-ec4967c37363fc492e82b4ef86ea0968c0dddb27693abed6d781f27fc895c15a.scope: Deactivated successfully. 
Jan 29 17:00:49.581455 containerd[1524]: time="2025-01-29T17:00:49.581374260Z" level=info msg="shim disconnected" id=1a5d08e622168bdb282f7e5dd7b99c19f5d53b10933e743cddc603e27d184c0c namespace=k8s.io Jan 29 17:00:49.581455 containerd[1524]: time="2025-01-29T17:00:49.581435467Z" level=warning msg="cleaning up after shim disconnected" id=1a5d08e622168bdb282f7e5dd7b99c19f5d53b10933e743cddc603e27d184c0c namespace=k8s.io Jan 29 17:00:49.581455 containerd[1524]: time="2025-01-29T17:00:49.581444073Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 17:00:49.582017 containerd[1524]: time="2025-01-29T17:00:49.581666976Z" level=info msg="shim disconnected" id=ec4967c37363fc492e82b4ef86ea0968c0dddb27693abed6d781f27fc895c15a namespace=k8s.io Jan 29 17:00:49.582017 containerd[1524]: time="2025-01-29T17:00:49.581687114Z" level=warning msg="cleaning up after shim disconnected" id=ec4967c37363fc492e82b4ef86ea0968c0dddb27693abed6d781f27fc895c15a namespace=k8s.io Jan 29 17:00:49.582017 containerd[1524]: time="2025-01-29T17:00:49.581693987Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 17:00:49.615954 containerd[1524]: time="2025-01-29T17:00:49.615331895Z" level=warning msg="cleanup warnings time=\"2025-01-29T17:00:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 17:00:49.617974 containerd[1524]: time="2025-01-29T17:00:49.617767029Z" level=info msg="TearDown network for sandbox \"1a5d08e622168bdb282f7e5dd7b99c19f5d53b10933e743cddc603e27d184c0c\" successfully" Jan 29 17:00:49.617974 containerd[1524]: time="2025-01-29T17:00:49.617793399Z" level=info msg="StopPodSandbox for \"1a5d08e622168bdb282f7e5dd7b99c19f5d53b10933e743cddc603e27d184c0c\" returns successfully" Jan 29 17:00:49.620786 containerd[1524]: time="2025-01-29T17:00:49.620748290Z" level=info msg="TearDown network for sandbox \"ec4967c37363fc492e82b4ef86ea0968c0dddb27693abed6d781f27fc895c15a\" successfully" Jan 29 17:00:49.620786 containerd[1524]: time="2025-01-29T17:00:49.620772836Z" level=info msg="StopPodSandbox for \"ec4967c37363fc492e82b4ef86ea0968c0dddb27693abed6d781f27fc895c15a\" returns successfully" Jan 29 17:00:49.665385 kubelet[2813]: I0129 17:00:49.664180 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b34e1d8-a11a-4e2c-bc0e-971716ea9b60-cilium-config-path\") pod \"0b34e1d8-a11a-4e2c-bc0e-971716ea9b60\" (UID: \"0b34e1d8-a11a-4e2c-bc0e-971716ea9b60\") " Jan 29 17:00:49.665385 kubelet[2813]: I0129 17:00:49.664274 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvmnl\" (UniqueName: \"kubernetes.io/projected/0b34e1d8-a11a-4e2c-bc0e-971716ea9b60-kube-api-access-lvmnl\") pod \"0b34e1d8-a11a-4e2c-bc0e-971716ea9b60\" (UID: \"0b34e1d8-a11a-4e2c-bc0e-971716ea9b60\") " Jan 29 17:00:49.695471 kubelet[2813]: I0129 17:00:49.693511 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b34e1d8-a11a-4e2c-bc0e-971716ea9b60-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0b34e1d8-a11a-4e2c-bc0e-971716ea9b60" (UID: "0b34e1d8-a11a-4e2c-bc0e-971716ea9b60"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 29 17:00:49.695810 kubelet[2813]: I0129 17:00:49.693499 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b34e1d8-a11a-4e2c-bc0e-971716ea9b60-kube-api-access-lvmnl" (OuterVolumeSpecName: "kube-api-access-lvmnl") pod "0b34e1d8-a11a-4e2c-bc0e-971716ea9b60" (UID: "0b34e1d8-a11a-4e2c-bc0e-971716ea9b60"). InnerVolumeSpecName "kube-api-access-lvmnl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 29 17:00:49.765102 kubelet[2813]: I0129 17:00:49.764999 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a42266af-4520-4d6a-b43a-dee7a81ab497-hubble-tls\") pod \"a42266af-4520-4d6a-b43a-dee7a81ab497\" (UID: \"a42266af-4520-4d6a-b43a-dee7a81ab497\") " Jan 29 17:00:49.765274 kubelet[2813]: I0129 17:00:49.765263 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-cilium-run\") pod \"a42266af-4520-4d6a-b43a-dee7a81ab497\" (UID: \"a42266af-4520-4d6a-b43a-dee7a81ab497\") " Jan 29 17:00:49.765335 kubelet[2813]: I0129 17:00:49.765324 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a42266af-4520-4d6a-b43a-dee7a81ab497-cilium-config-path\") pod \"a42266af-4520-4d6a-b43a-dee7a81ab497\" (UID: \"a42266af-4520-4d6a-b43a-dee7a81ab497\") " Jan 29 17:00:49.765389 kubelet[2813]: I0129 17:00:49.765379 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-xtables-lock\") pod \"a42266af-4520-4d6a-b43a-dee7a81ab497\" (UID: \"a42266af-4520-4d6a-b43a-dee7a81ab497\") " Jan 29 17:00:49.765460 kubelet[2813]: I0129 17:00:49.765448 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-cni-path\") pod \"a42266af-4520-4d6a-b43a-dee7a81ab497\" (UID: \"a42266af-4520-4d6a-b43a-dee7a81ab497\") " Jan 29 17:00:49.765517 kubelet[2813]: I0129 17:00:49.765507 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-etc-cni-netd\") pod \"a42266af-4520-4d6a-b43a-dee7a81ab497\" (UID: \"a42266af-4520-4d6a-b43a-dee7a81ab497\") " Jan 29 17:00:49.766944 kubelet[2813]: I0129 17:00:49.765559 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-bpf-maps\") pod \"a42266af-4520-4d6a-b43a-dee7a81ab497\" (UID: \"a42266af-4520-4d6a-b43a-dee7a81ab497\") " Jan 29 17:00:49.766944 kubelet[2813]: I0129 17:00:49.765576 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fznvt\" (UniqueName: \"kubernetes.io/projected/a42266af-4520-4d6a-b43a-dee7a81ab497-kube-api-access-fznvt\") pod \"a42266af-4520-4d6a-b43a-dee7a81ab497\" (UID: \"a42266af-4520-4d6a-b43a-dee7a81ab497\") " Jan 29 17:00:49.766944 kubelet[2813]: I0129 17:00:49.765597 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/a42266af-4520-4d6a-b43a-dee7a81ab497-clustermesh-secrets\") pod \"a42266af-4520-4d6a-b43a-dee7a81ab497\" (UID: \"a42266af-4520-4d6a-b43a-dee7a81ab497\") " Jan 29 17:00:49.766944 kubelet[2813]: I0129 17:00:49.765610 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-cilium-cgroup\") pod \"a42266af-4520-4d6a-b43a-dee7a81ab497\" (UID: \"a42266af-4520-4d6a-b43a-dee7a81ab497\") " Jan 29 17:00:49.766944 kubelet[2813]: I0129 17:00:49.765625 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-hostproc\") pod \"a42266af-4520-4d6a-b43a-dee7a81ab497\" (UID: \"a42266af-4520-4d6a-b43a-dee7a81ab497\") " Jan 29 17:00:49.766944 kubelet[2813]: I0129 17:00:49.765637 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-host-proc-sys-kernel\") pod \"a42266af-4520-4d6a-b43a-dee7a81ab497\" (UID: \"a42266af-4520-4d6a-b43a-dee7a81ab497\") " Jan 29 17:00:49.767111 kubelet[2813]: I0129 17:00:49.765652 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-lib-modules\") pod \"a42266af-4520-4d6a-b43a-dee7a81ab497\" (UID: \"a42266af-4520-4d6a-b43a-dee7a81ab497\") " Jan 29 17:00:49.767111 kubelet[2813]: I0129 17:00:49.765666 2813 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-host-proc-sys-net\") pod \"a42266af-4520-4d6a-b43a-dee7a81ab497\" (UID: \"a42266af-4520-4d6a-b43a-dee7a81ab497\") " Jan 29 17:00:49.767111 kubelet[2813]: I0129 17:00:49.765703 2813 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lvmnl\" (UniqueName: \"kubernetes.io/projected/0b34e1d8-a11a-4e2c-bc0e-971716ea9b60-kube-api-access-lvmnl\") on node \"ci-4230-0-0-c-e7d65f4211\" DevicePath \"\"" Jan 29 17:00:49.767111 kubelet[2813]: I0129 17:00:49.765712 2813 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b34e1d8-a11a-4e2c-bc0e-971716ea9b60-cilium-config-path\") on node \"ci-4230-0-0-c-e7d65f4211\" DevicePath \"\"" Jan 29 17:00:49.767111 kubelet[2813]: I0129 17:00:49.765736 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a42266af-4520-4d6a-b43a-dee7a81ab497" (UID: "a42266af-4520-4d6a-b43a-dee7a81ab497"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 17:00:49.767111 kubelet[2813]: I0129 17:00:49.765765 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a42266af-4520-4d6a-b43a-dee7a81ab497" (UID: "a42266af-4520-4d6a-b43a-dee7a81ab497"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 17:00:49.769203 kubelet[2813]: I0129 17:00:49.769186 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a42266af-4520-4d6a-b43a-dee7a81ab497-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a42266af-4520-4d6a-b43a-dee7a81ab497" (UID: "a42266af-4520-4d6a-b43a-dee7a81ab497"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 29 17:00:49.769289 kubelet[2813]: I0129 17:00:49.769274 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a42266af-4520-4d6a-b43a-dee7a81ab497" (UID: "a42266af-4520-4d6a-b43a-dee7a81ab497"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 17:00:49.769349 kubelet[2813]: I0129 17:00:49.769338 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-cni-path" (OuterVolumeSpecName: "cni-path") pod "a42266af-4520-4d6a-b43a-dee7a81ab497" (UID: "a42266af-4520-4d6a-b43a-dee7a81ab497"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 17:00:49.769405 kubelet[2813]: I0129 17:00:49.769394 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a42266af-4520-4d6a-b43a-dee7a81ab497" (UID: "a42266af-4520-4d6a-b43a-dee7a81ab497"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 17:00:49.769463 kubelet[2813]: I0129 17:00:49.769453 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a42266af-4520-4d6a-b43a-dee7a81ab497" (UID: "a42266af-4520-4d6a-b43a-dee7a81ab497"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 17:00:49.770299 kubelet[2813]: I0129 17:00:49.770259 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a42266af-4520-4d6a-b43a-dee7a81ab497-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a42266af-4520-4d6a-b43a-dee7a81ab497" (UID: "a42266af-4520-4d6a-b43a-dee7a81ab497"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 29 17:00:49.770350 kubelet[2813]: I0129 17:00:49.770327 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-hostproc" (OuterVolumeSpecName: "hostproc") pod "a42266af-4520-4d6a-b43a-dee7a81ab497" (UID: "a42266af-4520-4d6a-b43a-dee7a81ab497"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 17:00:49.771973 kubelet[2813]: I0129 17:00:49.771945 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a42266af-4520-4d6a-b43a-dee7a81ab497-kube-api-access-fznvt" (OuterVolumeSpecName: "kube-api-access-fznvt") pod "a42266af-4520-4d6a-b43a-dee7a81ab497" (UID: "a42266af-4520-4d6a-b43a-dee7a81ab497"). InnerVolumeSpecName "kube-api-access-fznvt". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 29 17:00:49.772053 kubelet[2813]: I0129 17:00:49.772041 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a42266af-4520-4d6a-b43a-dee7a81ab497" (UID: "a42266af-4520-4d6a-b43a-dee7a81ab497"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 17:00:49.772112 kubelet[2813]: I0129 17:00:49.772100 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a42266af-4520-4d6a-b43a-dee7a81ab497" (UID: "a42266af-4520-4d6a-b43a-dee7a81ab497"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 17:00:49.772182 kubelet[2813]: I0129 17:00:49.772170 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a42266af-4520-4d6a-b43a-dee7a81ab497" (UID: "a42266af-4520-4d6a-b43a-dee7a81ab497"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 17:00:49.791176 kubelet[2813]: I0129 17:00:49.791095 2813 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a42266af-4520-4d6a-b43a-dee7a81ab497-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a42266af-4520-4d6a-b43a-dee7a81ab497" (UID: "a42266af-4520-4d6a-b43a-dee7a81ab497"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 29 17:00:49.866342 kubelet[2813]: I0129 17:00:49.866114 2813 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a42266af-4520-4d6a-b43a-dee7a81ab497-hubble-tls\") on node \"ci-4230-0-0-c-e7d65f4211\" DevicePath \"\"" Jan 29 17:00:49.866342 kubelet[2813]: I0129 17:00:49.866171 2813 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-cilium-run\") on node \"ci-4230-0-0-c-e7d65f4211\" DevicePath \"\"" Jan 29 17:00:49.866342 kubelet[2813]: I0129 17:00:49.866195 2813 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a42266af-4520-4d6a-b43a-dee7a81ab497-cilium-config-path\") on node \"ci-4230-0-0-c-e7d65f4211\" DevicePath \"\"" Jan 29 17:00:49.866342 kubelet[2813]: I0129 17:00:49.866221 2813 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-xtables-lock\") on node \"ci-4230-0-0-c-e7d65f4211\" DevicePath \"\"" Jan 29 17:00:49.866342 kubelet[2813]: I0129 17:00:49.866254 2813 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-cni-path\") on node \"ci-4230-0-0-c-e7d65f4211\" DevicePath \"\"" Jan 29 17:00:49.866342 kubelet[2813]: I0129 17:00:49.866277 2813 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fznvt\" (UniqueName: \"kubernetes.io/projected/a42266af-4520-4d6a-b43a-dee7a81ab497-kube-api-access-fznvt\") on node \"ci-4230-0-0-c-e7d65f4211\" DevicePath \"\"" Jan 29 
17:00:49.866342 kubelet[2813]: I0129 17:00:49.866301 2813 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-etc-cni-netd\") on node \"ci-4230-0-0-c-e7d65f4211\" DevicePath \"\"" Jan 29 17:00:49.866342 kubelet[2813]: I0129 17:00:49.866325 2813 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-bpf-maps\") on node \"ci-4230-0-0-c-e7d65f4211\" DevicePath \"\"" Jan 29 17:00:49.867874 kubelet[2813]: I0129 17:00:49.866347 2813 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a42266af-4520-4d6a-b43a-dee7a81ab497-clustermesh-secrets\") on node \"ci-4230-0-0-c-e7d65f4211\" DevicePath \"\"" Jan 29 17:00:49.867874 kubelet[2813]: I0129 17:00:49.866373 2813 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-cilium-cgroup\") on node \"ci-4230-0-0-c-e7d65f4211\" DevicePath \"\"" Jan 29 17:00:49.867874 kubelet[2813]: I0129 17:00:49.866395 2813 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-hostproc\") on node \"ci-4230-0-0-c-e7d65f4211\" DevicePath \"\"" Jan 29 17:00:49.867874 kubelet[2813]: I0129 17:00:49.866416 2813 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-lib-modules\") on node \"ci-4230-0-0-c-e7d65f4211\" DevicePath \"\"" Jan 29 17:00:49.867874 kubelet[2813]: I0129 17:00:49.866437 2813 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-host-proc-sys-net\") on node \"ci-4230-0-0-c-e7d65f4211\" DevicePath \"\"" Jan 29 17:00:49.867874 kubelet[2813]: I0129 17:00:49.866459 2813 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a42266af-4520-4d6a-b43a-dee7a81ab497-host-proc-sys-kernel\") on node \"ci-4230-0-0-c-e7d65f4211\" DevicePath \"\"" Jan 29 17:00:50.022179 systemd[1]: Removed slice kubepods-besteffort-pod0b34e1d8_a11a_4e2c_bc0e_971716ea9b60.slice - libcontainer container kubepods-besteffort-pod0b34e1d8_a11a_4e2c_bc0e_971716ea9b60.slice. Jan 29 17:00:50.049095 kubelet[2813]: I0129 17:00:50.047837 2813 scope.go:117] "RemoveContainer" containerID="9df1cf1d6ed21f6caef0ae9fb2e0807830f8d691c80881ac5de8b60ebbc526d8" Jan 29 17:00:50.075327 systemd[1]: Removed slice kubepods-burstable-poda42266af_4520_4d6a_b43a_dee7a81ab497.slice - libcontainer container kubepods-burstable-poda42266af_4520_4d6a_b43a_dee7a81ab497.slice. Jan 29 17:00:50.075882 systemd[1]: kubepods-burstable-poda42266af_4520_4d6a_b43a_dee7a81ab497.slice: Consumed 8.639s CPU time, 194.9M memory peak, 70.6M read from disk, 13.3M written to disk. 
Jan 29 17:00:50.086156 containerd[1524]: time="2025-01-29T17:00:50.086081900Z" level=info msg="RemoveContainer for \"9df1cf1d6ed21f6caef0ae9fb2e0807830f8d691c80881ac5de8b60ebbc526d8\"" Jan 29 17:00:50.093482 containerd[1524]: time="2025-01-29T17:00:50.093429275Z" level=info msg="RemoveContainer for \"9df1cf1d6ed21f6caef0ae9fb2e0807830f8d691c80881ac5de8b60ebbc526d8\" returns successfully" Jan 29 17:00:50.094036 kubelet[2813]: I0129 17:00:50.093965 2813 scope.go:117] "RemoveContainer" containerID="9df1cf1d6ed21f6caef0ae9fb2e0807830f8d691c80881ac5de8b60ebbc526d8" Jan 29 17:00:50.094742 containerd[1524]: time="2025-01-29T17:00:50.094562127Z" level=error msg="ContainerStatus for \"9df1cf1d6ed21f6caef0ae9fb2e0807830f8d691c80881ac5de8b60ebbc526d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9df1cf1d6ed21f6caef0ae9fb2e0807830f8d691c80881ac5de8b60ebbc526d8\": not found" Jan 29 17:00:50.095432 kubelet[2813]: E0129 17:00:50.095209 2813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9df1cf1d6ed21f6caef0ae9fb2e0807830f8d691c80881ac5de8b60ebbc526d8\": not found" containerID="9df1cf1d6ed21f6caef0ae9fb2e0807830f8d691c80881ac5de8b60ebbc526d8" Jan 29 17:00:50.128763 kubelet[2813]: I0129 17:00:50.095418 2813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9df1cf1d6ed21f6caef0ae9fb2e0807830f8d691c80881ac5de8b60ebbc526d8"} err="failed to get container status \"9df1cf1d6ed21f6caef0ae9fb2e0807830f8d691c80881ac5de8b60ebbc526d8\": rpc error: code = NotFound desc = an error occurred when try to find container \"9df1cf1d6ed21f6caef0ae9fb2e0807830f8d691c80881ac5de8b60ebbc526d8\": not found" Jan 29 17:00:50.128763 kubelet[2813]: I0129 17:00:50.128665 2813 scope.go:117] "RemoveContainer" containerID="3e291ef040c3600b1b7ab06dc3aa3bbe9ad51f4ed3a644e6155915c054ca5413" Jan 29 17:00:50.130615 containerd[1524]: time="2025-01-29T17:00:50.130514679Z" level=info msg="RemoveContainer for \"3e291ef040c3600b1b7ab06dc3aa3bbe9ad51f4ed3a644e6155915c054ca5413\"" Jan 29 17:00:50.136862 containerd[1524]: time="2025-01-29T17:00:50.136730605Z" level=info msg="RemoveContainer for \"3e291ef040c3600b1b7ab06dc3aa3bbe9ad51f4ed3a644e6155915c054ca5413\" returns successfully" Jan 29 17:00:50.137114 kubelet[2813]: I0129 17:00:50.137083 2813 scope.go:117] "RemoveContainer" containerID="0068963738251ae83872a12de70e80cb6685b68d8520e8a01a7c1bb98d0eaaa5" Jan 29 17:00:50.138677 containerd[1524]: time="2025-01-29T17:00:50.138273045Z" level=info msg="RemoveContainer for \"0068963738251ae83872a12de70e80cb6685b68d8520e8a01a7c1bb98d0eaaa5\"" Jan 29 17:00:50.143373 containerd[1524]: time="2025-01-29T17:00:50.143340038Z" level=info msg="RemoveContainer for \"0068963738251ae83872a12de70e80cb6685b68d8520e8a01a7c1bb98d0eaaa5\" returns successfully" Jan 29 17:00:50.143683 kubelet[2813]: I0129 17:00:50.143649 2813 scope.go:117] "RemoveContainer" containerID="34cecb6d7d4ce4e80a5bde3ed6a9f0a599d7bc6fb8e999ca5c2d89ad67282148" Jan 29 17:00:50.144970 containerd[1524]: time="2025-01-29T17:00:50.144683219Z" level=info msg="RemoveContainer for \"34cecb6d7d4ce4e80a5bde3ed6a9f0a599d7bc6fb8e999ca5c2d89ad67282148\"" Jan 29 17:00:50.150327 containerd[1524]: time="2025-01-29T17:00:50.150260322Z" level=info msg="RemoveContainer for \"34cecb6d7d4ce4e80a5bde3ed6a9f0a599d7bc6fb8e999ca5c2d89ad67282148\" returns successfully"
Jan 29 17:00:50.150606 kubelet[2813]: I0129 17:00:50.150548 2813 scope.go:117] "RemoveContainer" containerID="eaef95b16325e8b847f2dcab79bcbbdb4cfe0e1077d0250f7389cd0cd83903f4" Jan 29 17:00:50.151814 containerd[1524]: time="2025-01-29T17:00:50.151785679Z" level=info msg="RemoveContainer for \"eaef95b16325e8b847f2dcab79bcbbdb4cfe0e1077d0250f7389cd0cd83903f4\"" Jan 29 17:00:50.159603 containerd[1524]: time="2025-01-29T17:00:50.159370716Z" level=info msg="RemoveContainer for \"eaef95b16325e8b847f2dcab79bcbbdb4cfe0e1077d0250f7389cd0cd83903f4\" returns successfully" Jan 29 17:00:50.160059 kubelet[2813]: I0129 17:00:50.160031 2813 scope.go:117] "RemoveContainer" containerID="a97cf2e776f91d6515f0e66139a80617650da10649420e9bf7305981d0de03e2" Jan 29 17:00:50.161476 containerd[1524]: time="2025-01-29T17:00:50.161365103Z" level=info msg="RemoveContainer for \"a97cf2e776f91d6515f0e66139a80617650da10649420e9bf7305981d0de03e2\"" Jan 29 17:00:50.167042 containerd[1524]: time="2025-01-29T17:00:50.166902489Z" level=info msg="RemoveContainer for \"a97cf2e776f91d6515f0e66139a80617650da10649420e9bf7305981d0de03e2\" returns successfully" Jan 29 17:00:50.167546 kubelet[2813]: I0129 17:00:50.167408 2813 scope.go:117] "RemoveContainer" containerID="3e291ef040c3600b1b7ab06dc3aa3bbe9ad51f4ed3a644e6155915c054ca5413" Jan 29 17:00:50.167754 containerd[1524]: time="2025-01-29T17:00:50.167692741Z" level=error msg="ContainerStatus for \"3e291ef040c3600b1b7ab06dc3aa3bbe9ad51f4ed3a644e6155915c054ca5413\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e291ef040c3600b1b7ab06dc3aa3bbe9ad51f4ed3a644e6155915c054ca5413\": not found" Jan 29 17:00:50.167915 kubelet[2813]: E0129 17:00:50.167873 2813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e291ef040c3600b1b7ab06dc3aa3bbe9ad51f4ed3a644e6155915c054ca5413\": not found" containerID="3e291ef040c3600b1b7ab06dc3aa3bbe9ad51f4ed3a644e6155915c054ca5413" Jan 29 17:00:50.168022 kubelet[2813]: I0129 17:00:50.167913 2813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3e291ef040c3600b1b7ab06dc3aa3bbe9ad51f4ed3a644e6155915c054ca5413"} err="failed to get container status \"3e291ef040c3600b1b7ab06dc3aa3bbe9ad51f4ed3a644e6155915c054ca5413\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e291ef040c3600b1b7ab06dc3aa3bbe9ad51f4ed3a644e6155915c054ca5413\": not found" Jan 29 17:00:50.168022 kubelet[2813]: I0129 17:00:50.167975 2813 scope.go:117] "RemoveContainer" containerID="0068963738251ae83872a12de70e80cb6685b68d8520e8a01a7c1bb98d0eaaa5" Jan 29 17:00:50.168459 containerd[1524]: time="2025-01-29T17:00:50.168414962Z" level=error msg="ContainerStatus for \"0068963738251ae83872a12de70e80cb6685b68d8520e8a01a7c1bb98d0eaaa5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0068963738251ae83872a12de70e80cb6685b68d8520e8a01a7c1bb98d0eaaa5\": not found" Jan 29 17:00:50.168850 kubelet[2813]: E0129 17:00:50.168742 2813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0068963738251ae83872a12de70e80cb6685b68d8520e8a01a7c1bb98d0eaaa5\": not found" containerID="0068963738251ae83872a12de70e80cb6685b68d8520e8a01a7c1bb98d0eaaa5"
Jan 29 17:00:50.168912 kubelet[2813]: I0129 17:00:50.168884 2813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0068963738251ae83872a12de70e80cb6685b68d8520e8a01a7c1bb98d0eaaa5"} err="failed to get container status \"0068963738251ae83872a12de70e80cb6685b68d8520e8a01a7c1bb98d0eaaa5\": rpc error: code = NotFound desc = an error occurred when try to find container \"0068963738251ae83872a12de70e80cb6685b68d8520e8a01a7c1bb98d0eaaa5\": not found" Jan 29 17:00:50.168998 kubelet[2813]: I0129 17:00:50.168908 2813 scope.go:117] "RemoveContainer" containerID="34cecb6d7d4ce4e80a5bde3ed6a9f0a599d7bc6fb8e999ca5c2d89ad67282148" Jan 29 17:00:50.169259 containerd[1524]: time="2025-01-29T17:00:50.169190406Z" level=error msg="ContainerStatus for \"34cecb6d7d4ce4e80a5bde3ed6a9f0a599d7bc6fb8e999ca5c2d89ad67282148\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"34cecb6d7d4ce4e80a5bde3ed6a9f0a599d7bc6fb8e999ca5c2d89ad67282148\": not found" Jan 29 17:00:50.169425 kubelet[2813]: E0129 17:00:50.169390 2813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"34cecb6d7d4ce4e80a5bde3ed6a9f0a599d7bc6fb8e999ca5c2d89ad67282148\": not found" containerID="34cecb6d7d4ce4e80a5bde3ed6a9f0a599d7bc6fb8e999ca5c2d89ad67282148" Jan 29 17:00:50.169486 kubelet[2813]: I0129 17:00:50.169422 2813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"34cecb6d7d4ce4e80a5bde3ed6a9f0a599d7bc6fb8e999ca5c2d89ad67282148"} err="failed to get container status \"34cecb6d7d4ce4e80a5bde3ed6a9f0a599d7bc6fb8e999ca5c2d89ad67282148\": rpc error: code = NotFound desc = an error occurred when try to find container \"34cecb6d7d4ce4e80a5bde3ed6a9f0a599d7bc6fb8e999ca5c2d89ad67282148\": not found" Jan 29 17:00:50.169486 kubelet[2813]: I0129 17:00:50.169444 2813 scope.go:117] "RemoveContainer" containerID="eaef95b16325e8b847f2dcab79bcbbdb4cfe0e1077d0250f7389cd0cd83903f4" Jan 29 17:00:50.169902 containerd[1524]: time="2025-01-29T17:00:50.169859225Z" level=error msg="ContainerStatus for \"eaef95b16325e8b847f2dcab79bcbbdb4cfe0e1077d0250f7389cd0cd83903f4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eaef95b16325e8b847f2dcab79bcbbdb4cfe0e1077d0250f7389cd0cd83903f4\": not found" Jan 29 17:00:50.170197 kubelet[2813]: E0129 17:00:50.170069 2813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eaef95b16325e8b847f2dcab79bcbbdb4cfe0e1077d0250f7389cd0cd83903f4\": not found" containerID="eaef95b16325e8b847f2dcab79bcbbdb4cfe0e1077d0250f7389cd0cd83903f4" Jan 29 17:00:50.170197 kubelet[2813]: I0129 17:00:50.170100 2813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eaef95b16325e8b847f2dcab79bcbbdb4cfe0e1077d0250f7389cd0cd83903f4"} err="failed to get container status \"eaef95b16325e8b847f2dcab79bcbbdb4cfe0e1077d0250f7389cd0cd83903f4\": rpc error: code = NotFound desc = an error occurred when try to find container \"eaef95b16325e8b847f2dcab79bcbbdb4cfe0e1077d0250f7389cd0cd83903f4\": not found" Jan 29 17:00:50.170197 kubelet[2813]: I0129 17:00:50.170120 2813 scope.go:117] "RemoveContainer" containerID="a97cf2e776f91d6515f0e66139a80617650da10649420e9bf7305981d0de03e2"
Jan 29 17:00:50.170380 containerd[1524]: time="2025-01-29T17:00:50.170326424Z" level=error msg="ContainerStatus for \"a97cf2e776f91d6515f0e66139a80617650da10649420e9bf7305981d0de03e2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a97cf2e776f91d6515f0e66139a80617650da10649420e9bf7305981d0de03e2\": not found" Jan 29 17:00:50.170500 kubelet[2813]: E0129 17:00:50.170465 2813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a97cf2e776f91d6515f0e66139a80617650da10649420e9bf7305981d0de03e2\": not found" containerID="a97cf2e776f91d6515f0e66139a80617650da10649420e9bf7305981d0de03e2" Jan 29 17:00:50.170545 kubelet[2813]: I0129 17:00:50.170496 2813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a97cf2e776f91d6515f0e66139a80617650da10649420e9bf7305981d0de03e2"} err="failed to get container status \"a97cf2e776f91d6515f0e66139a80617650da10649420e9bf7305981d0de03e2\": rpc error: code = NotFound desc = an error occurred when try to find container \"a97cf2e776f91d6515f0e66139a80617650da10649420e9bf7305981d0de03e2\": not found" Jan 29 17:00:50.341313 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a5d08e622168bdb282f7e5dd7b99c19f5d53b10933e743cddc603e27d184c0c-rootfs.mount: Deactivated successfully. Jan 29 17:00:50.341605 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec4967c37363fc492e82b4ef86ea0968c0dddb27693abed6d781f27fc895c15a-rootfs.mount: Deactivated successfully. Jan 29 17:00:50.341824 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec4967c37363fc492e82b4ef86ea0968c0dddb27693abed6d781f27fc895c15a-shm.mount: Deactivated successfully. Jan 29 17:00:50.342044 systemd[1]: var-lib-kubelet-pods-0b34e1d8\x2da11a\x2d4e2c\x2dbc0e\x2d971716ea9b60-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlvmnl.mount: Deactivated successfully. Jan 29 17:00:50.342213 systemd[1]: var-lib-kubelet-pods-a42266af\x2d4520\x2d4d6a\x2db43a\x2ddee7a81ab497-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfznvt.mount: Deactivated successfully. Jan 29 17:00:50.342391 systemd[1]: var-lib-kubelet-pods-a42266af\x2d4520\x2d4d6a\x2db43a\x2ddee7a81ab497-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 17:00:50.342614 systemd[1]: var-lib-kubelet-pods-a42266af\x2d4520\x2d4d6a\x2db43a\x2ddee7a81ab497-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 17:00:50.367275 kubelet[2813]: I0129 17:00:50.367049 2813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b34e1d8-a11a-4e2c-bc0e-971716ea9b60" path="/var/lib/kubelet/pods/0b34e1d8-a11a-4e2c-bc0e-971716ea9b60/volumes" Jan 29 17:00:50.368427 kubelet[2813]: I0129 17:00:50.368256 2813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a42266af-4520-4d6a-b43a-dee7a81ab497" path="/var/lib/kubelet/pods/a42266af-4520-4d6a-b43a-dee7a81ab497/volumes" Jan 29 17:00:51.354059 sshd[4401]: Connection closed by 147.75.109.163 port 55636 Jan 29 17:00:51.355301 sshd-session[4399]: pam_unix(sshd:session): session closed for user core Jan 29 17:00:51.361574 systemd[1]: sshd@20-168.119.110.78:22-147.75.109.163:55636.service: Deactivated successfully. Jan 29 17:00:51.366685 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 17:00:51.370700 systemd-logind[1509]: Session 20 logged out. Waiting for processes to exit. Jan 29 17:00:51.373657 systemd-logind[1509]: Removed session 20.
Jan 29 17:00:51.546499 systemd[1]: Started sshd@21-168.119.110.78:22-147.75.109.163:41398.service - OpenSSH per-connection server daemon (147.75.109.163:41398). Jan 29 17:00:52.562434 sshd[4567]: Accepted publickey for core from 147.75.109.163 port 41398 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:52.565651 sshd-session[4567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:52.577368 systemd-logind[1509]: New session 21 of user core. Jan 29 17:00:52.585269 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 17:00:53.730687 kubelet[2813]: I0129 17:00:53.730632 2813 memory_manager.go:355] "RemoveStaleState removing state" podUID="a42266af-4520-4d6a-b43a-dee7a81ab497" containerName="cilium-agent" Jan 29 17:00:53.730687 kubelet[2813]: I0129 17:00:53.730682 2813 memory_manager.go:355] "RemoveStaleState removing state" podUID="0b34e1d8-a11a-4e2c-bc0e-971716ea9b60" containerName="cilium-operator" Jan 29 17:00:53.812144 kubelet[2813]: I0129 17:00:53.810914 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e45d3d93-e139-4051-95ac-2f0d2037ffcf-etc-cni-netd\") pod \"cilium-sc4zf\" (UID: \"e45d3d93-e139-4051-95ac-2f0d2037ffcf\") " pod="kube-system/cilium-sc4zf" Jan 29 17:00:53.812144 kubelet[2813]: I0129 17:00:53.810959 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e45d3d93-e139-4051-95ac-2f0d2037ffcf-lib-modules\") pod \"cilium-sc4zf\" (UID: \"e45d3d93-e139-4051-95ac-2f0d2037ffcf\") " pod="kube-system/cilium-sc4zf" Jan 29 17:00:53.812144 kubelet[2813]: I0129 17:00:53.810977 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e45d3d93-e139-4051-95ac-2f0d2037ffcf-cilium-config-path\") pod \"cilium-sc4zf\" (UID: \"e45d3d93-e139-4051-95ac-2f0d2037ffcf\") " pod="kube-system/cilium-sc4zf" Jan 29 17:00:53.812144 kubelet[2813]: I0129 17:00:53.810991 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e45d3d93-e139-4051-95ac-2f0d2037ffcf-host-proc-sys-net\") pod \"cilium-sc4zf\" (UID: \"e45d3d93-e139-4051-95ac-2f0d2037ffcf\") " pod="kube-system/cilium-sc4zf" Jan 29 17:00:53.812144 kubelet[2813]: I0129 17:00:53.811008 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e45d3d93-e139-4051-95ac-2f0d2037ffcf-cilium-ipsec-secrets\") pod \"cilium-sc4zf\" (UID: \"e45d3d93-e139-4051-95ac-2f0d2037ffcf\") " pod="kube-system/cilium-sc4zf" Jan 29 17:00:53.812385 kubelet[2813]: I0129 17:00:53.811021 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e45d3d93-e139-4051-95ac-2f0d2037ffcf-host-proc-sys-kernel\") pod \"cilium-sc4zf\" (UID: \"e45d3d93-e139-4051-95ac-2f0d2037ffcf\") " pod="kube-system/cilium-sc4zf"
Jan 29 17:00:53.812385 kubelet[2813]: I0129 17:00:53.811036 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e45d3d93-e139-4051-95ac-2f0d2037ffcf-bpf-maps\") pod \"cilium-sc4zf\" (UID: \"e45d3d93-e139-4051-95ac-2f0d2037ffcf\") " pod="kube-system/cilium-sc4zf" Jan 29 17:00:53.812385 kubelet[2813]: I0129 17:00:53.811050 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e45d3d93-e139-4051-95ac-2f0d2037ffcf-cni-path\") pod \"cilium-sc4zf\" (UID: \"e45d3d93-e139-4051-95ac-2f0d2037ffcf\") " pod="kube-system/cilium-sc4zf" Jan 29 17:00:53.812385 kubelet[2813]: I0129 17:00:53.811066 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e45d3d93-e139-4051-95ac-2f0d2037ffcf-cilium-cgroup\") pod \"cilium-sc4zf\" (UID: \"e45d3d93-e139-4051-95ac-2f0d2037ffcf\") " pod="kube-system/cilium-sc4zf" Jan 29 17:00:53.812385 kubelet[2813]: I0129 17:00:53.811081 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e45d3d93-e139-4051-95ac-2f0d2037ffcf-clustermesh-secrets\") pod \"cilium-sc4zf\" (UID: \"e45d3d93-e139-4051-95ac-2f0d2037ffcf\") " pod="kube-system/cilium-sc4zf" Jan 29 17:00:53.812385 kubelet[2813]: I0129 17:00:53.811096 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e45d3d93-e139-4051-95ac-2f0d2037ffcf-cilium-run\") pod \"cilium-sc4zf\" (UID: \"e45d3d93-e139-4051-95ac-2f0d2037ffcf\") " pod="kube-system/cilium-sc4zf" Jan 29 17:00:53.812521 kubelet[2813]: I0129 17:00:53.811111 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e45d3d93-e139-4051-95ac-2f0d2037ffcf-hostproc\") pod \"cilium-sc4zf\" (UID: \"e45d3d93-e139-4051-95ac-2f0d2037ffcf\") " pod="kube-system/cilium-sc4zf" Jan 29 17:00:53.812521 kubelet[2813]: I0129 17:00:53.811128 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e45d3d93-e139-4051-95ac-2f0d2037ffcf-xtables-lock\") pod \"cilium-sc4zf\" (UID: \"e45d3d93-e139-4051-95ac-2f0d2037ffcf\") " pod="kube-system/cilium-sc4zf" Jan 29 17:00:53.812521 kubelet[2813]: I0129 17:00:53.811142 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9gjd\" (UniqueName: \"kubernetes.io/projected/e45d3d93-e139-4051-95ac-2f0d2037ffcf-kube-api-access-d9gjd\") pod \"cilium-sc4zf\" (UID: \"e45d3d93-e139-4051-95ac-2f0d2037ffcf\") " pod="kube-system/cilium-sc4zf" Jan 29 17:00:53.812521 kubelet[2813]: I0129 17:00:53.811159 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e45d3d93-e139-4051-95ac-2f0d2037ffcf-hubble-tls\") pod \"cilium-sc4zf\" (UID: \"e45d3d93-e139-4051-95ac-2f0d2037ffcf\") " pod="kube-system/cilium-sc4zf" Jan 29 17:00:53.819617 systemd[1]: Created slice kubepods-burstable-pode45d3d93_e139_4051_95ac_2f0d2037ffcf.slice - libcontainer container kubepods-burstable-pode45d3d93_e139_4051_95ac_2f0d2037ffcf.slice.
Jan 29 17:00:53.939894 sshd[4571]: Connection closed by 147.75.109.163 port 41398 Jan 29 17:00:53.947262 sshd-session[4567]: pam_unix(sshd:session): session closed for user core Jan 29 17:00:53.978135 systemd[1]: sshd@21-168.119.110.78:22-147.75.109.163:41398.service: Deactivated successfully. Jan 29 17:00:53.980530 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 17:00:53.982998 systemd-logind[1509]: Session 21 logged out. Waiting for processes to exit. Jan 29 17:00:53.985399 systemd-logind[1509]: Removed session 21. Jan 29 17:00:54.115300 systemd[1]: Started sshd@22-168.119.110.78:22-147.75.109.163:41412.service - OpenSSH per-connection server daemon (147.75.109.163:41412). Jan 29 17:00:54.128518 containerd[1524]: time="2025-01-29T17:00:54.128026747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sc4zf,Uid:e45d3d93-e139-4051-95ac-2f0d2037ffcf,Namespace:kube-system,Attempt:0,}" Jan 29 17:00:54.179074 containerd[1524]: time="2025-01-29T17:00:54.178894112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 17:00:54.179279 containerd[1524]: time="2025-01-29T17:00:54.179104222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 17:00:54.179279 containerd[1524]: time="2025-01-29T17:00:54.179162803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 17:00:54.181715 containerd[1524]: time="2025-01-29T17:00:54.181585129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 17:00:54.214412 systemd[1]: Started cri-containerd-d6ca884d2cd8b2ce54eae023fe3e1a77f1e28a98ff6904520fe753f99a34b5f6.scope - libcontainer container d6ca884d2cd8b2ce54eae023fe3e1a77f1e28a98ff6904520fe753f99a34b5f6. Jan 29 17:00:54.253852 containerd[1524]: time="2025-01-29T17:00:54.253643590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sc4zf,Uid:e45d3d93-e139-4051-95ac-2f0d2037ffcf,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6ca884d2cd8b2ce54eae023fe3e1a77f1e28a98ff6904520fe753f99a34b5f6\"" Jan 29 17:00:54.258117 containerd[1524]: time="2025-01-29T17:00:54.258075909Z" level=info msg="CreateContainer within sandbox \"d6ca884d2cd8b2ce54eae023fe3e1a77f1e28a98ff6904520fe753f99a34b5f6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 17:00:54.276337 containerd[1524]: time="2025-01-29T17:00:54.276140238Z" level=info msg="CreateContainer within sandbox \"d6ca884d2cd8b2ce54eae023fe3e1a77f1e28a98ff6904520fe753f99a34b5f6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2c7469476ab39fceea4acbb1efc066b23c8d53a8d1c99c3f5a961ccad4306562\"" Jan 29 17:00:54.277869 containerd[1524]: time="2025-01-29T17:00:54.277825263Z" level=info msg="StartContainer for \"2c7469476ab39fceea4acbb1efc066b23c8d53a8d1c99c3f5a961ccad4306562\"" Jan 29 17:00:54.316145 systemd[1]: Started cri-containerd-2c7469476ab39fceea4acbb1efc066b23c8d53a8d1c99c3f5a961ccad4306562.scope - libcontainer container 2c7469476ab39fceea4acbb1efc066b23c8d53a8d1c99c3f5a961ccad4306562. 
Jan 29 17:00:54.355471 containerd[1524]: time="2025-01-29T17:00:54.355405453Z" level=info msg="StartContainer for \"2c7469476ab39fceea4acbb1efc066b23c8d53a8d1c99c3f5a961ccad4306562\" returns successfully" Jan 29 17:00:54.369909 systemd[1]: cri-containerd-2c7469476ab39fceea4acbb1efc066b23c8d53a8d1c99c3f5a961ccad4306562.scope: Deactivated successfully. Jan 29 17:00:54.415129 containerd[1524]: time="2025-01-29T17:00:54.415026684Z" level=info msg="shim disconnected" id=2c7469476ab39fceea4acbb1efc066b23c8d53a8d1c99c3f5a961ccad4306562 namespace=k8s.io Jan 29 17:00:54.415129 containerd[1524]: time="2025-01-29T17:00:54.415093421Z" level=warning msg="cleaning up after shim disconnected" id=2c7469476ab39fceea4acbb1efc066b23c8d53a8d1c99c3f5a961ccad4306562 namespace=k8s.io Jan 29 17:00:54.415129 containerd[1524]: time="2025-01-29T17:00:54.415105373Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 17:00:54.475579 kubelet[2813]: E0129 17:00:54.475501 2813 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 17:00:55.072378 containerd[1524]: time="2025-01-29T17:00:55.072063304Z" level=info msg="CreateContainer within sandbox \"d6ca884d2cd8b2ce54eae023fe3e1a77f1e28a98ff6904520fe753f99a34b5f6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 17:00:55.099620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4288384520.mount: Deactivated successfully. Jan 29 17:00:55.102687 containerd[1524]: time="2025-01-29T17:00:55.102521607Z" level=info msg="CreateContainer within sandbox \"d6ca884d2cd8b2ce54eae023fe3e1a77f1e28a98ff6904520fe753f99a34b5f6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ce10ffad559259ddd6f24113d3c424d5d86f78331d1ac3e10f94c93a9ba6ce74\"" Jan 29 17:00:55.105856 containerd[1524]: time="2025-01-29T17:00:55.105154244Z" level=info msg="StartContainer for \"ce10ffad559259ddd6f24113d3c424d5d86f78331d1ac3e10f94c93a9ba6ce74\"" Jan 29 17:00:55.139013 sshd[4589]: Accepted publickey for core from 147.75.109.163 port 41412 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:55.148385 sshd-session[4589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:55.163038 systemd-logind[1509]: New session 22 of user core. Jan 29 17:00:55.169559 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 17:00:55.194174 systemd[1]: Started cri-containerd-ce10ffad559259ddd6f24113d3c424d5d86f78331d1ac3e10f94c93a9ba6ce74.scope - libcontainer container ce10ffad559259ddd6f24113d3c424d5d86f78331d1ac3e10f94c93a9ba6ce74. Jan 29 17:00:55.243381 containerd[1524]: time="2025-01-29T17:00:55.243325055Z" level=info msg="StartContainer for \"ce10ffad559259ddd6f24113d3c424d5d86f78331d1ac3e10f94c93a9ba6ce74\" returns successfully" Jan 29 17:00:55.258755 systemd[1]: cri-containerd-ce10ffad559259ddd6f24113d3c424d5d86f78331d1ac3e10f94c93a9ba6ce74.scope: Deactivated successfully. Jan 29 17:00:55.259366 systemd[1]: cri-containerd-ce10ffad559259ddd6f24113d3c424d5d86f78331d1ac3e10f94c93a9ba6ce74.scope: Consumed 33ms CPU time, 6.5M memory peak, 1.2M read from disk. 
Jan 29 17:00:55.288527 containerd[1524]: time="2025-01-29T17:00:55.288463191Z" level=info msg="shim disconnected" id=ce10ffad559259ddd6f24113d3c424d5d86f78331d1ac3e10f94c93a9ba6ce74 namespace=k8s.io Jan 29 17:00:55.288527 containerd[1524]: time="2025-01-29T17:00:55.288514628Z" level=warning msg="cleaning up after shim disconnected" id=ce10ffad559259ddd6f24113d3c424d5d86f78331d1ac3e10f94c93a9ba6ce74 namespace=k8s.io Jan 29 17:00:55.288527 containerd[1524]: time="2025-01-29T17:00:55.288522813Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 17:00:55.818751 sshd[4710]: Connection closed by 147.75.109.163 port 41412 Jan 29 17:00:55.819477 sshd-session[4589]: pam_unix(sshd:session): session closed for user core Jan 29 17:00:55.823959 systemd[1]: sshd@22-168.119.110.78:22-147.75.109.163:41412.service: Deactivated successfully. Jan 29 17:00:55.826919 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 17:00:55.828253 systemd-logind[1509]: Session 22 logged out. Waiting for processes to exit. Jan 29 17:00:55.829180 systemd-logind[1509]: Removed session 22. Jan 29 17:00:55.927091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce10ffad559259ddd6f24113d3c424d5d86f78331d1ac3e10f94c93a9ba6ce74-rootfs.mount: Deactivated successfully. Jan 29 17:00:55.997405 systemd[1]: Started sshd@23-168.119.110.78:22-147.75.109.163:41418.service - OpenSSH per-connection server daemon (147.75.109.163:41418). Jan 29 17:00:56.083729 containerd[1524]: time="2025-01-29T17:00:56.083510295Z" level=info msg="CreateContainer within sandbox \"d6ca884d2cd8b2ce54eae023fe3e1a77f1e28a98ff6904520fe753f99a34b5f6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 17:00:56.130160 containerd[1524]: time="2025-01-29T17:00:56.130101469Z" level=info msg="CreateContainer within sandbox \"d6ca884d2cd8b2ce54eae023fe3e1a77f1e28a98ff6904520fe753f99a34b5f6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f99a5705cef2679f528761d4c8c215fab64f98edd51d46bb7e74b18dcf54e397\"" Jan 29 17:00:56.132884 containerd[1524]: time="2025-01-29T17:00:56.131033632Z" level=info msg="StartContainer for \"f99a5705cef2679f528761d4c8c215fab64f98edd51d46bb7e74b18dcf54e397\"" Jan 29 17:00:56.187121 systemd[1]: Started cri-containerd-f99a5705cef2679f528761d4c8c215fab64f98edd51d46bb7e74b18dcf54e397.scope - libcontainer container f99a5705cef2679f528761d4c8c215fab64f98edd51d46bb7e74b18dcf54e397. Jan 29 17:00:56.236318 containerd[1524]: time="2025-01-29T17:00:56.236259446Z" level=info msg="StartContainer for \"f99a5705cef2679f528761d4c8c215fab64f98edd51d46bb7e74b18dcf54e397\" returns successfully" Jan 29 17:00:56.245187 systemd[1]: cri-containerd-f99a5705cef2679f528761d4c8c215fab64f98edd51d46bb7e74b18dcf54e397.scope: Deactivated successfully. 
Jan 29 17:00:56.284186 containerd[1524]: time="2025-01-29T17:00:56.284091250Z" level=info msg="shim disconnected" id=f99a5705cef2679f528761d4c8c215fab64f98edd51d46bb7e74b18dcf54e397 namespace=k8s.io Jan 29 17:00:56.284186 containerd[1524]: time="2025-01-29T17:00:56.284157476Z" level=warning msg="cleaning up after shim disconnected" id=f99a5705cef2679f528761d4c8c215fab64f98edd51d46bb7e74b18dcf54e397 namespace=k8s.io Jan 29 17:00:56.284186 containerd[1524]: time="2025-01-29T17:00:56.284168958Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 17:00:56.301761 containerd[1524]: time="2025-01-29T17:00:56.301602126Z" level=warning msg="cleanup warnings time=\"2025-01-29T17:00:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 17:00:56.926440 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f99a5705cef2679f528761d4c8c215fab64f98edd51d46bb7e74b18dcf54e397-rootfs.mount: Deactivated successfully. Jan 29 17:00:56.999219 sshd[4759]: Accepted publickey for core from 147.75.109.163 port 41418 ssh2: RSA SHA256:D+5b1Cj+O9aygK8hV9APwXirePIoadNYH1E5gWBrjGw Jan 29 17:00:57.002565 sshd-session[4759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 17:00:57.012093 systemd-logind[1509]: New session 23 of user core. Jan 29 17:00:57.020285 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 17:00:57.132443 containerd[1524]: time="2025-01-29T17:00:57.132113265Z" level=info msg="CreateContainer within sandbox \"d6ca884d2cd8b2ce54eae023fe3e1a77f1e28a98ff6904520fe753f99a34b5f6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 17:00:57.172536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2491622617.mount: Deactivated successfully. Jan 29 17:00:57.173318 containerd[1524]: time="2025-01-29T17:00:57.173238280Z" level=info msg="CreateContainer within sandbox \"d6ca884d2cd8b2ce54eae023fe3e1a77f1e28a98ff6904520fe753f99a34b5f6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ecf56056bd8be6e4a1118d0157ab49b20aaae5e48938f8b295ef43a89fe08029\"" Jan 29 17:00:57.176214 containerd[1524]: time="2025-01-29T17:00:57.175744520Z" level=info msg="StartContainer for \"ecf56056bd8be6e4a1118d0157ab49b20aaae5e48938f8b295ef43a89fe08029\"" Jan 29 17:00:57.239520 systemd[1]: Started cri-containerd-ecf56056bd8be6e4a1118d0157ab49b20aaae5e48938f8b295ef43a89fe08029.scope - libcontainer container ecf56056bd8be6e4a1118d0157ab49b20aaae5e48938f8b295ef43a89fe08029. Jan 29 17:00:57.286818 systemd[1]: cri-containerd-ecf56056bd8be6e4a1118d0157ab49b20aaae5e48938f8b295ef43a89fe08029.scope: Deactivated successfully. 
Jan 29 17:00:57.288088 containerd[1524]: time="2025-01-29T17:00:57.287549820Z" level=info msg="StartContainer for \"ecf56056bd8be6e4a1118d0157ab49b20aaae5e48938f8b295ef43a89fe08029\" returns successfully" Jan 29 17:00:57.329458 containerd[1524]: time="2025-01-29T17:00:57.329375449Z" level=info msg="shim disconnected" id=ecf56056bd8be6e4a1118d0157ab49b20aaae5e48938f8b295ef43a89fe08029 namespace=k8s.io Jan 29 17:00:57.329458 containerd[1524]: time="2025-01-29T17:00:57.329443338Z" level=warning msg="cleaning up after shim disconnected" id=ecf56056bd8be6e4a1118d0157ab49b20aaae5e48938f8b295ef43a89fe08029 namespace=k8s.io Jan 29 17:00:57.329458 containerd[1524]: time="2025-01-29T17:00:57.329456593Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 17:00:57.928245 systemd[1]: run-containerd-runc-k8s.io-ecf56056bd8be6e4a1118d0157ab49b20aaae5e48938f8b295ef43a89fe08029-runc.UC8wPa.mount: Deactivated successfully. Jan 29 17:00:57.928512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ecf56056bd8be6e4a1118d0157ab49b20aaae5e48938f8b295ef43a89fe08029-rootfs.mount: Deactivated successfully. Jan 29 17:00:58.137787 containerd[1524]: time="2025-01-29T17:00:58.137587216Z" level=info msg="CreateContainer within sandbox \"d6ca884d2cd8b2ce54eae023fe3e1a77f1e28a98ff6904520fe753f99a34b5f6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 17:00:58.178420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1856345619.mount: Deactivated successfully. Jan 29 17:00:58.183644 containerd[1524]: time="2025-01-29T17:00:58.183465422Z" level=info msg="CreateContainer within sandbox \"d6ca884d2cd8b2ce54eae023fe3e1a77f1e28a98ff6904520fe753f99a34b5f6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a0a046a3994f4ee5963d4367bdb3b8fa3e91d6737ea19b42b1e237bc483876ad\"" Jan 29 17:00:58.184585 containerd[1524]: time="2025-01-29T17:00:58.184552421Z" level=info msg="StartContainer for \"a0a046a3994f4ee5963d4367bdb3b8fa3e91d6737ea19b42b1e237bc483876ad\"" Jan 29 17:00:58.201110 kubelet[2813]: I0129 17:00:58.201056 2813 setters.go:602] "Node became not ready" node="ci-4230-0-0-c-e7d65f4211" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T17:00:58Z","lastTransitionTime":"2025-01-29T17:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 29 17:00:58.269162 systemd[1]: Started cri-containerd-a0a046a3994f4ee5963d4367bdb3b8fa3e91d6737ea19b42b1e237bc483876ad.scope - libcontainer container a0a046a3994f4ee5963d4367bdb3b8fa3e91d6737ea19b42b1e237bc483876ad. Jan 29 17:00:58.323996 containerd[1524]: time="2025-01-29T17:00:58.323221557Z" level=info msg="StartContainer for \"a0a046a3994f4ee5963d4367bdb3b8fa3e91d6737ea19b42b1e237bc483876ad\" returns successfully" Jan 29 17:00:58.925823 systemd[1]: run-containerd-runc-k8s.io-a0a046a3994f4ee5963d4367bdb3b8fa3e91d6737ea19b42b1e237bc483876ad-runc.h0dtNQ.mount: Deactivated successfully. 
Jan 29 17:00:58.963041 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 29 17:01:02.291876 systemd-networkd[1425]: lxc_health: Link UP Jan 29 17:01:02.296186 systemd-networkd[1425]: lxc_health: Gained carrier Jan 29 17:01:04.015167 systemd-networkd[1425]: lxc_health: Gained IPv6LL Jan 29 17:01:04.156414 kubelet[2813]: I0129 17:01:04.156341 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sc4zf" podStartSLOduration=11.156316408 podStartE2EDuration="11.156316408s" podCreationTimestamp="2025-01-29 17:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 17:00:59.16703575 +0000 UTC m=+224.976258831" watchObservedRunningTime="2025-01-29 17:01:04.156316408 +0000 UTC m=+229.965539489" Jan 29 17:01:06.533521 kubelet[2813]: E0129 17:01:06.533464 2813 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:37392->127.0.0.1:39563: write tcp 127.0.0.1:37392->127.0.0.1:39563: write: broken pipe Jan 29 17:01:08.862464 sshd[4818]: Connection closed by 147.75.109.163 port 41418 Jan 29 17:01:08.866208 sshd-session[4759]: pam_unix(sshd:session): session closed for user core Jan 29 17:01:08.880221 systemd[1]: sshd@23-168.119.110.78:22-147.75.109.163:41418.service: Deactivated successfully. Jan 29 17:01:08.886056 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 17:01:08.888247 systemd-logind[1509]: Session 23 logged out. Waiting for processes to exit. Jan 29 17:01:08.891231 systemd-logind[1509]: Removed session 23. Jan 29 17:01:14.384043 containerd[1524]: time="2025-01-29T17:01:14.383989151Z" level=info msg="StopPodSandbox for \"ec4967c37363fc492e82b4ef86ea0968c0dddb27693abed6d781f27fc895c15a\"" Jan 29 17:01:14.386445 containerd[1524]: time="2025-01-29T17:01:14.384093400Z" level=info msg="TearDown network for sandbox \"ec4967c37363fc492e82b4ef86ea0968c0dddb27693abed6d781f27fc895c15a\" successfully" Jan 29 17:01:14.386445 containerd[1524]: time="2025-01-29T17:01:14.384106245Z" level=info msg="StopPodSandbox for \"ec4967c37363fc492e82b4ef86ea0968c0dddb27693abed6d781f27fc895c15a\" returns successfully" Jan 29 17:01:14.386445 containerd[1524]: time="2025-01-29T17:01:14.384474419Z" level=info msg="RemovePodSandbox for \"ec4967c37363fc492e82b4ef86ea0968c0dddb27693abed6d781f27fc895c15a\"" Jan 29 17:01:14.386445 containerd[1524]: time="2025-01-29T17:01:14.384512982Z" level=info msg="Forcibly stopping sandbox \"ec4967c37363fc492e82b4ef86ea0968c0dddb27693abed6d781f27fc895c15a\"" Jan 29 17:01:14.386445 containerd[1524]: time="2025-01-29T17:01:14.384585191Z" level=info msg="TearDown network for sandbox \"ec4967c37363fc492e82b4ef86ea0968c0dddb27693abed6d781f27fc895c15a\" successfully" Jan 29 17:01:14.392347 containerd[1524]: time="2025-01-29T17:01:14.392297377Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec4967c37363fc492e82b4ef86ea0968c0dddb27693abed6d781f27fc895c15a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 17:01:14.392420 containerd[1524]: time="2025-01-29T17:01:14.392361429Z" level=info msg="RemovePodSandbox \"ec4967c37363fc492e82b4ef86ea0968c0dddb27693abed6d781f27fc895c15a\" returns successfully" Jan 29 17:01:14.392996 containerd[1524]: time="2025-01-29T17:01:14.392862136Z" level=info msg="StopPodSandbox for \"1a5d08e622168bdb282f7e5dd7b99c19f5d53b10933e743cddc603e27d184c0c\"" Jan 29 17:01:14.392996 containerd[1524]: time="2025-01-29T17:01:14.392947399Z" level=info msg="TearDown network for sandbox \"1a5d08e622168bdb282f7e5dd7b99c19f5d53b10933e743cddc603e27d184c0c\" successfully" Jan 29 17:01:14.392996 containerd[1524]: time="2025-01-29T17:01:14.392957018Z" level=info msg="StopPodSandbox for \"1a5d08e622168bdb282f7e5dd7b99c19f5d53b10933e743cddc603e27d184c0c\" returns successfully" Jan 29 17:01:14.394857 containerd[1524]: time="2025-01-29T17:01:14.393240731Z" level=info msg="RemovePodSandbox for \"1a5d08e622168bdb282f7e5dd7b99c19f5d53b10933e743cddc603e27d184c0c\"" Jan 29 17:01:14.394857 containerd[1524]: time="2025-01-29T17:01:14.393258113Z" level=info msg="Forcibly stopping sandbox \"1a5d08e622168bdb282f7e5dd7b99c19f5d53b10933e743cddc603e27d184c0c\"" Jan 29 17:01:14.394857 containerd[1524]: time="2025-01-29T17:01:14.393300074Z" level=info msg="TearDown network for sandbox \"1a5d08e622168bdb282f7e5dd7b99c19f5d53b10933e743cddc603e27d184c0c\" successfully" Jan 29 17:01:14.397618 containerd[1524]: time="2025-01-29T17:01:14.397546739Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1a5d08e622168bdb282f7e5dd7b99c19f5d53b10933e743cddc603e27d184c0c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 17:01:14.397618 containerd[1524]: time="2025-01-29T17:01:14.397578028Z" level=info msg="RemovePodSandbox \"1a5d08e622168bdb282f7e5dd7b99c19f5d53b10933e743cddc603e27d184c0c\" returns successfully" Jan 29 17:01:25.667415 systemd[1]: cri-containerd-76988c597e20b554cef21829ef4bb3e41378ec4d000912ef8d6bd420b32d5db9.scope: Deactivated successfully. Jan 29 17:01:25.668725 systemd[1]: cri-containerd-76988c597e20b554cef21829ef4bb3e41378ec4d000912ef8d6bd420b32d5db9.scope: Consumed 5.513s CPU time, 76M memory peak, 23.9M read from disk. Jan 29 17:01:25.705607 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76988c597e20b554cef21829ef4bb3e41378ec4d000912ef8d6bd420b32d5db9-rootfs.mount: Deactivated successfully. 
Jan 29 17:01:25.723254 containerd[1524]: time="2025-01-29T17:01:25.723086591Z" level=info msg="shim disconnected" id=76988c597e20b554cef21829ef4bb3e41378ec4d000912ef8d6bd420b32d5db9 namespace=k8s.io Jan 29 17:01:25.724168 containerd[1524]: time="2025-01-29T17:01:25.723788295Z" level=warning msg="cleaning up after shim disconnected" id=76988c597e20b554cef21829ef4bb3e41378ec4d000912ef8d6bd420b32d5db9 namespace=k8s.io Jan 29 17:01:25.724168 containerd[1524]: time="2025-01-29T17:01:25.723819023Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 17:01:26.099393 kubelet[2813]: E0129 17:01:26.099093 2813 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:54788->10.0.0.2:2379: read: connection timed out" Jan 29 17:01:26.208353 kubelet[2813]: I0129 17:01:26.207991 2813 scope.go:117] "RemoveContainer" containerID="76988c597e20b554cef21829ef4bb3e41378ec4d000912ef8d6bd420b32d5db9" Jan 29 17:01:26.215125 containerd[1524]: time="2025-01-29T17:01:26.215048174Z" level=info msg="CreateContainer within sandbox \"76b3343d7139f0e3c3242310319b19793cc6edb0736361de9d933665b29f6f19\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 29 17:01:26.244621 containerd[1524]: time="2025-01-29T17:01:26.244543434Z" level=info msg="CreateContainer within sandbox \"76b3343d7139f0e3c3242310319b19793cc6edb0736361de9d933665b29f6f19\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"c2378e37a0466b07054cf8833e426a057c739480b5cf3c9776331b144b4b4945\"" Jan 29 17:01:26.249595 containerd[1524]: time="2025-01-29T17:01:26.246986984Z" level=info msg="StartContainer for \"c2378e37a0466b07054cf8833e426a057c739480b5cf3c9776331b144b4b4945\"" Jan 29 17:01:26.248169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount140950256.mount: Deactivated successfully. Jan 29 17:01:26.303140 systemd[1]: Started cri-containerd-c2378e37a0466b07054cf8833e426a057c739480b5cf3c9776331b144b4b4945.scope - libcontainer container c2378e37a0466b07054cf8833e426a057c739480b5cf3c9776331b144b4b4945. Jan 29 17:01:26.375637 containerd[1524]: time="2025-01-29T17:01:26.375447568Z" level=info msg="StartContainer for \"c2378e37a0466b07054cf8833e426a057c739480b5cf3c9776331b144b4b4945\" returns successfully" Jan 29 17:01:31.155358 systemd[1]: cri-containerd-d66cecb75b1cd2abced8b4b74ebf0537086443767a8892e29b3c8514863479b4.scope: Deactivated successfully. Jan 29 17:01:31.156900 systemd[1]: cri-containerd-d66cecb75b1cd2abced8b4b74ebf0537086443767a8892e29b3c8514863479b4.scope: Consumed 3.445s CPU time, 33.7M memory peak, 12.3M read from disk. Jan 29 17:01:31.214387 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d66cecb75b1cd2abced8b4b74ebf0537086443767a8892e29b3c8514863479b4-rootfs.mount: Deactivated successfully. 
Jan 29 17:01:31.231055 containerd[1524]: time="2025-01-29T17:01:31.230915274Z" level=info msg="shim disconnected" id=d66cecb75b1cd2abced8b4b74ebf0537086443767a8892e29b3c8514863479b4 namespace=k8s.io Jan 29 17:01:31.232527 containerd[1524]: time="2025-01-29T17:01:31.231535261Z" level=warning msg="cleaning up after shim disconnected" id=d66cecb75b1cd2abced8b4b74ebf0537086443767a8892e29b3c8514863479b4 namespace=k8s.io Jan 29 17:01:31.232527 containerd[1524]: time="2025-01-29T17:01:31.231568105Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 17:01:31.751779 kubelet[2813]: E0129 17:01:31.737019 2813 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:54580->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4230-0-0-c-e7d65f4211.181f387acddf1292 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4230-0-0-c-e7d65f4211,UID:89c87bf03b17009a2599b1e5ba4fd5f4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4230-0-0-c-e7d65f4211,},FirstTimestamp:2025-01-29 17:01:21.27941493 +0000 UTC m=+247.088638031,LastTimestamp:2025-01-29 17:01:21.27941493 +0000 UTC m=+247.088638031,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-0-c-e7d65f4211,}" Jan 29 17:01:32.229373 kubelet[2813]: I0129 17:01:32.229282 2813 scope.go:117] "RemoveContainer" containerID="d66cecb75b1cd2abced8b4b74ebf0537086443767a8892e29b3c8514863479b4" Jan 29 17:01:32.233427 containerd[1524]: time="2025-01-29T17:01:32.233313642Z" level=info msg="CreateContainer within sandbox \"7751b8bdf9658985225445b73d636b31247bea86ffe199f227e51f70d68007ca\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 29 17:01:32.259903 containerd[1524]: time="2025-01-29T17:01:32.259705860Z" level=info msg="CreateContainer within sandbox \"7751b8bdf9658985225445b73d636b31247bea86ffe199f227e51f70d68007ca\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"9cf4c869cc49cd55f9edacfa1b1c395201b14c9372593f66cb501bee03fbee49\"" Jan 29 17:01:32.260840 containerd[1524]: time="2025-01-29T17:01:32.260772825Z" level=info msg="StartContainer for \"9cf4c869cc49cd55f9edacfa1b1c395201b14c9372593f66cb501bee03fbee49\"" Jan 29 17:01:32.323406 systemd[1]: Started cri-containerd-9cf4c869cc49cd55f9edacfa1b1c395201b14c9372593f66cb501bee03fbee49.scope - libcontainer container 9cf4c869cc49cd55f9edacfa1b1c395201b14c9372593f66cb501bee03fbee49. 
Jan 29 17:01:32.391657 containerd[1524]: time="2025-01-29T17:01:32.391615168Z" level=info msg="StartContainer for \"9cf4c869cc49cd55f9edacfa1b1c395201b14c9372593f66cb501bee03fbee49\" returns successfully" Jan 29 17:01:36.101341 kubelet[2813]: E0129 17:01:36.100897 2813 controller.go:195] "Failed to update lease" err="Put \"https://168.119.110.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-c-e7d65f4211?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 17:01:36.657058 kubelet[2813]: I0129 17:01:36.656978 2813 status_manager.go:890] "Failed to get status for pod" podUID="9c25746c358f5b8910a11c5979ae51c5" pod="kube-system/kube-controller-manager-ci-4230-0-0-c-e7d65f4211" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:54724->10.0.0.2:2379: read: connection timed out"