Jan 29 16:28:46.020633 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 14:51:22 -00 2025
Jan 29 16:28:46.020664 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:28:46.020681 kernel: BIOS-provided physical RAM map:
Jan 29 16:28:46.020691 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 16:28:46.020700 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 16:28:46.020710 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 16:28:46.020721 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Jan 29 16:28:46.020731 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Jan 29 16:28:46.020744 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 29 16:28:46.020754 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 29 16:28:46.020763 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 16:28:46.020773 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 16:28:46.020782 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 29 16:28:46.020792 kernel: NX (Execute Disable) protection: active
Jan 29 16:28:46.020807 kernel: APIC: Static calls initialized
Jan 29 16:28:46.020818 kernel: SMBIOS 3.0.0 present.
Jan 29 16:28:46.020829 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Jan 29 16:28:46.020839 kernel: Hypervisor detected: KVM
Jan 29 16:28:46.020850 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 16:28:46.020860 kernel: kvm-clock: using sched offset of 3116398286 cycles
Jan 29 16:28:46.020870 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 16:28:46.020882 kernel: tsc: Detected 2495.310 MHz processor
Jan 29 16:28:46.020893 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 16:28:46.020904 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 16:28:46.020918 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Jan 29 16:28:46.020929 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 16:28:46.020940 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 16:28:46.020950 kernel: Using GB pages for direct mapping
Jan 29 16:28:46.020961 kernel: ACPI: Early table checksum verification disabled
Jan 29 16:28:46.020972 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS )
Jan 29 16:28:46.020982 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:28:46.020993 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:28:46.021004 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:28:46.021018 kernel: ACPI: FACS 0x000000007CFE0000 000040
Jan 29 16:28:46.021029 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:28:46.021039 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:28:46.021050 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:28:46.021061 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:28:46.021071 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Jan 29 16:28:46.021082 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Jan 29 16:28:46.021101 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Jan 29 16:28:46.021112 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Jan 29 16:28:46.021123 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Jan 29 16:28:46.021135 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Jan 29 16:28:46.021146 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Jan 29 16:28:46.021157 kernel: No NUMA configuration found
Jan 29 16:28:46.021168 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Jan 29 16:28:46.021182 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Jan 29 16:28:46.021193 kernel: Zone ranges:
Jan 29 16:28:46.021205 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 16:28:46.021216 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Jan 29 16:28:46.021227 kernel: Normal empty
Jan 29 16:28:46.021238 kernel: Movable zone start for each node
Jan 29 16:28:46.021249 kernel: Early memory node ranges
Jan 29 16:28:46.021260 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 16:28:46.021283 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Jan 29 16:28:46.021298 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Jan 29 16:28:46.021309 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 16:28:46.021321 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 16:28:46.021332 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 29 16:28:46.021343 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 16:28:46.021354 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 16:28:46.021365 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 16:28:46.021376 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 16:28:46.021388 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 16:28:46.021399 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 16:28:46.021413 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 16:28:46.021425 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 16:28:46.024428 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 16:28:46.024470 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 29 16:28:46.024482 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 29 16:28:46.024493 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 16:28:46.024505 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 29 16:28:46.024516 kernel: Booting paravirtualized kernel on KVM
Jan 29 16:28:46.024528 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 16:28:46.024546 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 29 16:28:46.024557 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 29 16:28:46.024569 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 29 16:28:46.024580 kernel: pcpu-alloc: [0] 0 1
Jan 29 16:28:46.024592 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 29 16:28:46.024605 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:28:46.024617 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 16:28:46.024628 kernel: random: crng init done
Jan 29 16:28:46.024643 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 16:28:46.024655 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 29 16:28:46.024666 kernel: Fallback order for Node 0: 0
Jan 29 16:28:46.024677 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Jan 29 16:28:46.024689 kernel: Policy zone: DMA32
Jan 29 16:28:46.024700 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 16:28:46.024712 kernel: Memory: 1920004K/2047464K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43472K init, 1600K bss, 127200K reserved, 0K cma-reserved)
Jan 29 16:28:46.024723 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 16:28:46.024738 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 29 16:28:46.024749 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 16:28:46.024760 kernel: Dynamic Preempt: voluntary
Jan 29 16:28:46.024772 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 16:28:46.024784 kernel: rcu: RCU event tracing is enabled.
Jan 29 16:28:46.024795 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 16:28:46.024807 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 16:28:46.024818 kernel: Rude variant of Tasks RCU enabled.
Jan 29 16:28:46.024830 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 16:28:46.024841 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 16:28:46.024856 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 16:28:46.024867 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 29 16:28:46.024878 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 16:28:46.024890 kernel: Console: colour VGA+ 80x25
Jan 29 16:28:46.024901 kernel: printk: console [tty0] enabled
Jan 29 16:28:46.024912 kernel: printk: console [ttyS0] enabled
Jan 29 16:28:46.024924 kernel: ACPI: Core revision 20230628
Jan 29 16:28:46.024935 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 29 16:28:46.024947 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 16:28:46.024961 kernel: x2apic enabled
Jan 29 16:28:46.024973 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 16:28:46.024984 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 29 16:28:46.024995 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 29 16:28:46.025007 kernel: Calibrating delay loop (skipped) preset value.. 4990.62 BogoMIPS (lpj=2495310)
Jan 29 16:28:46.025018 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 29 16:28:46.025029 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 29 16:28:46.025041 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 29 16:28:46.025067 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 16:28:46.025079 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 16:28:46.025091 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 16:28:46.025103 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 16:28:46.025117 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 29 16:28:46.025129 kernel: RETBleed: Mitigation: untrained return thunk
Jan 29 16:28:46.025141 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 16:28:46.025153 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 16:28:46.025165 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 29 16:28:46.025181 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 29 16:28:46.025193 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 29 16:28:46.025205 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 16:28:46.025217 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 16:28:46.025229 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 16:28:46.025240 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 16:28:46.025252 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 29 16:28:46.025276 kernel: Freeing SMP alternatives memory: 32K
Jan 29 16:28:46.025292 kernel: pid_max: default: 32768 minimum: 301
Jan 29 16:28:46.025304 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 16:28:46.025316 kernel: landlock: Up and running.
Jan 29 16:28:46.025327 kernel: SELinux: Initializing.
Jan 29 16:28:46.025339 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 16:28:46.025351 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 16:28:46.025363 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 29 16:28:46.025375 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:28:46.025387 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:28:46.025402 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:28:46.025414 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 29 16:28:46.025426 kernel: ... version: 0
Jan 29 16:28:46.025453 kernel: ... bit width: 48
Jan 29 16:28:46.025465 kernel: ... generic registers: 6
Jan 29 16:28:46.025476 kernel: ... value mask: 0000ffffffffffff
Jan 29 16:28:46.025488 kernel: ... max period: 00007fffffffffff
Jan 29 16:28:46.025499 kernel: ... fixed-purpose events: 0
Jan 29 16:28:46.025511 kernel: ... event mask: 000000000000003f
Jan 29 16:28:46.025527 kernel: signal: max sigframe size: 1776
Jan 29 16:28:46.025538 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 16:28:46.025550 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 16:28:46.025562 kernel: smp: Bringing up secondary CPUs ...
Jan 29 16:28:46.025574 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 16:28:46.025585 kernel: .... node #0, CPUs: #1
Jan 29 16:28:46.025597 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 16:28:46.025609 kernel: smpboot: Max logical packages: 1
Jan 29 16:28:46.025621 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS)
Jan 29 16:28:46.025635 kernel: devtmpfs: initialized
Jan 29 16:28:46.025647 kernel: x86/mm: Memory block size: 128MB
Jan 29 16:28:46.025659 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 16:28:46.025671 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 16:28:46.025682 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 16:28:46.025694 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 16:28:46.025706 kernel: audit: initializing netlink subsys (disabled)
Jan 29 16:28:46.025718 kernel: audit: type=2000 audit(1738168124.802:1): state=initialized audit_enabled=0 res=1
Jan 29 16:28:46.025729 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 16:28:46.025744 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 16:28:46.025756 kernel: cpuidle: using governor menu
Jan 29 16:28:46.025768 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 16:28:46.025779 kernel: dca service started, version 1.12.1
Jan 29 16:28:46.025791 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 29 16:28:46.025803 kernel: PCI: Using configuration type 1 for base access
Jan 29 16:28:46.025815 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 16:28:46.025827 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 16:28:46.025838 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 16:28:46.025853 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 16:28:46.025865 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 16:28:46.025876 kernel: ACPI: Added _OSI(Module Device)
Jan 29 16:28:46.025888 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 16:28:46.025899 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 16:28:46.025911 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 16:28:46.025923 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 16:28:46.025935 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 16:28:46.025946 kernel: ACPI: Interpreter enabled
Jan 29 16:28:46.025961 kernel: ACPI: PM: (supports S0 S5)
Jan 29 16:28:46.025973 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 16:28:46.025985 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 16:28:46.025996 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 16:28:46.026008 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 29 16:28:46.026020 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 16:28:46.026252 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 16:28:46.027840 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 29 16:28:46.028028 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 29 16:28:46.028044 kernel: PCI host bridge to bus 0000:00
Jan 29 16:28:46.028217 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 16:28:46.028385 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 16:28:46.028562 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 16:28:46.028715 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Jan 29 16:28:46.028865 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 16:28:46.029024 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 29 16:28:46.029175 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 16:28:46.029378 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 29 16:28:46.031690 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Jan 29 16:28:46.031867 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Jan 29 16:28:46.032053 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Jan 29 16:28:46.032224 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Jan 29 16:28:46.032420 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Jan 29 16:28:46.032603 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 16:28:46.032778 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 29 16:28:46.032943 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Jan 29 16:28:46.033116 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 29 16:28:46.033296 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Jan 29 16:28:46.035571 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 29 16:28:46.035737 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Jan 29 16:28:46.035896 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 29 16:28:46.036045 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Jan 29 16:28:46.036211 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 29 16:28:46.036383 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Jan 29 16:28:46.036562 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 29 16:28:46.036712 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Jan 29 16:28:46.036870 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 29 16:28:46.037019 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Jan 29 16:28:46.037181 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 29 16:28:46.037359 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Jan 29 16:28:46.040985 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 29 16:28:46.041318 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Jan 29 16:28:46.041691 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 29 16:28:46.041992 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 29 16:28:46.042361 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 29 16:28:46.043521 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Jan 29 16:28:46.043661 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Jan 29 16:28:46.043795 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 29 16:28:46.043930 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 29 16:28:46.044094 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 29 16:28:46.044297 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Jan 29 16:28:46.044473 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 29 16:28:46.044606 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Jan 29 16:28:46.044738 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 29 16:28:46.044871 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 29 16:28:46.045002 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Jan 29 16:28:46.045139 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 29 16:28:46.045285 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Jan 29 16:28:46.045413 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 29 16:28:46.047596 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 29 16:28:46.047733 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 16:28:46.047906 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 29 16:28:46.048060 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Jan 29 16:28:46.048209 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Jan 29 16:28:46.048560 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 29 16:28:46.048874 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 29 16:28:46.049180 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 16:28:46.051558 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 29 16:28:46.051714 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 29 16:28:46.051842 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 29 16:28:46.051974 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 29 16:28:46.052096 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 16:28:46.052232 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 29 16:28:46.052387 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
Jan 29 16:28:46.052533 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Jan 29 16:28:46.052664 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 29 16:28:46.052786 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 29 16:28:46.052909 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 16:28:46.053045 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 29 16:28:46.053174 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Jan 29 16:28:46.053321 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Jan 29 16:28:46.055518 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 29 16:28:46.055667 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Jan 29 16:28:46.055792 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 16:28:46.055803 kernel: acpiphp: Slot [0] registered
Jan 29 16:28:46.055939 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 29 16:28:46.056067 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Jan 29 16:28:46.056194 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Jan 29 16:28:46.056339 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Jan 29 16:28:46.056478 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 29 16:28:46.056821 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 29 16:28:46.057173 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 16:28:46.057202 kernel: acpiphp: Slot [0-2] registered
Jan 29 16:28:46.057347 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 29 16:28:46.057800 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Jan 29 16:28:46.057930 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 16:28:46.057941 kernel: acpiphp: Slot [0-3] registered
Jan 29 16:28:46.058072 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 29 16:28:46.058210 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 29 16:28:46.058348 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 16:28:46.058359 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 16:28:46.058367 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 16:28:46.058376 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 16:28:46.058384 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 16:28:46.058392 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 29 16:28:46.058403 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 29 16:28:46.058412 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 29 16:28:46.058420 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 29 16:28:46.058427 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 29 16:28:46.058454 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 29 16:28:46.058462 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 29 16:28:46.058470 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 29 16:28:46.058478 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 29 16:28:46.058486 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 29 16:28:46.058497 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 29 16:28:46.058505 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 29 16:28:46.058513 kernel: iommu: Default domain type: Translated
Jan 29 16:28:46.058521 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 16:28:46.058529 kernel: PCI: Using ACPI for IRQ routing
Jan 29 16:28:46.058537 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 16:28:46.058546 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 16:28:46.058554 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Jan 29 16:28:46.058709 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 29 16:28:46.058838 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 29 16:28:46.058959 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 16:28:46.058970 kernel: vgaarb: loaded
Jan 29 16:28:46.058984 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 29 16:28:46.058993 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 29 16:28:46.059001 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 16:28:46.059009 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 16:28:46.059017 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 16:28:46.059025 kernel: pnp: PnP ACPI init
Jan 29 16:28:46.059165 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 29 16:28:46.059177 kernel: pnp: PnP ACPI: found 5 devices
Jan 29 16:28:46.059186 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 16:28:46.059194 kernel: NET: Registered PF_INET protocol family
Jan 29 16:28:46.059202 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 16:28:46.059210 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 29 16:28:46.059219 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 16:28:46.059227 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 16:28:46.059238 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 29 16:28:46.059246 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 29 16:28:46.059254 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 16:28:46.059272 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 16:28:46.059280 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 16:28:46.059289 kernel: NET: Registered PF_XDP protocol family
Jan 29 16:28:46.059414 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 29 16:28:46.059596 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 29 16:28:46.059726 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 29 16:28:46.059855 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Jan 29 16:28:46.059975 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Jan 29 16:28:46.060095 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Jan 29 16:28:46.060218 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 29 16:28:46.060352 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 29 16:28:46.060523 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Jan 29 16:28:46.060648 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 29 16:28:46.060773 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 29 16:28:46.060894 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 16:28:46.061015 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 29 16:28:46.061136 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 29 16:28:46.061256 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 16:28:46.061463 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 29 16:28:46.061628 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 29 16:28:46.061771 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 16:28:46.061896 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 29 16:28:46.062017 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 29 16:28:46.062137 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 16:28:46.062258 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 29 16:28:46.062392 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Jan 29 16:28:46.062528 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 16:28:46.062654 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 29 16:28:46.062782 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Jan 29 16:28:46.062908 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 29 16:28:46.063030 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 16:28:46.063183 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 29 16:28:46.063333 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Jan 29 16:28:46.063476 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Jan 29 16:28:46.063645 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 16:28:46.063791 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 29 16:28:46.063920 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Jan 29 16:28:46.064044 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 29 16:28:46.064168 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 16:28:46.064305 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 16:28:46.064427 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 16:28:46.064566 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 16:28:46.064681 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Jan 29 16:28:46.064821 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 29 16:28:46.064938 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 29 16:28:46.065067 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 29 16:28:46.065187 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Jan 29 16:28:46.065338 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 29 16:28:46.065507 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 16:28:46.065642 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 29 16:28:46.065761 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 16:28:46.065885 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 29 16:28:46.066005 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 16:28:46.066136 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 29 16:28:46.066254 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 16:28:46.066394 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 29 16:28:46.066641 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 16:28:46.066771 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Jan 29 16:28:46.066892 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 29 16:28:46.067009 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 16:28:46.067147 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Jan 29 16:28:46.067278 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Jan 29 16:28:46.067405 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 16:28:46.067548 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Jan 29 16:28:46.067752 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 29 16:28:46.067879 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 16:28:46.067895 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 29 16:28:46.067904 kernel: PCI: CLS 0 bytes, default 64
Jan 29 16:28:46.067912 kernel: Initialise system trusted keyrings
Jan 29 16:28:46.067921 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 29 16:28:46.067929 kernel: Key type asymmetric registered
Jan 29 16:28:46.067938 kernel: Asymmetric key parser 'x509' registered
Jan 29 16:28:46.067946 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 16:28:46.067955 kernel: io scheduler mq-deadline registered
Jan 29 16:28:46.067963 kernel: io scheduler kyber registered
Jan 29 16:28:46.067971 kernel: io scheduler bfq registered
Jan 29 16:28:46.068098 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Jan 29 16:28:46.068247 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Jan 29 16:28:46.068408 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Jan 29 16:28:46.068580 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Jan 29 16:28:46.068707 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jan 29 16:28:46.068830 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jan 29 16:28:46.068954 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jan 29 16:28:46.069077 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jan 29 16:28:46.069206 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jan 29 16:28:46.069341 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jan 29 16:28:46.069486 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jan 29 16:28:46.069631 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jan 29 16:28:46.069779 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jan 29 16:28:46.069910 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jan 29 16:28:46.070059 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jan 29 16:28:46.070223 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jan 29 16:28:46.070241 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 29 16:28:46.070400 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Jan 29 16:28:46.070594 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Jan 29 16:28:46.070614 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 16:28:46.070624 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Jan 29 16:28:46.070636 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 16:28:46.070650 kernel: 00:00: ttyS0 at I/O
0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 16:28:46.070664 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 16:28:46.070674 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 16:28:46.070686 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 16:28:46.070696 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 16:28:46.070845 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 29 16:28:46.070965 kernel: rtc_cmos 00:03: registered as rtc0 Jan 29 16:28:46.071091 kernel: rtc_cmos 00:03: setting system clock to 2025-01-29T16:28:45 UTC (1738168125) Jan 29 16:28:46.071214 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 29 16:28:46.071228 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 29 16:28:46.071238 kernel: NET: Registered PF_INET6 protocol family Jan 29 16:28:46.071253 kernel: Segment Routing with IPv6 Jan 29 16:28:46.071281 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 16:28:46.071290 kernel: NET: Registered PF_PACKET protocol family Jan 29 16:28:46.071298 kernel: Key type dns_resolver registered Jan 29 16:28:46.071307 kernel: IPI shorthand broadcast: enabled Jan 29 16:28:46.071315 kernel: sched_clock: Marking stable (1336010946, 170508835)->(1519914050, -13394269) Jan 29 16:28:46.071323 kernel: registered taskstats version 1 Jan 29 16:28:46.071332 kernel: Loading compiled-in X.509 certificates Jan 29 16:28:46.071341 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 68134fdf6dac3690da6e3bc9c22b042a5c364340' Jan 29 16:28:46.071352 kernel: Key type .fscrypt registered Jan 29 16:28:46.071360 kernel: Key type fscrypt-provisioning registered Jan 29 16:28:46.071369 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 29 16:28:46.071377 kernel: ima: Allocated hash algorithm: sha1 Jan 29 16:28:46.071387 kernel: ima: No architecture policies found Jan 29 16:28:46.071398 kernel: clk: Disabling unused clocks Jan 29 16:28:46.071409 kernel: Freeing unused kernel image (initmem) memory: 43472K Jan 29 16:28:46.071424 kernel: Write protecting the kernel read-only data: 38912k Jan 29 16:28:46.071449 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K Jan 29 16:28:46.071457 kernel: Run /init as init process Jan 29 16:28:46.071466 kernel: with arguments: Jan 29 16:28:46.071474 kernel: /init Jan 29 16:28:46.071482 kernel: with environment: Jan 29 16:28:46.071490 kernel: HOME=/ Jan 29 16:28:46.071498 kernel: TERM=linux Jan 29 16:28:46.071507 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 16:28:46.071516 systemd[1]: Successfully made /usr/ read-only. Jan 29 16:28:46.071531 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 16:28:46.071541 systemd[1]: Detected virtualization kvm. Jan 29 16:28:46.071549 systemd[1]: Detected architecture x86-64. Jan 29 16:28:46.071559 systemd[1]: Running in initrd. Jan 29 16:28:46.071575 systemd[1]: No hostname configured, using default hostname. Jan 29 16:28:46.071590 systemd[1]: Hostname set to . Jan 29 16:28:46.071600 systemd[1]: Initializing machine ID from VM UUID. Jan 29 16:28:46.071608 systemd[1]: Queued start job for default target initrd.target. Jan 29 16:28:46.071621 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:28:46.071630 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 29 16:28:46.071639 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 16:28:46.071648 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 16:28:46.071657 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 16:28:46.071667 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 16:28:46.071679 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 16:28:46.071688 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 16:28:46.071697 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:28:46.071706 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:28:46.071714 systemd[1]: Reached target paths.target - Path Units. Jan 29 16:28:46.071723 systemd[1]: Reached target slices.target - Slice Units. Jan 29 16:28:46.071732 systemd[1]: Reached target swap.target - Swaps. Jan 29 16:28:46.071740 systemd[1]: Reached target timers.target - Timer Units. Jan 29 16:28:46.071749 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 16:28:46.071760 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 16:28:46.071769 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 16:28:46.071778 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 29 16:28:46.071789 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:28:46.071798 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 16:28:46.071806 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 29 16:28:46.071815 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 16:28:46.071824 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 16:28:46.071836 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 16:28:46.071844 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 16:28:46.071853 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 16:28:46.071862 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 16:28:46.071871 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 16:28:46.071879 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:28:46.071888 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 16:28:46.071928 systemd-journald[189]: Collecting audit messages is disabled. Jan 29 16:28:46.071952 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:28:46.071962 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 16:28:46.071973 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 16:28:46.071982 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 16:28:46.071991 kernel: Bridge firewalling registered Jan 29 16:28:46.072000 systemd-journald[189]: Journal started Jan 29 16:28:46.072020 systemd-journald[189]: Runtime Journal (/run/log/journal/3d722a5ce42e45c98255d4bbb0c3c36e) is 4.8M, max 38.3M, 33.5M free. Jan 29 16:28:46.014127 systemd-modules-load[190]: Inserted module 'overlay' Jan 29 16:28:46.094252 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 16:28:46.063564 systemd-modules-load[190]: Inserted module 'br_netfilter' Jan 29 16:28:46.095394 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 29 16:28:46.096237 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:28:46.097255 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 16:28:46.104798 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 16:28:46.107635 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:28:46.110817 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 16:28:46.121375 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 16:28:46.126252 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:28:46.134889 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:28:46.141701 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 16:28:46.142606 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:28:46.144273 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:28:46.147596 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 16:28:46.164218 dracut-cmdline[226]: dracut-dracut-053 Jan 29 16:28:46.167629 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d Jan 29 16:28:46.183926 systemd-resolved[222]: Positive Trust Anchors: Jan 29 16:28:46.184627 systemd-resolved[222]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 16:28:46.184658 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 16:28:46.190142 systemd-resolved[222]: Defaulting to hostname 'linux'. Jan 29 16:28:46.191271 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 16:28:46.192029 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:28:46.258511 kernel: SCSI subsystem initialized Jan 29 16:28:46.268467 kernel: Loading iSCSI transport class v2.0-870. Jan 29 16:28:46.292499 kernel: iscsi: registered transport (tcp) Jan 29 16:28:46.327749 kernel: iscsi: registered transport (qla4xxx) Jan 29 16:28:46.327853 kernel: QLogic iSCSI HBA Driver Jan 29 16:28:46.409639 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 16:28:46.417708 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 16:28:46.456976 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 29 16:28:46.457073 kernel: device-mapper: uevent: version 1.0.3 Jan 29 16:28:46.459103 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 16:28:46.524563 kernel: raid6: avx2x4 gen() 18899 MB/s Jan 29 16:28:46.541536 kernel: raid6: avx2x2 gen() 30647 MB/s Jan 29 16:28:46.558709 kernel: raid6: avx2x1 gen() 25340 MB/s Jan 29 16:28:46.558811 kernel: raid6: using algorithm avx2x2 gen() 30647 MB/s Jan 29 16:28:46.577560 kernel: raid6: .... xor() 19875 MB/s, rmw enabled Jan 29 16:28:46.577667 kernel: raid6: using avx2x2 recovery algorithm Jan 29 16:28:46.599529 kernel: xor: automatically using best checksumming function avx Jan 29 16:28:46.798510 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 16:28:46.816596 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 16:28:46.829749 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:28:46.846863 systemd-udevd[408]: Using default interface naming scheme 'v255'. Jan 29 16:28:46.852682 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:28:46.861697 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 16:28:46.877608 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation Jan 29 16:28:46.917111 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 16:28:46.923644 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 16:28:47.000607 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:28:47.008736 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 16:28:47.023668 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 16:28:47.026119 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 29 16:28:47.028760 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:28:47.030155 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 16:28:47.038675 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 16:28:47.054981 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 16:28:47.102456 kernel: ACPI: bus type USB registered Jan 29 16:28:47.116570 kernel: usbcore: registered new interface driver usbfs Jan 29 16:28:47.116632 kernel: scsi host0: Virtio SCSI HBA Jan 29 16:28:47.117823 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 16:28:47.121012 kernel: usbcore: registered new interface driver hub Jan 29 16:28:47.121058 kernel: usbcore: registered new device driver usb Jan 29 16:28:47.134462 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 29 16:28:47.141274 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 16:28:47.141465 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:28:47.167350 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 16:28:47.169217 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:28:47.169381 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:28:47.170665 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:28:47.177663 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:28:47.179247 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:28:47.207463 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 16:28:47.207518 kernel: AES CTR mode by8 optimization enabled Jan 29 16:28:47.246394 kernel: libata version 3.00 loaded. 
Jan 29 16:28:47.257978 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:28:47.265319 kernel: ahci 0000:00:1f.2: version 3.0 Jan 29 16:28:47.326226 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 29 16:28:47.326259 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 29 16:28:47.326431 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 29 16:28:47.326596 kernel: scsi host1: ahci Jan 29 16:28:47.327052 kernel: scsi host2: ahci Jan 29 16:28:47.327285 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 29 16:28:47.327465 kernel: sd 0:0:0:0: Power-on or device reset occurred Jan 29 16:28:47.327663 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 29 16:28:47.327816 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 29 16:28:47.327976 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 29 16:28:47.328161 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 29 16:28:47.328333 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 29 16:28:47.328496 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Jan 29 16:28:47.328662 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 29 16:28:47.328806 kernel: scsi host3: ahci Jan 29 16:28:47.328952 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 29 16:28:47.329095 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 29 16:28:47.329265 kernel: hub 1-0:1.0: USB hub found Jan 29 16:28:47.329508 kernel: hub 1-0:1.0: 4 ports detected Jan 29 16:28:47.329669 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 16:28:47.329685 kernel: GPT:17805311 != 80003071 Jan 29 16:28:47.329695 kernel: GPT:Alternate GPT header not at the end of the disk. 
Jan 29 16:28:47.329705 kernel: GPT:17805311 != 80003071 Jan 29 16:28:47.329715 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 16:28:47.329725 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 16:28:47.329735 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 29 16:28:47.329946 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 29 16:28:47.330108 kernel: scsi host4: ahci Jan 29 16:28:47.330267 kernel: hub 2-0:1.0: USB hub found Jan 29 16:28:47.330486 kernel: scsi host5: ahci Jan 29 16:28:47.330667 kernel: hub 2-0:1.0: 4 ports detected Jan 29 16:28:47.330834 kernel: scsi host6: ahci Jan 29 16:28:47.330991 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48 Jan 29 16:28:47.331003 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48 Jan 29 16:28:47.331018 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48 Jan 29 16:28:47.331029 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48 Jan 29 16:28:47.331040 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48 Jan 29 16:28:47.331050 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48 Jan 29 16:28:47.271839 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 16:28:47.308456 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:28:47.356458 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (453) Jan 29 16:28:47.367450 kernel: BTRFS: device fsid b756ea5d-2d08-456f-8231-a684aa2555c3 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (458) Jan 29 16:28:47.376489 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 29 16:28:47.387094 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. 
Jan 29 16:28:47.397035 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 29 16:28:47.404920 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 29 16:28:47.405489 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 29 16:28:47.411551 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 16:28:47.418326 disk-uuid[574]: Primary Header is updated. Jan 29 16:28:47.418326 disk-uuid[574]: Secondary Entries is updated. Jan 29 16:28:47.418326 disk-uuid[574]: Secondary Header is updated. Jan 29 16:28:47.424460 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 16:28:47.432449 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 16:28:47.544736 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 29 16:28:47.637732 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 29 16:28:47.637796 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 29 16:28:47.637812 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 29 16:28:47.638451 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 29 16:28:47.639702 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 29 16:28:47.642849 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 29 16:28:47.642875 kernel: ata1.00: applying bridge limits Jan 29 16:28:47.644340 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 29 16:28:47.645593 kernel: ata1.00: configured for UDMA/100 Jan 29 16:28:47.647320 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 29 16:28:47.681315 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 29 16:28:47.696099 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 16:28:47.696121 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 16:28:47.696135 kernel: usbcore: registered new interface driver usbhid 
Jan 29 16:28:47.696150 kernel: usbhid: USB HID core driver Jan 29 16:28:47.696174 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 29 16:28:47.696189 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 29 16:28:47.696479 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jan 29 16:28:48.440671 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 16:28:48.442412 disk-uuid[575]: The operation has completed successfully. Jan 29 16:28:48.535710 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 16:28:48.535969 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 16:28:48.601572 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 16:28:48.607631 sh[594]: Success Jan 29 16:28:48.623471 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 29 16:28:48.719482 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 16:28:48.727724 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 16:28:48.729538 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 16:28:48.755606 kernel: BTRFS info (device dm-0): first mount of filesystem b756ea5d-2d08-456f-8231-a684aa2555c3 Jan 29 16:28:48.755665 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:28:48.758498 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 16:28:48.758551 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 16:28:48.759799 kernel: BTRFS info (device dm-0): using free space tree Jan 29 16:28:48.769471 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 16:28:48.771390 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Jan 29 16:28:48.772802 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 16:28:48.784696 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 16:28:48.787607 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 16:28:48.805394 kernel: BTRFS info (device sda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:28:48.805473 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:28:48.805491 kernel: BTRFS info (device sda6): using free space tree Jan 29 16:28:48.811948 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 16:28:48.812017 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 16:28:48.825284 kernel: BTRFS info (device sda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:28:48.824921 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 16:28:48.832961 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 16:28:48.838588 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 16:28:48.918483 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 16:28:48.927644 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 29 16:28:48.930043 ignition[692]: Ignition 2.20.0 Jan 29 16:28:48.930623 ignition[692]: Stage: fetch-offline Jan 29 16:28:48.930661 ignition[692]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:28:48.930670 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 16:28:48.930761 ignition[692]: parsed url from cmdline: "" Jan 29 16:28:48.930765 ignition[692]: no config URL provided Jan 29 16:28:48.930770 ignition[692]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 16:28:48.930778 ignition[692]: no config at "/usr/lib/ignition/user.ign" Jan 29 16:28:48.930784 ignition[692]: failed to fetch config: resource requires networking Jan 29 16:28:48.930938 ignition[692]: Ignition finished successfully Jan 29 16:28:48.935692 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 16:28:48.956688 systemd-networkd[780]: lo: Link UP Jan 29 16:28:48.956698 systemd-networkd[780]: lo: Gained carrier Jan 29 16:28:48.959784 systemd-networkd[780]: Enumeration completed Jan 29 16:28:48.959962 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 16:28:48.960820 systemd[1]: Reached target network.target - Network. Jan 29 16:28:48.960875 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:28:48.960879 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:28:48.961739 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:28:48.961743 systemd-networkd[780]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 29 16:28:48.965390 systemd-networkd[780]: eth0: Link UP
Jan 29 16:28:48.965394 systemd-networkd[780]: eth0: Gained carrier
Jan 29 16:28:48.965401 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:28:48.969888 systemd-networkd[780]: eth1: Link UP
Jan 29 16:28:48.969893 systemd-networkd[780]: eth1: Gained carrier
Jan 29 16:28:48.969904 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:28:48.971638 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 16:28:48.982645 ignition[785]: Ignition 2.20.0
Jan 29 16:28:48.983458 ignition[785]: Stage: fetch
Jan 29 16:28:48.983634 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:28:48.983645 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 16:28:48.983739 ignition[785]: parsed url from cmdline: ""
Jan 29 16:28:48.983743 ignition[785]: no config URL provided
Jan 29 16:28:48.983748 ignition[785]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 16:28:48.983757 ignition[785]: no config at "/usr/lib/ignition/user.ign"
Jan 29 16:28:48.983784 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jan 29 16:28:48.983957 ignition[785]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 29 16:28:49.017526 systemd-networkd[780]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 16:28:49.041538 systemd-networkd[780]: eth0: DHCPv4 address 159.69.241.25/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 29 16:28:49.185171 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jan 29 16:28:49.190950 ignition[785]: GET result: OK
Jan 29 16:28:49.191047 ignition[785]: parsing config with SHA512: 20f9f0848fb694d128799b1b2339c4a886f918fcb416811a6dc7d1b6f4bfa54333a9c452e6b88367845e5b29d0c4ca31e6695c4c72684d8e4329e52787f4fa3a
Jan 29 16:28:49.199393 unknown[785]: fetched base config from "system"
Jan 29 16:28:49.199414 unknown[785]: fetched base config from "system"
Jan 29 16:28:49.201590 ignition[785]: fetch: fetch complete
Jan 29 16:28:49.199428 unknown[785]: fetched user config from "hetzner"
Jan 29 16:28:49.201602 ignition[785]: fetch: fetch passed
Jan 29 16:28:49.201683 ignition[785]: Ignition finished successfully
Jan 29 16:28:49.208178 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 16:28:49.214683 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 16:28:49.261420 ignition[793]: Ignition 2.20.0
Jan 29 16:28:49.261484 ignition[793]: Stage: kargs
Jan 29 16:28:49.261784 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:28:49.266726 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 16:28:49.261807 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 16:28:49.263479 ignition[793]: kargs: kargs passed
Jan 29 16:28:49.263563 ignition[793]: Ignition finished successfully
Jan 29 16:28:49.281896 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 16:28:49.307359 ignition[800]: Ignition 2.20.0
Jan 29 16:28:49.307381 ignition[800]: Stage: disks
Jan 29 16:28:49.307789 ignition[800]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:28:49.307813 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 16:28:49.309619 ignition[800]: disks: disks passed
Jan 29 16:28:49.312721 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 16:28:49.309708 ignition[800]: Ignition finished successfully
Jan 29 16:28:49.315976 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 16:28:49.317650 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 16:28:49.319770 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:28:49.322120 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:28:49.324522 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:28:49.333724 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 16:28:49.373001 systemd-fsck[808]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 29 16:28:49.377920 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 16:28:49.758606 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 16:28:49.916459 kernel: EXT4-fs (sda9): mounted filesystem 93ea9bb6-d6ba-4a18-a828-f0002683a7b4 r/w with ordered data mode. Quota mode: none.
Jan 29 16:28:49.918140 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 16:28:49.920281 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:28:49.927547 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:28:49.938698 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 16:28:49.947782 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 29 16:28:49.949360 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 16:28:49.949432 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:28:49.958834 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 16:28:49.962979 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (816)
Jan 29 16:28:49.965477 kernel: BTRFS info (device sda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:28:49.965543 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:28:49.965581 kernel: BTRFS info (device sda6): using free space tree
Jan 29 16:28:49.981689 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 29 16:28:49.981787 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 16:28:49.985318 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:28:49.994630 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 16:28:50.045090 coreos-metadata[818]: Jan 29 16:28:50.044 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jan 29 16:28:50.046458 coreos-metadata[818]: Jan 29 16:28:50.046 INFO Fetch successful
Jan 29 16:28:50.049284 coreos-metadata[818]: Jan 29 16:28:50.047 INFO wrote hostname ci-4230-0-0-d-42684b3569 to /sysroot/etc/hostname
Jan 29 16:28:50.052791 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 16:28:50.054524 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 16:28:50.060638 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory
Jan 29 16:28:50.066362 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 16:28:50.073478 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 16:28:50.177796 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 16:28:50.182516 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 16:28:50.185581 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 16:28:50.193469 kernel: BTRFS info (device sda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:28:50.210627 systemd-networkd[780]: eth1: Gained IPv6LL
Jan 29 16:28:50.218462 ignition[932]: INFO : Ignition 2.20.0
Jan 29 16:28:50.218462 ignition[932]: INFO : Stage: mount
Jan 29 16:28:50.218462 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:28:50.218462 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 16:28:50.218462 ignition[932]: INFO : mount: mount passed
Jan 29 16:28:50.218462 ignition[932]: INFO : Ignition finished successfully
Jan 29 16:28:50.220152 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 16:28:50.226545 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 16:28:50.227241 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 16:28:50.466755 systemd-networkd[780]: eth0: Gained IPv6LL
Jan 29 16:28:50.755913 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 16:28:50.762982 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:28:50.800545 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (945)
Jan 29 16:28:50.807925 kernel: BTRFS info (device sda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:28:50.807974 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:28:50.811767 kernel: BTRFS info (device sda6): using free space tree
Jan 29 16:28:50.822313 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 29 16:28:50.822397 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 16:28:50.829755 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:28:50.868166 ignition[962]: INFO : Ignition 2.20.0
Jan 29 16:28:50.868166 ignition[962]: INFO : Stage: files
Jan 29 16:28:50.870708 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:28:50.870708 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 16:28:50.870708 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 16:28:50.875119 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 16:28:50.875119 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 16:28:50.875119 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 16:28:50.875119 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 16:28:50.875119 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 16:28:50.874796 unknown[962]: wrote ssh authorized keys file for user: core
Jan 29 16:28:50.884711 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 16:28:50.884711 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 29 16:28:51.507794 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 16:28:52.794930 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 16:28:52.799323 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 16:28:52.799323 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 16:28:52.799323 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 16:28:52.799323 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 16:28:52.799323 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 16:28:52.799323 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 16:28:52.799323 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 16:28:52.799323 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 16:28:52.799323 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 16:28:52.799323 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 16:28:52.799323 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 16:28:52.799323 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 16:28:52.799323 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 16:28:52.799323 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 29 16:28:53.451468 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 29 16:28:54.726993 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 16:28:54.726993 ignition[962]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 29 16:28:54.731495 ignition[962]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 16:28:54.731495 ignition[962]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 16:28:54.731495 ignition[962]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 29 16:28:54.731495 ignition[962]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 29 16:28:54.731495 ignition[962]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 29 16:28:54.731495 ignition[962]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 29 16:28:54.731495 ignition[962]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 29 16:28:54.731495 ignition[962]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 16:28:54.731495 ignition[962]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 16:28:54.731495 ignition[962]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 16:28:54.731495 ignition[962]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 16:28:54.731495 ignition[962]: INFO : files: files passed
Jan 29 16:28:54.731495 ignition[962]: INFO : Ignition finished successfully
Jan 29 16:28:54.733247 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 16:28:54.745777 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 16:28:54.750746 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 16:28:54.765811 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 16:28:54.765973 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 16:28:54.783709 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:28:54.783709 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:28:54.785996 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:28:54.789937 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 16:28:54.792312 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 16:28:54.800759 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 16:28:54.853358 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 16:28:54.853541 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 16:28:54.855594 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 16:28:54.857512 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 16:28:54.859527 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 16:28:54.871707 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 16:28:54.891323 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 16:28:54.900611 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 16:28:54.919534 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:28:54.920339 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:28:54.921130 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 16:28:54.921883 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 16:28:54.922050 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 16:28:54.924516 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 16:28:54.925499 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 16:28:54.926919 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 16:28:54.928541 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:28:54.930475 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 16:28:54.932126 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 16:28:54.933803 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:28:54.935709 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 16:28:54.937475 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 16:28:54.939123 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 16:28:54.940843 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 16:28:54.941029 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:28:54.943725 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:28:54.944756 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:28:54.946224 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 16:28:54.946385 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:28:54.947630 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 16:28:54.947787 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:28:54.950718 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 16:28:54.950885 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 16:28:54.951785 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 16:28:54.951983 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 16:28:54.953621 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 29 16:28:54.953826 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 16:28:54.963551 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 16:28:54.967738 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 16:28:54.971696 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 16:28:54.972081 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:28:54.974514 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 16:28:54.975407 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:28:54.985039 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 16:28:54.985252 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 16:28:54.997123 ignition[1015]: INFO : Ignition 2.20.0
Jan 29 16:28:54.997123 ignition[1015]: INFO : Stage: umount
Jan 29 16:28:54.997123 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:28:54.997123 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 16:28:55.001734 ignition[1015]: INFO : umount: umount passed
Jan 29 16:28:55.001734 ignition[1015]: INFO : Ignition finished successfully
Jan 29 16:28:54.998884 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 16:28:54.999004 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 16:28:55.001656 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 16:28:55.001747 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 16:28:55.003757 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 16:28:55.003808 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 16:28:55.004256 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 29 16:28:55.004300 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 29 16:28:55.006022 systemd[1]: Stopped target network.target - Network.
Jan 29 16:28:55.006748 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 16:28:55.006799 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 16:28:55.008603 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 16:28:55.009006 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 16:28:55.009088 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:28:55.011496 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 16:28:55.012125 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 16:28:55.012617 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 16:28:55.012667 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:28:55.013116 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 16:28:55.013153 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:28:55.014757 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 16:28:55.014817 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 16:28:55.015743 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 16:28:55.015791 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 16:28:55.016880 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 16:28:55.018751 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 16:28:55.021929 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 16:28:55.025004 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 16:28:55.025608 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 16:28:55.029739 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 29 16:28:55.030019 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 16:28:55.030140 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 16:28:55.032748 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 16:28:55.032858 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 16:28:55.034399 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 29 16:28:55.035551 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 16:28:55.035612 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:28:55.036689 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 16:28:55.036738 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 16:28:55.042542 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 16:28:55.043470 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 16:28:55.043523 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:28:55.044069 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 16:28:55.044122 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:28:55.045522 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 16:28:55.045576 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:28:55.046243 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 16:28:55.046288 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:28:55.047553 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:28:55.049668 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 29 16:28:55.049731 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:28:55.055762 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 16:28:55.055923 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:28:55.057818 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 16:28:55.057866 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:28:55.058640 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 16:28:55.058679 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:28:55.059700 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 16:28:55.059750 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:28:55.061223 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 16:28:55.061270 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:28:55.062287 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:28:55.062335 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:28:55.067578 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 16:28:55.068807 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 16:28:55.068861 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:28:55.070772 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 29 16:28:55.071466 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:28:55.072675 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 16:28:55.072723 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:28:55.073523 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:28:55.073568 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:28:55.075804 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 29 16:28:55.075863 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:28:55.076219 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 16:28:55.076318 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 16:28:55.077642 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 16:28:55.077734 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 16:28:55.079231 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 16:28:55.088797 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 16:28:55.095276 systemd[1]: Switching root.
Jan 29 16:28:55.159633 systemd-journald[189]: Journal stopped
Jan 29 16:28:56.604677 systemd-journald[189]: Received SIGTERM from PID 1 (systemd).
Jan 29 16:28:56.604766 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 16:28:56.604781 kernel: SELinux: policy capability open_perms=1
Jan 29 16:28:56.604792 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 16:28:56.604803 kernel: SELinux: policy capability always_check_network=0
Jan 29 16:28:56.604814 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 16:28:56.604830 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 16:28:56.604842 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 16:28:56.604853 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 16:28:56.604867 kernel: audit: type=1403 audit(1738168135.361:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 16:28:56.604879 systemd[1]: Successfully loaded SELinux policy in 70.789ms.
Jan 29 16:28:56.604898 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 26.640ms.
Jan 29 16:28:56.604910 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:28:56.604928 systemd[1]: Detected virtualization kvm.
Jan 29 16:28:56.604940 systemd[1]: Detected architecture x86-64.
Jan 29 16:28:56.604951 systemd[1]: Detected first boot.
Jan 29 16:28:56.604963 systemd[1]: Hostname set to .
Jan 29 16:28:56.604975 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 16:28:56.604989 zram_generator::config[1060]: No configuration found.
Jan 29 16:28:56.605003 kernel: Guest personality initialized and is inactive
Jan 29 16:28:56.605014 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jan 29 16:28:56.605036 kernel: Initialized host personality
Jan 29 16:28:56.605048 kernel: NET: Registered PF_VSOCK protocol family
Jan 29 16:28:56.605060 systemd[1]: Populated /etc with preset unit settings.
Jan 29 16:28:56.605073 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 29 16:28:56.605085 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 16:28:56.605100 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 16:28:56.605112 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 16:28:56.605124 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 16:28:56.605138 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 16:28:56.605150 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 16:28:56.605161 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 16:28:56.605174 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 16:28:56.605186 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 16:28:56.605199 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 16:28:56.605217 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 16:28:56.605234 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:28:56.605250 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:28:56.605263 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 16:28:56.605291 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 16:28:56.605307 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 16:28:56.605328 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:28:56.605344 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 16:28:56.605360 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:28:56.605377 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 16:28:56.605390 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 16:28:56.605404 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:28:56.605416 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 16:28:56.605503 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:28:56.605521 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:28:56.605533 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:28:56.605544 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:28:56.605557 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 16:28:56.605570 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 16:28:56.605587 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 29 16:28:56.605602 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:28:56.605620 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:28:56.605634 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:28:56.605646 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 16:28:56.605659 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 16:28:56.605672 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 16:28:56.605684 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 16:28:56.605697 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:28:56.605710 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 16:28:56.605724 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 16:28:56.605737 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 16:28:56.605750 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 16:28:56.605766 systemd[1]: Reached target machines.target - Containers.
Jan 29 16:28:56.605782 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 16:28:56.605799 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:28:56.605816 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:28:56.605829 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 16:28:56.605843 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:28:56.605862 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 16:28:56.605879 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:28:56.605895 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 16:28:56.605911 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:28:56.605923 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 16:28:56.605935 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 16:28:56.605948 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 16:28:56.605960 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 16:28:56.605975 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 16:28:56.605988 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:28:56.606000 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:28:56.606012 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:28:56.608474 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 16:28:56.608513 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 16:28:56.608527 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 29 16:28:56.608538 kernel: loop: module loaded
Jan 29 16:28:56.608551 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:28:56.608567 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 16:28:56.608582 systemd[1]: Stopped verity-setup.service.
Jan 29 16:28:56.608596 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:28:56.608608 kernel: ACPI: bus type drm_connector registered
Jan 29 16:28:56.608620 kernel: fuse: init (API version 7.39)
Jan 29 16:28:56.608632 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 16:28:56.608644 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 16:28:56.608656 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 16:28:56.608669 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 16:28:56.608681 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 16:28:56.608695 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 16:28:56.608738 systemd-journald[1137]: Collecting audit messages is disabled.
Jan 29 16:28:56.608765 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 16:28:56.608777 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:28:56.608789 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 16:28:56.608802 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 16:28:56.608814 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:28:56.608833 systemd-journald[1137]: Journal started
Jan 29 16:28:56.608859 systemd-journald[1137]: Runtime Journal (/run/log/journal/3d722a5ce42e45c98255d4bbb0c3c36e) is 4.8M, max 38.3M, 33.5M free.
Jan 29 16:28:56.200225 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 16:28:56.211171 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 29 16:28:56.212499 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 16:28:56.610632 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:28:56.620545 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:28:56.618277 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 16:28:56.619606 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 16:28:56.620372 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:28:56.621666 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:28:56.622609 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 16:28:56.623669 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 16:28:56.625509 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:28:56.625885 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:28:56.627084 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:28:56.627989 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 16:28:56.629131 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 16:28:56.630091 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 29 16:28:56.648123 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 16:28:56.658635 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 16:28:56.664579 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 16:28:56.665417 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 16:28:56.665467 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:28:56.669565 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 29 16:28:56.675664 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 16:28:56.678640 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 16:28:56.679454 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:28:56.686725 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 16:28:56.695624 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 16:28:56.696588 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:28:56.699351 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 16:28:56.699920 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:28:56.715143 systemd-journald[1137]: Time spent on flushing to /var/log/journal/3d722a5ce42e45c98255d4bbb0c3c36e is 72.300ms for 1146 entries.
Jan 29 16:28:56.715143 systemd-journald[1137]: System Journal (/var/log/journal/3d722a5ce42e45c98255d4bbb0c3c36e) is 8M, max 584.8M, 576.8M free.
Jan 29 16:28:56.817120 systemd-journald[1137]: Received client request to flush runtime journal.
Jan 29 16:28:56.817188 kernel: loop0: detected capacity change from 0 to 147912
Jan 29 16:28:56.708636 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:28:56.710600 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 16:28:56.717609 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 16:28:56.721367 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 16:28:56.728572 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 16:28:56.731515 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 16:28:56.780902 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 16:28:56.781700 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 16:28:56.788766 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 29 16:28:56.799766 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:28:56.812634 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 16:28:56.818940 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 16:28:56.830506 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 16:28:56.834188 udevadm[1196]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 29 16:28:56.853186 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:28:56.854485 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 29 16:28:56.862608 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Jan 29 16:28:56.862626 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Jan 29 16:28:56.869234 kernel: loop1: detected capacity change from 0 to 210664
Jan 29 16:28:56.872316 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:28:56.881584 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 16:28:56.927522 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 16:28:56.933680 kernel: loop2: detected capacity change from 0 to 138176
Jan 29 16:28:56.937664 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:28:56.954585 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Jan 29 16:28:56.954959 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Jan 29 16:28:56.963128 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:28:56.991679 kernel: loop3: detected capacity change from 0 to 8
Jan 29 16:28:57.014537 kernel: loop4: detected capacity change from 0 to 147912
Jan 29 16:28:57.046572 kernel: loop5: detected capacity change from 0 to 210664
Jan 29 16:28:57.076489 kernel: loop6: detected capacity change from 0 to 138176
Jan 29 16:28:57.107746 kernel: loop7: detected capacity change from 0 to 8
Jan 29 16:28:57.108766 (sd-merge)[1215]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jan 29 16:28:57.109427 (sd-merge)[1215]: Merged extensions into '/usr'.
Jan 29 16:28:57.113698 systemd[1]: Reload requested from client PID 1186 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 16:28:57.113716 systemd[1]: Reloading...
Jan 29 16:28:57.241833 zram_generator::config[1243]: No configuration found.
Jan 29 16:28:57.410520 ldconfig[1181]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 16:28:57.415642 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:28:57.490188 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 16:28:57.490461 systemd[1]: Reloading finished in 376 ms.
Jan 29 16:28:57.511081 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 16:28:57.512362 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 16:28:57.529030 systemd[1]: Starting ensure-sysext.service...
Jan 29 16:28:57.532574 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:28:57.558514 systemd[1]: Reload requested from client PID 1286 ('systemctl') (unit ensure-sysext.service)...
Jan 29 16:28:57.558651 systemd[1]: Reloading...
Jan 29 16:28:57.589080 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 16:28:57.589354 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 16:28:57.590394 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 16:28:57.591154 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
Jan 29 16:28:57.591305 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
Jan 29 16:28:57.595979 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 16:28:57.596082 systemd-tmpfiles[1287]: Skipping /boot
Jan 29 16:28:57.613430 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 16:28:57.613559 systemd-tmpfiles[1287]: Skipping /boot
Jan 29 16:28:57.672210 zram_generator::config[1322]: No configuration found.
Jan 29 16:28:57.786726 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:28:57.865161 systemd[1]: Reloading finished in 306 ms.
Jan 29 16:28:57.879345 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 16:28:57.880324 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:28:57.899621 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 16:28:57.912130 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 16:28:57.916817 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 16:28:57.929175 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:28:57.943826 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:28:57.950058 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 16:28:57.963631 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:28:57.963899 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:28:57.975720 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:28:57.979364 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:28:57.983536 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:28:57.984536 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:28:57.984654 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:28:57.984744 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:28:57.995111 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:28:57.997498 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:28:57.997817 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:28:57.997968 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:28:58.004780 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 16:28:58.005471 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:28:58.007903 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 16:28:58.009337 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:28:58.010496 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:28:58.031846 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:28:58.032122 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:28:58.041314 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:28:58.047786 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 16:28:58.048533 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:28:58.049651 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:28:58.049811 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:28:58.051174 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 16:28:58.051725 augenrules[1393]: No rules
Jan 29 16:28:58.054207 systemd-udevd[1371]: Using default interface naming scheme 'v255'.
Jan 29 16:28:58.054315 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 16:28:58.056655 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 16:28:58.057689 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:28:58.057902 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:28:58.061853 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:28:58.062072 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:28:58.062975 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:28:58.063195 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:28:58.064634 systemd[1]: Finished ensure-sysext.service.
Jan 29 16:28:58.072304 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:28:58.072393 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:28:58.089701 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 16:28:58.096766 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 16:28:58.099465 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 16:28:58.099734 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 16:28:58.112286 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 16:28:58.122070 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 16:28:58.122847 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 16:28:58.124908 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 16:28:58.129607 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:28:58.144619 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:28:58.292570 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 16:28:58.293229 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 16:28:58.300490 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 29 16:28:58.316928 systemd-networkd[1420]: lo: Link UP
Jan 29 16:28:58.316938 systemd-networkd[1420]: lo: Gained carrier
Jan 29 16:28:58.318103 systemd-networkd[1420]: Enumeration completed
Jan 29 16:28:58.318195 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:28:58.319115 systemd-resolved[1365]: Positive Trust Anchors:
Jan 29 16:28:58.319124 systemd-resolved[1365]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:28:58.319156 systemd-resolved[1365]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:28:58.327484 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 29 16:28:58.327614 systemd-resolved[1365]: Using system hostname 'ci-4230-0-0-d-42684b3569'.
Jan 29 16:28:58.336703 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 16:28:58.338120 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:28:58.339090 systemd[1]: Reached target network.target - Network.
Jan 29 16:28:58.340541 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:28:58.370097 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 29 16:28:58.395676 systemd-networkd[1420]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:28:58.395693 systemd-networkd[1420]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:28:58.400576 systemd-networkd[1420]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:28:58.400586 systemd-networkd[1420]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:28:58.401936 systemd-networkd[1420]: eth0: Link UP
Jan 29 16:28:58.401949 systemd-networkd[1420]: eth0: Gained carrier
Jan 29 16:28:58.401962 systemd-networkd[1420]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:28:58.407933 systemd-networkd[1420]: eth1: Link UP
Jan 29 16:28:58.407945 systemd-networkd[1420]: eth1: Gained carrier
Jan 29 16:28:58.407959 systemd-networkd[1420]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:28:58.408595 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1423)
Jan 29 16:28:58.448521 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 29 16:28:58.454666 systemd-networkd[1420]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 16:28:58.458473 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 16:28:58.459636 systemd-networkd[1420]: eth0: DHCPv4 address 159.69.241.25/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 29 16:28:58.461067 systemd-timesyncd[1405]: Network configuration changed, trying to establish connection.
Jan 29 16:28:58.461620 systemd-timesyncd[1405]: Network configuration changed, trying to establish connection.
Jan 29 16:28:58.462778 systemd-timesyncd[1405]: Network configuration changed, trying to establish connection.
Jan 29 16:28:58.463491 kernel: ACPI: button: Power Button [PWRF]
Jan 29 16:28:58.488920 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Jan 29 16:28:58.488986 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:28:58.489099 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:28:58.494880 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:28:58.498703 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:28:58.506616 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:28:58.507163 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:28:58.507195 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:28:58.507221 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 16:28:58.507233 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:28:58.512215 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:28:58.512485 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:28:58.517402 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:28:58.519379 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:28:58.526398 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 29 16:28:58.528712 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:28:58.528939 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:28:58.539660 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 16:28:58.540607 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:28:58.540667 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:28:58.560215 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 16:28:58.564328 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 29 16:28:58.578158 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Jan 29 16:28:58.578180 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 29 16:28:58.578374 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 29 16:28:58.578724 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 29 16:28:58.581454 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Jan 29 16:28:58.588742 kernel: EDAC MC: Ver: 3.0.0
Jan 29 16:28:58.599491 kernel: Console: switching to colour dummy device 80x25
Jan 29 16:28:58.600756 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:28:58.602713 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 29 16:28:58.602750 kernel: [drm] features: -context_init
Jan 29 16:28:58.606473 kernel: [drm] number of scanouts: 1
Jan 29 16:28:58.606526 kernel: [drm] number of cap sets: 0
Jan 29 16:28:58.614872 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Jan 29 16:28:58.619870 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:28:58.620381 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:28:58.626728 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 29 16:28:58.626788 kernel: Console: switching to colour frame buffer device 160x50
Jan 29 16:28:58.629805 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:28:58.634385 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 29 16:28:58.641888 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:28:58.642282 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:28:58.651662 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:28:58.733134 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:28:58.765645 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 16:28:58.780814 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 16:28:58.802782 lvm[1480]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 16:28:58.851370 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 16:28:58.853415 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:28:58.854670 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:28:58.854945 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 16:28:58.855148 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 16:28:58.855579 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 16:28:58.855894 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 16:28:58.856026 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 16:28:58.856131 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 16:28:58.856186 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:28:58.856312 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:28:58.858951 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 16:28:58.861711 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 16:28:58.868535 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 29 16:28:58.869772 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 29 16:28:58.870078 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 29 16:28:58.881676 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 16:28:58.883532 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 29 16:28:58.892705 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 16:28:58.896667 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 16:28:58.900832 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:28:58.904014 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:28:58.905127 lvm[1484]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 16:28:58.906041 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 16:28:58.907178 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 16:28:58.915717 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 16:28:58.927722 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 29 16:28:58.937725 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 16:28:58.950588 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 16:28:58.956352 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 16:28:58.957257 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 16:28:58.966683 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 16:28:58.972701 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 29 16:28:58.979572 jq[1490]: false
Jan 29 16:28:58.983773 coreos-metadata[1486]: Jan 29 16:28:58.982 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Jan 29 16:28:58.985175 coreos-metadata[1486]: Jan 29 16:28:58.984 INFO Fetch successful
Jan 29 16:28:58.985175 coreos-metadata[1486]: Jan 29 16:28:58.984 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Jan 29 16:28:58.984611 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Jan 29 16:28:58.987096 coreos-metadata[1486]: Jan 29 16:28:58.985 INFO Fetch successful
Jan 29 16:28:58.990605 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 16:28:59.000662 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 16:28:59.013554 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 16:28:59.017014 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 16:28:59.019394 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 16:28:59.020932 dbus-daemon[1487]: [system] SELinux support is enabled
Jan 29 16:28:59.021618 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 16:28:59.035763 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 16:28:59.036520 extend-filesystems[1491]: Found loop4
Jan 29 16:28:59.038055 extend-filesystems[1491]: Found loop5
Jan 29 16:28:59.038055 extend-filesystems[1491]: Found loop6
Jan 29 16:28:59.038055 extend-filesystems[1491]: Found loop7
Jan 29 16:28:59.038055 extend-filesystems[1491]: Found sda
Jan 29 16:28:59.038055 extend-filesystems[1491]: Found sda1
Jan 29 16:28:59.038055 extend-filesystems[1491]: Found sda2
Jan 29 16:28:59.038055 extend-filesystems[1491]: Found sda3
Jan 29 16:28:59.038055 extend-filesystems[1491]: Found usr
Jan 29 16:28:59.038055 extend-filesystems[1491]: Found sda4
Jan 29 16:28:59.038055 extend-filesystems[1491]: Found sda6
Jan 29 16:28:59.038055 extend-filesystems[1491]: Found sda7
Jan 29 16:28:59.038055 extend-filesystems[1491]: Found sda9
Jan 29 16:28:59.038055 extend-filesystems[1491]: Checking size of /dev/sda9
Jan 29 16:28:59.041024 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 16:28:59.052508 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 16:28:59.067503 jq[1507]: true
Jan 29 16:28:59.071762 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 16:28:59.072874 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 16:28:59.073297 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 16:28:59.073561 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 16:28:59.082477 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 16:28:59.084504 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 16:28:59.086403 extend-filesystems[1491]: Resized partition /dev/sda9
Jan 29 16:28:59.100507 extend-filesystems[1519]: resize2fs 1.47.1 (20-May-2024)
Jan 29 16:28:59.116618 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Jan 29 16:28:59.116696 jq[1518]: true
Jan 29 16:28:59.125212 update_engine[1506]: I20250129 16:28:59.122820 1506 main.cc:92] Flatcar Update Engine starting
Jan 29 16:28:59.121376 (ntainerd)[1520]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 16:28:59.133412 update_engine[1506]: I20250129 16:28:59.130204 1506 update_check_scheduler.cc:74] Next update check in 5m34s
Jan 29 16:28:59.147349 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 16:28:59.151380 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 16:28:59.151408 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 16:28:59.151989 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 16:28:59.152004 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 16:28:59.163716 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 16:28:59.173781 tar[1517]: linux-amd64/helm
Jan 29 16:28:59.257775 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 29 16:28:59.259598 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 29 16:28:59.304840 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1423)
Jan 29 16:28:59.384785 bash[1561]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 16:28:59.389528 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 29 16:28:59.397618 systemd-logind[1501]: New seat seat0.
Jan 29 16:28:59.403662 systemd[1]: Starting sshkeys.service...
Jan 29 16:28:59.414588 systemd-logind[1501]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 29 16:28:59.414618 systemd-logind[1501]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 29 16:28:59.421339 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 16:28:59.433629 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Jan 29 16:28:59.441035 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 29 16:28:59.450974 locksmithd[1539]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 29 16:28:59.453814 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 29 16:28:59.482072 extend-filesystems[1519]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jan 29 16:28:59.482072 extend-filesystems[1519]: old_desc_blocks = 1, new_desc_blocks = 5
Jan 29 16:28:59.482072 extend-filesystems[1519]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Jan 29 16:28:59.495199 extend-filesystems[1491]: Resized filesystem in /dev/sda9
Jan 29 16:28:59.495199 extend-filesystems[1491]: Found sr0
Jan 29 16:28:59.504264 coreos-metadata[1567]: Jan 29 16:28:59.487 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Jan 29 16:28:59.504264 coreos-metadata[1567]: Jan 29 16:28:59.489 INFO Fetch successful
Jan 29 16:28:59.484887 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 29 16:28:59.485159 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 29 16:28:59.493920 unknown[1567]: wrote ssh authorized keys file for user: core
Jan 29 16:28:59.546218 update-ssh-keys[1574]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 16:28:59.549849 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 29 16:28:59.553150 systemd[1]: Finished sshkeys.service.
Jan 29 16:28:59.561180 containerd[1520]: time="2025-01-29T16:28:59.561104416Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 29 16:28:59.624462 containerd[1520]: time="2025-01-29T16:28:59.623501087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:28:59.627680 containerd[1520]: time="2025-01-29T16:28:59.627652043Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:28:59.627680 containerd[1520]: time="2025-01-29T16:28:59.627678953Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 29 16:28:59.627733 containerd[1520]: time="2025-01-29T16:28:59.627693320Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 29 16:28:59.627877 containerd[1520]: time="2025-01-29T16:28:59.627860183Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 29 16:28:59.627901 containerd[1520]: time="2025-01-29T16:28:59.627880511Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 29 16:28:59.627977 containerd[1520]: time="2025-01-29T16:28:59.627947346Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:28:59.628000 containerd[1520]: time="2025-01-29T16:28:59.627976792Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:28:59.628230 containerd[1520]: time="2025-01-29T16:28:59.628211031Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:28:59.628261 containerd[1520]: time="2025-01-29T16:28:59.628228914Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 29 16:28:59.628261 containerd[1520]: time="2025-01-29T16:28:59.628241999Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:28:59.628261 containerd[1520]: time="2025-01-29T16:28:59.628250646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 29 16:28:59.628351 containerd[1520]: time="2025-01-29T16:28:59.628334933Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:28:59.628593 containerd[1520]: time="2025-01-29T16:28:59.628574774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:28:59.628749 containerd[1520]: time="2025-01-29T16:28:59.628731688Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:28:59.628778 containerd[1520]: time="2025-01-29T16:28:59.628747798Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 29 16:28:59.628853 containerd[1520]: time="2025-01-29T16:28:59.628837576Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 29 16:28:59.629638 containerd[1520]: time="2025-01-29T16:28:59.628897468Z" level=info msg="metadata content store policy set" policy=shared
Jan 29 16:28:59.634967 containerd[1520]: time="2025-01-29T16:28:59.633874123Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 29 16:28:59.634967 containerd[1520]: time="2025-01-29T16:28:59.633920009Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 29 16:28:59.634967 containerd[1520]: time="2025-01-29T16:28:59.633934085Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 29 16:28:59.634967 containerd[1520]: time="2025-01-29T16:28:59.633959102Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 29 16:28:59.634967 containerd[1520]: time="2025-01-29T16:28:59.634002543Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 29 16:28:59.634967 containerd[1520]: time="2025-01-29T16:28:59.634117790Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 29 16:28:59.634967 containerd[1520]: time="2025-01-29T16:28:59.634396963Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 29 16:28:59.634967 containerd[1520]: time="2025-01-29T16:28:59.634512811Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 29 16:28:59.634967 containerd[1520]: time="2025-01-29T16:28:59.634527498Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 29 16:28:59.634967 containerd[1520]: time="2025-01-29T16:28:59.634542476Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 29 16:28:59.634967 containerd[1520]: time="2025-01-29T16:28:59.634557184Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 29 16:28:59.634967 containerd[1520]: time="2025-01-29T16:28:59.634571992Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 29 16:28:59.634967 containerd[1520]: time="2025-01-29T16:28:59.634586358Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 29 16:28:59.634967 containerd[1520]: time="2025-01-29T16:28:59.634601437Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 29 16:28:59.635228 containerd[1520]: time="2025-01-29T16:28:59.634617608Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 29 16:28:59.635228 containerd[1520]: time="2025-01-29T16:28:59.634633778Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 29 16:28:59.635228 containerd[1520]: time="2025-01-29T16:28:59.634647604Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 29 16:28:59.635228 containerd[1520]: time="2025-01-29T16:28:59.634660357Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 29 16:28:59.635228 containerd[1520]: time="2025-01-29T16:28:59.634681076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 29 16:28:59.635228 containerd[1520]: time="2025-01-29T16:28:59.634692999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 29 16:28:59.635228 containerd[1520]: time="2025-01-29T16:28:59.634704700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 29 16:28:59.635228 containerd[1520]: time="2025-01-29T16:28:59.634716894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 29 16:28:59.635228 containerd[1520]: time="2025-01-29T16:28:59.634731531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 29 16:28:59.635228 containerd[1520]: time="2025-01-29T16:28:59.634743714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 29 16:28:59.635228 containerd[1520]: time="2025-01-29T16:28:59.634754374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 29 16:28:59.635228 containerd[1520]: time="2025-01-29T16:28:59.634765585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 29 16:28:59.635228 containerd[1520]: time="2025-01-29T16:28:59.634777717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 29 16:28:59.635228 containerd[1520]: time="2025-01-29T16:28:59.634792115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 29 16:28:59.635495 containerd[1520]: time="2025-01-29T16:28:59.634803516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 29 16:28:59.635495 containerd[1520]: time="2025-01-29T16:28:59.634813816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 29 16:28:59.635495 containerd[1520]: time="2025-01-29T16:28:59.634824295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 29 16:28:59.635495 containerd[1520]: time="2025-01-29T16:28:59.634838952Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 29 16:28:59.635495 containerd[1520]: time="2025-01-29T16:28:59.634856145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 29 16:28:59.635495 containerd[1520]: time="2025-01-29T16:28:59.634867766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 29 16:28:59.635495 containerd[1520]: time="2025-01-29T16:28:59.634877575Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 29 16:28:59.637078 containerd[1520]: time="2025-01-29T16:28:59.635639293Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 29 16:28:59.637078 containerd[1520]: time="2025-01-29T16:28:59.635661836Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 29 16:28:59.637078 containerd[1520]: time="2025-01-29T16:28:59.635673328Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 29 16:28:59.637078 containerd[1520]: time="2025-01-29T16:28:59.635740574Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 29 16:28:59.637078 containerd[1520]: time="2025-01-29T16:28:59.635751003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 29 16:28:59.637078 containerd[1520]: time="2025-01-29T16:28:59.635764168Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 29 16:28:59.637078 containerd[1520]: time="2025-01-29T16:28:59.635774197Z" level=info msg="NRI interface is disabled by configuration."
Jan 29 16:28:59.637078 containerd[1520]: time="2025-01-29T16:28:59.635786901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 29 16:28:59.637248 containerd[1520]: time="2025-01-29T16:28:59.636046387Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 29 16:28:59.637248 containerd[1520]: time="2025-01-29T16:28:59.636084048Z" level=info msg="Connect containerd service"
Jan 29 16:28:59.637248 containerd[1520]: time="2025-01-29T16:28:59.636112060Z" level=info msg="using legacy CRI server"
Jan 29 16:28:59.637248 containerd[1520]: time="2025-01-29T16:28:59.636118071Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 29 16:28:59.637248 containerd[1520]: time="2025-01-29T16:28:59.636205455Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 29 16:28:59.638213 containerd[1520]: time="2025-01-29T16:28:59.637873454Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 16:28:59.638447 containerd[1520]: time="2025-01-29T16:28:59.638403358Z" level=info msg="Start subscribing containerd event"
Jan 29 16:28:59.638926 containerd[1520]: time="2025-01-29T16:28:59.638747263Z" level=info msg="Start recovering state"
Jan 29 16:28:59.638926 containerd[1520]: time="2025-01-29T16:28:59.638800212Z" level=info msg="Start event monitor"
Jan 29 16:28:59.638926 containerd[1520]: time="2025-01-29T16:28:59.638818727Z" level=info msg="Start snapshots syncer"
Jan 29 16:28:59.638926 containerd[1520]: time="2025-01-29T16:28:59.638826111Z" level=info msg="Start cni network conf syncer for default"
Jan 29 16:28:59.638926 containerd[1520]: time="2025-01-29T16:28:59.638832923Z" level=info msg="Start streaming server"
Jan 29 16:28:59.639593 containerd[1520]: time="2025-01-29T16:28:59.639578372Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 29 16:28:59.639795 containerd[1520]: time="2025-01-29T16:28:59.639687517Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 29 16:28:59.640337 systemd[1]: Started containerd.service - containerd container runtime.
Jan 29 16:28:59.646028 containerd[1520]: time="2025-01-29T16:28:59.645150683Z" level=info msg="containerd successfully booted in 0.086909s"
Jan 29 16:28:59.663336 sshd_keygen[1524]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 29 16:28:59.687187 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 29 16:28:59.701702 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 29 16:28:59.709476 systemd[1]: issuegen.service: Deactivated successfully.
Jan 29 16:28:59.709729 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 29 16:28:59.721286 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 29 16:28:59.735665 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 29 16:28:59.749038 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 29 16:28:59.759025 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 29 16:28:59.761724 systemd[1]: Reached target getty.target - Login Prompts.
Jan 29 16:28:59.810623 systemd-networkd[1420]: eth1: Gained IPv6LL
Jan 29 16:28:59.813012 systemd-timesyncd[1405]: Network configuration changed, trying to establish connection.
Jan 29 16:28:59.817689 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 29 16:28:59.820830 systemd[1]: Reached target network-online.target - Network is Online.
Jan 29 16:28:59.835694 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:28:59.842548 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 29 16:28:59.890023 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 29 16:28:59.922322 tar[1517]: linux-amd64/LICENSE
Jan 29 16:28:59.922564 tar[1517]: linux-amd64/README.md
Jan 29 16:28:59.936788 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 29 16:29:00.131148 systemd-networkd[1420]: eth0: Gained IPv6LL
Jan 29 16:29:00.133199 systemd-timesyncd[1405]: Network configuration changed, trying to establish connection.
Jan 29 16:29:00.906636 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:29:00.911361 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 29 16:29:00.917374 systemd[1]: Startup finished in 1.490s (kernel) + 9.600s (initrd) + 5.624s (userspace) = 16.715s.
Jan 29 16:29:00.920370 (kubelet)[1617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:29:01.814474 kubelet[1617]: E0129 16:29:01.814365 1617 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:29:01.821618 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:29:01.821869 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:29:01.822572 systemd[1]: kubelet.service: Consumed 1.399s CPU time, 245.3M memory peak.
Jan 29 16:29:12.033030 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 29 16:29:12.039792 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:29:12.216038 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:29:12.226864 (kubelet)[1637]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:29:12.279433 kubelet[1637]: E0129 16:29:12.279297 1637 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:29:12.288917 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:29:12.289294 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:29:12.290043 systemd[1]: kubelet.service: Consumed 217ms CPU time, 97.8M memory peak.
Jan 29 16:29:22.533174 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 29 16:29:22.541976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:29:22.771516 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:29:22.788920 (kubelet)[1653]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:29:22.873724 kubelet[1653]: E0129 16:29:22.873606 1653 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:29:22.882111 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:29:22.882348 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:29:22.882961 systemd[1]: kubelet.service: Consumed 279ms CPU time, 97.8M memory peak.
Jan 29 16:29:31.018832 systemd-timesyncd[1405]: Contacted time server 213.172.105.106:123 (2.flatcar.pool.ntp.org). Jan 29 16:29:31.018936 systemd-timesyncd[1405]: Initial clock synchronization to Wed 2025-01-29 16:29:31.018638 UTC. Jan 29 16:29:31.019112 systemd-resolved[1365]: Clock change detected. Flushing caches. Jan 29 16:29:33.591685 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 16:29:33.598420 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:29:33.814866 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:29:33.825379 (kubelet)[1668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:29:33.866227 kubelet[1668]: E0129 16:29:33.866028 1668 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:29:33.874209 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:29:33.874415 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:29:33.874873 systemd[1]: kubelet.service: Consumed 238ms CPU time, 96.3M memory peak. Jan 29 16:29:44.091842 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 29 16:29:44.100385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:29:44.291117 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:29:44.295705 (kubelet)[1686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:29:44.335868 kubelet[1686]: E0129 16:29:44.335793 1686 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:29:44.341691 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:29:44.341984 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:29:44.342532 systemd[1]: kubelet.service: Consumed 213ms CPU time, 95.8M memory peak. Jan 29 16:29:44.805663 update_engine[1506]: I20250129 16:29:44.805377 1506 update_attempter.cc:509] Updating boot flags... Jan 29 16:29:44.910113 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1703) Jan 29 16:29:44.989109 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1706) Jan 29 16:29:54.591573 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 29 16:29:54.598455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:29:54.819445 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:29:54.824564 (kubelet)[1720]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:29:54.869476 kubelet[1720]: E0129 16:29:54.869373 1720 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:29:54.875385 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:29:54.875607 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:29:54.875985 systemd[1]: kubelet.service: Consumed 247ms CPU time, 95.5M memory peak. Jan 29 16:30:05.091729 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 29 16:30:05.098410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:30:05.279766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:30:05.283727 (kubelet)[1737]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:30:05.325532 kubelet[1737]: E0129 16:30:05.325449 1737 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:30:05.332187 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:30:05.332398 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:30:05.332821 systemd[1]: kubelet.service: Consumed 201ms CPU time, 97.2M memory peak. 
Jan 29 16:30:15.342031 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 29 16:30:15.354570 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:30:15.560360 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:30:15.560435 (kubelet)[1753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:30:15.631194 kubelet[1753]: E0129 16:30:15.630992 1753 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:30:15.634578 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:30:15.634963 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:30:15.635686 systemd[1]: kubelet.service: Consumed 241ms CPU time, 97.8M memory peak. Jan 29 16:30:25.841389 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 29 16:30:25.847345 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:30:26.058990 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:30:26.063892 (kubelet)[1769]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:30:26.103458 kubelet[1769]: E0129 16:30:26.103252 1769 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:30:26.110034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:30:26.110304 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:30:26.110795 systemd[1]: kubelet.service: Consumed 221ms CPU time, 97.7M memory peak. Jan 29 16:30:36.342188 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 29 16:30:36.354555 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:30:36.574706 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:30:36.587602 (kubelet)[1786]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:30:36.632266 kubelet[1786]: E0129 16:30:36.632110 1786 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:30:36.639937 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:30:36.640220 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:30:36.640634 systemd[1]: kubelet.service: Consumed 246ms CPU time, 97M memory peak. 
Jan 29 16:30:46.841581 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 29 16:30:46.850416 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:30:47.044281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:30:47.055514 (kubelet)[1803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:30:47.101882 kubelet[1803]: E0129 16:30:47.101642 1803 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:30:47.109174 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:30:47.109410 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:30:47.109837 systemd[1]: kubelet.service: Consumed 235ms CPU time, 95.8M memory peak. Jan 29 16:30:56.515045 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 16:30:56.522553 systemd[1]: Started sshd@0-159.69.241.25:22-147.75.109.163:38924.service - OpenSSH per-connection server daemon (147.75.109.163:38924). Jan 29 16:30:57.341705 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 29 16:30:57.353750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:30:57.527844 sshd[1812]: Accepted publickey for core from 147.75.109.163 port 38924 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME Jan 29 16:30:57.534354 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:30:57.557982 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jan 29 16:30:57.562531 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 16:30:57.571308 systemd-logind[1501]: New session 1 of user core. Jan 29 16:30:57.586500 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 16:30:57.591323 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:30:57.593211 (kubelet)[1822]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:30:57.607486 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 16:30:57.616621 (systemd)[1829]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 16:30:57.619958 systemd-logind[1501]: New session c1 of user core. Jan 29 16:30:57.638322 kubelet[1822]: E0129 16:30:57.638288 1822 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:30:57.643427 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:30:57.643637 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:30:57.644411 systemd[1]: kubelet.service: Consumed 236ms CPU time, 95.4M memory peak. Jan 29 16:30:57.804114 systemd[1829]: Queued start job for default target default.target. Jan 29 16:30:57.815306 systemd[1829]: Created slice app.slice - User Application Slice. Jan 29 16:30:57.815329 systemd[1829]: Reached target paths.target - Paths. Jan 29 16:30:57.815368 systemd[1829]: Reached target timers.target - Timers. Jan 29 16:30:57.816978 systemd[1829]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Jan 29 16:30:57.851046 systemd[1829]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 16:30:57.851379 systemd[1829]: Reached target sockets.target - Sockets. Jan 29 16:30:57.851473 systemd[1829]: Reached target basic.target - Basic System. Jan 29 16:30:57.851565 systemd[1829]: Reached target default.target - Main User Target. Jan 29 16:30:57.851630 systemd[1829]: Startup finished in 218ms. Jan 29 16:30:57.852510 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 16:30:57.864386 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 16:30:58.564648 systemd[1]: Started sshd@1-159.69.241.25:22-147.75.109.163:60910.service - OpenSSH per-connection server daemon (147.75.109.163:60910). Jan 29 16:30:59.590785 sshd[1843]: Accepted publickey for core from 147.75.109.163 port 60910 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME Jan 29 16:30:59.594001 sshd-session[1843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:30:59.603565 systemd-logind[1501]: New session 2 of user core. Jan 29 16:30:59.615436 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 16:31:00.285127 sshd[1845]: Connection closed by 147.75.109.163 port 60910 Jan 29 16:31:00.286333 sshd-session[1843]: pam_unix(sshd:session): session closed for user core Jan 29 16:31:00.290848 systemd[1]: sshd@1-159.69.241.25:22-147.75.109.163:60910.service: Deactivated successfully. Jan 29 16:31:00.294300 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 16:31:00.296653 systemd-logind[1501]: Session 2 logged out. Waiting for processes to exit. Jan 29 16:31:00.298671 systemd-logind[1501]: Removed session 2. Jan 29 16:31:00.466526 systemd[1]: Started sshd@2-159.69.241.25:22-147.75.109.163:60916.service - OpenSSH per-connection server daemon (147.75.109.163:60916). 
Jan 29 16:31:01.480459 sshd[1851]: Accepted publickey for core from 147.75.109.163 port 60916 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME Jan 29 16:31:01.483148 sshd-session[1851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:31:01.492573 systemd-logind[1501]: New session 3 of user core. Jan 29 16:31:01.500377 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 16:31:02.166051 sshd[1853]: Connection closed by 147.75.109.163 port 60916 Jan 29 16:31:02.167521 sshd-session[1851]: pam_unix(sshd:session): session closed for user core Jan 29 16:31:02.173506 systemd[1]: sshd@2-159.69.241.25:22-147.75.109.163:60916.service: Deactivated successfully. Jan 29 16:31:02.178687 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 16:31:02.181980 systemd-logind[1501]: Session 3 logged out. Waiting for processes to exit. Jan 29 16:31:02.185226 systemd-logind[1501]: Removed session 3. Jan 29 16:31:02.347544 systemd[1]: Started sshd@3-159.69.241.25:22-147.75.109.163:60924.service - OpenSSH per-connection server daemon (147.75.109.163:60924). Jan 29 16:31:03.355821 sshd[1859]: Accepted publickey for core from 147.75.109.163 port 60924 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME Jan 29 16:31:03.358908 sshd-session[1859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:31:03.367650 systemd-logind[1501]: New session 4 of user core. Jan 29 16:31:03.375384 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 16:31:04.050950 sshd[1861]: Connection closed by 147.75.109.163 port 60924 Jan 29 16:31:04.052263 sshd-session[1859]: pam_unix(sshd:session): session closed for user core Jan 29 16:31:04.061193 systemd-logind[1501]: Session 4 logged out. Waiting for processes to exit. Jan 29 16:31:04.062645 systemd[1]: sshd@3-159.69.241.25:22-147.75.109.163:60924.service: Deactivated successfully. 
Jan 29 16:31:04.067703 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 16:31:04.070026 systemd-logind[1501]: Removed session 4. Jan 29 16:31:04.230684 systemd[1]: Started sshd@4-159.69.241.25:22-147.75.109.163:60932.service - OpenSSH per-connection server daemon (147.75.109.163:60932). Jan 29 16:31:05.227970 sshd[1867]: Accepted publickey for core from 147.75.109.163 port 60932 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME Jan 29 16:31:05.231121 sshd-session[1867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:31:05.241975 systemd-logind[1501]: New session 5 of user core. Jan 29 16:31:05.253417 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 16:31:05.771963 sudo[1870]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 16:31:05.772748 sudo[1870]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:31:06.300457 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 16:31:06.302000 (dockerd)[1888]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 16:31:06.765350 dockerd[1888]: time="2025-01-29T16:31:06.765167659Z" level=info msg="Starting up" Jan 29 16:31:06.926881 dockerd[1888]: time="2025-01-29T16:31:06.926823103Z" level=info msg="Loading containers: start." Jan 29 16:31:07.141113 kernel: Initializing XFRM netlink socket Jan 29 16:31:07.285004 systemd-networkd[1420]: docker0: Link UP Jan 29 16:31:07.320060 dockerd[1888]: time="2025-01-29T16:31:07.319999476Z" level=info msg="Loading containers: done." Jan 29 16:31:07.336974 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck904986904-merged.mount: Deactivated successfully. 
Jan 29 16:31:07.350002 dockerd[1888]: time="2025-01-29T16:31:07.349903535Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 16:31:07.350256 dockerd[1888]: time="2025-01-29T16:31:07.350094274Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 29 16:31:07.350365 dockerd[1888]: time="2025-01-29T16:31:07.350321000Z" level=info msg="Daemon has completed initialization" Jan 29 16:31:07.399049 dockerd[1888]: time="2025-01-29T16:31:07.398824491Z" level=info msg="API listen on /run/docker.sock" Jan 29 16:31:07.399689 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 16:31:07.841312 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 29 16:31:07.848352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:31:08.030633 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:31:08.035568 (kubelet)[2085]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:31:08.084628 kubelet[2085]: E0129 16:31:08.084503 2085 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:31:08.089429 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:31:08.089703 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:31:08.090213 systemd[1]: kubelet.service: Consumed 218ms CPU time, 96M memory peak. 
Jan 29 16:31:08.814970 containerd[1520]: time="2025-01-29T16:31:08.814325553Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 16:31:09.473988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3328186983.mount: Deactivated successfully. Jan 29 16:31:10.595967 containerd[1520]: time="2025-01-29T16:31:10.595894091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:31:10.597100 containerd[1520]: time="2025-01-29T16:31:10.597056545Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677104" Jan 29 16:31:10.598287 containerd[1520]: time="2025-01-29T16:31:10.598240140Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:31:10.600946 containerd[1520]: time="2025-01-29T16:31:10.600892549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:31:10.602659 containerd[1520]: time="2025-01-29T16:31:10.601856612Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 1.787493386s" Jan 29 16:31:10.602659 containerd[1520]: time="2025-01-29T16:31:10.601885998Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 29 16:31:10.625340 containerd[1520]: 
time="2025-01-29T16:31:10.625306867Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 16:31:12.064206 containerd[1520]: time="2025-01-29T16:31:12.064131822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:31:12.065594 containerd[1520]: time="2025-01-29T16:31:12.065540816Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605765" Jan 29 16:31:12.067004 containerd[1520]: time="2025-01-29T16:31:12.066950562Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:31:12.072093 containerd[1520]: time="2025-01-29T16:31:12.070991930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:31:12.074341 containerd[1520]: time="2025-01-29T16:31:12.074317414Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.44883494s" Jan 29 16:31:12.074423 containerd[1520]: time="2025-01-29T16:31:12.074409140Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 29 16:31:12.108919 containerd[1520]: time="2025-01-29T16:31:12.108846839Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 
16:31:13.269118 containerd[1520]: time="2025-01-29T16:31:13.269016366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:31:13.270294 containerd[1520]: time="2025-01-29T16:31:13.270228932Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783084" Jan 29 16:31:13.271065 containerd[1520]: time="2025-01-29T16:31:13.271006916Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:31:13.274155 containerd[1520]: time="2025-01-29T16:31:13.274112743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:31:13.275446 containerd[1520]: time="2025-01-29T16:31:13.275321252Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.166433374s" Jan 29 16:31:13.275446 containerd[1520]: time="2025-01-29T16:31:13.275359386Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 29 16:31:13.300707 containerd[1520]: time="2025-01-29T16:31:13.300668589Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 16:31:14.393908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3862000442.mount: Deactivated successfully. 
Jan 29 16:31:14.766626 containerd[1520]: time="2025-01-29T16:31:14.766329577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:31:14.767414 containerd[1520]: time="2025-01-29T16:31:14.767367708Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058363" Jan 29 16:31:14.768729 containerd[1520]: time="2025-01-29T16:31:14.768675875Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:31:14.770798 containerd[1520]: time="2025-01-29T16:31:14.770778897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:31:14.771601 containerd[1520]: time="2025-01-29T16:31:14.771448982Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.470558959s" Jan 29 16:31:14.771601 containerd[1520]: time="2025-01-29T16:31:14.771484471Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 16:31:14.795747 containerd[1520]: time="2025-01-29T16:31:14.795476510Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 16:31:15.396340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2837886631.mount: Deactivated successfully. 
Jan 29 16:31:16.213415 containerd[1520]: time="2025-01-29T16:31:16.213354412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:31:16.214790 containerd[1520]: time="2025-01-29T16:31:16.214749894Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185841" Jan 29 16:31:16.216409 containerd[1520]: time="2025-01-29T16:31:16.216224207Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:31:16.220137 containerd[1520]: time="2025-01-29T16:31:16.219602086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:31:16.222178 containerd[1520]: time="2025-01-29T16:31:16.222150506Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.426634591s" Jan 29 16:31:16.222254 containerd[1520]: time="2025-01-29T16:31:16.222181465Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 16:31:16.246134 containerd[1520]: time="2025-01-29T16:31:16.246091965Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 16:31:16.797138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2472088355.mount: Deactivated successfully. 
Jan 29 16:31:16.807487 containerd[1520]: time="2025-01-29T16:31:16.807377876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:31:16.809273 containerd[1520]: time="2025-01-29T16:31:16.809189585Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322310" Jan 29 16:31:16.810718 containerd[1520]: time="2025-01-29T16:31:16.810599185Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:31:16.814031 containerd[1520]: time="2025-01-29T16:31:16.813954561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:31:16.815239 containerd[1520]: time="2025-01-29T16:31:16.815052834Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 568.928878ms" Jan 29 16:31:16.815239 containerd[1520]: time="2025-01-29T16:31:16.815127086Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 29 16:31:16.850766 containerd[1520]: time="2025-01-29T16:31:16.850682446Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 29 16:31:17.471801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1976726497.mount: Deactivated successfully. Jan 29 16:31:18.091720 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. 
Jan 29 16:31:18.100663 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:31:18.299345 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:31:18.300788 (kubelet)[2297]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:31:18.376362 kubelet[2297]: E0129 16:31:18.374433 2297 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:31:18.379953 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:31:18.380377 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:31:18.381165 systemd[1]: kubelet.service: Consumed 193ms CPU time, 95.4M memory peak. 
Jan 29 16:31:19.057450 containerd[1520]: time="2025-01-29T16:31:19.057370640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:31:19.058856 containerd[1520]: time="2025-01-29T16:31:19.058808670Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238651" Jan 29 16:31:19.060150 containerd[1520]: time="2025-01-29T16:31:19.060110759Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:31:19.063482 containerd[1520]: time="2025-01-29T16:31:19.063426098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:31:19.064991 containerd[1520]: time="2025-01-29T16:31:19.064940123Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.214201189s" Jan 29 16:31:19.064991 containerd[1520]: time="2025-01-29T16:31:19.064971563Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 29 16:31:21.729204 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:31:21.729843 systemd[1]: kubelet.service: Consumed 193ms CPU time, 95.4M memory peak. Jan 29 16:31:21.741534 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:31:21.786023 systemd[1]: Reload requested from client PID 2373 ('systemctl') (unit session-5.scope)... 
Jan 29 16:31:21.786052 systemd[1]: Reloading... Jan 29 16:31:21.950098 zram_generator::config[2427]: No configuration found. Jan 29 16:31:22.065008 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:31:22.180762 systemd[1]: Reloading finished in 393 ms. Jan 29 16:31:22.246413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:31:22.248389 (kubelet)[2464]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:31:22.257674 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:31:22.259195 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:31:22.259596 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:31:22.259655 systemd[1]: kubelet.service: Consumed 156ms CPU time, 85.2M memory peak. Jan 29 16:31:22.267563 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:31:22.423631 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:31:22.432751 (kubelet)[2478]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:31:22.478438 kubelet[2478]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:31:22.478438 kubelet[2478]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 29 16:31:22.478438 kubelet[2478]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:31:22.478988 kubelet[2478]: I0129 16:31:22.478486 2478 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:31:23.162445 kubelet[2478]: I0129 16:31:23.162341 2478 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 16:31:23.162445 kubelet[2478]: I0129 16:31:23.162399 2478 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:31:23.162807 kubelet[2478]: I0129 16:31:23.162744 2478 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 16:31:23.215137 kubelet[2478]: I0129 16:31:23.214527 2478 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:31:23.215738 kubelet[2478]: E0129 16:31:23.215438 2478 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://159.69.241.25:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 159.69.241.25:6443: connect: connection refused Jan 29 16:31:23.238363 kubelet[2478]: I0129 16:31:23.238286 2478 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 16:31:23.241902 kubelet[2478]: I0129 16:31:23.241819 2478 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:31:23.242283 kubelet[2478]: I0129 16:31:23.241888 2478 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-0-d-42684b3569","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 16:31:23.242444 kubelet[2478]: I0129 16:31:23.242300 2478 topology_manager.go:138] "Creating topology manager with none policy" Jan 
29 16:31:23.242444 kubelet[2478]: I0129 16:31:23.242319 2478 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 16:31:23.242569 kubelet[2478]: I0129 16:31:23.242550 2478 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:31:23.243798 kubelet[2478]: I0129 16:31:23.243759 2478 kubelet.go:400] "Attempting to sync node with API server" Jan 29 16:31:23.243798 kubelet[2478]: I0129 16:31:23.243793 2478 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:31:23.243952 kubelet[2478]: I0129 16:31:23.243835 2478 kubelet.go:312] "Adding apiserver pod source" Jan 29 16:31:23.243952 kubelet[2478]: I0129 16:31:23.243870 2478 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:31:23.249105 kubelet[2478]: W0129 16:31:23.248020 2478 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://159.69.241.25:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 159.69.241.25:6443: connect: connection refused Jan 29 16:31:23.249105 kubelet[2478]: E0129 16:31:23.248216 2478 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://159.69.241.25:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 159.69.241.25:6443: connect: connection refused Jan 29 16:31:23.249105 kubelet[2478]: W0129 16:31:23.248318 2478 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://159.69.241.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-d-42684b3569&limit=500&resourceVersion=0": dial tcp 159.69.241.25:6443: connect: connection refused Jan 29 16:31:23.249105 kubelet[2478]: E0129 16:31:23.248377 2478 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://159.69.241.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-d-42684b3569&limit=500&resourceVersion=0": dial tcp 159.69.241.25:6443: connect: connection refused Jan 29 16:31:23.249105 kubelet[2478]: I0129 16:31:23.248909 2478 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:31:23.251343 kubelet[2478]: I0129 16:31:23.251307 2478 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:31:23.251436 kubelet[2478]: W0129 16:31:23.251406 2478 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 16:31:23.252890 kubelet[2478]: I0129 16:31:23.252676 2478 server.go:1264] "Started kubelet" Jan 29 16:31:23.259975 kubelet[2478]: I0129 16:31:23.258899 2478 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:31:23.260162 kubelet[2478]: I0129 16:31:23.260060 2478 server.go:455] "Adding debug handlers to kubelet server" Jan 29 16:31:23.263440 kubelet[2478]: I0129 16:31:23.261389 2478 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:31:23.263440 kubelet[2478]: I0129 16:31:23.261754 2478 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:31:23.263440 kubelet[2478]: I0129 16:31:23.262882 2478 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:31:23.264224 kubelet[2478]: E0129 16:31:23.263691 2478 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://159.69.241.25:6443/api/v1/namespaces/default/events\": dial tcp 159.69.241.25:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-0-0-d-42684b3569.181f36d82b1fea49 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-0-d-42684b3569,UID:ci-4230-0-0-d-42684b3569,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-0-0-d-42684b3569,},FirstTimestamp:2025-01-29 16:31:23.252648521 +0000 UTC m=+0.814161403,LastTimestamp:2025-01-29 16:31:23.252648521 +0000 UTC m=+0.814161403,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-0-d-42684b3569,}" Jan 29 16:31:23.267415 kubelet[2478]: I0129 16:31:23.267380 2478 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 16:31:23.272271 kubelet[2478]: I0129 16:31:23.271786 2478 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 16:31:23.272271 kubelet[2478]: I0129 16:31:23.271862 2478 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:31:23.272775 kubelet[2478]: W0129 16:31:23.272730 2478 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://159.69.241.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.69.241.25:6443: connect: connection refused Jan 29 16:31:23.272878 kubelet[2478]: E0129 16:31:23.272857 2478 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://159.69.241.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.69.241.25:6443: connect: connection refused Jan 29 16:31:23.273029 kubelet[2478]: E0129 16:31:23.272994 2478 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.241.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-d-42684b3569?timeout=10s\": dial tcp 159.69.241.25:6443: connect: connection refused" interval="200ms" Jan 29 16:31:23.273465 kubelet[2478]: I0129 
16:31:23.273445 2478 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:31:23.273699 kubelet[2478]: I0129 16:31:23.273659 2478 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:31:23.277391 kubelet[2478]: I0129 16:31:23.276790 2478 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:31:23.296444 kubelet[2478]: E0129 16:31:23.296410 2478 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:31:23.296560 kubelet[2478]: I0129 16:31:23.296489 2478 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:31:23.297690 kubelet[2478]: I0129 16:31:23.297672 2478 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 16:31:23.298104 kubelet[2478]: I0129 16:31:23.297769 2478 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:31:23.298104 kubelet[2478]: I0129 16:31:23.297792 2478 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 16:31:23.298104 kubelet[2478]: E0129 16:31:23.297834 2478 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:31:23.305660 kubelet[2478]: W0129 16:31:23.305622 2478 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://159.69.241.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 159.69.241.25:6443: connect: connection refused Jan 29 16:31:23.305770 kubelet[2478]: E0129 16:31:23.305759 2478 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://159.69.241.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 159.69.241.25:6443: connect: connection refused Jan 29 16:31:23.323736 kubelet[2478]: I0129 16:31:23.323697 2478 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 16:31:23.323736 kubelet[2478]: I0129 16:31:23.323716 2478 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 16:31:23.323736 kubelet[2478]: I0129 16:31:23.323733 2478 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:31:23.325913 kubelet[2478]: I0129 16:31:23.325889 2478 policy_none.go:49] "None policy: Start" Jan 29 16:31:23.326752 kubelet[2478]: I0129 16:31:23.326484 2478 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:31:23.326752 kubelet[2478]: I0129 16:31:23.326504 2478 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:31:23.332285 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 16:31:23.346205 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 16:31:23.351247 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 29 16:31:23.356273 kubelet[2478]: I0129 16:31:23.356241 2478 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:31:23.356482 kubelet[2478]: I0129 16:31:23.356438 2478 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:31:23.356567 kubelet[2478]: I0129 16:31:23.356547 2478 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:31:23.358335 kubelet[2478]: E0129 16:31:23.358223 2478 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-0-0-d-42684b3569\" not found" Jan 29 16:31:23.369986 kubelet[2478]: I0129 16:31:23.369934 2478 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-0-d-42684b3569" Jan 29 16:31:23.370334 kubelet[2478]: E0129 16:31:23.370277 2478 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://159.69.241.25:6443/api/v1/nodes\": dial tcp 159.69.241.25:6443: connect: connection refused" node="ci-4230-0-0-d-42684b3569" Jan 29 16:31:23.399102 kubelet[2478]: I0129 16:31:23.398802 2478 topology_manager.go:215] "Topology Admit Handler" podUID="9b33fa58ad12fed16e50e1e6c5e2e9f5" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-0-0-d-42684b3569" Jan 29 16:31:23.402055 kubelet[2478]: I0129 16:31:23.401990 2478 topology_manager.go:215] "Topology Admit Handler" podUID="83755f3207482317c03cb45dd83070db" podNamespace="kube-system" podName="kube-scheduler-ci-4230-0-0-d-42684b3569" Jan 29 16:31:23.405869 kubelet[2478]: I0129 16:31:23.405542 2478 topology_manager.go:215] "Topology Admit Handler" podUID="0dee9c6e4b445692c966bbc33ee1aac7" podNamespace="kube-system" podName="kube-apiserver-ci-4230-0-0-d-42684b3569" Jan 29 16:31:23.418692 systemd[1]: Created slice kubepods-burstable-pod9b33fa58ad12fed16e50e1e6c5e2e9f5.slice - libcontainer container 
kubepods-burstable-pod9b33fa58ad12fed16e50e1e6c5e2e9f5.slice. Jan 29 16:31:23.434270 systemd[1]: Created slice kubepods-burstable-pod83755f3207482317c03cb45dd83070db.slice - libcontainer container kubepods-burstable-pod83755f3207482317c03cb45dd83070db.slice. Jan 29 16:31:23.450447 systemd[1]: Created slice kubepods-burstable-pod0dee9c6e4b445692c966bbc33ee1aac7.slice - libcontainer container kubepods-burstable-pod0dee9c6e4b445692c966bbc33ee1aac7.slice. Jan 29 16:31:23.474006 kubelet[2478]: E0129 16:31:23.473884 2478 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.241.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-d-42684b3569?timeout=10s\": dial tcp 159.69.241.25:6443: connect: connection refused" interval="400ms" Jan 29 16:31:23.573183 kubelet[2478]: I0129 16:31:23.572656 2478 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b33fa58ad12fed16e50e1e6c5e2e9f5-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-0-d-42684b3569\" (UID: \"9b33fa58ad12fed16e50e1e6c5e2e9f5\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-d-42684b3569" Jan 29 16:31:23.573183 kubelet[2478]: I0129 16:31:23.572753 2478 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b33fa58ad12fed16e50e1e6c5e2e9f5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-0-d-42684b3569\" (UID: \"9b33fa58ad12fed16e50e1e6c5e2e9f5\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-d-42684b3569" Jan 29 16:31:23.573183 kubelet[2478]: I0129 16:31:23.572788 2478 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/83755f3207482317c03cb45dd83070db-kubeconfig\") pod 
\"kube-scheduler-ci-4230-0-0-d-42684b3569\" (UID: \"83755f3207482317c03cb45dd83070db\") " pod="kube-system/kube-scheduler-ci-4230-0-0-d-42684b3569" Jan 29 16:31:23.573183 kubelet[2478]: I0129 16:31:23.572817 2478 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0dee9c6e4b445692c966bbc33ee1aac7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-0-d-42684b3569\" (UID: \"0dee9c6e4b445692c966bbc33ee1aac7\") " pod="kube-system/kube-apiserver-ci-4230-0-0-d-42684b3569" Jan 29 16:31:23.573183 kubelet[2478]: I0129 16:31:23.572867 2478 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0dee9c6e4b445692c966bbc33ee1aac7-k8s-certs\") pod \"kube-apiserver-ci-4230-0-0-d-42684b3569\" (UID: \"0dee9c6e4b445692c966bbc33ee1aac7\") " pod="kube-system/kube-apiserver-ci-4230-0-0-d-42684b3569" Jan 29 16:31:23.573871 kubelet[2478]: I0129 16:31:23.572924 2478 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b33fa58ad12fed16e50e1e6c5e2e9f5-ca-certs\") pod \"kube-controller-manager-ci-4230-0-0-d-42684b3569\" (UID: \"9b33fa58ad12fed16e50e1e6c5e2e9f5\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-d-42684b3569" Jan 29 16:31:23.573871 kubelet[2478]: I0129 16:31:23.572952 2478 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b33fa58ad12fed16e50e1e6c5e2e9f5-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-0-d-42684b3569\" (UID: \"9b33fa58ad12fed16e50e1e6c5e2e9f5\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-d-42684b3569" Jan 29 16:31:23.573871 kubelet[2478]: I0129 16:31:23.572980 2478 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b33fa58ad12fed16e50e1e6c5e2e9f5-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-0-d-42684b3569\" (UID: \"9b33fa58ad12fed16e50e1e6c5e2e9f5\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-d-42684b3569" Jan 29 16:31:23.573871 kubelet[2478]: I0129 16:31:23.573007 2478 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0dee9c6e4b445692c966bbc33ee1aac7-ca-certs\") pod \"kube-apiserver-ci-4230-0-0-d-42684b3569\" (UID: \"0dee9c6e4b445692c966bbc33ee1aac7\") " pod="kube-system/kube-apiserver-ci-4230-0-0-d-42684b3569" Jan 29 16:31:23.573871 kubelet[2478]: I0129 16:31:23.573851 2478 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-0-d-42684b3569" Jan 29 16:31:23.574466 kubelet[2478]: E0129 16:31:23.574421 2478 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://159.69.241.25:6443/api/v1/nodes\": dial tcp 159.69.241.25:6443: connect: connection refused" node="ci-4230-0-0-d-42684b3569" Jan 29 16:31:23.729455 containerd[1520]: time="2025-01-29T16:31:23.729217989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-0-d-42684b3569,Uid:9b33fa58ad12fed16e50e1e6c5e2e9f5,Namespace:kube-system,Attempt:0,}" Jan 29 16:31:23.748641 containerd[1520]: time="2025-01-29T16:31:23.748554768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-0-d-42684b3569,Uid:83755f3207482317c03cb45dd83070db,Namespace:kube-system,Attempt:0,}" Jan 29 16:31:23.754155 containerd[1520]: time="2025-01-29T16:31:23.754054724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-0-d-42684b3569,Uid:0dee9c6e4b445692c966bbc33ee1aac7,Namespace:kube-system,Attempt:0,}" Jan 29 16:31:23.874799 kubelet[2478]: E0129 16:31:23.874731 2478 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://159.69.241.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-d-42684b3569?timeout=10s\": dial tcp 159.69.241.25:6443: connect: connection refused" interval="800ms" Jan 29 16:31:23.978432 kubelet[2478]: I0129 16:31:23.978356 2478 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-0-d-42684b3569" Jan 29 16:31:23.978847 kubelet[2478]: E0129 16:31:23.978774 2478 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://159.69.241.25:6443/api/v1/nodes\": dial tcp 159.69.241.25:6443: connect: connection refused" node="ci-4230-0-0-d-42684b3569" Jan 29 16:31:24.268146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1651950789.mount: Deactivated successfully. Jan 29 16:31:24.275518 containerd[1520]: time="2025-01-29T16:31:24.275319350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:31:24.277994 containerd[1520]: time="2025-01-29T16:31:24.277916444Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076" Jan 29 16:31:24.283967 containerd[1520]: time="2025-01-29T16:31:24.283854220Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:31:24.286051 containerd[1520]: time="2025-01-29T16:31:24.285871637Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:31:24.287733 containerd[1520]: time="2025-01-29T16:31:24.287663396Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 
29 16:31:24.288736 containerd[1520]: time="2025-01-29T16:31:24.288637534Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:31:24.289475 containerd[1520]: time="2025-01-29T16:31:24.289374970Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:31:24.294373 containerd[1520]: time="2025-01-29T16:31:24.294285698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:31:24.296146 containerd[1520]: time="2025-01-29T16:31:24.295612237Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 546.94264ms" Jan 29 16:31:24.298916 containerd[1520]: time="2025-01-29T16:31:24.297705621Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 543.458869ms" Jan 29 16:31:24.304588 containerd[1520]: time="2025-01-29T16:31:24.304510150Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 
571.962506ms" Jan 29 16:31:24.450715 containerd[1520]: time="2025-01-29T16:31:24.449909859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:31:24.450715 containerd[1520]: time="2025-01-29T16:31:24.449957560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:31:24.450715 containerd[1520]: time="2025-01-29T16:31:24.449967429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:31:24.450715 containerd[1520]: time="2025-01-29T16:31:24.450035649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:31:24.451241 containerd[1520]: time="2025-01-29T16:31:24.449438790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:31:24.451241 containerd[1520]: time="2025-01-29T16:31:24.451032781Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:31:24.451241 containerd[1520]: time="2025-01-29T16:31:24.451049233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:31:24.451241 containerd[1520]: time="2025-01-29T16:31:24.451125407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:31:24.453242 containerd[1520]: time="2025-01-29T16:31:24.452960078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:31:24.453242 containerd[1520]: time="2025-01-29T16:31:24.453022647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:31:24.453242 containerd[1520]: time="2025-01-29T16:31:24.453036372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:31:24.453242 containerd[1520]: time="2025-01-29T16:31:24.453132736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:31:24.474902 systemd[1]: Started cri-containerd-5b37bc02ffb6caa9e6028cc99b0cf8acf0617411ff353d67ad84d187e75a2b98.scope - libcontainer container 5b37bc02ffb6caa9e6028cc99b0cf8acf0617411ff353d67ad84d187e75a2b98. Jan 29 16:31:24.483142 systemd[1]: Started cri-containerd-74abf674b9afb71eb7fdb0b34888d337430c78a1456d4716bb1d8d18062aff8e.scope - libcontainer container 74abf674b9afb71eb7fdb0b34888d337430c78a1456d4716bb1d8d18062aff8e. Jan 29 16:31:24.505339 systemd[1]: Started cri-containerd-2a767de7b697941826969da80866d0547335681f0f042c7697b044ea654e8f1c.scope - libcontainer container 2a767de7b697941826969da80866d0547335681f0f042c7697b044ea654e8f1c. 
Jan 29 16:31:24.542182 kubelet[2478]: W0129 16:31:24.541429 2478 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://159.69.241.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 159.69.241.25:6443: connect: connection refused
Jan 29 16:31:24.542182 kubelet[2478]: E0129 16:31:24.541468 2478 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://159.69.241.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 159.69.241.25:6443: connect: connection refused
Jan 29 16:31:24.552024 containerd[1520]: time="2025-01-29T16:31:24.551983859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-0-d-42684b3569,Uid:0dee9c6e4b445692c966bbc33ee1aac7,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b37bc02ffb6caa9e6028cc99b0cf8acf0617411ff353d67ad84d187e75a2b98\""
Jan 29 16:31:24.561110 containerd[1520]: time="2025-01-29T16:31:24.560497698Z" level=info msg="CreateContainer within sandbox \"5b37bc02ffb6caa9e6028cc99b0cf8acf0617411ff353d67ad84d187e75a2b98\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 29 16:31:24.567989 containerd[1520]: time="2025-01-29T16:31:24.567957848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-0-d-42684b3569,Uid:83755f3207482317c03cb45dd83070db,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a767de7b697941826969da80866d0547335681f0f042c7697b044ea654e8f1c\""
Jan 29 16:31:24.570724 containerd[1520]: time="2025-01-29T16:31:24.570680952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-0-d-42684b3569,Uid:9b33fa58ad12fed16e50e1e6c5e2e9f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"74abf674b9afb71eb7fdb0b34888d337430c78a1456d4716bb1d8d18062aff8e\""
Jan 29 16:31:24.571221 containerd[1520]: time="2025-01-29T16:31:24.571203709Z" level=info msg="CreateContainer within sandbox \"2a767de7b697941826969da80866d0547335681f0f042c7697b044ea654e8f1c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 29 16:31:24.574660 containerd[1520]: time="2025-01-29T16:31:24.574603453Z" level=info msg="CreateContainer within sandbox \"74abf674b9afb71eb7fdb0b34888d337430c78a1456d4716bb1d8d18062aff8e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 29 16:31:24.584439 containerd[1520]: time="2025-01-29T16:31:24.584286002Z" level=info msg="CreateContainer within sandbox \"5b37bc02ffb6caa9e6028cc99b0cf8acf0617411ff353d67ad84d187e75a2b98\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"05ea0c6f6f2a5d9bdaca9c5bf28cef4bb780c3f37acf10b031169876e460b4c0\""
Jan 29 16:31:24.584991 containerd[1520]: time="2025-01-29T16:31:24.584962382Z" level=info msg="StartContainer for \"05ea0c6f6f2a5d9bdaca9c5bf28cef4bb780c3f37acf10b031169876e460b4c0\""
Jan 29 16:31:24.601761 containerd[1520]: time="2025-01-29T16:31:24.601723022Z" level=info msg="CreateContainer within sandbox \"2a767de7b697941826969da80866d0547335681f0f042c7697b044ea654e8f1c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"61745e0ae4d99f4f15f9a4cde3928c632b143b14d44783c2f5839a90bcafebcb\""
Jan 29 16:31:24.604260 containerd[1520]: time="2025-01-29T16:31:24.603064391Z" level=info msg="CreateContainer within sandbox \"74abf674b9afb71eb7fdb0b34888d337430c78a1456d4716bb1d8d18062aff8e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bf8813fe945b688eb9eb405eb34ad89c452143e3664c792b759cf1f34f8bb6f4\""
Jan 29 16:31:24.604329 kubelet[2478]: W0129 16:31:24.604191 2478 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://159.69.241.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.69.241.25:6443: connect: connection refused
Jan 29 16:31:24.604329 kubelet[2478]: E0129 16:31:24.604244 2478 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://159.69.241.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.69.241.25:6443: connect: connection refused
Jan 29 16:31:24.604978 containerd[1520]: time="2025-01-29T16:31:24.604962080Z" level=info msg="StartContainer for \"bf8813fe945b688eb9eb405eb34ad89c452143e3664c792b759cf1f34f8bb6f4\""
Jan 29 16:31:24.606009 containerd[1520]: time="2025-01-29T16:31:24.605992005Z" level=info msg="StartContainer for \"61745e0ae4d99f4f15f9a4cde3928c632b143b14d44783c2f5839a90bcafebcb\""
Jan 29 16:31:24.616278 systemd[1]: Started cri-containerd-05ea0c6f6f2a5d9bdaca9c5bf28cef4bb780c3f37acf10b031169876e460b4c0.scope - libcontainer container 05ea0c6f6f2a5d9bdaca9c5bf28cef4bb780c3f37acf10b031169876e460b4c0.
Jan 29 16:31:24.641801 systemd[1]: Started cri-containerd-bf8813fe945b688eb9eb405eb34ad89c452143e3664c792b759cf1f34f8bb6f4.scope - libcontainer container bf8813fe945b688eb9eb405eb34ad89c452143e3664c792b759cf1f34f8bb6f4.
Jan 29 16:31:24.653633 systemd[1]: Started cri-containerd-61745e0ae4d99f4f15f9a4cde3928c632b143b14d44783c2f5839a90bcafebcb.scope - libcontainer container 61745e0ae4d99f4f15f9a4cde3928c632b143b14d44783c2f5839a90bcafebcb.
Jan 29 16:31:24.676735 kubelet[2478]: E0129 16:31:24.675581 2478 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.241.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-d-42684b3569?timeout=10s\": dial tcp 159.69.241.25:6443: connect: connection refused" interval="1.6s"
Jan 29 16:31:24.704354 containerd[1520]: time="2025-01-29T16:31:24.704313115Z" level=info msg="StartContainer for \"05ea0c6f6f2a5d9bdaca9c5bf28cef4bb780c3f37acf10b031169876e460b4c0\" returns successfully"
Jan 29 16:31:24.726919 containerd[1520]: time="2025-01-29T16:31:24.726547286Z" level=info msg="StartContainer for \"bf8813fe945b688eb9eb405eb34ad89c452143e3664c792b759cf1f34f8bb6f4\" returns successfully"
Jan 29 16:31:24.746813 containerd[1520]: time="2025-01-29T16:31:24.746778706Z" level=info msg="StartContainer for \"61745e0ae4d99f4f15f9a4cde3928c632b143b14d44783c2f5839a90bcafebcb\" returns successfully"
Jan 29 16:31:24.785001 kubelet[2478]: I0129 16:31:24.784946 2478 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-0-d-42684b3569"
Jan 29 16:31:24.786389 kubelet[2478]: E0129 16:31:24.786204 2478 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://159.69.241.25:6443/api/v1/nodes\": dial tcp 159.69.241.25:6443: connect: connection refused" node="ci-4230-0-0-d-42684b3569"
Jan 29 16:31:24.794720 kubelet[2478]: W0129 16:31:24.794584 2478 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://159.69.241.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-d-42684b3569&limit=500&resourceVersion=0": dial tcp 159.69.241.25:6443: connect: connection refused
Jan 29 16:31:24.794720 kubelet[2478]: E0129 16:31:24.794647 2478 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://159.69.241.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-d-42684b3569&limit=500&resourceVersion=0": dial tcp 159.69.241.25:6443: connect: connection refused
Jan 29 16:31:24.800105 kubelet[2478]: W0129 16:31:24.800050 2478 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://159.69.241.25:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 159.69.241.25:6443: connect: connection refused
Jan 29 16:31:24.800105 kubelet[2478]: E0129 16:31:24.800108 2478 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://159.69.241.25:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 159.69.241.25:6443: connect: connection refused
Jan 29 16:31:26.389186 kubelet[2478]: I0129 16:31:26.388051 2478 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-0-d-42684b3569"
Jan 29 16:31:26.626043 kubelet[2478]: I0129 16:31:26.625888 2478 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230-0-0-d-42684b3569"
Jan 29 16:31:26.644507 kubelet[2478]: E0129 16:31:26.644155 2478 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230-0-0-d-42684b3569\" not found"
Jan 29 16:31:26.744795 kubelet[2478]: E0129 16:31:26.744728 2478 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230-0-0-d-42684b3569\" not found"
Jan 29 16:31:26.845262 kubelet[2478]: E0129 16:31:26.845202 2478 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230-0-0-d-42684b3569\" not found"
Jan 29 16:31:26.946270 kubelet[2478]: E0129 16:31:26.946013 2478 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230-0-0-d-42684b3569\" not found"
Jan 29 16:31:27.047099 kubelet[2478]: E0129 16:31:27.046989 2478 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230-0-0-d-42684b3569\" not found"
Jan 29 16:31:27.147770 kubelet[2478]: E0129 16:31:27.147696 2478 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230-0-0-d-42684b3569\" not found"
Jan 29 16:31:27.248928 kubelet[2478]: E0129 16:31:27.248720 2478 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230-0-0-d-42684b3569\" not found"
Jan 29 16:31:27.349155 kubelet[2478]: E0129 16:31:27.349057 2478 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230-0-0-d-42684b3569\" not found"
Jan 29 16:31:27.449767 kubelet[2478]: E0129 16:31:27.449689 2478 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230-0-0-d-42684b3569\" not found"
Jan 29 16:31:27.550959 kubelet[2478]: E0129 16:31:27.550725 2478 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230-0-0-d-42684b3569\" not found"
Jan 29 16:31:27.651995 kubelet[2478]: E0129 16:31:27.651521 2478 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230-0-0-d-42684b3569\" not found"
Jan 29 16:31:27.752405 kubelet[2478]: E0129 16:31:27.752353 2478 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230-0-0-d-42684b3569\" not found"
Jan 29 16:31:28.249042 kubelet[2478]: I0129 16:31:28.248964 2478 apiserver.go:52] "Watching apiserver"
Jan 29 16:31:28.272277 kubelet[2478]: I0129 16:31:28.272142 2478 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 29 16:31:28.903757 systemd[1]: Reload requested from client PID 2757 ('systemctl') (unit session-5.scope)...
Jan 29 16:31:28.903784 systemd[1]: Reloading...
Jan 29 16:31:29.081103 zram_generator::config[2814]: No configuration found.
Jan 29 16:31:29.194338 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:31:29.334879 systemd[1]: Reloading finished in 430 ms.
Jan 29 16:31:29.365404 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:31:29.384485 systemd[1]: kubelet.service: Deactivated successfully.
Jan 29 16:31:29.385023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:31:29.385127 systemd[1]: kubelet.service: Consumed 1.323s CPU time, 111.5M memory peak.
Jan 29 16:31:29.393684 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:31:29.589294 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:31:29.598000 (kubelet)[2853]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 16:31:29.690523 kubelet[2853]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:31:29.690523 kubelet[2853]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 16:31:29.690523 kubelet[2853]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:31:29.691004 kubelet[2853]: I0129 16:31:29.690564 2853 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 16:31:29.695989 kubelet[2853]: I0129 16:31:29.695958 2853 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 29 16:31:29.695989 kubelet[2853]: I0129 16:31:29.695981 2853 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 16:31:29.696274 kubelet[2853]: I0129 16:31:29.696247 2853 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 29 16:31:29.698946 kubelet[2853]: I0129 16:31:29.698919 2853 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 29 16:31:29.701526 kubelet[2853]: I0129 16:31:29.701168 2853 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 16:31:29.714896 kubelet[2853]: I0129 16:31:29.714860 2853 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 16:31:29.715239 kubelet[2853]: I0129 16:31:29.715168 2853 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 16:31:29.715386 kubelet[2853]: I0129 16:31:29.715212 2853 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-0-d-42684b3569","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 29 16:31:29.715468 kubelet[2853]: I0129 16:31:29.715393 2853 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 16:31:29.715468 kubelet[2853]: I0129 16:31:29.715404 2853 container_manager_linux.go:301] "Creating device plugin manager"
Jan 29 16:31:29.715468 kubelet[2853]: I0129 16:31:29.715444 2853 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:31:29.715572 kubelet[2853]: I0129 16:31:29.715549 2853 kubelet.go:400] "Attempting to sync node with API server"
Jan 29 16:31:29.715572 kubelet[2853]: I0129 16:31:29.715567 2853 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 16:31:29.715979 kubelet[2853]: I0129 16:31:29.715936 2853 kubelet.go:312] "Adding apiserver pod source"
Jan 29 16:31:29.715979 kubelet[2853]: I0129 16:31:29.715959 2853 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 16:31:29.719580 kubelet[2853]: I0129 16:31:29.719562 2853 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 16:31:29.724012 kubelet[2853]: I0129 16:31:29.723315 2853 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 16:31:29.724012 kubelet[2853]: I0129 16:31:29.723757 2853 server.go:1264] "Started kubelet"
Jan 29 16:31:29.727762 kubelet[2853]: I0129 16:31:29.727728 2853 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 16:31:29.740718 kubelet[2853]: E0129 16:31:29.740144 2853 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 16:31:29.740868 kubelet[2853]: I0129 16:31:29.740833 2853 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 16:31:29.741002 kubelet[2853]: I0129 16:31:29.740981 2853 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 29 16:31:29.742970 kubelet[2853]: I0129 16:31:29.742943 2853 server.go:455] "Adding debug handlers to kubelet server"
Jan 29 16:31:29.743493 kubelet[2853]: I0129 16:31:29.743476 2853 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 29 16:31:29.743788 kubelet[2853]: I0129 16:31:29.743762 2853 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 16:31:29.744657 kubelet[2853]: I0129 16:31:29.744588 2853 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 16:31:29.746043 kubelet[2853]: I0129 16:31:29.745617 2853 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 16:31:29.747875 kubelet[2853]: I0129 16:31:29.747759 2853 factory.go:221] Registration of the systemd container factory successfully
Jan 29 16:31:29.747875 kubelet[2853]: I0129 16:31:29.747840 2853 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 16:31:29.750382 kubelet[2853]: I0129 16:31:29.750359 2853 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 16:31:29.753167 kubelet[2853]: I0129 16:31:29.753055 2853 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 16:31:29.753167 kubelet[2853]: I0129 16:31:29.753117 2853 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 16:31:29.753167 kubelet[2853]: I0129 16:31:29.753133 2853 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 29 16:31:29.753283 kubelet[2853]: E0129 16:31:29.753175 2853 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 16:31:29.753283 kubelet[2853]: I0129 16:31:29.753059 2853 factory.go:221] Registration of the containerd container factory successfully
Jan 29 16:31:29.802109 kubelet[2853]: I0129 16:31:29.802006 2853 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 16:31:29.802109 kubelet[2853]: I0129 16:31:29.802035 2853 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 16:31:29.802109 kubelet[2853]: I0129 16:31:29.802053 2853 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:31:29.802485 kubelet[2853]: I0129 16:31:29.802272 2853 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 29 16:31:29.802485 kubelet[2853]: I0129 16:31:29.802285 2853 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 29 16:31:29.802485 kubelet[2853]: I0129 16:31:29.802303 2853 policy_none.go:49] "None policy: Start"
Jan 29 16:31:29.803273 kubelet[2853]: I0129 16:31:29.803174 2853 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 16:31:29.803273 kubelet[2853]: I0129 16:31:29.803218 2853 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 16:31:29.803443 kubelet[2853]: I0129 16:31:29.803411 2853 state_mem.go:75] "Updated machine memory state"
Jan 29 16:31:29.808862 kubelet[2853]: I0129 16:31:29.808372 2853 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 16:31:29.808862 kubelet[2853]: I0129 16:31:29.808560 2853 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 16:31:29.808862 kubelet[2853]: I0129 16:31:29.808659 2853 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 16:31:29.844947 kubelet[2853]: I0129 16:31:29.844926 2853 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-0-d-42684b3569"
Jan 29 16:31:29.852502 kubelet[2853]: I0129 16:31:29.851738 2853 kubelet_node_status.go:112] "Node was previously registered" node="ci-4230-0-0-d-42684b3569"
Jan 29 16:31:29.852502 kubelet[2853]: I0129 16:31:29.851829 2853 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230-0-0-d-42684b3569"
Jan 29 16:31:29.854729 kubelet[2853]: I0129 16:31:29.853896 2853 topology_manager.go:215] "Topology Admit Handler" podUID="0dee9c6e4b445692c966bbc33ee1aac7" podNamespace="kube-system" podName="kube-apiserver-ci-4230-0-0-d-42684b3569"
Jan 29 16:31:29.854729 kubelet[2853]: I0129 16:31:29.854007 2853 topology_manager.go:215] "Topology Admit Handler" podUID="9b33fa58ad12fed16e50e1e6c5e2e9f5" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-0-0-d-42684b3569"
Jan 29 16:31:29.854729 kubelet[2853]: I0129 16:31:29.854119 2853 topology_manager.go:215] "Topology Admit Handler" podUID="83755f3207482317c03cb45dd83070db" podNamespace="kube-system" podName="kube-scheduler-ci-4230-0-0-d-42684b3569"
Jan 29 16:31:30.045298 kubelet[2853]: I0129 16:31:30.045225 2853 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0dee9c6e4b445692c966bbc33ee1aac7-ca-certs\") pod \"kube-apiserver-ci-4230-0-0-d-42684b3569\" (UID: \"0dee9c6e4b445692c966bbc33ee1aac7\") " pod="kube-system/kube-apiserver-ci-4230-0-0-d-42684b3569"
Jan 29 16:31:30.045298 kubelet[2853]: I0129 16:31:30.045287 2853 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b33fa58ad12fed16e50e1e6c5e2e9f5-ca-certs\") pod \"kube-controller-manager-ci-4230-0-0-d-42684b3569\" (UID: \"9b33fa58ad12fed16e50e1e6c5e2e9f5\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-d-42684b3569"
Jan 29 16:31:30.045477 kubelet[2853]: I0129 16:31:30.045322 2853 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b33fa58ad12fed16e50e1e6c5e2e9f5-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-0-d-42684b3569\" (UID: \"9b33fa58ad12fed16e50e1e6c5e2e9f5\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-d-42684b3569"
Jan 29 16:31:30.045477 kubelet[2853]: I0129 16:31:30.045345 2853 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b33fa58ad12fed16e50e1e6c5e2e9f5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-0-d-42684b3569\" (UID: \"9b33fa58ad12fed16e50e1e6c5e2e9f5\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-d-42684b3569"
Jan 29 16:31:30.045477 kubelet[2853]: I0129 16:31:30.045394 2853 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0dee9c6e4b445692c966bbc33ee1aac7-k8s-certs\") pod \"kube-apiserver-ci-4230-0-0-d-42684b3569\" (UID: \"0dee9c6e4b445692c966bbc33ee1aac7\") " pod="kube-system/kube-apiserver-ci-4230-0-0-d-42684b3569"
Jan 29 16:31:30.045477 kubelet[2853]: I0129 16:31:30.045421 2853 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0dee9c6e4b445692c966bbc33ee1aac7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-0-d-42684b3569\" (UID: \"0dee9c6e4b445692c966bbc33ee1aac7\") " pod="kube-system/kube-apiserver-ci-4230-0-0-d-42684b3569"
Jan 29 16:31:30.045477 kubelet[2853]: I0129 16:31:30.045458 2853 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b33fa58ad12fed16e50e1e6c5e2e9f5-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-0-d-42684b3569\" (UID: \"9b33fa58ad12fed16e50e1e6c5e2e9f5\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-d-42684b3569"
Jan 29 16:31:30.045608 kubelet[2853]: I0129 16:31:30.045483 2853 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b33fa58ad12fed16e50e1e6c5e2e9f5-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-0-d-42684b3569\" (UID: \"9b33fa58ad12fed16e50e1e6c5e2e9f5\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-d-42684b3569"
Jan 29 16:31:30.045608 kubelet[2853]: I0129 16:31:30.045522 2853 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/83755f3207482317c03cb45dd83070db-kubeconfig\") pod \"kube-scheduler-ci-4230-0-0-d-42684b3569\" (UID: \"83755f3207482317c03cb45dd83070db\") " pod="kube-system/kube-scheduler-ci-4230-0-0-d-42684b3569"
Jan 29 16:31:30.719065 kubelet[2853]: I0129 16:31:30.718009 2853 apiserver.go:52] "Watching apiserver"
Jan 29 16:31:30.744390 kubelet[2853]: I0129 16:31:30.744291 2853 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 29 16:31:30.810445 kubelet[2853]: I0129 16:31:30.810379 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-0-0-d-42684b3569" podStartSLOduration=1.810359311 podStartE2EDuration="1.810359311s" podCreationTimestamp="2025-01-29 16:31:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:31:30.807723756 +0000 UTC m=+1.181665257" watchObservedRunningTime="2025-01-29 16:31:30.810359311 +0000 UTC m=+1.184300812"
Jan 29 16:31:30.820148 kubelet[2853]: I0129 16:31:30.820089 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-0-0-d-42684b3569" podStartSLOduration=1.820061793 podStartE2EDuration="1.820061793s" podCreationTimestamp="2025-01-29 16:31:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:31:30.819243416 +0000 UTC m=+1.193184947" watchObservedRunningTime="2025-01-29 16:31:30.820061793 +0000 UTC m=+1.194003295"
Jan 29 16:31:30.841595 kubelet[2853]: I0129 16:31:30.841536 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-0-0-d-42684b3569" podStartSLOduration=1.84152245 podStartE2EDuration="1.84152245s" podCreationTimestamp="2025-01-29 16:31:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:31:30.831961437 +0000 UTC m=+1.205902938" watchObservedRunningTime="2025-01-29 16:31:30.84152245 +0000 UTC m=+1.215463941"
Jan 29 16:31:30.926311 sudo[1870]: pam_unix(sudo:session): session closed for user root
Jan 29 16:31:31.087461 sshd[1869]: Connection closed by 147.75.109.163 port 60932
Jan 29 16:31:31.089946 sshd-session[1867]: pam_unix(sshd:session): session closed for user core
Jan 29 16:31:31.101206 systemd[1]: sshd@4-159.69.241.25:22-147.75.109.163:60932.service: Deactivated successfully.
Jan 29 16:31:31.107272 systemd[1]: session-5.scope: Deactivated successfully.
Jan 29 16:31:31.107698 systemd[1]: session-5.scope: Consumed 4.288s CPU time, 192.2M memory peak.
Jan 29 16:31:31.110652 systemd-logind[1501]: Session 5 logged out. Waiting for processes to exit.
Jan 29 16:31:31.113633 systemd-logind[1501]: Removed session 5.
Jan 29 16:31:42.007971 kubelet[2853]: I0129 16:31:42.007822 2853 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 29 16:31:42.009529 containerd[1520]: time="2025-01-29T16:31:42.008747686Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 29 16:31:42.011484 kubelet[2853]: I0129 16:31:42.011422 2853 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 29 16:31:42.949735 kubelet[2853]: I0129 16:31:42.949682 2853 topology_manager.go:215] "Topology Admit Handler" podUID="2579386b-af2c-4187-8cf2-5d5204b8b673" podNamespace="kube-system" podName="kube-proxy-n7c5n"
Jan 29 16:31:42.956881 kubelet[2853]: I0129 16:31:42.956216 2853 topology_manager.go:215] "Topology Admit Handler" podUID="3a77e230-8803-4cb0-a20b-a3d5ab39f76c" podNamespace="kube-flannel" podName="kube-flannel-ds-465sp"
Jan 29 16:31:42.965145 kubelet[2853]: W0129 16:31:42.964113 2853 reflector.go:547] object-"kube-flannel"/"kube-flannel-cfg": failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ci-4230-0-0-d-42684b3569" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-4230-0-0-d-42684b3569' and this object
Jan 29 16:31:42.972006 systemd[1]: Created slice kubepods-besteffort-pod2579386b_af2c_4187_8cf2_5d5204b8b673.slice - libcontainer container kubepods-besteffort-pod2579386b_af2c_4187_8cf2_5d5204b8b673.slice.
Jan 29 16:31:42.973410 kubelet[2853]: E0129 16:31:42.972913 2853 reflector.go:150] object-"kube-flannel"/"kube-flannel-cfg": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ci-4230-0-0-d-42684b3569" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-4230-0-0-d-42684b3569' and this object
Jan 29 16:31:42.973410 kubelet[2853]: W0129 16:31:42.969227 2853 reflector.go:547] object-"kube-flannel"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4230-0-0-d-42684b3569" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-4230-0-0-d-42684b3569' and this object
Jan 29 16:31:42.973410 kubelet[2853]: E0129 16:31:42.972959 2853 reflector.go:150] object-"kube-flannel"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4230-0-0-d-42684b3569" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-4230-0-0-d-42684b3569' and this object
Jan 29 16:31:42.989900 systemd[1]: Created slice kubepods-burstable-pod3a77e230_8803_4cb0_a20b_a3d5ab39f76c.slice - libcontainer container kubepods-burstable-pod3a77e230_8803_4cb0_a20b_a3d5ab39f76c.slice.
Jan 29 16:31:43.126997 kubelet[2853]: I0129 16:31:43.126939 2853 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/3a77e230-8803-4cb0-a20b-a3d5ab39f76c-cni-plugin\") pod \"kube-flannel-ds-465sp\" (UID: \"3a77e230-8803-4cb0-a20b-a3d5ab39f76c\") " pod="kube-flannel/kube-flannel-ds-465sp"
Jan 29 16:31:43.126997 kubelet[2853]: I0129 16:31:43.127005 2853 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/3a77e230-8803-4cb0-a20b-a3d5ab39f76c-flannel-cfg\") pod \"kube-flannel-ds-465sp\" (UID: \"3a77e230-8803-4cb0-a20b-a3d5ab39f76c\") " pod="kube-flannel/kube-flannel-ds-465sp"
Jan 29 16:31:43.127676 kubelet[2853]: I0129 16:31:43.127036 2853 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2579386b-af2c-4187-8cf2-5d5204b8b673-kube-proxy\") pod \"kube-proxy-n7c5n\" (UID: \"2579386b-af2c-4187-8cf2-5d5204b8b673\") " pod="kube-system/kube-proxy-n7c5n"
Jan 29 16:31:43.127676 kubelet[2853]: I0129 16:31:43.127094 2853 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2579386b-af2c-4187-8cf2-5d5204b8b673-xtables-lock\") pod \"kube-proxy-n7c5n\" (UID: \"2579386b-af2c-4187-8cf2-5d5204b8b673\") " pod="kube-system/kube-proxy-n7c5n"
Jan 29 16:31:43.127676 kubelet[2853]: I0129 16:31:43.127126 2853 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3a77e230-8803-4cb0-a20b-a3d5ab39f76c-run\") pod \"kube-flannel-ds-465sp\" (UID: \"3a77e230-8803-4cb0-a20b-a3d5ab39f76c\") " pod="kube-flannel/kube-flannel-ds-465sp"
Jan 29 16:31:43.127676 kubelet[2853]: I0129 16:31:43.127171 2853 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a77e230-8803-4cb0-a20b-a3d5ab39f76c-xtables-lock\") pod \"kube-flannel-ds-465sp\" (UID: \"3a77e230-8803-4cb0-a20b-a3d5ab39f76c\") " pod="kube-flannel/kube-flannel-ds-465sp"
Jan 29 16:31:43.127676 kubelet[2853]: I0129 16:31:43.127216 2853 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkjbx\" (UniqueName: \"kubernetes.io/projected/3a77e230-8803-4cb0-a20b-a3d5ab39f76c-kube-api-access-bkjbx\") pod \"kube-flannel-ds-465sp\" (UID: \"3a77e230-8803-4cb0-a20b-a3d5ab39f76c\") " pod="kube-flannel/kube-flannel-ds-465sp"
Jan 29 16:31:43.127858 kubelet[2853]: I0129 16:31:43.127246 2853 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/3a77e230-8803-4cb0-a20b-a3d5ab39f76c-cni\") pod \"kube-flannel-ds-465sp\" (UID: \"3a77e230-8803-4cb0-a20b-a3d5ab39f76c\") " pod="kube-flannel/kube-flannel-ds-465sp"
Jan 29 16:31:43.127858 kubelet[2853]: I0129 16:31:43.127281 2853 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2579386b-af2c-4187-8cf2-5d5204b8b673-lib-modules\") pod \"kube-proxy-n7c5n\" (UID: \"2579386b-af2c-4187-8cf2-5d5204b8b673\") " pod="kube-system/kube-proxy-n7c5n"
Jan 29 16:31:43.127858 kubelet[2853]: I0129 16:31:43.127341 2853 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn8ff\" (UniqueName: \"kubernetes.io/projected/2579386b-af2c-4187-8cf2-5d5204b8b673-kube-api-access-bn8ff\") pod \"kube-proxy-n7c5n\" (UID: \"2579386b-af2c-4187-8cf2-5d5204b8b673\") " pod="kube-system/kube-proxy-n7c5n"
Jan 29 16:31:43.284404 containerd[1520]: time="2025-01-29T16:31:43.283945246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n7c5n,Uid:2579386b-af2c-4187-8cf2-5d5204b8b673,Namespace:kube-system,Attempt:0,}"
Jan 29 16:31:43.323822 containerd[1520]: time="2025-01-29T16:31:43.323612203Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:31:43.324050 containerd[1520]: time="2025-01-29T16:31:43.323777527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:31:43.324050 containerd[1520]: time="2025-01-29T16:31:43.323817091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:31:43.324050 containerd[1520]: time="2025-01-29T16:31:43.324001081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:31:43.347253 systemd[1]: Started cri-containerd-2f145751e5d45f7774a4144d7be38828741667a408716642c822f4554affa14d.scope - libcontainer container 2f145751e5d45f7774a4144d7be38828741667a408716642c822f4554affa14d.
Jan 29 16:31:43.383606 containerd[1520]: time="2025-01-29T16:31:43.383516181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n7c5n,Uid:2579386b-af2c-4187-8cf2-5d5204b8b673,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f145751e5d45f7774a4144d7be38828741667a408716642c822f4554affa14d\""
Jan 29 16:31:43.388182 containerd[1520]: time="2025-01-29T16:31:43.388047991Z" level=info msg="CreateContainer within sandbox \"2f145751e5d45f7774a4144d7be38828741667a408716642c822f4554affa14d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 16:31:43.405587 containerd[1520]: time="2025-01-29T16:31:43.405542121Z" level=info msg="CreateContainer within sandbox \"2f145751e5d45f7774a4144d7be38828741667a408716642c822f4554affa14d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"82bebeb3b88e0914be9ede049f6c84d45ebe8180e970bbc7ad337984c4749241\""
Jan 29 16:31:43.406994 containerd[1520]: time="2025-01-29T16:31:43.406459871Z" level=info msg="StartContainer for \"82bebeb3b88e0914be9ede049f6c84d45ebe8180e970bbc7ad337984c4749241\""
Jan 29 16:31:43.440293 systemd[1]: Started cri-containerd-82bebeb3b88e0914be9ede049f6c84d45ebe8180e970bbc7ad337984c4749241.scope - libcontainer container 82bebeb3b88e0914be9ede049f6c84d45ebe8180e970bbc7ad337984c4749241.
Jan 29 16:31:43.481261 containerd[1520]: time="2025-01-29T16:31:43.481159924Z" level=info msg="StartContainer for \"82bebeb3b88e0914be9ede049f6c84d45ebe8180e970bbc7ad337984c4749241\" returns successfully"
Jan 29 16:31:44.248114 kubelet[2853]: E0129 16:31:44.246314 2853 projected.go:294] Couldn't get configMap kube-flannel/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 29 16:31:44.248114 kubelet[2853]: E0129 16:31:44.246382 2853 projected.go:200] Error preparing data for projected volume kube-api-access-bkjbx for pod kube-flannel/kube-flannel-ds-465sp: failed to sync configmap cache: timed out waiting for the condition
Jan 29 16:31:44.248114 kubelet[2853]: E0129 16:31:44.246487 2853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3a77e230-8803-4cb0-a20b-a3d5ab39f76c-kube-api-access-bkjbx podName:3a77e230-8803-4cb0-a20b-a3d5ab39f76c nodeName:}" failed. No retries permitted until 2025-01-29 16:31:44.746455154 +0000 UTC m=+15.120396696 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bkjbx" (UniqueName: "kubernetes.io/projected/3a77e230-8803-4cb0-a20b-a3d5ab39f76c-kube-api-access-bkjbx") pod "kube-flannel-ds-465sp" (UID: "3a77e230-8803-4cb0-a20b-a3d5ab39f76c") : failed to sync configmap cache: timed out waiting for the condition
Jan 29 16:31:44.256998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2630201786.mount: Deactivated successfully.
Jan 29 16:31:45.096588 containerd[1520]: time="2025-01-29T16:31:45.096418719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-465sp,Uid:3a77e230-8803-4cb0-a20b-a3d5ab39f76c,Namespace:kube-flannel,Attempt:0,}"
Jan 29 16:31:45.154163 containerd[1520]: time="2025-01-29T16:31:45.153888141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:31:45.154163 containerd[1520]: time="2025-01-29T16:31:45.154155647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:31:45.154650 containerd[1520]: time="2025-01-29T16:31:45.154256519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:31:45.154650 containerd[1520]: time="2025-01-29T16:31:45.154502154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:31:45.202427 systemd[1]: Started cri-containerd-d7b75cb2e34f03bd08a37517ff0c38ca699f24b7247c208f4c1ebfc2167f36be.scope - libcontainer container d7b75cb2e34f03bd08a37517ff0c38ca699f24b7247c208f4c1ebfc2167f36be.
Jan 29 16:31:45.272720 containerd[1520]: time="2025-01-29T16:31:45.272689612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-465sp,Uid:3a77e230-8803-4cb0-a20b-a3d5ab39f76c,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"d7b75cb2e34f03bd08a37517ff0c38ca699f24b7247c208f4c1ebfc2167f36be\""
Jan 29 16:31:45.276774 containerd[1520]: time="2025-01-29T16:31:45.276477579Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Jan 29 16:31:47.818717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount989353430.mount: Deactivated successfully.
Jan 29 16:31:47.867699 containerd[1520]: time="2025-01-29T16:31:47.867608325Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:31:47.869153 containerd[1520]: time="2025-01-29T16:31:47.869109247Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852936"
Jan 29 16:31:47.870319 containerd[1520]: time="2025-01-29T16:31:47.870204000Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:31:47.872620 containerd[1520]: time="2025-01-29T16:31:47.872558520Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:31:47.873532 containerd[1520]: time="2025-01-29T16:31:47.873359806Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.596838965s"
Jan 29 16:31:47.873532 containerd[1520]: time="2025-01-29T16:31:47.873403549Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\""
Jan 29 16:31:47.880532 containerd[1520]: time="2025-01-29T16:31:47.880434103Z" level=info msg="CreateContainer within sandbox \"d7b75cb2e34f03bd08a37517ff0c38ca699f24b7247c208f4c1ebfc2167f36be\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Jan 29 16:31:47.902056 containerd[1520]: time="2025-01-29T16:31:47.901997410Z" level=info msg="CreateContainer within sandbox \"d7b75cb2e34f03bd08a37517ff0c38ca699f24b7247c208f4c1ebfc2167f36be\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"b9244b0aa9c6e0fefd11426ac81c0af522d95e8e5175856681c04bef1f91419f\""
Jan 29 16:31:47.903269 containerd[1520]: time="2025-01-29T16:31:47.903213413Z" level=info msg="StartContainer for \"b9244b0aa9c6e0fefd11426ac81c0af522d95e8e5175856681c04bef1f91419f\""
Jan 29 16:31:47.947372 systemd[1]: Started cri-containerd-b9244b0aa9c6e0fefd11426ac81c0af522d95e8e5175856681c04bef1f91419f.scope - libcontainer container b9244b0aa9c6e0fefd11426ac81c0af522d95e8e5175856681c04bef1f91419f.
Jan 29 16:31:47.996110 containerd[1520]: time="2025-01-29T16:31:47.994249735Z" level=info msg="StartContainer for \"b9244b0aa9c6e0fefd11426ac81c0af522d95e8e5175856681c04bef1f91419f\" returns successfully"
Jan 29 16:31:47.998475 systemd[1]: cri-containerd-b9244b0aa9c6e0fefd11426ac81c0af522d95e8e5175856681c04bef1f91419f.scope: Deactivated successfully.
Jan 29 16:31:48.067241 containerd[1520]: time="2025-01-29T16:31:48.067131951Z" level=info msg="shim disconnected" id=b9244b0aa9c6e0fefd11426ac81c0af522d95e8e5175856681c04bef1f91419f namespace=k8s.io
Jan 29 16:31:48.067904 containerd[1520]: time="2025-01-29T16:31:48.067610908Z" level=warning msg="cleaning up after shim disconnected" id=b9244b0aa9c6e0fefd11426ac81c0af522d95e8e5175856681c04bef1f91419f namespace=k8s.io
Jan 29 16:31:48.067904 containerd[1520]: time="2025-01-29T16:31:48.067637277Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:31:48.091399 containerd[1520]: time="2025-01-29T16:31:48.091313726Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:31:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 16:31:48.693276 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9244b0aa9c6e0fefd11426ac81c0af522d95e8e5175856681c04bef1f91419f-rootfs.mount: Deactivated successfully.
Jan 29 16:31:48.823784 containerd[1520]: time="2025-01-29T16:31:48.823698577Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Jan 29 16:31:48.848635 kubelet[2853]: I0129 16:31:48.848542 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n7c5n" podStartSLOduration=6.84851311 podStartE2EDuration="6.84851311s" podCreationTimestamp="2025-01-29 16:31:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:31:43.824702988 +0000 UTC m=+14.198644508" watchObservedRunningTime="2025-01-29 16:31:48.84851311 +0000 UTC m=+19.222454641"
Jan 29 16:31:51.419819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3347293462.mount: Deactivated successfully.
Jan 29 16:31:53.136263 containerd[1520]: time="2025-01-29T16:31:53.136209251Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:31:53.137353 containerd[1520]: time="2025-01-29T16:31:53.137315193Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358"
Jan 29 16:31:53.138639 containerd[1520]: time="2025-01-29T16:31:53.138591607Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:31:53.141304 containerd[1520]: time="2025-01-29T16:31:53.141243814Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:31:53.142764 containerd[1520]: time="2025-01-29T16:31:53.142196697Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 4.318431935s"
Jan 29 16:31:53.142764 containerd[1520]: time="2025-01-29T16:31:53.142223196Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\""
Jan 29 16:31:53.144557 containerd[1520]: time="2025-01-29T16:31:53.144532194Z" level=info msg="CreateContainer within sandbox \"d7b75cb2e34f03bd08a37517ff0c38ca699f24b7247c208f4c1ebfc2167f36be\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 29 16:31:53.160779 containerd[1520]: time="2025-01-29T16:31:53.160706569Z" level=info msg="CreateContainer within sandbox \"d7b75cb2e34f03bd08a37517ff0c38ca699f24b7247c208f4c1ebfc2167f36be\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a9b8953116f4314d40b91c5332f2f653c775a6a4dd1481ceaef82a0ff3b6fb93\""
Jan 29 16:31:53.162776 containerd[1520]: time="2025-01-29T16:31:53.161585121Z" level=info msg="StartContainer for \"a9b8953116f4314d40b91c5332f2f653c775a6a4dd1481ceaef82a0ff3b6fb93\""
Jan 29 16:31:53.191658 systemd[1]: run-containerd-runc-k8s.io-a9b8953116f4314d40b91c5332f2f653c775a6a4dd1481ceaef82a0ff3b6fb93-runc.vXFKwZ.mount: Deactivated successfully.
Jan 29 16:31:53.202266 systemd[1]: Started cri-containerd-a9b8953116f4314d40b91c5332f2f653c775a6a4dd1481ceaef82a0ff3b6fb93.scope - libcontainer container a9b8953116f4314d40b91c5332f2f653c775a6a4dd1481ceaef82a0ff3b6fb93.
Jan 29 16:31:53.236129 systemd[1]: cri-containerd-a9b8953116f4314d40b91c5332f2f653c775a6a4dd1481ceaef82a0ff3b6fb93.scope: Deactivated successfully.
Jan 29 16:31:53.238825 containerd[1520]: time="2025-01-29T16:31:53.238771204Z" level=info msg="StartContainer for \"a9b8953116f4314d40b91c5332f2f653c775a6a4dd1481ceaef82a0ff3b6fb93\" returns successfully"
Jan 29 16:31:53.333626 containerd[1520]: time="2025-01-29T16:31:53.333559473Z" level=info msg="shim disconnected" id=a9b8953116f4314d40b91c5332f2f653c775a6a4dd1481ceaef82a0ff3b6fb93 namespace=k8s.io
Jan 29 16:31:53.333626 containerd[1520]: time="2025-01-29T16:31:53.333615900Z" level=warning msg="cleaning up after shim disconnected" id=a9b8953116f4314d40b91c5332f2f653c775a6a4dd1481ceaef82a0ff3b6fb93 namespace=k8s.io
Jan 29 16:31:53.333626 containerd[1520]: time="2025-01-29T16:31:53.333624536Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:31:53.336698 kubelet[2853]: I0129 16:31:53.336256 2853 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 29 16:31:53.379006 kubelet[2853]: I0129 16:31:53.378929 2853 topology_manager.go:215] "Topology Admit Handler" podUID="e9fcf7af-f1f8-4df5-b3b9-85500fde7736" podNamespace="kube-system" podName="coredns-7db6d8ff4d-sskrg"
Jan 29 16:31:53.379223 kubelet[2853]: I0129 16:31:53.379120 2853 topology_manager.go:215] "Topology Admit Handler" podUID="55717c82-7459-4b4a-8843-32f760fdb784" podNamespace="kube-system" podName="coredns-7db6d8ff4d-tc589"
Jan 29 16:31:53.407737 systemd[1]: Created slice kubepods-burstable-pod55717c82_7459_4b4a_8843_32f760fdb784.slice - libcontainer container kubepods-burstable-pod55717c82_7459_4b4a_8843_32f760fdb784.slice.
Jan 29 16:31:53.412525 kubelet[2853]: I0129 16:31:53.412472 2853 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9fcf7af-f1f8-4df5-b3b9-85500fde7736-config-volume\") pod \"coredns-7db6d8ff4d-sskrg\" (UID: \"e9fcf7af-f1f8-4df5-b3b9-85500fde7736\") " pod="kube-system/coredns-7db6d8ff4d-sskrg"
Jan 29 16:31:53.412525 kubelet[2853]: I0129 16:31:53.412517 2853 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hblsd\" (UniqueName: \"kubernetes.io/projected/e9fcf7af-f1f8-4df5-b3b9-85500fde7736-kube-api-access-hblsd\") pod \"coredns-7db6d8ff4d-sskrg\" (UID: \"e9fcf7af-f1f8-4df5-b3b9-85500fde7736\") " pod="kube-system/coredns-7db6d8ff4d-sskrg"
Jan 29 16:31:53.412744 kubelet[2853]: I0129 16:31:53.412546 2853 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55717c82-7459-4b4a-8843-32f760fdb784-config-volume\") pod \"coredns-7db6d8ff4d-tc589\" (UID: \"55717c82-7459-4b4a-8843-32f760fdb784\") " pod="kube-system/coredns-7db6d8ff4d-tc589"
Jan 29 16:31:53.412744 kubelet[2853]: I0129 16:31:53.412565 2853 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdj72\" (UniqueName: \"kubernetes.io/projected/55717c82-7459-4b4a-8843-32f760fdb784-kube-api-access-tdj72\") pod \"coredns-7db6d8ff4d-tc589\" (UID: \"55717c82-7459-4b4a-8843-32f760fdb784\") " pod="kube-system/coredns-7db6d8ff4d-tc589"
Jan 29 16:31:53.420678 systemd[1]: Created slice kubepods-burstable-pode9fcf7af_f1f8_4df5_b3b9_85500fde7736.slice - libcontainer container kubepods-burstable-pode9fcf7af_f1f8_4df5_b3b9_85500fde7736.slice.
Jan 29 16:31:53.718746 containerd[1520]: time="2025-01-29T16:31:53.718544800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tc589,Uid:55717c82-7459-4b4a-8843-32f760fdb784,Namespace:kube-system,Attempt:0,}"
Jan 29 16:31:53.725278 containerd[1520]: time="2025-01-29T16:31:53.725058381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sskrg,Uid:e9fcf7af-f1f8-4df5-b3b9-85500fde7736,Namespace:kube-system,Attempt:0,}"
Jan 29 16:31:53.777876 containerd[1520]: time="2025-01-29T16:31:53.777772392Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tc589,Uid:55717c82-7459-4b4a-8843-32f760fdb784,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f70ab8e3f2df54582f8179b6dac053c24db0e2970ceb29d8df20bec36a7961af\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 29 16:31:53.779342 kubelet[2853]: E0129 16:31:53.778932 2853 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f70ab8e3f2df54582f8179b6dac053c24db0e2970ceb29d8df20bec36a7961af\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 29 16:31:53.779342 kubelet[2853]: E0129 16:31:53.779165 2853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f70ab8e3f2df54582f8179b6dac053c24db0e2970ceb29d8df20bec36a7961af\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-tc589"
Jan 29 16:31:53.779342 kubelet[2853]: E0129 16:31:53.779202 2853 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f70ab8e3f2df54582f8179b6dac053c24db0e2970ceb29d8df20bec36a7961af\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-tc589"
Jan 29 16:31:53.779342 kubelet[2853]: E0129 16:31:53.779280 2853 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-tc589_kube-system(55717c82-7459-4b4a-8843-32f760fdb784)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-tc589_kube-system(55717c82-7459-4b4a-8843-32f760fdb784)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f70ab8e3f2df54582f8179b6dac053c24db0e2970ceb29d8df20bec36a7961af\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-tc589" podUID="55717c82-7459-4b4a-8843-32f760fdb784"
Jan 29 16:31:53.785970 containerd[1520]: time="2025-01-29T16:31:53.785895247Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sskrg,Uid:e9fcf7af-f1f8-4df5-b3b9-85500fde7736,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"08c4799f959e76974aa287d89db8a3ddfa2a81eb2626b5ce845305b6d89f4ab2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 29 16:31:53.786230 kubelet[2853]: E0129 16:31:53.786172 2853 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08c4799f959e76974aa287d89db8a3ddfa2a81eb2626b5ce845305b6d89f4ab2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 29 16:31:53.786230 kubelet[2853]: E0129 16:31:53.786218 2853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08c4799f959e76974aa287d89db8a3ddfa2a81eb2626b5ce845305b6d89f4ab2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-sskrg"
Jan 29 16:31:53.786448 kubelet[2853]: E0129 16:31:53.786239 2853 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08c4799f959e76974aa287d89db8a3ddfa2a81eb2626b5ce845305b6d89f4ab2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-sskrg"
Jan 29 16:31:53.786448 kubelet[2853]: E0129 16:31:53.786275 2853 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-sskrg_kube-system(e9fcf7af-f1f8-4df5-b3b9-85500fde7736)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-sskrg_kube-system(e9fcf7af-f1f8-4df5-b3b9-85500fde7736)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08c4799f959e76974aa287d89db8a3ddfa2a81eb2626b5ce845305b6d89f4ab2\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-sskrg" podUID="e9fcf7af-f1f8-4df5-b3b9-85500fde7736"
Jan 29 16:31:53.844205 containerd[1520]: time="2025-01-29T16:31:53.844064990Z" level=info msg="CreateContainer within sandbox \"d7b75cb2e34f03bd08a37517ff0c38ca699f24b7247c208f4c1ebfc2167f36be\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Jan 29 16:31:53.875579 containerd[1520]: time="2025-01-29T16:31:53.875412676Z" level=info msg="CreateContainer within sandbox \"d7b75cb2e34f03bd08a37517ff0c38ca699f24b7247c208f4c1ebfc2167f36be\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"563b9ad6ac612d67364176d5a7d8ee91de3de833169dfe9c2ad796babf60a76c\""
Jan 29 16:31:53.877828 containerd[1520]: time="2025-01-29T16:31:53.876053849Z" level=info msg="StartContainer for \"563b9ad6ac612d67364176d5a7d8ee91de3de833169dfe9c2ad796babf60a76c\""
Jan 29 16:31:53.931916 systemd[1]: Started cri-containerd-563b9ad6ac612d67364176d5a7d8ee91de3de833169dfe9c2ad796babf60a76c.scope - libcontainer container 563b9ad6ac612d67364176d5a7d8ee91de3de833169dfe9c2ad796babf60a76c.
Jan 29 16:31:53.968813 containerd[1520]: time="2025-01-29T16:31:53.968628470Z" level=info msg="StartContainer for \"563b9ad6ac612d67364176d5a7d8ee91de3de833169dfe9c2ad796babf60a76c\" returns successfully"
Jan 29 16:31:54.163526 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9b8953116f4314d40b91c5332f2f653c775a6a4dd1481ceaef82a0ff3b6fb93-rootfs.mount: Deactivated successfully.
Jan 29 16:31:55.046600 systemd-networkd[1420]: flannel.1: Link UP
Jan 29 16:31:55.046613 systemd-networkd[1420]: flannel.1: Gained carrier
Jan 29 16:31:56.177312 systemd-networkd[1420]: flannel.1: Gained IPv6LL
Jan 29 16:32:07.755595 containerd[1520]: time="2025-01-29T16:32:07.754914950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sskrg,Uid:e9fcf7af-f1f8-4df5-b3b9-85500fde7736,Namespace:kube-system,Attempt:0,}"
Jan 29 16:32:07.805647 systemd-networkd[1420]: cni0: Link UP
Jan 29 16:32:07.805666 systemd-networkd[1420]: cni0: Gained carrier
Jan 29 16:32:07.821483 systemd-networkd[1420]: cni0: Lost carrier
Jan 29 16:32:07.828840 systemd-networkd[1420]: vethd9a68830: Link UP
Jan 29 16:32:07.834474 kernel: cni0: port 1(vethd9a68830) entered blocking state
Jan 29 16:32:07.834627 kernel: cni0: port 1(vethd9a68830) entered disabled state
Jan 29 16:32:07.836245 kernel: vethd9a68830: entered allmulticast mode
Jan 29 16:32:07.838983 kernel: vethd9a68830: entered promiscuous mode
Jan 29 16:32:07.842217 kernel: cni0: port 1(vethd9a68830) entered blocking state
Jan 29 16:32:07.842333 kernel: cni0: port 1(vethd9a68830) entered forwarding state
Jan 29 16:32:07.842370 kernel: cni0: port 1(vethd9a68830) entered disabled state
Jan 29 16:32:07.858045 kernel: cni0: port 1(vethd9a68830) entered blocking state
Jan 29 16:32:07.858177 kernel: cni0: port 1(vethd9a68830) entered forwarding state
Jan 29 16:32:07.858875 systemd-networkd[1420]: vethd9a68830: Gained carrier
Jan 29 16:32:07.859191 systemd-networkd[1420]: cni0: Gained carrier
Jan 29 16:32:07.866229 containerd[1520]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001e938), "name":"cbr0", "type":"bridge"}
Jan 29 16:32:07.866229 containerd[1520]: delegateAdd: netconf sent to delegate plugin:
Jan 29 16:32:07.884597 containerd[1520]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-29T16:32:07.884304962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:32:07.884597 containerd[1520]: time="2025-01-29T16:32:07.884484120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:32:07.884597 containerd[1520]: time="2025-01-29T16:32:07.884498548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:32:07.885022 containerd[1520]: time="2025-01-29T16:32:07.884681634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:32:07.910235 systemd[1]: Started cri-containerd-54200ef067ee13e696d43187ea98ddf93efb97494f70aefc5ad526caed09c7a4.scope - libcontainer container 54200ef067ee13e696d43187ea98ddf93efb97494f70aefc5ad526caed09c7a4.
Jan 29 16:32:07.950508 containerd[1520]: time="2025-01-29T16:32:07.950414805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sskrg,Uid:e9fcf7af-f1f8-4df5-b3b9-85500fde7736,Namespace:kube-system,Attempt:0,} returns sandbox id \"54200ef067ee13e696d43187ea98ddf93efb97494f70aefc5ad526caed09c7a4\""
Jan 29 16:32:07.958309 containerd[1520]: time="2025-01-29T16:32:07.957663569Z" level=info msg="CreateContainer within sandbox \"54200ef067ee13e696d43187ea98ddf93efb97494f70aefc5ad526caed09c7a4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 16:32:07.976214 containerd[1520]: time="2025-01-29T16:32:07.976181687Z" level=info msg="CreateContainer within sandbox \"54200ef067ee13e696d43187ea98ddf93efb97494f70aefc5ad526caed09c7a4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d4f48d12ea61dc1203fce44671368cde4f7abbc7fc3d1b099804a7ee7a8da6b9\""
Jan 29 16:32:07.977089 containerd[1520]: time="2025-01-29T16:32:07.976803882Z" level=info msg="StartContainer for \"d4f48d12ea61dc1203fce44671368cde4f7abbc7fc3d1b099804a7ee7a8da6b9\""
Jan 29 16:32:08.005354 systemd[1]: Started cri-containerd-d4f48d12ea61dc1203fce44671368cde4f7abbc7fc3d1b099804a7ee7a8da6b9.scope - libcontainer container d4f48d12ea61dc1203fce44671368cde4f7abbc7fc3d1b099804a7ee7a8da6b9.
Jan 29 16:32:08.035493 containerd[1520]: time="2025-01-29T16:32:08.035396657Z" level=info msg="StartContainer for \"d4f48d12ea61dc1203fce44671368cde4f7abbc7fc3d1b099804a7ee7a8da6b9\" returns successfully"
Jan 29 16:32:08.773522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount87152246.mount: Deactivated successfully.
Jan 29 16:32:08.830125 containerd[1520]: time="2025-01-29T16:32:08.830045741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tc589,Uid:55717c82-7459-4b4a-8843-32f760fdb784,Namespace:kube-system,Attempt:0,}"
Jan 29 16:32:08.908540 systemd-networkd[1420]: veth6204d00d: Link UP
Jan 29 16:32:08.914602 kernel: cni0: port 2(veth6204d00d) entered blocking state
Jan 29 16:32:08.914706 kernel: cni0: port 2(veth6204d00d) entered disabled state
Jan 29 16:32:08.919388 kernel: veth6204d00d: entered allmulticast mode
Jan 29 16:32:08.919479 kernel: veth6204d00d: entered promiscuous mode
Jan 29 16:32:08.937961 kernel: cni0: port 2(veth6204d00d) entered blocking state
Jan 29 16:32:08.938224 kernel: cni0: port 2(veth6204d00d) entered forwarding state
Jan 29 16:32:08.938862 systemd-networkd[1420]: veth6204d00d: Gained carrier
Jan 29 16:32:08.941923 containerd[1520]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"}
Jan 29 16:32:08.941923 containerd[1520]: delegateAdd: netconf sent to delegate plugin:
Jan 29 16:32:08.953028 kubelet[2853]: I0129 16:32:08.950835 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-465sp" podStartSLOduration=19.082867648 podStartE2EDuration="26.950818872s" podCreationTimestamp="2025-01-29 16:31:42 +0000 UTC" firstStartedPulling="2025-01-29 16:31:45.274971687 +0000 UTC m=+15.648913187" lastFinishedPulling="2025-01-29 16:31:53.142922911 +0000 UTC m=+23.516864411" observedRunningTime="2025-01-29 16:31:54.859261966 +0000 UTC m=+25.233203507" watchObservedRunningTime="2025-01-29 16:32:08.950818872 +0000 UTC m=+39.324760393"
Jan 29 16:32:08.969092 kubelet[2853]: I0129 16:32:08.968950 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-sskrg" podStartSLOduration=25.9689308 podStartE2EDuration="25.9689308s" podCreationTimestamp="2025-01-29 16:31:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:32:08.953711111 +0000 UTC m=+39.327652613" watchObservedRunningTime="2025-01-29 16:32:08.9689308 +0000 UTC m=+39.342872302"
Jan 29 16:32:08.989549 containerd[1520]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-29T16:32:08.989332280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:32:08.989933 containerd[1520]: time="2025-01-29T16:32:08.989900873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:32:08.990202 containerd[1520]: time="2025-01-29T16:32:08.990122531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:32:08.992181 containerd[1520]: time="2025-01-29T16:32:08.990901872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:32:09.030777 systemd[1]: Started cri-containerd-14b191c3fed01db789fa3db3ad8eb006e703b6ec4139393e7953c3966d444651.scope - libcontainer container 14b191c3fed01db789fa3db3ad8eb006e703b6ec4139393e7953c3966d444651.
Jan 29 16:32:09.078669 containerd[1520]: time="2025-01-29T16:32:09.078614883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tc589,Uid:55717c82-7459-4b4a-8843-32f760fdb784,Namespace:kube-system,Attempt:0,} returns sandbox id \"14b191c3fed01db789fa3db3ad8eb006e703b6ec4139393e7953c3966d444651\"" Jan 29 16:32:09.086412 containerd[1520]: time="2025-01-29T16:32:09.086092576Z" level=info msg="CreateContainer within sandbox \"14b191c3fed01db789fa3db3ad8eb006e703b6ec4139393e7953c3966d444651\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:32:09.109958 containerd[1520]: time="2025-01-29T16:32:09.109890222Z" level=info msg="CreateContainer within sandbox \"14b191c3fed01db789fa3db3ad8eb006e703b6ec4139393e7953c3966d444651\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4c31868cb3bf8b4ec5606edf9fdf0a45c3a71d900f0b6197be0cc5a06f8b4618\"" Jan 29 16:32:09.110643 containerd[1520]: time="2025-01-29T16:32:09.110607766Z" level=info msg="StartContainer for \"4c31868cb3bf8b4ec5606edf9fdf0a45c3a71d900f0b6197be0cc5a06f8b4618\"" Jan 29 16:32:09.147354 systemd[1]: Started cri-containerd-4c31868cb3bf8b4ec5606edf9fdf0a45c3a71d900f0b6197be0cc5a06f8b4618.scope - libcontainer container 4c31868cb3bf8b4ec5606edf9fdf0a45c3a71d900f0b6197be0cc5a06f8b4618. Jan 29 16:32:09.170707 systemd-networkd[1420]: vethd9a68830: Gained IPv6LL Jan 29 16:32:09.184551 containerd[1520]: time="2025-01-29T16:32:09.184485663Z" level=info msg="StartContainer for \"4c31868cb3bf8b4ec5606edf9fdf0a45c3a71d900f0b6197be0cc5a06f8b4618\" returns successfully" Jan 29 16:32:09.681381 systemd-networkd[1420]: cni0: Gained IPv6LL Jan 29 16:32:09.775749 systemd[1]: run-containerd-runc-k8s.io-14b191c3fed01db789fa3db3ad8eb006e703b6ec4139393e7953c3966d444651-runc.defNBw.mount: Deactivated successfully. 
Jan 29 16:32:09.972378 kubelet[2853]: I0129 16:32:09.972221 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-tc589" podStartSLOduration=26.972195449 podStartE2EDuration="26.972195449s" podCreationTimestamp="2025-01-29 16:31:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:32:09.939947605 +0000 UTC m=+40.313889136" watchObservedRunningTime="2025-01-29 16:32:09.972195449 +0000 UTC m=+40.346136990" Jan 29 16:32:10.257420 systemd-networkd[1420]: veth6204d00d: Gained IPv6LL Jan 29 16:34:33.856338 update_engine[1506]: I20250129 16:34:33.856217 1506 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 29 16:34:33.856338 update_engine[1506]: I20250129 16:34:33.856300 1506 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 29 16:34:33.860424 update_engine[1506]: I20250129 16:34:33.856716 1506 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 29 16:34:33.860424 update_engine[1506]: I20250129 16:34:33.858618 1506 omaha_request_params.cc:62] Current group set to alpha Jan 29 16:34:33.860424 update_engine[1506]: I20250129 16:34:33.860036 1506 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 29 16:34:33.860424 update_engine[1506]: I20250129 16:34:33.860066 1506 update_attempter.cc:643] Scheduling an action processor start. 
Jan 29 16:34:33.860424 update_engine[1506]: I20250129 16:34:33.860162 1506 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 29 16:34:33.860424 update_engine[1506]: I20250129 16:34:33.860232 1506 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 29 16:34:33.860424 update_engine[1506]: I20250129 16:34:33.860385 1506 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 29 16:34:33.860424 update_engine[1506]: I20250129 16:34:33.860407 1506 omaha_request_action.cc:272] Request: Jan 29 16:34:33.860424 update_engine[1506]: Jan 29 16:34:33.860424 update_engine[1506]: Jan 29 16:34:33.860424 update_engine[1506]: Jan 29 16:34:33.860424 update_engine[1506]: Jan 29 16:34:33.860424 update_engine[1506]: Jan 29 16:34:33.860424 update_engine[1506]: Jan 29 16:34:33.860424 update_engine[1506]: Jan 29 16:34:33.860424 update_engine[1506]: Jan 29 16:34:33.860424 update_engine[1506]: I20250129 16:34:33.860422 1506 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 16:34:33.862437 locksmithd[1539]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 29 16:34:33.868179 update_engine[1506]: I20250129 16:34:33.868026 1506 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 16:34:33.868790 update_engine[1506]: I20250129 16:34:33.868716 1506 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 29 16:34:33.870181 update_engine[1506]: E20250129 16:34:33.870041 1506 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 16:34:33.870305 update_engine[1506]: I20250129 16:34:33.870230 1506 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 29 16:34:43.769835 update_engine[1506]: I20250129 16:34:43.769688 1506 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 16:34:43.770630 update_engine[1506]: I20250129 16:34:43.770238 1506 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 16:34:43.770813 update_engine[1506]: I20250129 16:34:43.770717 1506 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 16:34:43.771149 update_engine[1506]: E20250129 16:34:43.771036 1506 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 16:34:43.771222 update_engine[1506]: I20250129 16:34:43.771157 1506 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 29 16:34:53.770021 update_engine[1506]: I20250129 16:34:53.769884 1506 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 16:34:53.770764 update_engine[1506]: I20250129 16:34:53.770447 1506 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 16:34:53.771023 update_engine[1506]: I20250129 16:34:53.770910 1506 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 29 16:34:53.771429 update_engine[1506]: E20250129 16:34:53.771354 1506 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 16:34:53.771513 update_engine[1506]: I20250129 16:34:53.771457 1506 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 29 16:35:03.769830 update_engine[1506]: I20250129 16:35:03.769608 1506 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 16:35:03.770662 update_engine[1506]: I20250129 16:35:03.770030 1506 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 16:35:03.770662 update_engine[1506]: I20250129 16:35:03.770485 1506 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 16:35:03.771464 update_engine[1506]: E20250129 16:35:03.771399 1506 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 16:35:03.771739 update_engine[1506]: I20250129 16:35:03.771483 1506 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 29 16:35:03.771739 update_engine[1506]: I20250129 16:35:03.771502 1506 omaha_request_action.cc:617] Omaha request response: Jan 29 16:35:03.771739 update_engine[1506]: E20250129 16:35:03.771654 1506 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 29 16:35:03.771739 update_engine[1506]: I20250129 16:35:03.771690 1506 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 29 16:35:03.771739 update_engine[1506]: I20250129 16:35:03.771704 1506 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 16:35:03.771739 update_engine[1506]: I20250129 16:35:03.771718 1506 update_attempter.cc:306] Processing Done. Jan 29 16:35:03.771739 update_engine[1506]: E20250129 16:35:03.771742 1506 update_attempter.cc:619] Update failed. 
Jan 29 16:35:03.772568 update_engine[1506]: I20250129 16:35:03.771759 1506 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 29 16:35:03.772568 update_engine[1506]: I20250129 16:35:03.771772 1506 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 29 16:35:03.772568 update_engine[1506]: I20250129 16:35:03.771787 1506 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 29 16:35:03.772568 update_engine[1506]: I20250129 16:35:03.771903 1506 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 29 16:35:03.772568 update_engine[1506]: I20250129 16:35:03.771937 1506 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 29 16:35:03.772568 update_engine[1506]: I20250129 16:35:03.771951 1506 omaha_request_action.cc:272] Request: Jan 29 16:35:03.772568 update_engine[1506]: Jan 29 16:35:03.772568 update_engine[1506]: Jan 29 16:35:03.772568 update_engine[1506]: Jan 29 16:35:03.772568 update_engine[1506]: Jan 29 16:35:03.772568 update_engine[1506]: Jan 29 16:35:03.772568 update_engine[1506]: Jan 29 16:35:03.772568 update_engine[1506]: I20250129 16:35:03.771966 1506 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 16:35:03.772568 update_engine[1506]: I20250129 16:35:03.772401 1506 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 16:35:03.773395 update_engine[1506]: I20250129 16:35:03.772759 1506 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 29 16:35:03.773395 update_engine[1506]: E20250129 16:35:03.773339 1506 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 16:35:03.773496 locksmithd[1539]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 29 16:35:03.774270 update_engine[1506]: I20250129 16:35:03.773403 1506 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 29 16:35:03.774270 update_engine[1506]: I20250129 16:35:03.773419 1506 omaha_request_action.cc:617] Omaha request response: Jan 29 16:35:03.774270 update_engine[1506]: I20250129 16:35:03.773434 1506 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 16:35:03.774270 update_engine[1506]: I20250129 16:35:03.773448 1506 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 16:35:03.774270 update_engine[1506]: I20250129 16:35:03.773463 1506 update_attempter.cc:306] Processing Done. Jan 29 16:35:03.774270 update_engine[1506]: I20250129 16:35:03.773478 1506 update_attempter.cc:310] Error event sent. Jan 29 16:35:03.774270 update_engine[1506]: I20250129 16:35:03.773496 1506 update_check_scheduler.cc:74] Next update check in 49m46s Jan 29 16:35:03.774648 locksmithd[1539]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 29 16:36:12.232698 systemd[1]: Started sshd@5-159.69.241.25:22-147.75.109.163:49310.service - OpenSSH per-connection server daemon (147.75.109.163:49310). Jan 29 16:36:13.229226 sshd[4785]: Accepted publickey for core from 147.75.109.163 port 49310 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME Jan 29 16:36:13.231851 sshd-session[4785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:36:13.239680 systemd-logind[1501]: New session 6 of user core. 
Jan 29 16:36:13.249321 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 16:36:14.034112 sshd[4802]: Connection closed by 147.75.109.163 port 49310 Jan 29 16:36:14.035207 sshd-session[4785]: pam_unix(sshd:session): session closed for user core Jan 29 16:36:14.041431 systemd[1]: sshd@5-159.69.241.25:22-147.75.109.163:49310.service: Deactivated successfully. Jan 29 16:36:14.046463 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 16:36:14.049908 systemd-logind[1501]: Session 6 logged out. Waiting for processes to exit. Jan 29 16:36:14.052940 systemd-logind[1501]: Removed session 6. Jan 29 16:36:19.223840 systemd[1]: Started sshd@6-159.69.241.25:22-147.75.109.163:56222.service - OpenSSH per-connection server daemon (147.75.109.163:56222). Jan 29 16:36:20.242966 sshd[4839]: Accepted publickey for core from 147.75.109.163 port 56222 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME Jan 29 16:36:20.245964 sshd-session[4839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:36:20.254753 systemd-logind[1501]: New session 7 of user core. Jan 29 16:36:20.260468 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 16:36:21.057715 sshd[4841]: Connection closed by 147.75.109.163 port 56222 Jan 29 16:36:21.059051 sshd-session[4839]: pam_unix(sshd:session): session closed for user core Jan 29 16:36:21.067592 systemd-logind[1501]: Session 7 logged out. Waiting for processes to exit. Jan 29 16:36:21.068896 systemd[1]: sshd@6-159.69.241.25:22-147.75.109.163:56222.service: Deactivated successfully. Jan 29 16:36:21.074184 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 16:36:21.076586 systemd-logind[1501]: Removed session 7. Jan 29 16:36:26.238598 systemd[1]: Started sshd@7-159.69.241.25:22-147.75.109.163:56238.service - OpenSSH per-connection server daemon (147.75.109.163:56238). 
Jan 29 16:36:27.248833 sshd[4875]: Accepted publickey for core from 147.75.109.163 port 56238 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME Jan 29 16:36:27.252013 sshd-session[4875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:36:27.261573 systemd-logind[1501]: New session 8 of user core. Jan 29 16:36:27.267318 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 16:36:28.069120 sshd[4883]: Connection closed by 147.75.109.163 port 56238 Jan 29 16:36:28.070185 sshd-session[4875]: pam_unix(sshd:session): session closed for user core Jan 29 16:36:28.078588 systemd[1]: sshd@7-159.69.241.25:22-147.75.109.163:56238.service: Deactivated successfully. Jan 29 16:36:28.084586 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 16:36:28.086949 systemd-logind[1501]: Session 8 logged out. Waiting for processes to exit. Jan 29 16:36:28.089741 systemd-logind[1501]: Removed session 8. Jan 29 16:36:28.253625 systemd[1]: Started sshd@8-159.69.241.25:22-147.75.109.163:56382.service - OpenSSH per-connection server daemon (147.75.109.163:56382). Jan 29 16:36:29.261382 sshd[4911]: Accepted publickey for core from 147.75.109.163 port 56382 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME Jan 29 16:36:29.263987 sshd-session[4911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:36:29.273231 systemd-logind[1501]: New session 9 of user core. Jan 29 16:36:29.282360 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 16:36:30.075921 sshd[4913]: Connection closed by 147.75.109.163 port 56382 Jan 29 16:36:30.077304 sshd-session[4911]: pam_unix(sshd:session): session closed for user core Jan 29 16:36:30.086747 systemd[1]: sshd@8-159.69.241.25:22-147.75.109.163:56382.service: Deactivated successfully. Jan 29 16:36:30.091490 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 16:36:30.093896 systemd-logind[1501]: Session 9 logged out. 
Waiting for processes to exit. Jan 29 16:36:30.096027 systemd-logind[1501]: Removed session 9. Jan 29 16:36:30.255610 systemd[1]: Started sshd@9-159.69.241.25:22-147.75.109.163:56390.service - OpenSSH per-connection server daemon (147.75.109.163:56390). Jan 29 16:36:31.273719 sshd[4925]: Accepted publickey for core from 147.75.109.163 port 56390 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME Jan 29 16:36:31.276785 sshd-session[4925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:36:31.286205 systemd-logind[1501]: New session 10 of user core. Jan 29 16:36:31.295360 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 16:36:32.070176 sshd[4928]: Connection closed by 147.75.109.163 port 56390 Jan 29 16:36:32.071400 sshd-session[4925]: pam_unix(sshd:session): session closed for user core Jan 29 16:36:32.076220 systemd[1]: sshd@9-159.69.241.25:22-147.75.109.163:56390.service: Deactivated successfully. Jan 29 16:36:32.079382 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 16:36:32.081619 systemd-logind[1501]: Session 10 logged out. Waiting for processes to exit. Jan 29 16:36:32.084221 systemd-logind[1501]: Removed session 10. Jan 29 16:36:37.253870 systemd[1]: Started sshd@10-159.69.241.25:22-147.75.109.163:56396.service - OpenSSH per-connection server daemon (147.75.109.163:56396). Jan 29 16:36:38.269808 sshd[4966]: Accepted publickey for core from 147.75.109.163 port 56396 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME Jan 29 16:36:38.272541 sshd-session[4966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:36:38.280779 systemd-logind[1501]: New session 11 of user core. Jan 29 16:36:38.290390 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 29 16:36:39.032126 sshd[4984]: Connection closed by 147.75.109.163 port 56396 Jan 29 16:36:39.033378 sshd-session[4966]: pam_unix(sshd:session): session closed for user core Jan 29 16:36:39.041150 systemd[1]: sshd@10-159.69.241.25:22-147.75.109.163:56396.service: Deactivated successfully. Jan 29 16:36:39.045608 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 16:36:39.047435 systemd-logind[1501]: Session 11 logged out. Waiting for processes to exit. Jan 29 16:36:39.049550 systemd-logind[1501]: Removed session 11. Jan 29 16:36:39.221502 systemd[1]: Started sshd@11-159.69.241.25:22-147.75.109.163:48602.service - OpenSSH per-connection server daemon (147.75.109.163:48602). Jan 29 16:36:40.246604 sshd[4996]: Accepted publickey for core from 147.75.109.163 port 48602 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME Jan 29 16:36:40.249752 sshd-session[4996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:36:40.257549 systemd-logind[1501]: New session 12 of user core. Jan 29 16:36:40.266326 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 16:36:41.274185 sshd[4998]: Connection closed by 147.75.109.163 port 48602 Jan 29 16:36:41.276185 sshd-session[4996]: pam_unix(sshd:session): session closed for user core Jan 29 16:36:41.283582 systemd[1]: sshd@11-159.69.241.25:22-147.75.109.163:48602.service: Deactivated successfully. Jan 29 16:36:41.288995 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 16:36:41.294187 systemd-logind[1501]: Session 12 logged out. Waiting for processes to exit. Jan 29 16:36:41.297211 systemd-logind[1501]: Removed session 12. Jan 29 16:36:41.454444 systemd[1]: Started sshd@12-159.69.241.25:22-147.75.109.163:48608.service - OpenSSH per-connection server daemon (147.75.109.163:48608). 
Jan 29 16:36:42.481393 sshd[5014]: Accepted publickey for core from 147.75.109.163 port 48608 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME Jan 29 16:36:42.484940 sshd-session[5014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:36:42.493402 systemd-logind[1501]: New session 13 of user core. Jan 29 16:36:42.502396 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 16:36:44.942686 sshd[5016]: Connection closed by 147.75.109.163 port 48608 Jan 29 16:36:44.943226 sshd-session[5014]: pam_unix(sshd:session): session closed for user core Jan 29 16:36:44.950857 systemd-logind[1501]: Session 13 logged out. Waiting for processes to exit. Jan 29 16:36:44.951758 systemd[1]: sshd@12-159.69.241.25:22-147.75.109.163:48608.service: Deactivated successfully. Jan 29 16:36:44.956160 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 16:36:44.958038 systemd-logind[1501]: Removed session 13. Jan 29 16:36:45.123546 systemd[1]: Started sshd@13-159.69.241.25:22-147.75.109.163:48612.service - OpenSSH per-connection server daemon (147.75.109.163:48612). Jan 29 16:36:46.149189 sshd[5050]: Accepted publickey for core from 147.75.109.163 port 48612 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME Jan 29 16:36:46.151869 sshd-session[5050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:36:46.161855 systemd-logind[1501]: New session 14 of user core. Jan 29 16:36:46.167514 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 16:36:47.108798 sshd[5052]: Connection closed by 147.75.109.163 port 48612 Jan 29 16:36:47.109555 sshd-session[5050]: pam_unix(sshd:session): session closed for user core Jan 29 16:36:47.113671 systemd[1]: sshd@13-159.69.241.25:22-147.75.109.163:48612.service: Deactivated successfully. Jan 29 16:36:47.116156 systemd[1]: session-14.scope: Deactivated successfully. 
Jan 29 16:36:47.119506 systemd-logind[1501]: Session 14 logged out. Waiting for processes to exit. Jan 29 16:36:47.121331 systemd-logind[1501]: Removed session 14. Jan 29 16:36:47.290628 systemd[1]: Started sshd@14-159.69.241.25:22-147.75.109.163:48616.service - OpenSSH per-connection server daemon (147.75.109.163:48616). Jan 29 16:36:48.306531 sshd[5068]: Accepted publickey for core from 147.75.109.163 port 48616 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME Jan 29 16:36:48.310051 sshd-session[5068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:36:48.323573 systemd-logind[1501]: New session 15 of user core. Jan 29 16:36:48.329474 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 16:36:49.092930 sshd[5085]: Connection closed by 147.75.109.163 port 48616 Jan 29 16:36:49.094040 sshd-session[5068]: pam_unix(sshd:session): session closed for user core Jan 29 16:36:49.101955 systemd[1]: sshd@14-159.69.241.25:22-147.75.109.163:48616.service: Deactivated successfully. Jan 29 16:36:49.109845 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 16:36:49.112954 systemd-logind[1501]: Session 15 logged out. Waiting for processes to exit. Jan 29 16:36:49.114675 systemd-logind[1501]: Removed session 15. Jan 29 16:36:54.275535 systemd[1]: Started sshd@15-159.69.241.25:22-147.75.109.163:34044.service - OpenSSH per-connection server daemon (147.75.109.163:34044). Jan 29 16:36:55.292215 sshd[5121]: Accepted publickey for core from 147.75.109.163 port 34044 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME Jan 29 16:36:55.294718 sshd-session[5121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:36:55.302190 systemd-logind[1501]: New session 16 of user core. Jan 29 16:36:55.312467 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 29 16:36:56.096261 sshd[5123]: Connection closed by 147.75.109.163 port 34044 Jan 29 16:36:56.097415 sshd-session[5121]: pam_unix(sshd:session): session closed for user core Jan 29 16:36:56.104356 systemd-logind[1501]: Session 16 logged out. Waiting for processes to exit. Jan 29 16:36:56.106700 systemd[1]: sshd@15-159.69.241.25:22-147.75.109.163:34044.service: Deactivated successfully. Jan 29 16:36:56.113329 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 16:36:56.115841 systemd-logind[1501]: Removed session 16. Jan 29 16:37:01.272755 systemd[1]: Started sshd@16-159.69.241.25:22-147.75.109.163:56562.service - OpenSSH per-connection server daemon (147.75.109.163:56562). Jan 29 16:37:02.261519 sshd[5156]: Accepted publickey for core from 147.75.109.163 port 56562 ssh2: RSA SHA256:3p2XIZ6XbehxXZ7YoSsCUQZKn2FU+S4NlKIzAU0p2ME Jan 29 16:37:02.264516 sshd-session[5156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:37:02.275426 systemd-logind[1501]: New session 17 of user core. Jan 29 16:37:02.282513 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 16:37:03.028847 sshd[5164]: Connection closed by 147.75.109.163 port 56562 Jan 29 16:37:03.029317 sshd-session[5156]: pam_unix(sshd:session): session closed for user core Jan 29 16:37:03.034583 systemd[1]: sshd@16-159.69.241.25:22-147.75.109.163:56562.service: Deactivated successfully. Jan 29 16:37:03.040677 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 16:37:03.045557 systemd-logind[1501]: Session 17 logged out. Waiting for processes to exit. Jan 29 16:37:03.046815 systemd-logind[1501]: Removed session 17. 
Jan 29 16:37:18.290851 kubelet[2853]: E0129 16:37:18.290650 2853 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:58170->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4230-0-0-d-42684b3569.181f3729647e632a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4230-0-0-d-42684b3569,UID:0dee9c6e4b445692c966bbc33ee1aac7,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4230-0-0-d-42684b3569,},FirstTimestamp:2025-01-29 16:37:12.107492138 +0000 UTC m=+342.481433679,LastTimestamp:2025-01-29 16:37:12.107492138 +0000 UTC m=+342.481433679,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-0-d-42684b3569,}" Jan 29 16:37:19.633390 systemd[1]: cri-containerd-bf8813fe945b688eb9eb405eb34ad89c452143e3664c792b759cf1f34f8bb6f4.scope: Deactivated successfully. Jan 29 16:37:19.634975 systemd[1]: cri-containerd-bf8813fe945b688eb9eb405eb34ad89c452143e3664c792b759cf1f34f8bb6f4.scope: Consumed 7.649s CPU time, 65.8M memory peak, 11.7M read from disk. Jan 29 16:37:19.670032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf8813fe945b688eb9eb405eb34ad89c452143e3664c792b759cf1f34f8bb6f4-rootfs.mount: Deactivated successfully. 
Jan 29 16:37:19.680790 containerd[1520]: time="2025-01-29T16:37:19.680655128Z" level=info msg="shim disconnected" id=bf8813fe945b688eb9eb405eb34ad89c452143e3664c792b759cf1f34f8bb6f4 namespace=k8s.io Jan 29 16:37:19.680790 containerd[1520]: time="2025-01-29T16:37:19.680740409Z" level=warning msg="cleaning up after shim disconnected" id=bf8813fe945b688eb9eb405eb34ad89c452143e3664c792b759cf1f34f8bb6f4 namespace=k8s.io Jan 29 16:37:19.680790 containerd[1520]: time="2025-01-29T16:37:19.680752501Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:37:20.083651 kubelet[2853]: E0129 16:37:20.083468 2853 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:58322->10.0.0.2:2379: read: connection timed out" Jan 29 16:37:20.095206 systemd[1]: cri-containerd-61745e0ae4d99f4f15f9a4cde3928c632b143b14d44783c2f5839a90bcafebcb.scope: Deactivated successfully. Jan 29 16:37:20.096249 systemd[1]: cri-containerd-61745e0ae4d99f4f15f9a4cde3928c632b143b14d44783c2f5839a90bcafebcb.scope: Consumed 2.045s CPU time, 22M memory peak, 2M read from disk. Jan 29 16:37:20.146465 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61745e0ae4d99f4f15f9a4cde3928c632b143b14d44783c2f5839a90bcafebcb-rootfs.mount: Deactivated successfully. 
Jan 29 16:37:20.154305 containerd[1520]: time="2025-01-29T16:37:20.154175776Z" level=info msg="shim disconnected" id=61745e0ae4d99f4f15f9a4cde3928c632b143b14d44783c2f5839a90bcafebcb namespace=k8s.io Jan 29 16:37:20.154305 containerd[1520]: time="2025-01-29T16:37:20.154258432Z" level=warning msg="cleaning up after shim disconnected" id=61745e0ae4d99f4f15f9a4cde3928c632b143b14d44783c2f5839a90bcafebcb namespace=k8s.io Jan 29 16:37:20.154305 containerd[1520]: time="2025-01-29T16:37:20.154277959Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:37:20.640284 kubelet[2853]: I0129 16:37:20.640226 2853 scope.go:117] "RemoveContainer" containerID="bf8813fe945b688eb9eb405eb34ad89c452143e3664c792b759cf1f34f8bb6f4" Jan 29 16:37:20.643156 containerd[1520]: time="2025-01-29T16:37:20.642931720Z" level=info msg="CreateContainer within sandbox \"74abf674b9afb71eb7fdb0b34888d337430c78a1456d4716bb1d8d18062aff8e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 29 16:37:20.643543 kubelet[2853]: I0129 16:37:20.643380 2853 scope.go:117] "RemoveContainer" containerID="61745e0ae4d99f4f15f9a4cde3928c632b143b14d44783c2f5839a90bcafebcb" Jan 29 16:37:20.645410 containerd[1520]: time="2025-01-29T16:37:20.645358521Z" level=info msg="CreateContainer within sandbox \"2a767de7b697941826969da80866d0547335681f0f042c7697b044ea654e8f1c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 29 16:37:20.682256 containerd[1520]: time="2025-01-29T16:37:20.682116899Z" level=info msg="CreateContainer within sandbox \"2a767de7b697941826969da80866d0547335681f0f042c7697b044ea654e8f1c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"52f4784c5b17069f1f469021f277bd635040dc67ddaf38f01936c5b907e64656\"" Jan 29 16:37:20.683365 containerd[1520]: time="2025-01-29T16:37:20.683138818Z" level=info msg="StartContainer for \"52f4784c5b17069f1f469021f277bd635040dc67ddaf38f01936c5b907e64656\"" Jan 29 16:37:20.722464 
containerd[1520]: time="2025-01-29T16:37:20.722402807Z" level=info msg="CreateContainer within sandbox \"74abf674b9afb71eb7fdb0b34888d337430c78a1456d4716bb1d8d18062aff8e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"354447eca47b267c667a4aefa2b18215021a6b1971bbcd55cd5b96399d56f4d6\"" Jan 29 16:37:20.727339 containerd[1520]: time="2025-01-29T16:37:20.724754958Z" level=info msg="StartContainer for \"354447eca47b267c667a4aefa2b18215021a6b1971bbcd55cd5b96399d56f4d6\"" Jan 29 16:37:20.730213 systemd[1]: run-containerd-runc-k8s.io-52f4784c5b17069f1f469021f277bd635040dc67ddaf38f01936c5b907e64656-runc.zNvicZ.mount: Deactivated successfully. Jan 29 16:37:20.739361 systemd[1]: Started cri-containerd-52f4784c5b17069f1f469021f277bd635040dc67ddaf38f01936c5b907e64656.scope - libcontainer container 52f4784c5b17069f1f469021f277bd635040dc67ddaf38f01936c5b907e64656. Jan 29 16:37:20.767262 systemd[1]: Started cri-containerd-354447eca47b267c667a4aefa2b18215021a6b1971bbcd55cd5b96399d56f4d6.scope - libcontainer container 354447eca47b267c667a4aefa2b18215021a6b1971bbcd55cd5b96399d56f4d6. Jan 29 16:37:20.797449 containerd[1520]: time="2025-01-29T16:37:20.797402984Z" level=info msg="StartContainer for \"52f4784c5b17069f1f469021f277bd635040dc67ddaf38f01936c5b907e64656\" returns successfully" Jan 29 16:37:20.832049 containerd[1520]: time="2025-01-29T16:37:20.831994212Z" level=info msg="StartContainer for \"354447eca47b267c667a4aefa2b18215021a6b1971bbcd55cd5b96399d56f4d6\" returns successfully"