Dec 13 01:29:19.908323 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:29:19.908345 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:29:19.908353 kernel: BIOS-provided physical RAM map:
Dec 13 01:29:19.908358 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 01:29:19.908363 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 01:29:19.908368 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 01:29:19.908374 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Dec 13 01:29:19.908379 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Dec 13 01:29:19.908387 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 01:29:19.908392 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 01:29:19.908397 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 01:29:19.908402 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 01:29:19.908407 kernel: NX (Execute Disable) protection: active
Dec 13 01:29:19.908412 kernel: APIC: Static calls initialized
Dec 13 01:29:19.908421 kernel: SMBIOS 2.8 present.
Dec 13 01:29:19.908427 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Dec 13 01:29:19.908432 kernel: Hypervisor detected: KVM
Dec 13 01:29:19.908437 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 01:29:19.908443 kernel: kvm-clock: using sched offset of 2607053823 cycles
Dec 13 01:29:19.908448 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 01:29:19.908454 kernel: tsc: Detected 2445.406 MHz processor
Dec 13 01:29:19.908460 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:29:19.908466 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:29:19.908474 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Dec 13 01:29:19.908480 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 01:29:19.908485 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:29:19.908491 kernel: Using GB pages for direct mapping
Dec 13 01:29:19.908496 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:29:19.908502 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS )
Dec 13 01:29:19.908507 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:19.908513 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:19.908519 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:19.908527 kernel: ACPI: FACS 0x000000007CFE0000 000040
Dec 13 01:29:19.908532 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:19.908538 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:19.908543 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:19.908549 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:29:19.908555 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Dec 13 01:29:19.908560 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Dec 13 01:29:19.908566 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Dec 13 01:29:19.908577 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Dec 13 01:29:19.908583 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Dec 13 01:29:19.908589 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Dec 13 01:29:19.908595 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Dec 13 01:29:19.908600 kernel: No NUMA configuration found
Dec 13 01:29:19.908607 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Dec 13 01:29:19.908615 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Dec 13 01:29:19.908620 kernel: Zone ranges:
Dec 13 01:29:19.908626 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:29:19.908632 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Dec 13 01:29:19.908638 kernel: Normal empty
Dec 13 01:29:19.908644 kernel: Movable zone start for each node
Dec 13 01:29:19.908649 kernel: Early memory node ranges
Dec 13 01:29:19.908655 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 01:29:19.908661 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Dec 13 01:29:19.908667 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Dec 13 01:29:19.908675 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:29:19.908681 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 01:29:19.908686 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 13 01:29:19.908692 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 01:29:19.908698 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 01:29:19.908704 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:29:19.908710 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 01:29:19.908715 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 01:29:19.908721 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:29:19.908729 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 01:29:19.908735 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 01:29:19.908741 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:29:19.908746 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 01:29:19.908752 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 01:29:19.908758 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 01:29:19.908764 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 01:29:19.908770 kernel: Booting paravirtualized kernel on KVM
Dec 13 01:29:19.908776 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:29:19.908784 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 01:29:19.908790 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 01:29:19.908796 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 01:29:19.908801 kernel: pcpu-alloc: [0] 0 1
Dec 13 01:29:19.908807 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 13 01:29:19.908814 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:29:19.908820 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:29:19.908826 kernel: random: crng init done
Dec 13 01:29:19.908834 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:29:19.908840 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 01:29:19.908845 kernel: Fallback order for Node 0: 0
Dec 13 01:29:19.908851 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Dec 13 01:29:19.908857 kernel: Policy zone: DMA32
Dec 13 01:29:19.908863 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:29:19.908869 kernel: Memory: 1922056K/2047464K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved)
Dec 13 01:29:19.908875 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 01:29:19.908881 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:29:19.908889 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:29:19.908895 kernel: Dynamic Preempt: voluntary
Dec 13 01:29:19.908900 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:29:19.908907 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:29:19.908913 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 01:29:19.908919 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:29:19.908925 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:29:19.908931 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:29:19.908937 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:29:19.908943 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 01:29:19.908951 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 01:29:19.908957 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:29:19.908963 kernel: Console: colour VGA+ 80x25
Dec 13 01:29:19.908968 kernel: printk: console [tty0] enabled
Dec 13 01:29:19.908974 kernel: printk: console [ttyS0] enabled
Dec 13 01:29:19.908980 kernel: ACPI: Core revision 20230628
Dec 13 01:29:19.908986 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 01:29:19.908992 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:29:19.908998 kernel: x2apic enabled
Dec 13 01:29:19.909006 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 01:29:19.909012 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 01:29:19.909017 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 01:29:19.909023 kernel: Calibrating delay loop (skipped) preset value.. 4890.81 BogoMIPS (lpj=2445406)
Dec 13 01:29:19.909029 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 01:29:19.909035 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 01:29:19.909041 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 01:29:19.909047 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:29:19.909105 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:29:19.909112 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:29:19.909118 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:29:19.909127 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 01:29:19.909133 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 01:29:19.909140 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:29:19.909146 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 01:29:19.909152 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 01:29:19.909159 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 01:29:19.909165 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 01:29:19.909171 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:29:19.909179 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:29:19.909185 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:29:19.909192 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:29:19.909198 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 01:29:19.909204 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:29:19.909212 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:29:19.909218 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:29:19.909224 kernel: landlock: Up and running.
Dec 13 01:29:19.909230 kernel: SELinux: Initializing.
Dec 13 01:29:19.909257 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 01:29:19.909263 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 01:29:19.909270 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 01:29:19.909276 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:29:19.909282 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:29:19.909291 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:29:19.909297 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 01:29:19.909303 kernel: ... version:                0
Dec 13 01:29:19.909310 kernel: ... bit width:              48
Dec 13 01:29:19.909316 kernel: ... generic registers:      6
Dec 13 01:29:19.909322 kernel: ... value mask:             0000ffffffffffff
Dec 13 01:29:19.909328 kernel: ... max period:             00007fffffffffff
Dec 13 01:29:19.909334 kernel: ... fixed-purpose events:   0
Dec 13 01:29:19.909340 kernel: ... event mask:             000000000000003f
Dec 13 01:29:19.909349 kernel: signal: max sigframe size: 1776
Dec 13 01:29:19.909355 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:29:19.909361 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:29:19.909367 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:29:19.909374 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:29:19.909380 kernel: .... node #0, CPUs: #1
Dec 13 01:29:19.909386 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:29:19.909392 kernel: smpboot: Max logical packages: 1
Dec 13 01:29:19.909398 kernel: smpboot: Total of 2 processors activated (9781.62 BogoMIPS)
Dec 13 01:29:19.909404 kernel: devtmpfs: initialized
Dec 13 01:29:19.909413 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:29:19.909419 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:29:19.909426 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 01:29:19.909432 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:29:19.909438 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:29:19.909444 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:29:19.909450 kernel: audit: type=2000 audit(1734053359.438:1): state=initialized audit_enabled=0 res=1
Dec 13 01:29:19.909456 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:29:19.909462 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:29:19.909471 kernel: cpuidle: using governor menu
Dec 13 01:29:19.909477 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:29:19.909483 kernel: dca service started, version 1.12.1
Dec 13 01:29:19.909489 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 01:29:19.909495 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:29:19.909502 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:29:19.909508 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:29:19.909514 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:29:19.909520 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:29:19.909528 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:29:19.909534 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:29:19.909540 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:29:19.909547 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:29:19.909553 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:29:19.909559 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:29:19.909565 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 01:29:19.909571 kernel: ACPI: Interpreter enabled
Dec 13 01:29:19.909577 kernel: ACPI: PM: (supports S0 S5)
Dec 13 01:29:19.909585 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:29:19.909592 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:29:19.909598 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 01:29:19.909604 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 01:29:19.909610 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:29:19.909772 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:29:19.909887 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 01:29:19.909998 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 01:29:19.910007 kernel: PCI host bridge to bus 0000:00
Dec 13 01:29:19.910145 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 01:29:19.910243 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 01:29:19.910338 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:29:19.910432 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Dec 13 01:29:19.910526 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 01:29:19.910625 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 13 01:29:19.910719 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:29:19.910842 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 01:29:19.910957 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Dec 13 01:29:19.914088 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Dec 13 01:29:19.914220 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Dec 13 01:29:19.914331 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Dec 13 01:29:19.914443 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Dec 13 01:29:19.914548 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:29:19.914661 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:19.914765 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Dec 13 01:29:19.914875 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:19.914978 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Dec 13 01:29:19.915137 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:19.915248 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Dec 13 01:29:19.915358 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:19.915462 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Dec 13 01:29:19.915572 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:19.915676 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Dec 13 01:29:19.915794 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:19.915899 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Dec 13 01:29:19.917152 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:19.917291 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Dec 13 01:29:19.917407 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:19.917512 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Dec 13 01:29:19.917628 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Dec 13 01:29:19.917731 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Dec 13 01:29:19.917841 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 01:29:19.917943 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 01:29:19.918051 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 01:29:19.918563 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Dec 13 01:29:19.918678 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Dec 13 01:29:19.918790 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 01:29:19.918893 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 01:29:19.919010 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Dec 13 01:29:19.919202 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Dec 13 01:29:19.919314 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 13 01:29:19.919437 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Dec 13 01:29:19.919619 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 13 01:29:19.920052 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 01:29:19.920242 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 13 01:29:19.920361 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 01:29:19.920470 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Dec 13 01:29:19.920574 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 13 01:29:19.920677 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 01:29:19.920785 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 01:29:19.920904 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Dec 13 01:29:19.921012 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Dec 13 01:29:19.923172 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Dec 13 01:29:19.923289 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 13 01:29:19.923396 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 01:29:19.923498 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 01:29:19.923622 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Dec 13 01:29:19.923733 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 13 01:29:19.923838 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 13 01:29:19.923942 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 01:29:19.924044 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 01:29:19.924181 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 01:29:19.924294 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Dec 13 01:29:19.924404 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 13 01:29:19.925127 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 01:29:19.925267 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 01:29:19.925425 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Dec 13 01:29:19.926286 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Dec 13 01:29:19.926401 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Dec 13 01:29:19.926506 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 13 01:29:19.926615 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 01:29:19.926718 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 01:29:19.926727 kernel: acpiphp: Slot [0] registered
Dec 13 01:29:19.926844 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Dec 13 01:29:19.926951 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Dec 13 01:29:19.927596 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Dec 13 01:29:19.927725 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Dec 13 01:29:19.927830 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 13 01:29:19.927940 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 01:29:19.928043 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 01:29:19.928052 kernel: acpiphp: Slot [0-2] registered
Dec 13 01:29:19.928179 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 13 01:29:19.928282 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Dec 13 01:29:19.928385 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 01:29:19.928394 kernel: acpiphp: Slot [0-3] registered
Dec 13 01:29:19.928495 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 13 01:29:19.928603 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 01:29:19.928705 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 01:29:19.928714 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 01:29:19.928720 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 01:29:19.928727 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:29:19.928733 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 01:29:19.928739 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 01:29:19.928745 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 01:29:19.928752 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 01:29:19.928761 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 01:29:19.928768 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 01:29:19.928774 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 01:29:19.928780 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 01:29:19.928786 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 01:29:19.928792 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 01:29:19.928798 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 01:29:19.928804 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 01:29:19.928811 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 01:29:19.928819 kernel: iommu: Default domain type: Translated
Dec 13 01:29:19.928825 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:29:19.928831 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:29:19.928838 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:29:19.928844 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 01:29:19.928850 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Dec 13 01:29:19.928952 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 01:29:19.929054 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 01:29:19.929216 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:29:19.929229 kernel: vgaarb: loaded
Dec 13 01:29:19.929253 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 01:29:19.929260 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 01:29:19.929266 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 01:29:19.929273 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:29:19.929279 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:29:19.929286 kernel: pnp: PnP ACPI init
Dec 13 01:29:19.929403 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 01:29:19.929417 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 01:29:19.929424 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:29:19.929431 kernel: NET: Registered PF_INET protocol family
Dec 13 01:29:19.929437 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:29:19.929443 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 01:29:19.929450 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:29:19.929457 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:29:19.929463 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 01:29:19.929469 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 01:29:19.929478 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 01:29:19.929484 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 01:29:19.929491 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:29:19.929497 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:29:19.929601 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 01:29:19.929705 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 01:29:19.929807 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 01:29:19.929915 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 01:29:19.930019 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 01:29:19.934100 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 01:29:19.934217 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 13 01:29:19.934322 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 01:29:19.934424 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 13 01:29:19.934526 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 13 01:29:19.934628 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 01:29:19.934734 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 01:29:19.934837 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 13 01:29:19.934939 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 01:29:19.935040 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 01:29:19.935209 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 13 01:29:19.935314 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 01:29:19.935414 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 01:29:19.935520 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 13 01:29:19.935641 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 01:29:19.935743 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 01:29:19.935844 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 13 01:29:19.935945 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 01:29:19.936046 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 01:29:19.936168 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 13 01:29:19.936272 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Dec 13 01:29:19.936373 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 01:29:19.936475 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 01:29:19.936581 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 13 01:29:19.936684 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Dec 13 01:29:19.936785 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Dec 13 01:29:19.936887 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 01:29:19.936988 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 13 01:29:19.938991 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Dec 13 01:29:19.939162 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 01:29:19.939274 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 01:29:19.939375 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 01:29:19.939470 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 01:29:19.939563 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:29:19.939660 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Dec 13 01:29:19.939753 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 01:29:19.939845 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 13 01:29:19.939952 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 13 01:29:19.940051 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 13 01:29:19.940183 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 13 01:29:19.940289 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 01:29:19.940395 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 13 01:29:19.940493 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 01:29:19.940597 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 13 01:29:19.940696 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 01:29:19.940800 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 13 01:29:19.940903 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 01:29:19.941009 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 13 01:29:19.943139 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 01:29:19.943253 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Dec 13 01:29:19.943413 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Dec 13 01:29:19.943515 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 01:29:19.943621 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Dec 13 01:29:19.943727 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Dec 13 01:29:19.943908 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 01:29:19.944014 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Dec 13 01:29:19.946152 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 13 01:29:19.946260 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 01:29:19.946271 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 01:29:19.946279 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:29:19.946290 kernel: Initialise system trusted keyrings
Dec 13 01:29:19.946296 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 01:29:19.946303 kernel: Key type asymmetric registered
Dec 13 01:29:19.946310 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:29:19.946316 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 01:29:19.946323 kernel: io scheduler mq-deadline registered
Dec 13 01:29:19.946329
kernel: io scheduler kyber registered Dec 13 01:29:19.946336 kernel: io scheduler bfq registered Dec 13 01:29:19.946440 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 13 01:29:19.946547 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 13 01:29:19.946648 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 13 01:29:19.946750 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 13 01:29:19.946973 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 13 01:29:19.947317 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 13 01:29:19.947546 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 13 01:29:19.949186 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 13 01:29:19.949316 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 13 01:29:19.949424 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 13 01:29:19.949534 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 13 01:29:19.949636 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 13 01:29:19.949739 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 13 01:29:19.949841 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 13 01:29:19.949943 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 13 01:29:19.950047 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 13 01:29:19.950070 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 01:29:19.951160 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Dec 13 01:29:19.951278 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Dec 13 01:29:19.951289 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:29:19.951297 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Dec 13 01:29:19.951307 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:29:19.951314 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:29:19.951320 kernel: i8042: PNP: PS/2 Controller 
[PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 01:29:19.951327 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:29:19.951334 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:29:19.951449 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 01:29:19.951466 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:29:19.951562 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 01:29:19.951657 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T01:29:19 UTC (1734053359) Dec 13 01:29:19.951753 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 13 01:29:19.951762 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Dec 13 01:29:19.951769 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:29:19.951776 kernel: Segment Routing with IPv6 Dec 13 01:29:19.951786 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:29:19.951792 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:29:19.951799 kernel: Key type dns_resolver registered Dec 13 01:29:19.951806 kernel: IPI shorthand broadcast: enabled Dec 13 01:29:19.951812 kernel: sched_clock: Marking stable (1019007401, 129345877)->(1167712830, -19359552) Dec 13 01:29:19.951819 kernel: registered taskstats version 1 Dec 13 01:29:19.951826 kernel: Loading compiled-in X.509 certificates Dec 13 01:29:19.951832 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 01:29:19.951839 kernel: Key type .fscrypt registered Dec 13 01:29:19.951845 kernel: Key type fscrypt-provisioning registered Dec 13 01:29:19.951854 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 01:29:19.951860 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:29:19.951867 kernel: ima: No architecture policies found
Dec 13 01:29:19.951874 kernel: clk: Disabling unused clocks
Dec 13 01:29:19.951881 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 01:29:19.951887 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 01:29:19.951894 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 01:29:19.951901 kernel: Run /init as init process
Dec 13 01:29:19.951909 kernel: with arguments:
Dec 13 01:29:19.951917 kernel: /init
Dec 13 01:29:19.951923 kernel: with environment:
Dec 13 01:29:19.951930 kernel: HOME=/
Dec 13 01:29:19.951936 kernel: TERM=linux
Dec 13 01:29:19.951943 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:29:19.951952 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:29:19.951961 systemd[1]: Detected virtualization kvm.
Dec 13 01:29:19.951970 systemd[1]: Detected architecture x86-64.
Dec 13 01:29:19.951977 systemd[1]: Running in initrd.
Dec 13 01:29:19.951983 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:29:19.951990 systemd[1]: Hostname set to .
Dec 13 01:29:19.951997 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:29:19.952004 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:29:19.952011 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:29:19.952018 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:29:19.952028 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:29:19.952035 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:29:19.952042 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:29:19.952050 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:29:19.952113 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:29:19.952122 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:29:19.952129 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:29:19.952140 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:29:19.952147 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:29:19.952154 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:29:19.952161 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:29:19.952168 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:29:19.952175 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:29:19.952182 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:29:19.952189 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:29:19.952198 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:29:19.952206 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:29:19.952212 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:29:19.952220 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:29:19.952226 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:29:19.952234 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:29:19.952240 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:29:19.952247 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:29:19.952254 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:29:19.952263 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:29:19.952270 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:29:19.952277 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:29:19.952306 systemd-journald[187]: Collecting audit messages is disabled.
Dec 13 01:29:19.952326 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:29:19.952334 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:29:19.952341 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:29:19.952348 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:29:19.952358 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:29:19.952365 kernel: Bridge firewalling registered
Dec 13 01:29:19.952373 systemd-journald[187]: Journal started
Dec 13 01:29:19.952389 systemd-journald[187]: Runtime Journal (/run/log/journal/e750f6a326854e58a029fcdfac572207) is 4.8M, max 38.4M, 33.6M free.
Dec 13 01:29:19.921479 systemd-modules-load[188]: Inserted module 'overlay'
Dec 13 01:29:19.952656 systemd-modules-load[188]: Inserted module 'br_netfilter'
Dec 13 01:29:19.981076 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:29:19.981367 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:29:19.982685 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:29:19.985554 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:29:19.990212 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:29:19.992780 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:29:19.994315 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:29:19.999284 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:29:20.013302 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:29:20.014590 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:29:20.016303 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:29:20.021240 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:29:20.022528 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:29:20.026247 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:29:20.033876 dracut-cmdline[222]: dracut-dracut-053
Dec 13 01:29:20.038262 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:29:20.064543 systemd-resolved[224]: Positive Trust Anchors:
Dec 13 01:29:20.065160 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:29:20.065189 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:29:20.071216 systemd-resolved[224]: Defaulting to hostname 'linux'.
Dec 13 01:29:20.072248 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:29:20.073026 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:29:20.101105 kernel: SCSI subsystem initialized
Dec 13 01:29:20.110081 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:29:20.119086 kernel: iscsi: registered transport (tcp)
Dec 13 01:29:20.137457 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:29:20.137532 kernel: QLogic iSCSI HBA Driver
Dec 13 01:29:20.181729 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:29:20.188190 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:29:20.211117 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:29:20.211198 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:29:20.211210 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:29:20.251079 kernel: raid6: avx2x4 gen() 36269 MB/s
Dec 13 01:29:20.268124 kernel: raid6: avx2x2 gen() 33999 MB/s
Dec 13 01:29:20.285174 kernel: raid6: avx2x1 gen() 28444 MB/s
Dec 13 01:29:20.285253 kernel: raid6: using algorithm avx2x4 gen() 36269 MB/s
Dec 13 01:29:20.303264 kernel: raid6: .... xor() 4954 MB/s, rmw enabled
Dec 13 01:29:20.303322 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 01:29:20.322088 kernel: xor: automatically using best checksumming function avx
Dec 13 01:29:20.466102 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:29:20.477656 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:29:20.488226 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:29:20.498534 systemd-udevd[407]: Using default interface naming scheme 'v255'.
Dec 13 01:29:20.502310 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:29:20.508202 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:29:20.522638 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation
Dec 13 01:29:20.552682 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:29:20.557275 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:29:20.619620 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:29:20.629229 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:29:20.641765 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:29:20.643532 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:29:20.645294 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:29:20.646295 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:29:20.651193 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:29:20.668041 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:29:20.747644 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:29:20.749090 kernel: libata version 3.00 loaded.
Dec 13 01:29:20.769573 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:29:20.769677 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:29:20.774558 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 01:29:20.818619 kernel: ACPI: bus type USB registered
Dec 13 01:29:20.818646 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 01:29:20.818657 kernel: usbcore: registered new interface driver usbfs
Dec 13 01:29:20.818665 kernel: usbcore: registered new interface driver hub
Dec 13 01:29:20.818674 kernel: usbcore: registered new device driver usb
Dec 13 01:29:20.818682 kernel: scsi host0: Virtio SCSI HBA
Dec 13 01:29:20.818873 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 01:29:20.819006 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 01:29:20.819161 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Dec 13 01:29:20.819188 kernel: scsi host1: ahci
Dec 13 01:29:20.819320 kernel: scsi host2: ahci
Dec 13 01:29:20.819446 kernel: scsi host3: ahci
Dec 13 01:29:20.819571 kernel: scsi host4: ahci
Dec 13 01:29:20.819708 kernel: scsi host5: ahci
Dec 13 01:29:20.819834 kernel: scsi host6: ahci
Dec 13 01:29:20.819955 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 38
Dec 13 01:29:20.819965 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 38
Dec 13 01:29:20.819974 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 38
Dec 13 01:29:20.819982 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 38
Dec 13 01:29:20.819990 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 38
Dec 13 01:29:20.820002 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 38
Dec 13 01:29:20.790134 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:29:20.790249 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:29:20.794755 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:29:20.795216 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:29:20.800006 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:29:20.801305 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:29:20.811631 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:29:20.873099 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:29:20.885278 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:29:20.902762 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:29:21.136069 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 01:29:21.136147 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 01:29:21.136161 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Dec 13 01:29:21.136172 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 01:29:21.136182 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 13 01:29:21.136193 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 01:29:21.136203 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 01:29:21.138685 kernel: ata1.00: applying bridge limits
Dec 13 01:29:21.139113 kernel: ata1.00: configured for UDMA/100
Dec 13 01:29:21.143099 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 13 01:29:21.177105 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Dec 13 01:29:21.202800 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Dec 13 01:29:21.202945 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Dec 13 01:29:21.203144 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Dec 13 01:29:21.203387 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Dec 13 01:29:21.203567 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Dec 13 01:29:21.203727 kernel: hub 1-0:1.0: USB hub found
Dec 13 01:29:21.203881 kernel: hub 1-0:1.0: 4 ports detected
Dec 13 01:29:21.204008 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Dec 13 01:29:21.204636 kernel: hub 2-0:1.0: USB hub found
Dec 13 01:29:21.204819 kernel: hub 2-0:1.0: 4 ports detected
Dec 13 01:29:21.209759 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 01:29:21.229778 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 01:29:21.229792 kernel: sd 0:0:0:0: Power-on or device reset occurred
Dec 13 01:29:21.229952 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Dec 13 01:29:21.230110 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 13 01:29:21.230246 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Dec 13 01:29:21.230378 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 13 01:29:21.230519 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:29:21.230529 kernel: GPT:17805311 != 80003071
Dec 13 01:29:21.230537 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:29:21.230546 kernel: GPT:17805311 != 80003071
Dec 13 01:29:21.230554 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:29:21.230561 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:29:21.230569 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 13 01:29:21.231336 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Dec 13 01:29:21.263079 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (454)
Dec 13 01:29:21.272300 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (467)
Dec 13 01:29:21.275300 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Dec 13 01:29:21.276662 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Dec 13 01:29:21.281655 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Dec 13 01:29:21.286649 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Dec 13 01:29:21.293897 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 13 01:29:21.307187 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:29:21.312880 disk-uuid[577]: Primary Header is updated.
Dec 13 01:29:21.312880 disk-uuid[577]: Secondary Entries is updated.
Dec 13 01:29:21.312880 disk-uuid[577]: Secondary Header is updated.
Dec 13 01:29:21.318080 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:29:21.324320 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:29:21.437095 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Dec 13 01:29:21.573151 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 01:29:21.577149 kernel: usbcore: registered new interface driver usbhid
Dec 13 01:29:21.577198 kernel: usbhid: USB HID core driver
Dec 13 01:29:21.583377 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Dec 13 01:29:21.583426 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Dec 13 01:29:22.328126 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:29:22.329354 disk-uuid[578]: The operation has completed successfully.
Dec 13 01:29:22.378783 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:29:22.378899 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:29:22.391292 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:29:22.394404 sh[594]: Success
Dec 13 01:29:22.407087 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Dec 13 01:29:22.451548 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:29:22.466460 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:29:22.467130 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:29:22.484857 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 01:29:22.484914 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:29:22.484933 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:29:22.487693 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:29:22.487715 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:29:22.495100 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 13 01:29:22.497219 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:29:22.498262 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:29:22.503195 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:29:22.505737 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:29:22.516227 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:29:22.516251 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:29:22.516262 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:29:22.523411 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:29:22.523439 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:29:22.532814 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:29:22.535355 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:29:22.541212 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:29:22.549211 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:29:22.606300 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:29:22.615306 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:29:22.626006 ignition[692]: Ignition 2.19.0
Dec 13 01:29:22.626639 ignition[692]: Stage: fetch-offline
Dec 13 01:29:22.626685 ignition[692]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:29:22.626699 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:29:22.626781 ignition[692]: parsed url from cmdline: ""
Dec 13 01:29:22.626785 ignition[692]: no config URL provided
Dec 13 01:29:22.626790 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:29:22.626798 ignition[692]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:29:22.626803 ignition[692]: failed to fetch config: resource requires networking
Dec 13 01:29:22.626968 ignition[692]: Ignition finished successfully
Dec 13 01:29:22.630281 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:29:22.639126 systemd-networkd[775]: lo: Link UP
Dec 13 01:29:22.639136 systemd-networkd[775]: lo: Gained carrier
Dec 13 01:29:22.641570 systemd-networkd[775]: Enumeration completed
Dec 13 01:29:22.641661 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:29:22.642685 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:29:22.642690 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:29:22.644206 systemd[1]: Reached target network.target - Network.
Dec 13 01:29:22.645553 systemd-networkd[775]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:29:22.645558 systemd-networkd[775]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:29:22.646208 systemd-networkd[775]: eth0: Link UP
Dec 13 01:29:22.646211 systemd-networkd[775]: eth0: Gained carrier
Dec 13 01:29:22.646218 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:29:22.651281 systemd-networkd[775]: eth1: Link UP
Dec 13 01:29:22.651285 systemd-networkd[775]: eth1: Gained carrier
Dec 13 01:29:22.651291 systemd-networkd[775]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:29:22.651652 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 01:29:22.665325 ignition[782]: Ignition 2.19.0
Dec 13 01:29:22.665336 ignition[782]: Stage: fetch
Dec 13 01:29:22.665479 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:29:22.665490 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:29:22.665565 ignition[782]: parsed url from cmdline: ""
Dec 13 01:29:22.665568 ignition[782]: no config URL provided
Dec 13 01:29:22.665573 ignition[782]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:29:22.665581 ignition[782]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:29:22.665604 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Dec 13 01:29:22.665758 ignition[782]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 13 01:29:22.682092 systemd-networkd[775]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:29:22.787117 systemd-networkd[775]: eth0: DHCPv4 address 78.46.150.92/32, gateway 172.31.1.1 acquired from 172.31.1.1
Dec 13 01:29:22.865910 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Dec 13 01:29:22.875391 ignition[782]: GET result: OK
Dec 13 01:29:22.875454 ignition[782]: parsing config with SHA512: e53a7d19705c4f8dd9697432fe6557269e824031d3ae966910e1652f7d077fa00760417dadd5b5fcb5ef28f99710dea3af00a59e31cd80244b97de3336a39e41
Dec 13 01:29:22.879654 unknown[782]: fetched base config from "system"
Dec 13 01:29:22.879668 unknown[782]: fetched base config from "system"
Dec 13 01:29:22.880225 ignition[782]: fetch: fetch complete
Dec 13 01:29:22.879675 unknown[782]: fetched user config from "hetzner"
Dec 13 01:29:22.880236 ignition[782]: fetch: fetch passed
Dec 13 01:29:22.884937 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 01:29:22.880292 ignition[782]: Ignition finished successfully
Dec 13 01:29:22.891335 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:29:22.908435 ignition[789]: Ignition 2.19.0
Dec 13 01:29:22.908449 ignition[789]: Stage: kargs
Dec 13 01:29:22.908603 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:29:22.908617 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:29:22.911201 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:29:22.909443 ignition[789]: kargs: kargs passed
Dec 13 01:29:22.920185 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:29:22.909489 ignition[789]: Ignition finished successfully
Dec 13 01:29:22.942810 ignition[795]: Ignition 2.19.0
Dec 13 01:29:22.942832 ignition[795]: Stage: disks
Dec 13 01:29:22.943167 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:29:22.943190 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:29:22.944503 ignition[795]: disks: disks passed
Dec 13 01:29:22.946418 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:29:22.944581 ignition[795]: Ignition finished successfully
Dec 13 01:29:22.948598 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:29:22.949998 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:29:22.950898 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:29:22.952180 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:29:22.953376 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:29:22.959220 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:29:22.976117 systemd-fsck[803]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 01:29:22.978100 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:29:22.985209 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:29:23.070087 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:29:23.070536 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:29:23.071548 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:29:23.077130 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:29:23.079372 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:29:23.083236 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 13 01:29:23.083746 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:29:23.083773 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:29:23.088009 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:29:23.098393 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (811)
Dec 13 01:29:23.098413 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:29:23.098423 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:29:23.098438 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:29:23.091897 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:29:23.103803 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:29:23.103827 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:29:23.106269 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:29:23.138242 coreos-metadata[813]: Dec 13 01:29:23.138 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Dec 13 01:29:23.141485 coreos-metadata[813]: Dec 13 01:29:23.140 INFO Fetch successful
Dec 13 01:29:23.141485 coreos-metadata[813]: Dec 13 01:29:23.140 INFO wrote hostname ci-4081-2-1-2-9d1d8cb240 to /sysroot/etc/hostname
Dec 13 01:29:23.143824 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:29:23.141523 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 01:29:23.147395 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:29:23.152023 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:29:23.156660 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:29:23.239379 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:29:23.244146 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:29:23.247794 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:29:23.253155 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:29:23.274654 ignition[928]: INFO : Ignition 2.19.0
Dec 13 01:29:23.275506 ignition[928]: INFO : Stage: mount
Dec 13 01:29:23.276415 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:29:23.276415 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:29:23.278782 ignition[928]: INFO : mount: mount passed
Dec 13 01:29:23.278782 ignition[928]: INFO : Ignition finished successfully
Dec 13 01:29:23.276899 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:29:23.280258 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:29:23.285153 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:29:23.481915 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:29:23.485334 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:29:23.514690 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (940)
Dec 13 01:29:23.514732 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:29:23.518207 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:29:23.522552 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:29:23.529142 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:29:23.529167 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:29:23.533931 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:29:23.555710 ignition[956]: INFO : Ignition 2.19.0
Dec 13 01:29:23.556514 ignition[956]: INFO : Stage: files
Dec 13 01:29:23.556514 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:29:23.556514 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:29:23.558451 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:29:23.558451 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:29:23.558451 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:29:23.560490 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:29:23.561140 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:29:23.561140 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:29:23.561017 unknown[956]: wrote ssh authorized keys file for user: core
Dec 13 01:29:23.563076 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:29:23.563076 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 01:29:23.660506 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:29:23.825164 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:29:23.825164 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:29:23.829093 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:29:23.829093 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:29:23.829093 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:29:23.829093 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:29:23.829093 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:29:23.829093 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:29:23.829093 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:29:23.829093 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:29:23.829093 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:29:23.829093 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:29:23.829093 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:29:23.829093 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:29:23.829093 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Dec 13 01:29:24.385351 systemd-networkd[775]: eth0: Gained IPv6LL
Dec 13 01:29:24.389757 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 01:29:24.514258 systemd-networkd[775]: eth1: Gained IPv6LL
Dec 13 01:29:24.667794 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 01:29:24.667794 ignition[956]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 13 01:29:24.672290 ignition[956]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:29:24.672290 ignition[956]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:29:24.672290 ignition[956]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 13 01:29:24.672290 ignition[956]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 13 01:29:24.672290 ignition[956]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 13 01:29:24.672290 ignition[956]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 13 01:29:24.672290 ignition[956]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 13 01:29:24.672290 ignition[956]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:29:24.672290 ignition[956]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:29:24.672290 ignition[956]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:29:24.672290 ignition[956]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:29:24.672290 ignition[956]: INFO : files: files passed
Dec 13 01:29:24.672290 ignition[956]: INFO : Ignition finished successfully
Dec 13 01:29:24.671217 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:29:24.681767 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:29:24.686280 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:29:24.687980 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:29:24.688198 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:29:24.705161 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:29:24.706008 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:29:24.707122 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:29:24.708296 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:29:24.709345 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:29:24.714172 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:29:24.735510 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:29:24.735669 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:29:24.737186 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:29:24.738394 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:29:24.738874 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:29:24.744199 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:29:24.755881 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:29:24.762210 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:29:24.770544 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:29:24.771189 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:29:24.772274 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:29:24.773282 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:29:24.773417 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:29:24.774481 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:29:24.775195 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:29:24.776176 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:29:24.777118 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:29:24.778043 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:29:24.779072 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:29:24.780112 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:29:24.781132 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:29:24.782112 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:29:24.783125 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:29:24.783991 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:29:24.784114 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:29:24.785199 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:29:24.785841 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:29:24.786735 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:29:24.786824 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:29:24.787797 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:29:24.787892 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:29:24.789229 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:29:24.789334 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:29:24.789973 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:29:24.790143 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:29:24.790853 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 01:29:24.790967 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 01:29:24.801481 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:29:24.804248 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:29:24.804709 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:29:24.804869 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:29:24.806223 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:29:24.806315 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:29:24.815630 ignition[1010]: INFO : Ignition 2.19.0
Dec 13 01:29:24.817861 ignition[1010]: INFO : Stage: umount
Dec 13 01:29:24.818337 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:29:24.818435 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:29:24.821116 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:29:24.821116 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:29:24.821116 ignition[1010]: INFO : umount: umount passed
Dec 13 01:29:24.821116 ignition[1010]: INFO : Ignition finished successfully
Dec 13 01:29:24.822193 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:29:24.822309 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:29:24.828391 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:29:24.828476 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:29:24.828941 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:29:24.828983 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:29:24.831432 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 01:29:24.831475 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 01:29:24.832347 systemd[1]: Stopped target network.target - Network.
Dec 13 01:29:24.832722 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:29:24.832767 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:29:24.833291 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:29:24.833692 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:29:24.835107 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:29:24.835615 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:29:24.836230 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:29:24.836645 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:29:24.836686 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:29:24.837407 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:29:24.837465 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:29:24.839939 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:29:24.839996 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:29:24.840972 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:29:24.841016 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:29:24.842254 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:29:24.843094 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:29:24.844871 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:29:24.845445 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:29:24.845545 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:29:24.846288 systemd-networkd[775]: eth1: DHCPv6 lease lost
Dec 13 01:29:24.847573 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:29:24.847633 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:29:24.850196 systemd-networkd[775]: eth0: DHCPv6 lease lost
Dec 13 01:29:24.852884 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:29:24.853029 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:29:24.853983 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:29:24.854022 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:29:24.865199 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:29:24.866235 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:29:24.866841 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:29:24.868150 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:29:24.869834 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:29:24.869942 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:29:24.876930 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:29:24.877016 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:29:24.878499 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:29:24.878554 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:29:24.880430 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:29:24.880476 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:29:24.881822 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:29:24.881987 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:29:24.882741 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:29:24.882895 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:29:24.884580 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:29:24.884648 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:29:24.885215 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:29:24.885254 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:29:24.885680 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:29:24.885726 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:29:24.886279 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:29:24.886323 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:29:24.887651 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:29:24.887697 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:29:24.895220 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:29:24.896762 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:29:24.897441 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:29:24.897952 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 01:29:24.897999 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:29:24.898488 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:29:24.898529 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:29:24.899001 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:29:24.899044 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:29:24.902871 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:29:24.902974 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:29:24.904461 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:29:24.910183 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:29:24.916308 systemd[1]: Switching root.
Dec 13 01:29:24.948054 systemd-journald[187]: Journal stopped
Dec 13 01:29:25.854262 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:29:25.854322 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:29:25.854339 kernel: SELinux: policy capability open_perms=1
Dec 13 01:29:25.854351 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:29:25.854367 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:29:25.854380 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:29:25.854389 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:29:25.854398 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:29:25.854407 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:29:25.854421 kernel: audit: type=1403 audit(1734053365.087:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:29:25.854432 systemd[1]: Successfully loaded SELinux policy in 40.871ms.
Dec 13 01:29:25.854445 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.704ms.
Dec 13 01:29:25.854456 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:29:25.854472 systemd[1]: Detected virtualization kvm.
Dec 13 01:29:25.854482 systemd[1]: Detected architecture x86-64.
Dec 13 01:29:25.854492 systemd[1]: Detected first boot.
Dec 13 01:29:25.854502 systemd[1]: Hostname set to .
Dec 13 01:29:25.854512 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:29:25.854527 zram_generator::config[1053]: No configuration found.
Dec 13 01:29:25.854538 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:29:25.854550 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 01:29:25.854560 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 01:29:25.854570 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:29:25.854580 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 01:29:25.854591 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 01:29:25.854601 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 01:29:25.854611 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 01:29:25.854621 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 01:29:25.854631 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 01:29:25.854643 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 01:29:25.854653 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 01:29:25.854663 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:29:25.854673 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:29:25.854687 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 01:29:25.854697 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 01:29:25.854708 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 01:29:25.854718 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:29:25.854728 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 01:29:25.854740 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:29:25.854750 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 01:29:25.854760 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 01:29:25.854770 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:29:25.854780 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 01:29:25.854790 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:29:25.854802 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:29:25.854813 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:29:25.854823 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:29:25.854833 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 01:29:25.854843 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 01:29:25.854853 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:29:25.854863 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:29:25.854873 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:29:25.854883 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 01:29:25.854895 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 01:29:25.854905 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 01:29:25.854915 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 01:29:25.854930 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:29:25.854942 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 01:29:25.854952 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 01:29:25.854964 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 01:29:25.854979 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:29:25.854989 systemd[1]: Reached target machines.target - Containers.
Dec 13 01:29:25.854998 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 01:29:25.855008 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:29:25.855019 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:29:25.855028 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 01:29:25.855039 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:29:25.855051 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:29:25.855080 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:29:25.855091 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 01:29:25.855101 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:29:25.855111 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:29:25.855122 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 01:29:25.855132 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 01:29:25.855141 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 01:29:25.855152 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 01:29:25.855164 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:29:25.855174 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:29:25.855184 kernel: loop: module loaded
Dec 13 01:29:25.855193 kernel: fuse: init (API version 7.39)
Dec 13 01:29:25.855203 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 01:29:25.855213 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 01:29:25.855223 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:29:25.855233 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 01:29:25.855243 systemd[1]: Stopped verity-setup.service.
Dec 13 01:29:25.855272 systemd-journald[1129]: Collecting audit messages is disabled.
Dec 13 01:29:25.855294 systemd-journald[1129]: Journal started
Dec 13 01:29:25.855315 systemd-journald[1129]: Runtime Journal (/run/log/journal/e750f6a326854e58a029fcdfac572207) is 4.8M, max 38.4M, 33.6M free.
Dec 13 01:29:25.618635 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:29:25.639708 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 13 01:29:25.640416 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 01:29:25.865917 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:29:25.880092 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:29:25.874986 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 01:29:25.875535 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 01:29:25.876078 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 01:29:25.876599 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 01:29:25.877163 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 01:29:25.877738 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 01:29:25.879364 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:29:25.880132 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:29:25.880300 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 01:29:25.881513 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:29:25.881665 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:29:25.883490 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:29:25.884592 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:29:25.885356 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:29:25.885495 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 01:29:25.886631 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:29:25.888636 kernel: ACPI: bus type drm_connector registered
Dec 13 01:29:25.887361 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:29:25.888211 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:29:25.889545 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:29:25.890111 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:29:25.897858 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 01:29:25.902305 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 01:29:25.912110 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 01:29:25.913482 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 01:29:25.922161 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 01:29:25.923955 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 01:29:25.924595 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:29:25.924625 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:29:25.926131 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 01:29:25.931688 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 01:29:25.936249 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 01:29:25.936857 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:29:25.943540 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 01:29:25.948155 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 01:29:25.948657 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:29:25.957277 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 01:29:25.957777 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:29:25.960217 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:29:25.966238 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 01:29:25.974202 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:29:25.981920 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 01:29:25.983458 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 01:29:25.984409 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 01:29:26.006325 systemd-journald[1129]: Time spent on flushing to /var/log/journal/e750f6a326854e58a029fcdfac572207 is 47.521ms for 1135 entries.
Dec 13 01:29:26.006325 systemd-journald[1129]: System Journal (/var/log/journal/e750f6a326854e58a029fcdfac572207) is 8.0M, max 584.8M, 576.8M free.
Dec 13 01:29:26.078319 systemd-journald[1129]: Received client request to flush runtime journal.
Dec 13 01:29:26.078371 kernel: loop0: detected capacity change from 0 to 205544
Dec 13 01:29:26.078396 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 01:29:26.014717 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 01:29:26.015359 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 01:29:26.026649 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 01:29:26.068211 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:29:26.079368 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 01:29:26.084705 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 01:29:26.087810 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 01:29:26.089217 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 01:29:26.100264 kernel: loop1: detected capacity change from 0 to 140768
Dec 13 01:29:26.104851 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:29:26.105542 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Dec 13 01:29:26.105562 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Dec 13 01:29:26.117733 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:29:26.128108 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 01:29:26.133418 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 01:29:26.145891 kernel: loop2: detected capacity change from 0 to 8
Dec 13 01:29:26.174161 kernel: loop3: detected capacity change from 0 to 142488
Dec 13 01:29:26.179629 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 01:29:26.187270 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:29:26.202272 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Dec 13 01:29:26.203095 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Dec 13 01:29:26.209877 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:29:26.218116 kernel: loop4: detected capacity change from 0 to 205544
Dec 13 01:29:26.243084 kernel: loop5: detected capacity change from 0 to 140768
Dec 13 01:29:26.267083 kernel: loop6: detected capacity change from 0 to 8
Dec 13 01:29:26.270265 kernel: loop7: detected capacity change from 0 to 142488
Dec 13 01:29:26.296506 (sd-merge)[1201]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Dec 13 01:29:26.297154 (sd-merge)[1201]: Merged extensions into '/usr'.
Dec 13 01:29:26.304092 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 01:29:26.304113 systemd[1]: Reloading...
Dec 13 01:29:26.419086 zram_generator::config[1231]: No configuration found.
Dec 13 01:29:26.496801 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 01:29:26.518872 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:29:26.560665 systemd[1]: Reloading finished in 255 ms.
Dec 13 01:29:26.590868 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 01:29:26.592944 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 01:29:26.609589 systemd[1]: Starting ensure-sysext.service...
Dec 13 01:29:26.612238 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:29:26.626428 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 01:29:26.628733 systemd[1]: Reloading requested from client PID 1270 ('systemctl') (unit ensure-sysext.service)...
Dec 13 01:29:26.628818 systemd[1]: Reloading...
Dec 13 01:29:26.642003 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 01:29:26.642335 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 01:29:26.643214 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 01:29:26.643618 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Dec 13 01:29:26.643695 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Dec 13 01:29:26.646795 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:29:26.646808 systemd-tmpfiles[1271]: Skipping /boot
Dec 13 01:29:26.658243 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:29:26.658259 systemd-tmpfiles[1271]: Skipping /boot
Dec 13 01:29:26.705701 zram_generator::config[1299]: No configuration found.
Dec 13 01:29:26.803884 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:29:26.846274 systemd[1]: Reloading finished in 217 ms.
Dec 13 01:29:26.864958 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:29:26.874216 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:29:26.884286 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 01:29:26.887737 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 01:29:26.891634 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:29:26.897191 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:29:26.905906 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 01:29:26.915211 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 01:29:26.919486 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:29:26.919666 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:29:26.926392 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:29:26.928931 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:29:26.933154 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:29:26.933722 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:29:26.933822 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:29:26.937322 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:29:26.937474 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:29:26.937613 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:29:26.937692 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:29:26.941048 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:29:26.941235 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:29:26.948432 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:29:26.949046 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:29:26.949192 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:29:26.961918 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 01:29:26.964112 systemd[1]: Finished ensure-sysext.service.
Dec 13 01:29:26.964806 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:29:26.964962 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:29:26.988865 systemd-udevd[1351]: Using default interface naming scheme 'v255'.
Dec 13 01:29:26.990839 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 01:29:26.993053 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:29:26.993336 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:29:26.996379 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:29:26.997095 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:29:26.997852 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:29:26.998212 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:29:26.999809 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:29:26.999913 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:29:27.007097 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 01:29:27.016518 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 01:29:27.020093 augenrules[1379]: No rules
Dec 13 01:29:27.027116 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 01:29:27.028599 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:29:27.045969 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 01:29:27.050548 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:29:27.058267 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:29:27.061967 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 01:29:27.064367 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:29:27.123345 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 01:29:27.126080 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1407)
Dec 13 01:29:27.131083 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1407)
Dec 13 01:29:27.187971 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 01:29:27.188585 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 01:29:27.199796 systemd-resolved[1347]: Positive Trust Anchors:
Dec 13 01:29:27.199820 systemd-resolved[1347]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:29:27.199847 systemd-resolved[1347]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:29:27.206834 systemd-resolved[1347]: Using system hostname 'ci-4081-2-1-2-9d1d8cb240'.
Dec 13 01:29:27.208763 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:29:27.209728 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:29:27.216957 systemd-networkd[1395]: lo: Link UP
Dec 13 01:29:27.217221 systemd-networkd[1395]: lo: Gained carrier
Dec 13 01:29:27.223120 systemd-networkd[1395]: Enumeration completed
Dec 13 01:29:27.223191 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:29:27.224118 systemd[1]: Reached target network.target - Network.
Dec 13 01:29:27.226224 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:29:27.226354 systemd-networkd[1395]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:29:27.227931 systemd-networkd[1395]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:29:27.227938 systemd-networkd[1395]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:29:27.228863 systemd-networkd[1395]: eth0: Link UP
Dec 13 01:29:27.229312 systemd-networkd[1395]: eth0: Gained carrier
Dec 13 01:29:27.229421 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:29:27.232232 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 01:29:27.233480 systemd-networkd[1395]: eth1: Link UP
Dec 13 01:29:27.233487 systemd-networkd[1395]: eth1: Gained carrier
Dec 13 01:29:27.233498 systemd-networkd[1395]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:29:27.254122 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 13 01:29:27.256152 systemd-networkd[1395]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:29:27.256823 systemd-timesyncd[1364]: Network configuration changed, trying to establish connection.
Dec 13 01:29:27.266102 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1402)
Dec 13 01:29:27.284616 systemd-networkd[1395]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:29:27.299094 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 01:29:27.307112 kernel: ACPI: button: Power Button [PWRF]
Dec 13 01:29:27.322876 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 13 01:29:27.329092 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Dec 13 01:29:27.329158 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Dec 13 01:29:27.334947 kernel: Console: switching to colour dummy device 80x25
Dec 13 01:29:27.334988 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 13 01:29:27.335001 kernel: [drm] features: -context_init
Dec 13 01:29:27.335016 kernel: [drm] number of scanouts: 1
Dec 13 01:29:27.335026 kernel: [drm] number of cap sets: 0
Dec 13 01:29:27.334925 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 01:29:27.338125 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Dec 13 01:29:27.338897 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Dec 13 01:29:27.339926 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:29:27.340029 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:29:27.345614 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 13 01:29:27.345666 kernel: Console: switching to colour frame buffer device 160x50
Dec 13 01:29:27.347854 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:29:27.360092 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Dec 13 01:29:27.379897 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 13 01:29:27.385206 systemd-networkd[1395]: eth0: DHCPv4 address 78.46.150.92/32, gateway 172.31.1.1 acquired from 172.31.1.1
Dec 13 01:29:27.386721 systemd-timesyncd[1364]: Network configuration changed, trying to establish connection.
Dec 13 01:29:27.405311 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 01:29:27.409529 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 01:29:27.409716 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 01:29:27.396462 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:29:27.412246 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:29:27.412532 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:29:27.412565 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:29:27.412581 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:29:27.414194 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 01:29:27.414585 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:29:27.414737 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:29:27.429040 kernel: EDAC MC: Ver: 3.0.0
Dec 13 01:29:27.429806 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:29:27.431238 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:29:27.431777 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:29:27.431976 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:29:27.435391 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:29:27.435499 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:29:27.442320 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:29:27.450567 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:29:27.450766 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:29:27.457186 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:29:27.511762 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:29:27.592485 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 01:29:27.600240 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 01:29:27.616296 lvm[1452]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:29:27.648443 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 01:29:27.648822 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:29:27.648923 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:29:27.652274 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 01:29:27.652426 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 01:29:27.653323 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 01:29:27.653965 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 01:29:27.654581 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 01:29:27.655185 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 01:29:27.655224 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:29:27.655779 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:29:27.664589 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 01:29:27.668220 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 01:29:27.678535 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 01:29:27.682049 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 01:29:27.685111 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 01:29:27.685755 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:29:27.689002 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:29:27.689656 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:29:27.689682 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:29:27.694180 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 01:29:27.698271 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 01:29:27.700565 lvm[1456]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:29:27.701201 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 01:29:27.707210 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 01:29:27.711028 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 01:29:27.711496 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 01:29:27.716257 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 01:29:27.720218 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 01:29:27.724083 jq[1460]: false
Dec 13 01:29:27.736182 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Dec 13 01:29:27.740675 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 01:29:27.749489 coreos-metadata[1458]: Dec 13 01:29:27.748 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Dec 13 01:29:27.754544 coreos-metadata[1458]: Dec 13 01:29:27.749 INFO Fetch successful
Dec 13 01:29:27.754544 coreos-metadata[1458]: Dec 13 01:29:27.749 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Dec 13 01:29:27.754544 coreos-metadata[1458]: Dec 13 01:29:27.750 INFO Fetch successful
Dec 13 01:29:27.751246 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 01:29:27.757535 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 01:29:27.762211 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 01:29:27.762707 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 01:29:27.764266 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 01:29:27.767560 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 01:29:27.775626 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 01:29:27.782155 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 01:29:27.782387 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 01:29:27.801750 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 01:29:27.802219 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 01:29:27.821215 extend-filesystems[1463]: Found loop4
Dec 13 01:29:27.833748 extend-filesystems[1463]: Found loop5
Dec 13 01:29:27.833748 extend-filesystems[1463]: Found loop6
Dec 13 01:29:27.833748 extend-filesystems[1463]: Found loop7
Dec 13 01:29:27.833748 extend-filesystems[1463]: Found sda
Dec 13 01:29:27.833748 extend-filesystems[1463]: Found sda1
Dec 13 01:29:27.833748 extend-filesystems[1463]: Found sda2
Dec 13 01:29:27.833748 extend-filesystems[1463]: Found sda3
Dec 13 01:29:27.833748 extend-filesystems[1463]: Found usr
Dec 13 01:29:27.833748 extend-filesystems[1463]: Found sda4
Dec 13 01:29:27.833748 extend-filesystems[1463]: Found sda6
Dec 13 01:29:27.833748 extend-filesystems[1463]: Found sda7
Dec 13 01:29:27.833748 extend-filesystems[1463]: Found sda9
Dec 13 01:29:27.833748 extend-filesystems[1463]: Checking size of /dev/sda9
Dec 13 01:29:27.891350 dbus-daemon[1459]: [system] SELinux support is enabled
Dec 13 01:29:27.838026 (ntainerd)[1489]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 01:29:27.892091 tar[1477]: linux-amd64/helm
Dec 13 01:29:27.892350 update_engine[1472]: I20241213 01:29:27.884308 1472 main.cc:92] Flatcar Update Engine starting
Dec 13 01:29:27.892569 jq[1473]: true
Dec 13 01:29:27.892742 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 01:29:27.898086 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 01:29:27.904820 extend-filesystems[1463]: Resized partition /dev/sda9
Dec 13 01:29:27.898135 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 01:29:27.911219 jq[1496]: true
Dec 13 01:29:27.900017 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 01:29:27.900035 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 01:29:27.908182 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 01:29:27.908376 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 01:29:27.922851 extend-filesystems[1506]: resize2fs 1.47.1 (20-May-2024)
Dec 13 01:29:27.927002 update_engine[1472]: I20241213 01:29:27.924811 1472 update_check_scheduler.cc:74] Next update check in 7m1s
Dec 13 01:29:27.931415 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 01:29:27.937990 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 01:29:27.938623 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 13 01:29:27.939097 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 01:29:27.946756 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Dec 13 01:29:28.001855 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1394) Dec 13 01:29:28.012456 systemd-logind[1470]: New seat seat0. Dec 13 01:29:28.027015 systemd-logind[1470]: Watching system buttons on /dev/input/event2 (Power Button) Dec 13 01:29:28.027034 systemd-logind[1470]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:29:28.028178 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:29:28.080821 bash[1528]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:29:28.081114 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:29:28.091591 systemd[1]: Starting sshkeys.service... Dec 13 01:29:28.133086 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 01:29:28.140316 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 01:29:28.162280 containerd[1489]: time="2024-12-13T01:29:28.162019719Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:29:28.170769 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Dec 13 01:29:28.183473 coreos-metadata[1539]: Dec 13 01:29:28.183 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Dec 13 01:29:28.188086 coreos-metadata[1539]: Dec 13 01:29:28.186 INFO Fetch successful Dec 13 01:29:28.188941 containerd[1489]: time="2024-12-13T01:29:28.188900072Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 01:29:28.190189 extend-filesystems[1506]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 01:29:28.190189 extend-filesystems[1506]: old_desc_blocks = 1, new_desc_blocks = 5 Dec 13 01:29:28.190189 extend-filesystems[1506]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Dec 13 01:29:28.200596 extend-filesystems[1463]: Resized filesystem in /dev/sda9 Dec 13 01:29:28.200596 extend-filesystems[1463]: Found sr0 Dec 13 01:29:28.195155 locksmithd[1513]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:29:28.206791 containerd[1489]: time="2024-12-13T01:29:28.202292325Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:28.206791 containerd[1489]: time="2024-12-13T01:29:28.202321098Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:29:28.206791 containerd[1489]: time="2024-12-13T01:29:28.202338591Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:29:28.206791 containerd[1489]: time="2024-12-13T01:29:28.202497519Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:29:28.206791 containerd[1489]: time="2024-12-13T01:29:28.202512016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:28.206791 containerd[1489]: time="2024-12-13T01:29:28.202579944Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:28.206791 containerd[1489]: time="2024-12-13T01:29:28.202590894Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:28.206791 containerd[1489]: time="2024-12-13T01:29:28.202765030Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:28.206791 containerd[1489]: time="2024-12-13T01:29:28.202779398Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:28.206791 containerd[1489]: time="2024-12-13T01:29:28.202790549Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:28.206791 containerd[1489]: time="2024-12-13T01:29:28.202798784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:28.197504 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:29:28.208432 containerd[1489]: time="2024-12-13T01:29:28.202879205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:28.197737 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:29:28.210926 containerd[1489]: time="2024-12-13T01:29:28.208607005Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:28.210926 containerd[1489]: time="2024-12-13T01:29:28.208726018Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:28.210926 containerd[1489]: time="2024-12-13T01:29:28.208739273Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:29:28.208430 unknown[1539]: wrote ssh authorized keys file for user: core Dec 13 01:29:28.211285 containerd[1489]: time="2024-12-13T01:29:28.211266211Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:29:28.211662 containerd[1489]: time="2024-12-13T01:29:28.211645803Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:29:28.218645 containerd[1489]: time="2024-12-13T01:29:28.218568553Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:29:28.218681 containerd[1489]: time="2024-12-13T01:29:28.218613598Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:29:28.218728 containerd[1489]: time="2024-12-13T01:29:28.218688889Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:29:28.218748 containerd[1489]: time="2024-12-13T01:29:28.218731109Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:29:28.218788 containerd[1489]: time="2024-12-13T01:29:28.218769100Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:29:28.218939 containerd[1489]: time="2024-12-13T01:29:28.218916115Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:29:28.221224 containerd[1489]: time="2024-12-13T01:29:28.221196321Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Dec 13 01:29:28.221377 containerd[1489]: time="2024-12-13T01:29:28.221353787Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:29:28.221398 containerd[1489]: time="2024-12-13T01:29:28.221376719Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:29:28.221398 containerd[1489]: time="2024-12-13T01:29:28.221390135Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:29:28.221428 containerd[1489]: time="2024-12-13T01:29:28.221421052Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:29:28.221451 containerd[1489]: time="2024-12-13T01:29:28.221434267Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:29:28.221451 containerd[1489]: time="2024-12-13T01:29:28.221444927Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:29:28.221484 containerd[1489]: time="2024-12-13T01:29:28.221457070Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:29:28.221484 containerd[1489]: time="2024-12-13T01:29:28.221473481Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:29:28.221513 containerd[1489]: time="2024-12-13T01:29:28.221485623Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:29:28.221529 containerd[1489]: time="2024-12-13T01:29:28.221514288Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Dec 13 01:29:28.221529 containerd[1489]: time="2024-12-13T01:29:28.221524927Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:29:28.221562 containerd[1489]: time="2024-12-13T01:29:28.221542490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:29:28.221562 containerd[1489]: time="2024-12-13T01:29:28.221555525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:29:28.221603 containerd[1489]: time="2024-12-13T01:29:28.221566315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:29:28.221603 containerd[1489]: time="2024-12-13T01:29:28.221593906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:29:28.221639 containerd[1489]: time="2024-12-13T01:29:28.221605919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:29:28.221639 containerd[1489]: time="2024-12-13T01:29:28.221623241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:29:28.221639 containerd[1489]: time="2024-12-13T01:29:28.221633561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:29:28.221682 containerd[1489]: time="2024-12-13T01:29:28.221668887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:29:28.221699 containerd[1489]: time="2024-12-13T01:29:28.221682352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:29:28.221699 containerd[1489]: time="2024-12-13T01:29:28.221695267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Dec 13 01:29:28.221733 containerd[1489]: time="2024-12-13T01:29:28.221712790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:29:28.221733 containerd[1489]: time="2024-12-13T01:29:28.221723149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:29:28.221763 containerd[1489]: time="2024-12-13T01:29:28.221749909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:29:28.221783 containerd[1489]: time="2024-12-13T01:29:28.221764246Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:29:28.221804 containerd[1489]: time="2024-12-13T01:29:28.221781859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:29:28.221804 containerd[1489]: time="2024-12-13T01:29:28.221792089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:29:28.221804 containerd[1489]: time="2024-12-13T01:29:28.221801766Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:29:28.224067 containerd[1489]: time="2024-12-13T01:29:28.221967497Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:29:28.224067 containerd[1489]: time="2024-12-13T01:29:28.221990300Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:29:28.224067 containerd[1489]: time="2024-12-13T01:29:28.221999778Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Dec 13 01:29:28.224067 containerd[1489]: time="2024-12-13T01:29:28.222743152Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:29:28.224067 containerd[1489]: time="2024-12-13T01:29:28.222755345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:29:28.224067 containerd[1489]: time="2024-12-13T01:29:28.222767778Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:29:28.224067 containerd[1489]: time="2024-12-13T01:29:28.222781804Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:29:28.224067 containerd[1489]: time="2024-12-13T01:29:28.222916807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:29:28.225281 containerd[1489]: time="2024-12-13T01:29:28.225201732Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:29:28.225436 containerd[1489]: time="2024-12-13T01:29:28.225290157Z" level=info msg="Connect containerd service" Dec 13 01:29:28.225436 containerd[1489]: time="2024-12-13T01:29:28.225346092Z" level=info msg="using legacy CRI server" Dec 13 01:29:28.225436 containerd[1489]: time="2024-12-13T01:29:28.225354939Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:29:28.225484 containerd[1489]: 
time="2024-12-13T01:29:28.225457591Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:29:28.228669 containerd[1489]: time="2024-12-13T01:29:28.228640410Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:29:28.228818 containerd[1489]: time="2024-12-13T01:29:28.228781905Z" level=info msg="Start subscribing containerd event" Dec 13 01:29:28.228846 containerd[1489]: time="2024-12-13T01:29:28.228825987Z" level=info msg="Start recovering state" Dec 13 01:29:28.228897 containerd[1489]: time="2024-12-13T01:29:28.228878817Z" level=info msg="Start event monitor" Dec 13 01:29:28.228897 containerd[1489]: time="2024-12-13T01:29:28.228895428Z" level=info msg="Start snapshots syncer" Dec 13 01:29:28.228933 containerd[1489]: time="2024-12-13T01:29:28.228903633Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:29:28.228933 containerd[1489]: time="2024-12-13T01:29:28.228910566Z" level=info msg="Start streaming server" Dec 13 01:29:28.229936 containerd[1489]: time="2024-12-13T01:29:28.229912716Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:29:28.230157 containerd[1489]: time="2024-12-13T01:29:28.230133870Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:29:28.232365 containerd[1489]: time="2024-12-13T01:29:28.232340197Z" level=info msg="containerd successfully booted in 0.071454s" Dec 13 01:29:28.232654 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:29:28.249003 update-ssh-keys[1550]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:29:28.249560 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
Dec 13 01:29:28.255536 systemd[1]: Finished sshkeys.service.
Dec 13 01:29:28.256088 sshd_keygen[1493]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 01:29:28.284637 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 01:29:28.300725 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 01:29:28.308299 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 01:29:28.308507 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 01:29:28.321986 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 01:29:28.341420 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 01:29:28.353330 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 01:29:28.360590 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 13 01:29:28.363341 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 01:29:28.546301 tar[1477]: linux-amd64/LICENSE
Dec 13 01:29:28.546391 tar[1477]: linux-amd64/README.md
Dec 13 01:29:28.559845 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 01:29:28.673309 systemd-networkd[1395]: eth0: Gained IPv6LL
Dec 13 01:29:28.674337 systemd-timesyncd[1364]: Network configuration changed, trying to establish connection.
Dec 13 01:29:28.677461 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 01:29:28.683689 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 01:29:28.693315 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:29:28.698854 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 01:29:28.740624 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 01:29:28.993470 systemd-networkd[1395]: eth1: Gained IPv6LL
Dec 13 01:29:28.993892 systemd-timesyncd[1364]: Network configuration changed, trying to establish connection.
Dec 13 01:29:29.379244 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:29:29.380455 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 01:29:29.385222 systemd[1]: Startup finished in 1.146s (kernel) + 5.395s (initrd) + 4.337s (userspace) = 10.879s.
Dec 13 01:29:29.386386 (kubelet)[1589]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:29:29.874452 kubelet[1589]: E1213 01:29:29.874299 1589 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:29:29.876921 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:29:29.877226 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:29:40.128021 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:29:40.135785 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:29:40.266279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:29:40.270171 (kubelet)[1608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:29:40.312717 kubelet[1608]: E1213 01:29:40.312671 1608 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:29:40.318930 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:29:40.319262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:29:50.458884 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 01:29:50.466197 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:29:50.588483 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:29:50.591938 (kubelet)[1624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:29:50.623884 kubelet[1624]: E1213 01:29:50.623828 1624 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:29:50.626015 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:29:50.626243 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:30:00.192709 systemd-timesyncd[1364]: Contacted time server 185.252.140.125:123 (2.flatcar.pool.ntp.org).
Dec 13 01:30:00.192756 systemd-timesyncd[1364]: Initial clock synchronization to Fri 2024-12-13 01:30:00.192573 UTC.
Dec 13 01:30:00.193070 systemd-resolved[1347]: Clock change detected. Flushing caches.
Dec 13 01:30:01.580564 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 01:30:01.585438 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:30:01.720534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:30:01.724995 (kubelet)[1640]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:30:01.760607 kubelet[1640]: E1213 01:30:01.760483 1640 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:30:01.763509 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:30:01.763688 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:30:11.830500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 13 01:30:11.843436 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:30:11.960591 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:30:11.971621 (kubelet)[1655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:30:12.003556 kubelet[1655]: E1213 01:30:12.003492 1655 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:30:12.007505 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:30:12.007737 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:30:14.393403 update_engine[1472]: I20241213 01:30:14.393306 1472 update_attempter.cc:509] Updating boot flags...
Dec 13 01:30:14.434325 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1671)
Dec 13 01:30:14.503110 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1674)
Dec 13 01:30:14.545265 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1674)
Dec 13 01:30:22.080126 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Dec 13 01:30:22.085427 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:30:22.207005 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:30:22.211628 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:30:22.246572 kubelet[1691]: E1213 01:30:22.246514 1691 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:30:22.250157 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:30:22.250374 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:30:32.330845 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Dec 13 01:30:32.341475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:30:32.490959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:30:32.509746 (kubelet)[1706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:30:32.548801 kubelet[1706]: E1213 01:30:32.548736 1706 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:30:32.552654 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:30:32.552841 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:30:42.580204 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Dec 13 01:30:42.585388 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:30:42.704184 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:30:42.710507 (kubelet)[1721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:30:42.741664 kubelet[1721]: E1213 01:30:42.741603 1721 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:30:42.745335 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:30:42.745517 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:30:52.830277 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Dec 13 01:30:52.835408 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:30:52.959223 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:30:52.963173 (kubelet)[1736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:30:53.000278 kubelet[1736]: E1213 01:30:53.000133 1736 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:30:53.003308 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:30:53.003544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:31:03.080179 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Dec 13 01:31:03.085381 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:03.206181 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:03.209998 (kubelet)[1751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:31:03.244120 kubelet[1751]: E1213 01:31:03.244056 1751 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:31:03.247589 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:31:03.247778 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:31:13.330319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Dec 13 01:31:13.335668 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:13.456379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:13.460087 (kubelet)[1766]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:31:13.494806 kubelet[1766]: E1213 01:31:13.494705 1766 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:31:13.498338 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:31:13.498558 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 01:31:23.580214 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Dec 13 01:31:23.585418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:23.721448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:23.725064 (kubelet)[1782]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:31:23.755014 kubelet[1782]: E1213 01:31:23.754957 1782 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:31:23.758629 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:31:23.758805 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:31:26.007045 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:31:26.011548 systemd[1]: Started sshd@0-78.46.150.92:22-147.75.109.163:37436.service - OpenSSH per-connection server daemon (147.75.109.163:37436). Dec 13 01:31:27.013171 sshd[1791]: Accepted publickey for core from 147.75.109.163 port 37436 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:31:27.015976 sshd[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:27.027945 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:31:27.033781 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:31:27.040165 systemd-logind[1470]: New session 1 of user core. Dec 13 01:31:27.070929 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Dec 13 01:31:27.079012 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:31:27.094813 (systemd)[1795]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:31:27.210677 systemd[1795]: Queued start job for default target default.target. Dec 13 01:31:27.221392 systemd[1795]: Created slice app.slice - User Application Slice. Dec 13 01:31:27.221418 systemd[1795]: Reached target paths.target - Paths. Dec 13 01:31:27.221431 systemd[1795]: Reached target timers.target - Timers. Dec 13 01:31:27.222802 systemd[1795]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:31:27.235828 systemd[1795]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:31:27.235941 systemd[1795]: Reached target sockets.target - Sockets. Dec 13 01:31:27.235955 systemd[1795]: Reached target basic.target - Basic System. Dec 13 01:31:27.235992 systemd[1795]: Reached target default.target - Main User Target. Dec 13 01:31:27.236023 systemd[1795]: Startup finished in 132ms. Dec 13 01:31:27.236119 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:31:27.246383 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:31:27.940159 systemd[1]: Started sshd@1-78.46.150.92:22-147.75.109.163:33970.service - OpenSSH per-connection server daemon (147.75.109.163:33970). Dec 13 01:31:28.918125 sshd[1806]: Accepted publickey for core from 147.75.109.163 port 33970 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:31:28.920036 sshd[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:28.924729 systemd-logind[1470]: New session 2 of user core. Dec 13 01:31:28.931414 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:31:29.600117 sshd[1806]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:29.604187 systemd-logind[1470]: Session 2 logged out. Waiting for processes to exit. 
Dec 13 01:31:29.605064 systemd[1]: sshd@1-78.46.150.92:22-147.75.109.163:33970.service: Deactivated successfully. Dec 13 01:31:29.607503 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:31:29.608390 systemd-logind[1470]: Removed session 2. Dec 13 01:31:29.768553 systemd[1]: Started sshd@2-78.46.150.92:22-147.75.109.163:33984.service - OpenSSH per-connection server daemon (147.75.109.163:33984). Dec 13 01:31:30.748470 sshd[1813]: Accepted publickey for core from 147.75.109.163 port 33984 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:31:30.750275 sshd[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:30.755346 systemd-logind[1470]: New session 3 of user core. Dec 13 01:31:30.763386 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:31:31.426185 sshd[1813]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:31.429189 systemd[1]: sshd@2-78.46.150.92:22-147.75.109.163:33984.service: Deactivated successfully. Dec 13 01:31:31.431368 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:31:31.432702 systemd-logind[1470]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:31:31.433869 systemd-logind[1470]: Removed session 3. Dec 13 01:31:31.599533 systemd[1]: Started sshd@3-78.46.150.92:22-147.75.109.163:33998.service - OpenSSH per-connection server daemon (147.75.109.163:33998). Dec 13 01:31:32.586876 sshd[1820]: Accepted publickey for core from 147.75.109.163 port 33998 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:31:32.588573 sshd[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:32.593182 systemd-logind[1470]: New session 4 of user core. Dec 13 01:31:32.602382 systemd[1]: Started session-4.scope - Session 4 of User core. 
Dec 13 01:31:33.273592 sshd[1820]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:33.277156 systemd-logind[1470]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:31:33.278125 systemd[1]: sshd@3-78.46.150.92:22-147.75.109.163:33998.service: Deactivated successfully. Dec 13 01:31:33.280204 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:31:33.281042 systemd-logind[1470]: Removed session 4. Dec 13 01:31:33.443367 systemd[1]: Started sshd@4-78.46.150.92:22-147.75.109.163:34014.service - OpenSSH per-connection server daemon (147.75.109.163:34014). Dec 13 01:31:33.830167 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Dec 13 01:31:33.836645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:33.954346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:33.968492 (kubelet)[1837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:31:33.997180 kubelet[1837]: E1213 01:31:33.997083 1837 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:31:34.000275 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:31:34.000497 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:31:34.420528 sshd[1827]: Accepted publickey for core from 147.75.109.163 port 34014 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:31:34.422125 sshd[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:34.427202 systemd-logind[1470]: New session 5 of user core. 
Dec 13 01:31:34.433366 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:31:34.951992 sudo[1846]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:31:34.952403 sudo[1846]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:31:35.202541 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:31:35.204763 (dockerd)[1862]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:31:35.448182 dockerd[1862]: time="2024-12-13T01:31:35.448108347Z" level=info msg="Starting up" Dec 13 01:31:35.540759 dockerd[1862]: time="2024-12-13T01:31:35.540649154Z" level=info msg="Loading containers: start." Dec 13 01:31:35.638265 kernel: Initializing XFRM netlink socket Dec 13 01:31:35.709329 systemd-networkd[1395]: docker0: Link UP Dec 13 01:31:35.727518 dockerd[1862]: time="2024-12-13T01:31:35.727482350Z" level=info msg="Loading containers: done." Dec 13 01:31:35.742130 dockerd[1862]: time="2024-12-13T01:31:35.742088341Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:31:35.742284 dockerd[1862]: time="2024-12-13T01:31:35.742182177Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:31:35.742337 dockerd[1862]: time="2024-12-13T01:31:35.742311199Z" level=info msg="Daemon has completed initialization" Dec 13 01:31:35.772477 dockerd[1862]: time="2024-12-13T01:31:35.772423190Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:31:35.772520 systemd[1]: Started docker.service - Docker Application Container Engine. 
Dec 13 01:31:36.842001 containerd[1489]: time="2024-12-13T01:31:36.841962564Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Dec 13 01:31:37.429734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3020764741.mount: Deactivated successfully. Dec 13 01:31:38.325851 containerd[1489]: time="2024-12-13T01:31:38.325806114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:38.326689 containerd[1489]: time="2024-12-13T01:31:38.326653168Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=27975575" Dec 13 01:31:38.327320 containerd[1489]: time="2024-12-13T01:31:38.327276877Z" level=info msg="ImageCreate event name:\"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:38.330452 containerd[1489]: time="2024-12-13T01:31:38.329253511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:38.330452 containerd[1489]: time="2024-12-13T01:31:38.330083471Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"27972283\" in 1.488083886s" Dec 13 01:31:38.330452 containerd[1489]: time="2024-12-13T01:31:38.330105406Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\"" Dec 13 01:31:38.331346 containerd[1489]: 
time="2024-12-13T01:31:38.331327593Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Dec 13 01:31:39.562815 containerd[1489]: time="2024-12-13T01:31:39.562761991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:39.563705 containerd[1489]: time="2024-12-13T01:31:39.563542056Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=24702177" Dec 13 01:31:39.564406 containerd[1489]: time="2024-12-13T01:31:39.564369978Z" level=info msg="ImageCreate event name:\"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:39.569401 containerd[1489]: time="2024-12-13T01:31:39.569359252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:39.570204 containerd[1489]: time="2024-12-13T01:31:39.569910081Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"26147269\" in 1.238466482s" Dec 13 01:31:39.570204 containerd[1489]: time="2024-12-13T01:31:39.569936485Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\"" Dec 13 01:31:39.570570 containerd[1489]: time="2024-12-13T01:31:39.570439937Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Dec 13 
01:31:40.449156 containerd[1489]: time="2024-12-13T01:31:40.449091818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:40.450001 containerd[1489]: time="2024-12-13T01:31:40.449961702Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=18652087" Dec 13 01:31:40.450819 containerd[1489]: time="2024-12-13T01:31:40.450780190Z" level=info msg="ImageCreate event name:\"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:40.452836 containerd[1489]: time="2024-12-13T01:31:40.452808512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:40.454628 containerd[1489]: time="2024-12-13T01:31:40.454595604Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"20097197\" in 884.129214ms" Dec 13 01:31:40.454628 containerd[1489]: time="2024-12-13T01:31:40.454624072Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\"" Dec 13 01:31:40.455029 containerd[1489]: time="2024-12-13T01:31:40.454982869Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 01:31:41.410281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount48028966.mount: Deactivated successfully. 
Dec 13 01:31:41.733956 containerd[1489]: time="2024-12-13T01:31:41.733638664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:41.734746 containerd[1489]: time="2024-12-13T01:31:41.734590579Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230269" Dec 13 01:31:41.735522 containerd[1489]: time="2024-12-13T01:31:41.735454637Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:41.737087 containerd[1489]: time="2024-12-13T01:31:41.737045175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:41.737990 containerd[1489]: time="2024-12-13T01:31:41.737560046Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 1.282553659s" Dec 13 01:31:41.737990 containerd[1489]: time="2024-12-13T01:31:41.737599806Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Dec 13 01:31:41.738553 containerd[1489]: time="2024-12-13T01:31:41.738400746Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:31:42.294870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1894347225.mount: Deactivated successfully. 
Dec 13 01:31:42.966975 containerd[1489]: time="2024-12-13T01:31:42.966878258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:42.968544 containerd[1489]: time="2024-12-13T01:31:42.968318755Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185841" Dec 13 01:31:42.970203 containerd[1489]: time="2024-12-13T01:31:42.970136235Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:42.972582 containerd[1489]: time="2024-12-13T01:31:42.972542442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:42.973793 containerd[1489]: time="2024-12-13T01:31:42.973454203Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.235023346s" Dec 13 01:31:42.973793 containerd[1489]: time="2024-12-13T01:31:42.973493503Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:31:42.974231 containerd[1489]: time="2024-12-13T01:31:42.974026328Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 13 01:31:43.451858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2183803708.mount: Deactivated successfully. 
Dec 13 01:31:43.457019 containerd[1489]: time="2024-12-13T01:31:43.456959037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:43.458169 containerd[1489]: time="2024-12-13T01:31:43.458128623Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321158" Dec 13 01:31:43.458953 containerd[1489]: time="2024-12-13T01:31:43.458913946Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:43.460701 containerd[1489]: time="2024-12-13T01:31:43.460653830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:43.461431 containerd[1489]: time="2024-12-13T01:31:43.461396847Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 487.344057ms" Dec 13 01:31:43.461481 containerd[1489]: time="2024-12-13T01:31:43.461432429Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 13 01:31:43.461984 containerd[1489]: time="2024-12-13T01:31:43.461913660Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Dec 13 01:31:43.987529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3736066502.mount: Deactivated successfully. Dec 13 01:31:44.080046 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. 
Dec 13 01:31:44.086382 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:44.227374 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:44.237691 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:31:44.278448 kubelet[2136]: E1213 01:31:44.278373 2136 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:31:44.281557 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:31:44.281733 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:31:45.472260 containerd[1489]: time="2024-12-13T01:31:45.472177809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:45.473133 containerd[1489]: time="2024-12-13T01:31:45.473095753Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780035" Dec 13 01:31:45.473738 containerd[1489]: time="2024-12-13T01:31:45.473698903Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:45.475992 containerd[1489]: time="2024-12-13T01:31:45.475959921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:45.477003 containerd[1489]: time="2024-12-13T01:31:45.476889067Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.014949727s" Dec 13 01:31:45.477003 containerd[1489]: time="2024-12-13T01:31:45.476916121Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Dec 13 01:31:47.768633 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:47.779662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:47.802859 systemd[1]: Reloading requested from client PID 2212 ('systemctl') (unit session-5.scope)... Dec 13 01:31:47.802874 systemd[1]: Reloading... Dec 13 01:31:47.936260 zram_generator::config[2276]: No configuration found. Dec 13 01:31:48.002883 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:31:48.067481 systemd[1]: Reloading finished in 264 ms. Dec 13 01:31:48.114499 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:31:48.114605 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:31:48.114901 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:48.119489 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:48.241384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:31:48.242038 (kubelet)[2305]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:31:48.285077 kubelet[2305]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:31:48.285445 kubelet[2305]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:31:48.285507 kubelet[2305]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:31:48.285623 kubelet[2305]: I1213 01:31:48.285592 2305 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:31:48.534628 kubelet[2305]: I1213 01:31:48.534528 2305 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 01:31:48.534628 kubelet[2305]: I1213 01:31:48.534572 2305 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:31:48.535831 kubelet[2305]: I1213 01:31:48.535800 2305 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 01:31:48.558226 kubelet[2305]: E1213 01:31:48.558177 2305 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://78.46.150.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 78.46.150.92:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:31:48.558494 kubelet[2305]: I1213 
01:31:48.558362 2305 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:31:48.567856 kubelet[2305]: E1213 01:31:48.567815 2305 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 01:31:48.567856 kubelet[2305]: I1213 01:31:48.567854 2305 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 01:31:48.574774 kubelet[2305]: I1213 01:31:48.574749 2305 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:31:48.575858 kubelet[2305]: I1213 01:31:48.575829 2305 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 01:31:48.576009 kubelet[2305]: I1213 01:31:48.575968 2305 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:31:48.576152 kubelet[2305]: I1213 01:31:48.575998 2305 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081-2-1-2-9d1d8cb240","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 01:31:48.576152 kubelet[2305]: I1213 01:31:48.576148 2305 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:31:48.576259 kubelet[2305]: I1213 01:31:48.576157 2305 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 01:31:48.576285 kubelet[2305]: I1213 01:31:48.576275 2305 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:31:48.577993 kubelet[2305]: I1213 01:31:48.577816 2305 
kubelet.go:408] "Attempting to sync node with API server" Dec 13 01:31:48.577993 kubelet[2305]: I1213 01:31:48.577836 2305 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:31:48.577993 kubelet[2305]: I1213 01:31:48.577865 2305 kubelet.go:314] "Adding apiserver pod source" Dec 13 01:31:48.577993 kubelet[2305]: I1213 01:31:48.577878 2305 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:31:48.584625 kubelet[2305]: W1213 01:31:48.584224 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://78.46.150.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-2-9d1d8cb240&limit=500&resourceVersion=0": dial tcp 78.46.150.92:6443: connect: connection refused Dec 13 01:31:48.584625 kubelet[2305]: E1213 01:31:48.584288 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://78.46.150.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-2-9d1d8cb240&limit=500&resourceVersion=0\": dial tcp 78.46.150.92:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:31:48.588903 kubelet[2305]: W1213 01:31:48.587368 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://78.46.150.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 78.46.150.92:6443: connect: connection refused Dec 13 01:31:48.588903 kubelet[2305]: E1213 01:31:48.587413 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://78.46.150.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 78.46.150.92:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:31:48.588903 kubelet[2305]: I1213 01:31:48.587488 2305 
kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:31:48.593828 kubelet[2305]: I1213 01:31:48.590676 2305 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:31:48.593828 kubelet[2305]: W1213 01:31:48.591827 2305 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:31:48.593828 kubelet[2305]: I1213 01:31:48.592679 2305 server.go:1269] "Started kubelet" Dec 13 01:31:48.593828 kubelet[2305]: I1213 01:31:48.592795 2305 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:31:48.601518 kubelet[2305]: I1213 01:31:48.601435 2305 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:31:48.605165 kubelet[2305]: E1213 01:31:48.603043 2305 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://78.46.150.92:6443/api/v1/namespaces/default/events\": dial tcp 78.46.150.92:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-2-9d1d8cb240.1810987a892f7801 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-2-9d1d8cb240,UID:ci-4081-2-1-2-9d1d8cb240,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-2-9d1d8cb240,},FirstTimestamp:2024-12-13 01:31:48.592662529 +0000 UTC m=+0.346905339,LastTimestamp:2024-12-13 01:31:48.592662529 +0000 UTC m=+0.346905339,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-2-9d1d8cb240,}" Dec 13 01:31:48.605165 kubelet[2305]: I1213 01:31:48.605094 2305 server.go:236] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:31:48.607055 kubelet[2305]: I1213 01:31:48.606493 2305 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:31:48.607055 kubelet[2305]: I1213 01:31:48.606514 2305 server.go:460] "Adding debug handlers to kubelet server" Dec 13 01:31:48.607455 kubelet[2305]: I1213 01:31:48.607442 2305 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 01:31:48.610934 kubelet[2305]: E1213 01:31:48.610916 2305 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-2-9d1d8cb240\" not found" Dec 13 01:31:48.611062 kubelet[2305]: I1213 01:31:48.611052 2305 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 01:31:48.613031 kubelet[2305]: I1213 01:31:48.613018 2305 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 01:31:48.613147 kubelet[2305]: I1213 01:31:48.613137 2305 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:31:48.613548 kubelet[2305]: W1213 01:31:48.613477 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://78.46.150.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.46.150.92:6443: connect: connection refused Dec 13 01:31:48.615660 kubelet[2305]: E1213 01:31:48.615313 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://78.46.150.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 78.46.150.92:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:31:48.615660 kubelet[2305]: E1213 01:31:48.615382 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://78.46.150.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-2-9d1d8cb240?timeout=10s\": dial tcp 78.46.150.92:6443: connect: connection refused" interval="200ms" Dec 13 01:31:48.616388 kubelet[2305]: I1213 01:31:48.616028 2305 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:31:48.616388 kubelet[2305]: I1213 01:31:48.616090 2305 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:31:48.619046 kubelet[2305]: I1213 01:31:48.618979 2305 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:31:48.619473 kubelet[2305]: E1213 01:31:48.619460 2305 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:31:48.642419 kubelet[2305]: I1213 01:31:48.641811 2305 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:31:48.643506 kubelet[2305]: I1213 01:31:48.643469 2305 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:31:48.643506 kubelet[2305]: I1213 01:31:48.643482 2305 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:31:48.643571 kubelet[2305]: I1213 01:31:48.643547 2305 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:31:48.643809 kubelet[2305]: I1213 01:31:48.643784 2305 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:31:48.643854 kubelet[2305]: I1213 01:31:48.643814 2305 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:31:48.643854 kubelet[2305]: I1213 01:31:48.643852 2305 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 01:31:48.643895 kubelet[2305]: E1213 01:31:48.643885 2305 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:31:48.647454 kubelet[2305]: W1213 01:31:48.647415 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://78.46.150.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.46.150.92:6443: connect: connection refused Dec 13 01:31:48.647548 kubelet[2305]: E1213 01:31:48.647532 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://78.46.150.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 78.46.150.92:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:31:48.648781 kubelet[2305]: I1213 01:31:48.648758 2305 policy_none.go:49] "None policy: Start" Dec 13 01:31:48.649224 kubelet[2305]: I1213 01:31:48.649186 2305 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:31:48.649224 kubelet[2305]: I1213 01:31:48.649209 2305 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:31:48.655793 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:31:48.673855 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:31:48.677402 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 13 01:31:48.684142 kubelet[2305]: I1213 01:31:48.683962 2305 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:31:48.684142 kubelet[2305]: I1213 01:31:48.684122 2305 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 01:31:48.684213 kubelet[2305]: I1213 01:31:48.684132 2305 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:31:48.685027 kubelet[2305]: I1213 01:31:48.684511 2305 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:31:48.685710 kubelet[2305]: E1213 01:31:48.685635 2305 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-1-2-9d1d8cb240\" not found" Dec 13 01:31:48.757649 systemd[1]: Created slice kubepods-burstable-pod1dd77ea92913c122cdd7e2a4a0cc95ae.slice - libcontainer container kubepods-burstable-pod1dd77ea92913c122cdd7e2a4a0cc95ae.slice. Dec 13 01:31:48.771848 systemd[1]: Created slice kubepods-burstable-pod84abc37da06720016f70a1e4e6bd1fd9.slice - libcontainer container kubepods-burstable-pod84abc37da06720016f70a1e4e6bd1fd9.slice. Dec 13 01:31:48.776110 systemd[1]: Created slice kubepods-burstable-podf48e33c45b7a915160e6ff26a059e9a7.slice - libcontainer container kubepods-burstable-podf48e33c45b7a915160e6ff26a059e9a7.slice. 
Dec 13 01:31:48.786726 kubelet[2305]: I1213 01:31:48.786024 2305 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:48.786726 kubelet[2305]: E1213 01:31:48.786397 2305 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://78.46.150.92:6443/api/v1/nodes\": dial tcp 78.46.150.92:6443: connect: connection refused" node="ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:48.815000 kubelet[2305]: I1213 01:31:48.814960 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1dd77ea92913c122cdd7e2a4a0cc95ae-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-2-9d1d8cb240\" (UID: \"1dd77ea92913c122cdd7e2a4a0cc95ae\") " pod="kube-system/kube-apiserver-ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:48.815206 kubelet[2305]: I1213 01:31:48.815008 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84abc37da06720016f70a1e4e6bd1fd9-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-2-9d1d8cb240\" (UID: \"84abc37da06720016f70a1e4e6bd1fd9\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:48.815206 kubelet[2305]: I1213 01:31:48.815044 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84abc37da06720016f70a1e4e6bd1fd9-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-2-9d1d8cb240\" (UID: \"84abc37da06720016f70a1e4e6bd1fd9\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:48.815206 kubelet[2305]: I1213 01:31:48.815062 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84abc37da06720016f70a1e4e6bd1fd9-k8s-certs\") 
pod \"kube-controller-manager-ci-4081-2-1-2-9d1d8cb240\" (UID: \"84abc37da06720016f70a1e4e6bd1fd9\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:48.815206 kubelet[2305]: I1213 01:31:48.815079 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84abc37da06720016f70a1e4e6bd1fd9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-2-9d1d8cb240\" (UID: \"84abc37da06720016f70a1e4e6bd1fd9\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:48.815206 kubelet[2305]: I1213 01:31:48.815096 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84abc37da06720016f70a1e4e6bd1fd9-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-2-9d1d8cb240\" (UID: \"84abc37da06720016f70a1e4e6bd1fd9\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:48.815371 kubelet[2305]: I1213 01:31:48.815112 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f48e33c45b7a915160e6ff26a059e9a7-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-2-9d1d8cb240\" (UID: \"f48e33c45b7a915160e6ff26a059e9a7\") " pod="kube-system/kube-scheduler-ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:48.815371 kubelet[2305]: I1213 01:31:48.815127 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1dd77ea92913c122cdd7e2a4a0cc95ae-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-2-9d1d8cb240\" (UID: \"1dd77ea92913c122cdd7e2a4a0cc95ae\") " pod="kube-system/kube-apiserver-ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:48.815371 kubelet[2305]: I1213 01:31:48.815141 2305 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1dd77ea92913c122cdd7e2a4a0cc95ae-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-2-9d1d8cb240\" (UID: \"1dd77ea92913c122cdd7e2a4a0cc95ae\") " pod="kube-system/kube-apiserver-ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:48.816348 kubelet[2305]: E1213 01:31:48.816283 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.46.150.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-2-9d1d8cb240?timeout=10s\": dial tcp 78.46.150.92:6443: connect: connection refused" interval="400ms" Dec 13 01:31:48.989102 kubelet[2305]: I1213 01:31:48.989057 2305 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:48.989490 kubelet[2305]: E1213 01:31:48.989438 2305 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://78.46.150.92:6443/api/v1/nodes\": dial tcp 78.46.150.92:6443: connect: connection refused" node="ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:49.071638 containerd[1489]: time="2024-12-13T01:31:49.071493780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-2-9d1d8cb240,Uid:1dd77ea92913c122cdd7e2a4a0cc95ae,Namespace:kube-system,Attempt:0,}" Dec 13 01:31:49.080886 containerd[1489]: time="2024-12-13T01:31:49.080413283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-2-9d1d8cb240,Uid:84abc37da06720016f70a1e4e6bd1fd9,Namespace:kube-system,Attempt:0,}" Dec 13 01:31:49.080886 containerd[1489]: time="2024-12-13T01:31:49.080439927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-2-9d1d8cb240,Uid:f48e33c45b7a915160e6ff26a059e9a7,Namespace:kube-system,Attempt:0,}" Dec 13 01:31:49.217741 kubelet[2305]: E1213 01:31:49.217645 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://78.46.150.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-2-9d1d8cb240?timeout=10s\": dial tcp 78.46.150.92:6443: connect: connection refused" interval="800ms" Dec 13 01:31:49.391912 kubelet[2305]: I1213 01:31:49.391838 2305 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:49.392574 kubelet[2305]: E1213 01:31:49.392113 2305 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://78.46.150.92:6443/api/v1/nodes\": dial tcp 78.46.150.92:6443: connect: connection refused" node="ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:49.577900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount738301930.mount: Deactivated successfully. Dec 13 01:31:49.581627 containerd[1489]: time="2024-12-13T01:31:49.581507110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:31:49.584324 containerd[1489]: time="2024-12-13T01:31:49.584211151Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076" Dec 13 01:31:49.585004 containerd[1489]: time="2024-12-13T01:31:49.584958690Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:31:49.585752 containerd[1489]: time="2024-12-13T01:31:49.585708356Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:31:49.587464 containerd[1489]: time="2024-12-13T01:31:49.587108577Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:31:49.587464 containerd[1489]: time="2024-12-13T01:31:49.587345109Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:31:49.588318 containerd[1489]: time="2024-12-13T01:31:49.588269522Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:31:49.590585 containerd[1489]: time="2024-12-13T01:31:49.590521222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:31:49.593727 containerd[1489]: time="2024-12-13T01:31:49.592661288Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 512.143155ms" Dec 13 01:31:49.594065 containerd[1489]: time="2024-12-13T01:31:49.594011140Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 513.52968ms" Dec 13 01:31:49.596167 containerd[1489]: time="2024-12-13T01:31:49.596090065Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 524.513159ms" Dec 13 
01:31:49.612878 kubelet[2305]: W1213 01:31:49.612809 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://78.46.150.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 78.46.150.92:6443: connect: connection refused Dec 13 01:31:49.612989 kubelet[2305]: E1213 01:31:49.612883 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://78.46.150.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 78.46.150.92:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:31:49.707426 containerd[1489]: time="2024-12-13T01:31:49.707179755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:31:49.707426 containerd[1489]: time="2024-12-13T01:31:49.707250515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:31:49.707426 containerd[1489]: time="2024-12-13T01:31:49.707323150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:49.708733 containerd[1489]: time="2024-12-13T01:31:49.708343544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:49.715063 containerd[1489]: time="2024-12-13T01:31:49.714679828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:31:49.715063 containerd[1489]: time="2024-12-13T01:31:49.714727793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:31:49.715063 containerd[1489]: time="2024-12-13T01:31:49.714745828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:49.715063 containerd[1489]: time="2024-12-13T01:31:49.714859285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:49.715508 containerd[1489]: time="2024-12-13T01:31:49.715223762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:31:49.715508 containerd[1489]: time="2024-12-13T01:31:49.715284883Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:31:49.715508 containerd[1489]: time="2024-12-13T01:31:49.715296057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:49.715508 containerd[1489]: time="2024-12-13T01:31:49.715354683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:49.732493 kubelet[2305]: W1213 01:31:49.732376 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://78.46.150.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-2-9d1d8cb240&limit=500&resourceVersion=0": dial tcp 78.46.150.92:6443: connect: connection refused Dec 13 01:31:49.732493 kubelet[2305]: E1213 01:31:49.732493 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://78.46.150.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-2-9d1d8cb240&limit=500&resourceVersion=0\": dial tcp 78.46.150.92:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:31:49.737391 systemd[1]: Started cri-containerd-4856dea934df4997ce26bb74e8f1d058de52c8465baad7b733d296eda95b71cc.scope - libcontainer container 4856dea934df4997ce26bb74e8f1d058de52c8465baad7b733d296eda95b71cc. Dec 13 01:31:49.743668 systemd[1]: Started cri-containerd-d4000a19fdde7a36374579dacd670f9b9711b5e3c4af5b98cdd9bf48042838a7.scope - libcontainer container d4000a19fdde7a36374579dacd670f9b9711b5e3c4af5b98cdd9bf48042838a7. Dec 13 01:31:49.748150 systemd[1]: Started cri-containerd-905d5080ba0ad296fb672e31a10e00584411cbc45767b257e98d28abb8658a05.scope - libcontainer container 905d5080ba0ad296fb672e31a10e00584411cbc45767b257e98d28abb8658a05. 
Dec 13 01:31:49.801978 containerd[1489]: time="2024-12-13T01:31:49.801748635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-2-9d1d8cb240,Uid:f48e33c45b7a915160e6ff26a059e9a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4000a19fdde7a36374579dacd670f9b9711b5e3c4af5b98cdd9bf48042838a7\"" Dec 13 01:31:49.814737 containerd[1489]: time="2024-12-13T01:31:49.812793986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-2-9d1d8cb240,Uid:84abc37da06720016f70a1e4e6bd1fd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"905d5080ba0ad296fb672e31a10e00584411cbc45767b257e98d28abb8658a05\"" Dec 13 01:31:49.814737 containerd[1489]: time="2024-12-13T01:31:49.813978608Z" level=info msg="CreateContainer within sandbox \"d4000a19fdde7a36374579dacd670f9b9711b5e3c4af5b98cdd9bf48042838a7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:31:49.825981 containerd[1489]: time="2024-12-13T01:31:49.825928293Z" level=info msg="CreateContainer within sandbox \"905d5080ba0ad296fb672e31a10e00584411cbc45767b257e98d28abb8658a05\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:31:49.828611 containerd[1489]: time="2024-12-13T01:31:49.828582486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-2-9d1d8cb240,Uid:1dd77ea92913c122cdd7e2a4a0cc95ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"4856dea934df4997ce26bb74e8f1d058de52c8465baad7b733d296eda95b71cc\"" Dec 13 01:31:49.832099 containerd[1489]: time="2024-12-13T01:31:49.832072103Z" level=info msg="CreateContainer within sandbox \"4856dea934df4997ce26bb74e8f1d058de52c8465baad7b733d296eda95b71cc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:31:49.836864 containerd[1489]: time="2024-12-13T01:31:49.836828355Z" level=info msg="CreateContainer within sandbox 
\"d4000a19fdde7a36374579dacd670f9b9711b5e3c4af5b98cdd9bf48042838a7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"453037c902ada53ddceb0797aa37b4e2b658c4f8031a7da6c4c2bbb875520cd2\"" Dec 13 01:31:49.837587 containerd[1489]: time="2024-12-13T01:31:49.837550185Z" level=info msg="StartContainer for \"453037c902ada53ddceb0797aa37b4e2b658c4f8031a7da6c4c2bbb875520cd2\"" Dec 13 01:31:49.843447 containerd[1489]: time="2024-12-13T01:31:49.843414126Z" level=info msg="CreateContainer within sandbox \"905d5080ba0ad296fb672e31a10e00584411cbc45767b257e98d28abb8658a05\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4ac3835f87d1df51f90f60e62ba494430ca7b14fee5b915993c577f80b627b30\"" Dec 13 01:31:49.844916 containerd[1489]: time="2024-12-13T01:31:49.844128941Z" level=info msg="StartContainer for \"4ac3835f87d1df51f90f60e62ba494430ca7b14fee5b915993c577f80b627b30\"" Dec 13 01:31:49.849850 containerd[1489]: time="2024-12-13T01:31:49.849828576Z" level=info msg="CreateContainer within sandbox \"4856dea934df4997ce26bb74e8f1d058de52c8465baad7b733d296eda95b71cc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2817ca0626cfc6c5ae3a0f36a2a19780bc0704e71055bf2d6716b7f110ae961a\"" Dec 13 01:31:49.850393 containerd[1489]: time="2024-12-13T01:31:49.850375476Z" level=info msg="StartContainer for \"2817ca0626cfc6c5ae3a0f36a2a19780bc0704e71055bf2d6716b7f110ae961a\"" Dec 13 01:31:49.871383 systemd[1]: Started cri-containerd-453037c902ada53ddceb0797aa37b4e2b658c4f8031a7da6c4c2bbb875520cd2.scope - libcontainer container 453037c902ada53ddceb0797aa37b4e2b658c4f8031a7da6c4c2bbb875520cd2. Dec 13 01:31:49.894357 systemd[1]: Started cri-containerd-2817ca0626cfc6c5ae3a0f36a2a19780bc0704e71055bf2d6716b7f110ae961a.scope - libcontainer container 2817ca0626cfc6c5ae3a0f36a2a19780bc0704e71055bf2d6716b7f110ae961a. 
Dec 13 01:31:49.895411 systemd[1]: Started cri-containerd-4ac3835f87d1df51f90f60e62ba494430ca7b14fee5b915993c577f80b627b30.scope - libcontainer container 4ac3835f87d1df51f90f60e62ba494430ca7b14fee5b915993c577f80b627b30. Dec 13 01:31:49.896925 kubelet[2305]: W1213 01:31:49.896146 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://78.46.150.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.46.150.92:6443: connect: connection refused Dec 13 01:31:49.896925 kubelet[2305]: E1213 01:31:49.896633 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://78.46.150.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 78.46.150.92:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:31:49.948486 containerd[1489]: time="2024-12-13T01:31:49.948440577Z" level=info msg="StartContainer for \"2817ca0626cfc6c5ae3a0f36a2a19780bc0704e71055bf2d6716b7f110ae961a\" returns successfully" Dec 13 01:31:49.958980 containerd[1489]: time="2024-12-13T01:31:49.958591756Z" level=info msg="StartContainer for \"453037c902ada53ddceb0797aa37b4e2b658c4f8031a7da6c4c2bbb875520cd2\" returns successfully" Dec 13 01:31:49.968058 containerd[1489]: time="2024-12-13T01:31:49.968024744Z" level=info msg="StartContainer for \"4ac3835f87d1df51f90f60e62ba494430ca7b14fee5b915993c577f80b627b30\" returns successfully" Dec 13 01:31:50.019619 kubelet[2305]: E1213 01:31:50.019281 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.46.150.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-2-9d1d8cb240?timeout=10s\": dial tcp 78.46.150.92:6443: connect: connection refused" interval="1.6s" Dec 13 01:31:50.101376 kubelet[2305]: W1213 01:31:50.101307 2305 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://78.46.150.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.46.150.92:6443: connect: connection refused Dec 13 01:31:50.101376 kubelet[2305]: E1213 01:31:50.101378 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://78.46.150.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 78.46.150.92:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:31:50.194694 kubelet[2305]: I1213 01:31:50.194658 2305 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:51.629349 kubelet[2305]: I1213 01:31:51.629306 2305 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:51.629349 kubelet[2305]: E1213 01:31:51.629356 2305 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-2-1-2-9d1d8cb240\": node \"ci-4081-2-1-2-9d1d8cb240\" not found" Dec 13 01:31:51.641212 kubelet[2305]: E1213 01:31:51.641181 2305 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-2-9d1d8cb240\" not found" Dec 13 01:31:51.680984 kubelet[2305]: E1213 01:31:51.680946 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Dec 13 01:31:51.742111 kubelet[2305]: E1213 01:31:51.742046 2305 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-2-9d1d8cb240\" not found" Dec 13 01:31:51.842911 kubelet[2305]: E1213 01:31:51.842852 2305 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-2-9d1d8cb240\" not found" Dec 13 01:31:51.943836 kubelet[2305]: E1213 01:31:51.943711 2305 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-2-9d1d8cb240\" not found" Dec 13 01:31:52.044593 kubelet[2305]: E1213 01:31:52.044543 2305 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-2-9d1d8cb240\" not found" Dec 13 01:31:52.145743 kubelet[2305]: E1213 01:31:52.145680 2305 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-2-9d1d8cb240\" not found" Dec 13 01:31:52.246655 kubelet[2305]: E1213 01:31:52.246520 2305 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-2-9d1d8cb240\" not found" Dec 13 01:31:52.347622 kubelet[2305]: E1213 01:31:52.347565 2305 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-2-9d1d8cb240\" not found" Dec 13 01:31:52.448669 kubelet[2305]: E1213 01:31:52.448605 2305 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-2-9d1d8cb240\" not found" Dec 13 01:31:52.549576 kubelet[2305]: E1213 01:31:52.549450 2305 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-2-9d1d8cb240\" not found" Dec 13 01:31:52.650634 kubelet[2305]: E1213 01:31:52.650563 2305 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-2-9d1d8cb240\" not found" Dec 13 01:31:53.587144 kubelet[2305]: I1213 01:31:53.587106 2305 apiserver.go:52] "Watching apiserver" Dec 13 01:31:53.614134 kubelet[2305]: I1213 01:31:53.614076 2305 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 01:31:53.702516 systemd[1]: Reloading requested from client PID 2582 ('systemctl') (unit session-5.scope)... Dec 13 01:31:53.702535 systemd[1]: Reloading... Dec 13 01:31:53.805277 zram_generator::config[2623]: No configuration found. 
Dec 13 01:31:53.907508 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:31:53.992359 systemd[1]: Reloading finished in 289 ms. Dec 13 01:31:54.038069 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:54.058648 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:31:54.058916 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:54.063554 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:54.208283 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:54.213193 (kubelet)[2673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:31:54.248276 kubelet[2673]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:31:54.248276 kubelet[2673]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:31:54.248276 kubelet[2673]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 01:31:54.248276 kubelet[2673]: I1213 01:31:54.247948 2673 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:31:54.259688 kubelet[2673]: I1213 01:31:54.259308 2673 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 01:31:54.259688 kubelet[2673]: I1213 01:31:54.259330 2673 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:31:54.259688 kubelet[2673]: I1213 01:31:54.259515 2673 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 01:31:54.261371 kubelet[2673]: I1213 01:31:54.261356 2673 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:31:54.266063 kubelet[2673]: I1213 01:31:54.266046 2673 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:31:54.268940 kubelet[2673]: E1213 01:31:54.268905 2673 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 01:31:54.268940 kubelet[2673]: I1213 01:31:54.268938 2673 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 01:31:54.273017 kubelet[2673]: I1213 01:31:54.272975 2673 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:31:54.273095 kubelet[2673]: I1213 01:31:54.273079 2673 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 01:31:54.273301 kubelet[2673]: I1213 01:31:54.273216 2673 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:31:54.273439 kubelet[2673]: I1213 01:31:54.273260 2673 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-2-9d1d8cb240","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 01:31:54.273564 kubelet[2673]: I1213 01:31:54.273424 2673 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:31:54.273564 kubelet[2673]: I1213 01:31:54.273463 2673 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 01:31:54.273564 kubelet[2673]: I1213 01:31:54.273491 2673 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:31:54.276071 kubelet[2673]: I1213 01:31:54.276034 2673 kubelet.go:408] "Attempting to sync node with API server" Dec 13 01:31:54.276071 kubelet[2673]: I1213 01:31:54.276055 2673 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:31:54.276174 kubelet[2673]: I1213 01:31:54.276091 2673 kubelet.go:314] "Adding apiserver pod source" Dec 13 01:31:54.276174 kubelet[2673]: I1213 01:31:54.276109 2673 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:31:54.287173 kubelet[2673]: I1213 01:31:54.286295 2673 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:31:54.287173 kubelet[2673]: I1213 01:31:54.286757 2673 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:31:54.287173 kubelet[2673]: I1213 01:31:54.287092 2673 server.go:1269] "Started kubelet" Dec 13 01:31:54.288276 kubelet[2673]: I1213 01:31:54.288233 2673 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:31:54.289391 kubelet[2673]: I1213 01:31:54.289029 2673 server.go:460] "Adding debug handlers to kubelet server" Dec 13 01:31:54.292949 kubelet[2673]: I1213 01:31:54.292933 2673 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:31:54.295286 kubelet[2673]: I1213 01:31:54.295176 2673 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:31:54.295480 kubelet[2673]: I1213 01:31:54.295457 2673 
server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:31:54.300036 kubelet[2673]: I1213 01:31:54.299993 2673 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 01:31:54.302738 kubelet[2673]: I1213 01:31:54.302708 2673 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 01:31:54.303360 kubelet[2673]: I1213 01:31:54.303325 2673 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 01:31:54.303517 kubelet[2673]: I1213 01:31:54.303495 2673 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:31:54.308067 kubelet[2673]: I1213 01:31:54.307998 2673 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:31:54.309663 kubelet[2673]: I1213 01:31:54.309547 2673 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:31:54.309663 kubelet[2673]: I1213 01:31:54.309601 2673 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:31:54.309663 kubelet[2673]: I1213 01:31:54.309620 2673 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 01:31:54.309860 kubelet[2673]: E1213 01:31:54.309768 2673 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:31:54.311653 kubelet[2673]: I1213 01:31:54.311621 2673 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:31:54.311925 kubelet[2673]: I1213 01:31:54.311794 2673 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:31:54.314032 kubelet[2673]: E1213 01:31:54.313898 2673 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:31:54.314797 kubelet[2673]: I1213 01:31:54.314754 2673 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:31:54.367580 kubelet[2673]: I1213 01:31:54.367492 2673 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:31:54.367580 kubelet[2673]: I1213 01:31:54.367512 2673 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:31:54.367580 kubelet[2673]: I1213 01:31:54.367530 2673 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:31:54.367810 kubelet[2673]: I1213 01:31:54.367696 2673 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:31:54.367810 kubelet[2673]: I1213 01:31:54.367709 2673 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:31:54.367810 kubelet[2673]: I1213 01:31:54.367729 2673 policy_none.go:49] "None policy: Start" Dec 13 01:31:54.368699 kubelet[2673]: I1213 01:31:54.368552 2673 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:31:54.368699 kubelet[2673]: I1213 01:31:54.368628 2673 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:31:54.369106 kubelet[2673]: I1213 01:31:54.369040 2673 state_mem.go:75] "Updated machine memory state" Dec 13 01:31:54.374051 kubelet[2673]: I1213 01:31:54.373965 2673 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:31:54.374507 kubelet[2673]: I1213 01:31:54.374480 2673 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 01:31:54.374545 kubelet[2673]: I1213 01:31:54.374505 2673 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:31:54.375722 kubelet[2673]: I1213 01:31:54.375697 2673 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:31:54.417018 kubelet[2673]: E1213 01:31:54.416948 2673 
kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-2-1-2-9d1d8cb240\" already exists" pod="kube-system/kube-scheduler-ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:54.417018 kubelet[2673]: E1213 01:31:54.416967 2673 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-2-1-2-9d1d8cb240\" already exists" pod="kube-system/kube-apiserver-ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:54.485040 kubelet[2673]: I1213 01:31:54.484865 2673 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:54.491655 kubelet[2673]: I1213 01:31:54.491630 2673 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:54.491764 kubelet[2673]: I1213 01:31:54.491696 2673 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:54.604564 kubelet[2673]: I1213 01:31:54.604470 2673 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84abc37da06720016f70a1e4e6bd1fd9-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-2-9d1d8cb240\" (UID: \"84abc37da06720016f70a1e4e6bd1fd9\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:54.604564 kubelet[2673]: I1213 01:31:54.604515 2673 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84abc37da06720016f70a1e4e6bd1fd9-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-2-9d1d8cb240\" (UID: \"84abc37da06720016f70a1e4e6bd1fd9\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:54.604564 kubelet[2673]: I1213 01:31:54.604549 2673 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/84abc37da06720016f70a1e4e6bd1fd9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-2-9d1d8cb240\" (UID: \"84abc37da06720016f70a1e4e6bd1fd9\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:54.604813 kubelet[2673]: I1213 01:31:54.604573 2673 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f48e33c45b7a915160e6ff26a059e9a7-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-2-9d1d8cb240\" (UID: \"f48e33c45b7a915160e6ff26a059e9a7\") " pod="kube-system/kube-scheduler-ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:54.604813 kubelet[2673]: I1213 01:31:54.604596 2673 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1dd77ea92913c122cdd7e2a4a0cc95ae-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-2-9d1d8cb240\" (UID: \"1dd77ea92913c122cdd7e2a4a0cc95ae\") " pod="kube-system/kube-apiserver-ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:54.604813 kubelet[2673]: I1213 01:31:54.604616 2673 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1dd77ea92913c122cdd7e2a4a0cc95ae-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-2-9d1d8cb240\" (UID: \"1dd77ea92913c122cdd7e2a4a0cc95ae\") " pod="kube-system/kube-apiserver-ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:54.604813 kubelet[2673]: I1213 01:31:54.604638 2673 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1dd77ea92913c122cdd7e2a4a0cc95ae-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-2-9d1d8cb240\" (UID: \"1dd77ea92913c122cdd7e2a4a0cc95ae\") " pod="kube-system/kube-apiserver-ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:54.604813 kubelet[2673]: I1213 01:31:54.604661 2673 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84abc37da06720016f70a1e4e6bd1fd9-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-2-9d1d8cb240\" (UID: \"84abc37da06720016f70a1e4e6bd1fd9\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:54.604940 kubelet[2673]: I1213 01:31:54.604701 2673 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84abc37da06720016f70a1e4e6bd1fd9-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-2-9d1d8cb240\" (UID: \"84abc37da06720016f70a1e4e6bd1fd9\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:55.277918 kubelet[2673]: I1213 01:31:55.277006 2673 apiserver.go:52] "Watching apiserver" Dec 13 01:31:55.304561 kubelet[2673]: I1213 01:31:55.304511 2673 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 01:31:55.350147 kubelet[2673]: E1213 01:31:55.350079 2673 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-2-1-2-9d1d8cb240\" already exists" pod="kube-system/kube-apiserver-ci-4081-2-1-2-9d1d8cb240" Dec 13 01:31:55.367503 kubelet[2673]: I1213 01:31:55.366739 2673 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-2-1-2-9d1d8cb240" podStartSLOduration=2.3667229499999998 podStartE2EDuration="2.36672295s" podCreationTimestamp="2024-12-13 01:31:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:31:55.360900933 +0000 UTC m=+1.143297635" watchObservedRunningTime="2024-12-13 01:31:55.36672295 +0000 UTC m=+1.149119652" Dec 13 01:31:55.372878 kubelet[2673]: I1213 01:31:55.372758 2673 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/kube-apiserver-ci-4081-2-1-2-9d1d8cb240" podStartSLOduration=2.372748939 podStartE2EDuration="2.372748939s" podCreationTimestamp="2024-12-13 01:31:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:31:55.366893407 +0000 UTC m=+1.149290108" watchObservedRunningTime="2024-12-13 01:31:55.372748939 +0000 UTC m=+1.155145641" Dec 13 01:31:55.556151 sudo[1846]: pam_unix(sudo:session): session closed for user root Dec 13 01:31:55.716322 sshd[1827]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:55.719339 systemd[1]: sshd@4-78.46.150.92:22-147.75.109.163:34014.service: Deactivated successfully. Dec 13 01:31:55.721325 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:31:55.721598 systemd[1]: session-5.scope: Consumed 2.944s CPU time, 147.0M memory peak, 0B memory swap peak. Dec 13 01:31:55.723060 systemd-logind[1470]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:31:55.724504 systemd-logind[1470]: Removed session 5. Dec 13 01:31:59.348709 kubelet[2673]: I1213 01:31:59.348655 2673 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:31:59.349120 containerd[1489]: time="2024-12-13T01:31:59.348953678Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 01:31:59.349444 kubelet[2673]: I1213 01:31:59.349298 2673 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:32:00.367945 kubelet[2673]: I1213 01:32:00.367882 2673 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-2-1-2-9d1d8cb240" podStartSLOduration=6.367865931 podStartE2EDuration="6.367865931s" podCreationTimestamp="2024-12-13 01:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:31:55.373302292 +0000 UTC m=+1.155698995" watchObservedRunningTime="2024-12-13 01:32:00.367865931 +0000 UTC m=+6.150262643" Dec 13 01:32:00.386301 systemd[1]: Created slice kubepods-besteffort-pod4a3786ce_9830_488b_b5ce_b33e0a2bc06f.slice - libcontainer container kubepods-besteffort-pod4a3786ce_9830_488b_b5ce_b33e0a2bc06f.slice. Dec 13 01:32:00.390410 systemd[1]: Created slice kubepods-burstable-pod607b89b2_4712_4a74_86c6_9051a7e87ed7.slice - libcontainer container kubepods-burstable-pod607b89b2_4712_4a74_86c6_9051a7e87ed7.slice. 
Dec 13 01:32:00.447724 kubelet[2673]: I1213 01:32:00.447681 2673 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4a3786ce-9830-488b-b5ce-b33e0a2bc06f-kube-proxy\") pod \"kube-proxy-w56gd\" (UID: \"4a3786ce-9830-488b-b5ce-b33e0a2bc06f\") " pod="kube-system/kube-proxy-w56gd" Dec 13 01:32:00.447724 kubelet[2673]: I1213 01:32:00.447721 2673 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a3786ce-9830-488b-b5ce-b33e0a2bc06f-lib-modules\") pod \"kube-proxy-w56gd\" (UID: \"4a3786ce-9830-488b-b5ce-b33e0a2bc06f\") " pod="kube-system/kube-proxy-w56gd" Dec 13 01:32:00.447724 kubelet[2673]: I1213 01:32:00.447740 2673 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntbg7\" (UniqueName: \"kubernetes.io/projected/4a3786ce-9830-488b-b5ce-b33e0a2bc06f-kube-api-access-ntbg7\") pod \"kube-proxy-w56gd\" (UID: \"4a3786ce-9830-488b-b5ce-b33e0a2bc06f\") " pod="kube-system/kube-proxy-w56gd" Dec 13 01:32:00.447921 kubelet[2673]: I1213 01:32:00.447754 2673 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/607b89b2-4712-4a74-86c6-9051a7e87ed7-run\") pod \"kube-flannel-ds-rrwfm\" (UID: \"607b89b2-4712-4a74-86c6-9051a7e87ed7\") " pod="kube-flannel/kube-flannel-ds-rrwfm" Dec 13 01:32:00.447921 kubelet[2673]: I1213 01:32:00.447766 2673 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/607b89b2-4712-4a74-86c6-9051a7e87ed7-cni\") pod \"kube-flannel-ds-rrwfm\" (UID: \"607b89b2-4712-4a74-86c6-9051a7e87ed7\") " pod="kube-flannel/kube-flannel-ds-rrwfm" Dec 13 01:32:00.447921 kubelet[2673]: I1213 01:32:00.447780 2673 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr9h2\" (UniqueName: \"kubernetes.io/projected/607b89b2-4712-4a74-86c6-9051a7e87ed7-kube-api-access-qr9h2\") pod \"kube-flannel-ds-rrwfm\" (UID: \"607b89b2-4712-4a74-86c6-9051a7e87ed7\") " pod="kube-flannel/kube-flannel-ds-rrwfm" Dec 13 01:32:00.447921 kubelet[2673]: I1213 01:32:00.447794 2673 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/607b89b2-4712-4a74-86c6-9051a7e87ed7-xtables-lock\") pod \"kube-flannel-ds-rrwfm\" (UID: \"607b89b2-4712-4a74-86c6-9051a7e87ed7\") " pod="kube-flannel/kube-flannel-ds-rrwfm" Dec 13 01:32:00.447921 kubelet[2673]: I1213 01:32:00.447809 2673 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a3786ce-9830-488b-b5ce-b33e0a2bc06f-xtables-lock\") pod \"kube-proxy-w56gd\" (UID: \"4a3786ce-9830-488b-b5ce-b33e0a2bc06f\") " pod="kube-system/kube-proxy-w56gd" Dec 13 01:32:00.448026 kubelet[2673]: I1213 01:32:00.447821 2673 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/607b89b2-4712-4a74-86c6-9051a7e87ed7-cni-plugin\") pod \"kube-flannel-ds-rrwfm\" (UID: \"607b89b2-4712-4a74-86c6-9051a7e87ed7\") " pod="kube-flannel/kube-flannel-ds-rrwfm" Dec 13 01:32:00.448026 kubelet[2673]: I1213 01:32:00.447835 2673 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/607b89b2-4712-4a74-86c6-9051a7e87ed7-flannel-cfg\") pod \"kube-flannel-ds-rrwfm\" (UID: \"607b89b2-4712-4a74-86c6-9051a7e87ed7\") " pod="kube-flannel/kube-flannel-ds-rrwfm" Dec 13 01:32:00.700288 containerd[1489]: time="2024-12-13T01:32:00.699905500Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-w56gd,Uid:4a3786ce-9830-488b-b5ce-b33e0a2bc06f,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:00.701876 containerd[1489]: time="2024-12-13T01:32:00.700194737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-rrwfm,Uid:607b89b2-4712-4a74-86c6-9051a7e87ed7,Namespace:kube-flannel,Attempt:0,}" Dec 13 01:32:00.749021 containerd[1489]: time="2024-12-13T01:32:00.748920976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:00.749231 containerd[1489]: time="2024-12-13T01:32:00.748986404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:00.750334 containerd[1489]: time="2024-12-13T01:32:00.750087858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:00.750334 containerd[1489]: time="2024-12-13T01:32:00.750209407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:00.750511 containerd[1489]: time="2024-12-13T01:32:00.750275136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:00.750511 containerd[1489]: time="2024-12-13T01:32:00.750405181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:00.750841 containerd[1489]: time="2024-12-13T01:32:00.750674600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:00.751169 containerd[1489]: time="2024-12-13T01:32:00.751053315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:00.778388 systemd[1]: Started cri-containerd-3cda1cb030019f4f69fa6240b985efd1b2926cc38855346a1d303eb718d59cbe.scope - libcontainer container 3cda1cb030019f4f69fa6240b985efd1b2926cc38855346a1d303eb718d59cbe. Dec 13 01:32:00.782210 systemd[1]: Started cri-containerd-db232ab9a39d64cb19367030ef36952ed45c75e46993731d7d993c07e0568f82.scope - libcontainer container db232ab9a39d64cb19367030ef36952ed45c75e46993731d7d993c07e0568f82. Dec 13 01:32:00.812556 containerd[1489]: time="2024-12-13T01:32:00.812327340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w56gd,Uid:4a3786ce-9830-488b-b5ce-b33e0a2bc06f,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cda1cb030019f4f69fa6240b985efd1b2926cc38855346a1d303eb718d59cbe\"" Dec 13 01:32:00.818227 containerd[1489]: time="2024-12-13T01:32:00.817492395Z" level=info msg="CreateContainer within sandbox \"3cda1cb030019f4f69fa6240b985efd1b2926cc38855346a1d303eb718d59cbe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:32:00.829487 containerd[1489]: time="2024-12-13T01:32:00.829455844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-rrwfm,Uid:607b89b2-4712-4a74-86c6-9051a7e87ed7,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"db232ab9a39d64cb19367030ef36952ed45c75e46993731d7d993c07e0568f82\"" Dec 13 01:32:00.832322 containerd[1489]: time="2024-12-13T01:32:00.831708398Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Dec 13 01:32:00.845041 containerd[1489]: time="2024-12-13T01:32:00.844993674Z" level=info msg="CreateContainer within sandbox \"3cda1cb030019f4f69fa6240b985efd1b2926cc38855346a1d303eb718d59cbe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"00fa86c8edd9be3d1b0e90f4d5b6c9b1173c3855c66af94b5923d6bd8a35bd61\"" Dec 13 01:32:00.845626 containerd[1489]: time="2024-12-13T01:32:00.845461152Z" level=info msg="StartContainer 
for \"00fa86c8edd9be3d1b0e90f4d5b6c9b1173c3855c66af94b5923d6bd8a35bd61\"" Dec 13 01:32:00.872371 systemd[1]: Started cri-containerd-00fa86c8edd9be3d1b0e90f4d5b6c9b1173c3855c66af94b5923d6bd8a35bd61.scope - libcontainer container 00fa86c8edd9be3d1b0e90f4d5b6c9b1173c3855c66af94b5923d6bd8a35bd61. Dec 13 01:32:00.903200 containerd[1489]: time="2024-12-13T01:32:00.903161146Z" level=info msg="StartContainer for \"00fa86c8edd9be3d1b0e90f4d5b6c9b1173c3855c66af94b5923d6bd8a35bd61\" returns successfully" Dec 13 01:32:01.381446 kubelet[2673]: I1213 01:32:01.381379 2673 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w56gd" podStartSLOduration=1.3813617360000001 podStartE2EDuration="1.381361736s" podCreationTimestamp="2024-12-13 01:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:01.381087929 +0000 UTC m=+7.163484641" watchObservedRunningTime="2024-12-13 01:32:01.381361736 +0000 UTC m=+7.163758448" Dec 13 01:32:03.318694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1967679617.mount: Deactivated successfully. 
Dec 13 01:32:03.345497 containerd[1489]: time="2024-12-13T01:32:03.345435656Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:03.346407 containerd[1489]: time="2024-12-13T01:32:03.346321460Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Dec 13 01:32:03.347270 containerd[1489]: time="2024-12-13T01:32:03.347208968Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:03.349064 containerd[1489]: time="2024-12-13T01:32:03.349033059Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:03.350124 containerd[1489]: time="2024-12-13T01:32:03.349496616Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.517753833s" Dec 13 01:32:03.350124 containerd[1489]: time="2024-12-13T01:32:03.349526456Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Dec 13 01:32:03.352743 containerd[1489]: time="2024-12-13T01:32:03.352717813Z" level=info msg="CreateContainer within sandbox \"db232ab9a39d64cb19367030ef36952ed45c75e46993731d7d993c07e0568f82\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Dec 13 01:32:03.363936 containerd[1489]: 
time="2024-12-13T01:32:03.363897530Z" level=info msg="CreateContainer within sandbox \"db232ab9a39d64cb19367030ef36952ed45c75e46993731d7d993c07e0568f82\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"9edb6d179d37545a4e94b60de11c653f0bddc8616593793ab1d02df7afc7bb6c\"" Dec 13 01:32:03.366032 containerd[1489]: time="2024-12-13T01:32:03.365992813Z" level=info msg="StartContainer for \"9edb6d179d37545a4e94b60de11c653f0bddc8616593793ab1d02df7afc7bb6c\"" Dec 13 01:32:03.403402 systemd[1]: Started cri-containerd-9edb6d179d37545a4e94b60de11c653f0bddc8616593793ab1d02df7afc7bb6c.scope - libcontainer container 9edb6d179d37545a4e94b60de11c653f0bddc8616593793ab1d02df7afc7bb6c. Dec 13 01:32:03.432362 systemd[1]: cri-containerd-9edb6d179d37545a4e94b60de11c653f0bddc8616593793ab1d02df7afc7bb6c.scope: Deactivated successfully. Dec 13 01:32:03.434349 containerd[1489]: time="2024-12-13T01:32:03.433007112Z" level=info msg="StartContainer for \"9edb6d179d37545a4e94b60de11c653f0bddc8616593793ab1d02df7afc7bb6c\" returns successfully" Dec 13 01:32:03.463162 containerd[1489]: time="2024-12-13T01:32:03.463051593Z" level=info msg="shim disconnected" id=9edb6d179d37545a4e94b60de11c653f0bddc8616593793ab1d02df7afc7bb6c namespace=k8s.io Dec 13 01:32:03.463162 containerd[1489]: time="2024-12-13T01:32:03.463129436Z" level=warning msg="cleaning up after shim disconnected" id=9edb6d179d37545a4e94b60de11c653f0bddc8616593793ab1d02df7afc7bb6c namespace=k8s.io Dec 13 01:32:03.463162 containerd[1489]: time="2024-12-13T01:32:03.463138043Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:32:04.213858 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9edb6d179d37545a4e94b60de11c653f0bddc8616593793ab1d02df7afc7bb6c-rootfs.mount: Deactivated successfully. 
Dec 13 01:32:04.368104 containerd[1489]: time="2024-12-13T01:32:04.367874740Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Dec 13 01:32:06.843175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2490595147.mount: Deactivated successfully. Dec 13 01:32:09.741112 containerd[1489]: time="2024-12-13T01:32:09.741052969Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:09.742075 containerd[1489]: time="2024-12-13T01:32:09.741948421Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Dec 13 01:32:09.742684 containerd[1489]: time="2024-12-13T01:32:09.742663745Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:09.745868 containerd[1489]: time="2024-12-13T01:32:09.745839406Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:09.747203 containerd[1489]: time="2024-12-13T01:32:09.747173213Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 5.379262624s" Dec 13 01:32:09.747284 containerd[1489]: time="2024-12-13T01:32:09.747202510Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Dec 13 01:32:09.750992 containerd[1489]: time="2024-12-13T01:32:09.750938142Z" level=info msg="CreateContainer 
within sandbox \"db232ab9a39d64cb19367030ef36952ed45c75e46993731d7d993c07e0568f82\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:32:09.760989 containerd[1489]: time="2024-12-13T01:32:09.760904362Z" level=info msg="CreateContainer within sandbox \"db232ab9a39d64cb19367030ef36952ed45c75e46993731d7d993c07e0568f82\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"87c83294bab41973b214da8670b8f2197fdf39f9fa2c1d2e61cd168fe6b69d6e\"" Dec 13 01:32:09.761565 containerd[1489]: time="2024-12-13T01:32:09.761539317Z" level=info msg="StartContainer for \"87c83294bab41973b214da8670b8f2197fdf39f9fa2c1d2e61cd168fe6b69d6e\"" Dec 13 01:32:09.798446 systemd[1]: Started cri-containerd-87c83294bab41973b214da8670b8f2197fdf39f9fa2c1d2e61cd168fe6b69d6e.scope - libcontainer container 87c83294bab41973b214da8670b8f2197fdf39f9fa2c1d2e61cd168fe6b69d6e. Dec 13 01:32:09.824175 systemd[1]: cri-containerd-87c83294bab41973b214da8670b8f2197fdf39f9fa2c1d2e61cd168fe6b69d6e.scope: Deactivated successfully. Dec 13 01:32:09.827834 containerd[1489]: time="2024-12-13T01:32:09.827800404Z" level=info msg="StartContainer for \"87c83294bab41973b214da8670b8f2197fdf39f9fa2c1d2e61cd168fe6b69d6e\" returns successfully" Dec 13 01:32:09.847763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87c83294bab41973b214da8670b8f2197fdf39f9fa2c1d2e61cd168fe6b69d6e-rootfs.mount: Deactivated successfully. 
Dec 13 01:32:09.886352 kubelet[2673]: I1213 01:32:09.885529 2673 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 01:32:09.887681 containerd[1489]: time="2024-12-13T01:32:09.887491231Z" level=info msg="shim disconnected" id=87c83294bab41973b214da8670b8f2197fdf39f9fa2c1d2e61cd168fe6b69d6e namespace=k8s.io Dec 13 01:32:09.887681 containerd[1489]: time="2024-12-13T01:32:09.887537200Z" level=warning msg="cleaning up after shim disconnected" id=87c83294bab41973b214da8670b8f2197fdf39f9fa2c1d2e61cd168fe6b69d6e namespace=k8s.io Dec 13 01:32:09.887681 containerd[1489]: time="2024-12-13T01:32:09.887544655Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:32:09.926111 systemd[1]: Created slice kubepods-burstable-podb7fe6192_944f_4db5_bc9f_6c8621ce9dfc.slice - libcontainer container kubepods-burstable-podb7fe6192_944f_4db5_bc9f_6c8621ce9dfc.slice. Dec 13 01:32:09.936051 systemd[1]: Created slice kubepods-burstable-podbd556884_730d_4aa9_ad03_c95be279e282.slice - libcontainer container kubepods-burstable-podbd556884_730d_4aa9_ad03_c95be279e282.slice. 
Dec 13 01:32:10.010597 kubelet[2673]: I1213 01:32:10.010389 2673 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qprxv\" (UniqueName: \"kubernetes.io/projected/bd556884-730d-4aa9-ad03-c95be279e282-kube-api-access-qprxv\") pod \"coredns-6f6b679f8f-8nll9\" (UID: \"bd556884-730d-4aa9-ad03-c95be279e282\") " pod="kube-system/coredns-6f6b679f8f-8nll9" Dec 13 01:32:10.010597 kubelet[2673]: I1213 01:32:10.010438 2673 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7fe6192-944f-4db5-bc9f-6c8621ce9dfc-config-volume\") pod \"coredns-6f6b679f8f-q64fv\" (UID: \"b7fe6192-944f-4db5-bc9f-6c8621ce9dfc\") " pod="kube-system/coredns-6f6b679f8f-q64fv" Dec 13 01:32:10.010597 kubelet[2673]: I1213 01:32:10.010460 2673 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7zt4\" (UniqueName: \"kubernetes.io/projected/b7fe6192-944f-4db5-bc9f-6c8621ce9dfc-kube-api-access-b7zt4\") pod \"coredns-6f6b679f8f-q64fv\" (UID: \"b7fe6192-944f-4db5-bc9f-6c8621ce9dfc\") " pod="kube-system/coredns-6f6b679f8f-q64fv" Dec 13 01:32:10.010597 kubelet[2673]: I1213 01:32:10.010473 2673 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd556884-730d-4aa9-ad03-c95be279e282-config-volume\") pod \"coredns-6f6b679f8f-8nll9\" (UID: \"bd556884-730d-4aa9-ad03-c95be279e282\") " pod="kube-system/coredns-6f6b679f8f-8nll9" Dec 13 01:32:10.234692 containerd[1489]: time="2024-12-13T01:32:10.234629764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-q64fv,Uid:b7fe6192-944f-4db5-bc9f-6c8621ce9dfc,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:10.242104 containerd[1489]: time="2024-12-13T01:32:10.242004717Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-8nll9,Uid:bd556884-730d-4aa9-ad03-c95be279e282,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:10.286337 containerd[1489]: time="2024-12-13T01:32:10.286163685Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-q64fv,Uid:b7fe6192-944f-4db5-bc9f-6c8621ce9dfc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"56017e66652b268070de32034f46874e20d7367fd2ec07a6fc00f4036ae4c2c3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:32:10.286567 kubelet[2673]: E1213 01:32:10.286449 2673 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56017e66652b268070de32034f46874e20d7367fd2ec07a6fc00f4036ae4c2c3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:32:10.286567 kubelet[2673]: E1213 01:32:10.286532 2673 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56017e66652b268070de32034f46874e20d7367fd2ec07a6fc00f4036ae4c2c3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-q64fv" Dec 13 01:32:10.286567 kubelet[2673]: E1213 01:32:10.286557 2673 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56017e66652b268070de32034f46874e20d7367fd2ec07a6fc00f4036ae4c2c3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-q64fv" Dec 13 01:32:10.286660 kubelet[2673]: E1213 01:32:10.286606 2673 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-q64fv_kube-system(b7fe6192-944f-4db5-bc9f-6c8621ce9dfc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-q64fv_kube-system(b7fe6192-944f-4db5-bc9f-6c8621ce9dfc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56017e66652b268070de32034f46874e20d7367fd2ec07a6fc00f4036ae4c2c3\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-q64fv" podUID="b7fe6192-944f-4db5-bc9f-6c8621ce9dfc" Dec 13 01:32:10.290312 containerd[1489]: time="2024-12-13T01:32:10.290232128Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8nll9,Uid:bd556884-730d-4aa9-ad03-c95be279e282,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"98d7ae49c0201dd11a4ca3f444d7ac96a31cd02928012a858a40231badd89cf0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:32:10.290485 kubelet[2673]: E1213 01:32:10.290361 2673 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98d7ae49c0201dd11a4ca3f444d7ac96a31cd02928012a858a40231badd89cf0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:32:10.290485 kubelet[2673]: E1213 01:32:10.290387 2673 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98d7ae49c0201dd11a4ca3f444d7ac96a31cd02928012a858a40231badd89cf0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-8nll9" Dec 13 01:32:10.290485 kubelet[2673]: E1213 
01:32:10.290402 2673 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98d7ae49c0201dd11a4ca3f444d7ac96a31cd02928012a858a40231badd89cf0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-8nll9" Dec 13 01:32:10.290485 kubelet[2673]: E1213 01:32:10.290431 2673 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-8nll9_kube-system(bd556884-730d-4aa9-ad03-c95be279e282)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-8nll9_kube-system(bd556884-730d-4aa9-ad03-c95be279e282)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98d7ae49c0201dd11a4ca3f444d7ac96a31cd02928012a858a40231badd89cf0\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-8nll9" podUID="bd556884-730d-4aa9-ad03-c95be279e282" Dec 13 01:32:10.379180 containerd[1489]: time="2024-12-13T01:32:10.379077751Z" level=info msg="CreateContainer within sandbox \"db232ab9a39d64cb19367030ef36952ed45c75e46993731d7d993c07e0568f82\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Dec 13 01:32:10.390845 containerd[1489]: time="2024-12-13T01:32:10.390756024Z" level=info msg="CreateContainer within sandbox \"db232ab9a39d64cb19367030ef36952ed45c75e46993731d7d993c07e0568f82\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"893552e86b134586b72cb3472d3b06186dd03c1643dccc6e0547dde6201f2ba8\"" Dec 13 01:32:10.392350 containerd[1489]: time="2024-12-13T01:32:10.391628912Z" level=info msg="StartContainer for \"893552e86b134586b72cb3472d3b06186dd03c1643dccc6e0547dde6201f2ba8\"" Dec 13 01:32:10.427369 systemd[1]: Started 
cri-containerd-893552e86b134586b72cb3472d3b06186dd03c1643dccc6e0547dde6201f2ba8.scope - libcontainer container 893552e86b134586b72cb3472d3b06186dd03c1643dccc6e0547dde6201f2ba8. Dec 13 01:32:10.454009 containerd[1489]: time="2024-12-13T01:32:10.453960995Z" level=info msg="StartContainer for \"893552e86b134586b72cb3472d3b06186dd03c1643dccc6e0547dde6201f2ba8\" returns successfully" Dec 13 01:32:11.503869 systemd-networkd[1395]: flannel.1: Link UP Dec 13 01:32:11.503879 systemd-networkd[1395]: flannel.1: Gained carrier Dec 13 01:32:13.257572 systemd-networkd[1395]: flannel.1: Gained IPv6LL Dec 13 01:32:23.310973 containerd[1489]: time="2024-12-13T01:32:23.310912178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-q64fv,Uid:b7fe6192-944f-4db5-bc9f-6c8621ce9dfc,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:23.340344 systemd-networkd[1395]: cni0: Link UP Dec 13 01:32:23.340354 systemd-networkd[1395]: cni0: Gained carrier Dec 13 01:32:23.344109 systemd-networkd[1395]: cni0: Lost carrier Dec 13 01:32:23.353813 systemd-networkd[1395]: veth5a11c237: Link UP Dec 13 01:32:23.356009 kernel: cni0: port 1(veth5a11c237) entered blocking state Dec 13 01:32:23.356063 kernel: cni0: port 1(veth5a11c237) entered disabled state Dec 13 01:32:23.362064 kernel: veth5a11c237: entered allmulticast mode Dec 13 01:32:23.362103 kernel: veth5a11c237: entered promiscuous mode Dec 13 01:32:23.364768 kernel: cni0: port 1(veth5a11c237) entered blocking state Dec 13 01:32:23.366798 kernel: cni0: port 1(veth5a11c237) entered forwarding state Dec 13 01:32:23.366837 kernel: cni0: port 1(veth5a11c237) entered disabled state Dec 13 01:32:23.374562 kernel: cni0: port 1(veth5a11c237) entered blocking state Dec 13 01:32:23.374600 kernel: cni0: port 1(veth5a11c237) entered forwarding state Dec 13 01:32:23.374214 systemd-networkd[1395]: veth5a11c237: Gained carrier Dec 13 01:32:23.375115 systemd-networkd[1395]: cni0: Gained carrier Dec 13 01:32:23.378713 containerd[1489]: 
map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009c8e8), "name":"cbr0", "type":"bridge"} Dec 13 01:32:23.378713 containerd[1489]: delegateAdd: netconf sent to delegate plugin: Dec 13 01:32:23.407051 containerd[1489]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T01:32:23.406899539Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:23.407254 containerd[1489]: time="2024-12-13T01:32:23.407055249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:23.407254 containerd[1489]: time="2024-12-13T01:32:23.407072171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:23.407978 containerd[1489]: time="2024-12-13T01:32:23.407867876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:23.430421 systemd[1]: Started cri-containerd-38cc3de7072142332c8493b055fe08552111be2a0993c2fa5636b89d7d3ddda6.scope - libcontainer container 38cc3de7072142332c8493b055fe08552111be2a0993c2fa5636b89d7d3ddda6. 
Dec 13 01:32:23.468091 containerd[1489]: time="2024-12-13T01:32:23.468058276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-q64fv,Uid:b7fe6192-944f-4db5-bc9f-6c8621ce9dfc,Namespace:kube-system,Attempt:0,} returns sandbox id \"38cc3de7072142332c8493b055fe08552111be2a0993c2fa5636b89d7d3ddda6\"" Dec 13 01:32:23.473011 containerd[1489]: time="2024-12-13T01:32:23.472980023Z" level=info msg="CreateContainer within sandbox \"38cc3de7072142332c8493b055fe08552111be2a0993c2fa5636b89d7d3ddda6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:32:23.482648 containerd[1489]: time="2024-12-13T01:32:23.482525732Z" level=info msg="CreateContainer within sandbox \"38cc3de7072142332c8493b055fe08552111be2a0993c2fa5636b89d7d3ddda6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e615f1f023d490ecb8a1329549093db64012ec9a17180acc8e80aa124b86379a\"" Dec 13 01:32:23.484912 containerd[1489]: time="2024-12-13T01:32:23.483407884Z" level=info msg="StartContainer for \"e615f1f023d490ecb8a1329549093db64012ec9a17180acc8e80aa124b86379a\"" Dec 13 01:32:23.508375 systemd[1]: Started cri-containerd-e615f1f023d490ecb8a1329549093db64012ec9a17180acc8e80aa124b86379a.scope - libcontainer container e615f1f023d490ecb8a1329549093db64012ec9a17180acc8e80aa124b86379a. 
Dec 13 01:32:23.533938 containerd[1489]: time="2024-12-13T01:32:23.533898386Z" level=info msg="StartContainer for \"e615f1f023d490ecb8a1329549093db64012ec9a17180acc8e80aa124b86379a\" returns successfully" Dec 13 01:32:24.310947 containerd[1489]: time="2024-12-13T01:32:24.310632323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8nll9,Uid:bd556884-730d-4aa9-ad03-c95be279e282,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:24.332794 systemd-networkd[1395]: veth7d8a7c15: Link UP Dec 13 01:32:24.335668 kernel: cni0: port 2(veth7d8a7c15) entered blocking state Dec 13 01:32:24.335784 kernel: cni0: port 2(veth7d8a7c15) entered disabled state Dec 13 01:32:24.337260 kernel: veth7d8a7c15: entered allmulticast mode Dec 13 01:32:24.340773 kernel: veth7d8a7c15: entered promiscuous mode Dec 13 01:32:24.341289 kernel: cni0: port 2(veth7d8a7c15) entered blocking state Dec 13 01:32:24.344355 kernel: cni0: port 2(veth7d8a7c15) entered forwarding state Dec 13 01:32:24.347277 kernel: cni0: port 2(veth7d8a7c15) entered disabled state Dec 13 01:32:24.353303 kernel: cni0: port 2(veth7d8a7c15) entered blocking state Dec 13 01:32:24.353351 kernel: cni0: port 2(veth7d8a7c15) entered forwarding state Dec 13 01:32:24.353368 systemd-networkd[1395]: veth7d8a7c15: Gained carrier Dec 13 01:32:24.355267 containerd[1489]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009c8e8), "name":"cbr0", "type":"bridge"} Dec 13 01:32:24.355267 containerd[1489]: delegateAdd: netconf sent to delegate plugin: Dec 13 01:32:24.370680 containerd[1489]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T01:32:24.370586303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:24.370680 containerd[1489]: time="2024-12-13T01:32:24.370649364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:24.370680 containerd[1489]: time="2024-12-13T01:32:24.370662409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:24.372213 containerd[1489]: time="2024-12-13T01:32:24.372161447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:24.392222 systemd[1]: run-containerd-runc-k8s.io-0c8efe881104706569f70c80a370c26eab33c98c1913454194a6791d6628d461-runc.x09xJS.mount: Deactivated successfully. Dec 13 01:32:24.401021 systemd[1]: Started cri-containerd-0c8efe881104706569f70c80a370c26eab33c98c1913454194a6791d6628d461.scope - libcontainer container 0c8efe881104706569f70c80a370c26eab33c98c1913454194a6791d6628d461. 
Dec 13 01:32:24.415871 kubelet[2673]: I1213 01:32:24.415816 2673 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-rrwfm" podStartSLOduration=15.498464499 podStartE2EDuration="24.415800217s" podCreationTimestamp="2024-12-13 01:32:00 +0000 UTC" firstStartedPulling="2024-12-13 01:32:00.831131966 +0000 UTC m=+6.613528667" lastFinishedPulling="2024-12-13 01:32:09.748467683 +0000 UTC m=+15.530864385" observedRunningTime="2024-12-13 01:32:11.391377179 +0000 UTC m=+17.173773891" watchObservedRunningTime="2024-12-13 01:32:24.415800217 +0000 UTC m=+30.198196919" Dec 13 01:32:24.429438 kubelet[2673]: I1213 01:32:24.428813 2673 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-q64fv" podStartSLOduration=24.428798427 podStartE2EDuration="24.428798427s" podCreationTimestamp="2024-12-13 01:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:24.417419757 +0000 UTC m=+30.199816458" watchObservedRunningTime="2024-12-13 01:32:24.428798427 +0000 UTC m=+30.211195129" Dec 13 01:32:24.452092 containerd[1489]: time="2024-12-13T01:32:24.452002570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8nll9,Uid:bd556884-730d-4aa9-ad03-c95be279e282,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c8efe881104706569f70c80a370c26eab33c98c1913454194a6791d6628d461\"" Dec 13 01:32:24.456017 containerd[1489]: time="2024-12-13T01:32:24.455961998Z" level=info msg="CreateContainer within sandbox \"0c8efe881104706569f70c80a370c26eab33c98c1913454194a6791d6628d461\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:32:24.471640 containerd[1489]: time="2024-12-13T01:32:24.471522275Z" level=info msg="CreateContainer within sandbox \"0c8efe881104706569f70c80a370c26eab33c98c1913454194a6791d6628d461\" for &ContainerMetadata{Name:coredns,Attempt:0,} 
returns container id \"2b060212630ff18693b7ff080460a239b9de11fd5b2a6868275e5cd1408beb59\"" Dec 13 01:32:24.472442 containerd[1489]: time="2024-12-13T01:32:24.472414153Z" level=info msg="StartContainer for \"2b060212630ff18693b7ff080460a239b9de11fd5b2a6868275e5cd1408beb59\"" Dec 13 01:32:24.507389 systemd[1]: Started cri-containerd-2b060212630ff18693b7ff080460a239b9de11fd5b2a6868275e5cd1408beb59.scope - libcontainer container 2b060212630ff18693b7ff080460a239b9de11fd5b2a6868275e5cd1408beb59. Dec 13 01:32:24.533979 containerd[1489]: time="2024-12-13T01:32:24.533764704Z" level=info msg="StartContainer for \"2b060212630ff18693b7ff080460a239b9de11fd5b2a6868275e5cd1408beb59\" returns successfully" Dec 13 01:32:24.714441 systemd-networkd[1395]: veth5a11c237: Gained IPv6LL Dec 13 01:32:25.033454 systemd-networkd[1395]: cni0: Gained IPv6LL Dec 13 01:32:26.185498 systemd-networkd[1395]: veth7d8a7c15: Gained IPv6LL Dec 13 01:32:30.254073 kubelet[2673]: I1213 01:32:30.253881 2673 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-8nll9" podStartSLOduration=30.253863127 podStartE2EDuration="30.253863127s" podCreationTimestamp="2024-12-13 01:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:25.417580213 +0000 UTC m=+31.199976915" watchObservedRunningTime="2024-12-13 01:32:30.253863127 +0000 UTC m=+36.036259839" Dec 13 01:36:30.474250 update_engine[1472]: I20241213 01:36:30.474154 1472 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 13 01:36:30.474250 update_engine[1472]: I20241213 01:36:30.474216 1472 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 13 01:36:30.476670 update_engine[1472]: I20241213 01:36:30.474545 1472 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 13 01:36:30.476670 
update_engine[1472]: I20241213 01:36:30.475196 1472 omaha_request_params.cc:62] Current group set to stable Dec 13 01:36:30.476670 update_engine[1472]: I20241213 01:36:30.475989 1472 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 13 01:36:30.476670 update_engine[1472]: I20241213 01:36:30.476008 1472 update_attempter.cc:643] Scheduling an action processor start. Dec 13 01:36:30.476670 update_engine[1472]: I20241213 01:36:30.476030 1472 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 01:36:30.476670 update_engine[1472]: I20241213 01:36:30.476071 1472 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 13 01:36:30.476670 update_engine[1472]: I20241213 01:36:30.476163 1472 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 01:36:30.476670 update_engine[1472]: I20241213 01:36:30.476182 1472 omaha_request_action.cc:272] Request: Dec 13 01:36:30.476670 update_engine[1472]: Dec 13 01:36:30.476670 update_engine[1472]: Dec 13 01:36:30.476670 update_engine[1472]: Dec 13 01:36:30.476670 update_engine[1472]: Dec 13 01:36:30.476670 update_engine[1472]: Dec 13 01:36:30.476670 update_engine[1472]: Dec 13 01:36:30.476670 update_engine[1472]: Dec 13 01:36:30.476670 update_engine[1472]: Dec 13 01:36:30.476670 update_engine[1472]: I20241213 01:36:30.476195 1472 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 01:36:30.477331 locksmithd[1513]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 13 01:36:30.479654 update_engine[1472]: I20241213 01:36:30.479609 1472 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 01:36:30.479955 update_engine[1472]: I20241213 01:36:30.479909 1472 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 13 01:36:30.480804 update_engine[1472]: E20241213 01:36:30.480759 1472 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 01:36:30.480873 update_engine[1472]: I20241213 01:36:30.480855 1472 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 13 01:36:37.778611 systemd[1]: Started sshd@5-78.46.150.92:22-147.75.109.163:45122.service - OpenSSH per-connection server daemon (147.75.109.163:45122). Dec 13 01:36:38.763871 sshd[4649]: Accepted publickey for core from 147.75.109.163 port 45122 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:36:38.765545 sshd[4649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:38.771342 systemd-logind[1470]: New session 6 of user core. Dec 13 01:36:38.776653 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:36:39.499426 sshd[4649]: pam_unix(sshd:session): session closed for user core Dec 13 01:36:39.504370 systemd[1]: sshd@5-78.46.150.92:22-147.75.109.163:45122.service: Deactivated successfully. Dec 13 01:36:39.506879 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:36:39.507673 systemd-logind[1470]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:36:39.508738 systemd-logind[1470]: Removed session 6. Dec 13 01:36:40.379465 update_engine[1472]: I20241213 01:36:40.379364 1472 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 01:36:40.379865 update_engine[1472]: I20241213 01:36:40.379637 1472 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 01:36:40.379914 update_engine[1472]: I20241213 01:36:40.379861 1472 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 13 01:36:40.380627 update_engine[1472]: E20241213 01:36:40.380590 1472 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 01:36:40.380672 update_engine[1472]: I20241213 01:36:40.380646 1472 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 13 01:36:44.672566 systemd[1]: Started sshd@6-78.46.150.92:22-147.75.109.163:45128.service - OpenSSH per-connection server daemon (147.75.109.163:45128). Dec 13 01:36:45.658425 sshd[4699]: Accepted publickey for core from 147.75.109.163 port 45128 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:36:45.660062 sshd[4699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:45.664814 systemd-logind[1470]: New session 7 of user core. Dec 13 01:36:45.670372 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:36:46.388402 sshd[4699]: pam_unix(sshd:session): session closed for user core Dec 13 01:36:46.392047 systemd-logind[1470]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:36:46.392797 systemd[1]: sshd@6-78.46.150.92:22-147.75.109.163:45128.service: Deactivated successfully. Dec 13 01:36:46.394877 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:36:46.396011 systemd-logind[1470]: Removed session 7. Dec 13 01:36:50.380066 update_engine[1472]: I20241213 01:36:50.379537 1472 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 01:36:50.380066 update_engine[1472]: I20241213 01:36:50.379798 1472 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 01:36:50.380066 update_engine[1472]: I20241213 01:36:50.379997 1472 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 13 01:36:50.381186 update_engine[1472]: E20241213 01:36:50.381145  1472 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 01:36:50.381347 update_engine[1472]: I20241213 01:36:50.381258  1472 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Dec 13 01:36:51.563506 systemd[1]: Started sshd@7-78.46.150.92:22-147.75.109.163:40688.service - OpenSSH per-connection server daemon (147.75.109.163:40688).
Dec 13 01:36:52.550361 sshd[4734]: Accepted publickey for core from 147.75.109.163 port 40688 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:36:52.552031 sshd[4734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:36:52.557359 systemd-logind[1470]: New session 8 of user core.
Dec 13 01:36:52.565413 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 01:36:53.291551 sshd[4734]: pam_unix(sshd:session): session closed for user core
Dec 13 01:36:53.295010 systemd[1]: sshd@7-78.46.150.92:22-147.75.109.163:40688.service: Deactivated successfully.
Dec 13 01:36:53.297029 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 01:36:53.297889 systemd-logind[1470]: Session 8 logged out. Waiting for processes to exit.
Dec 13 01:36:53.298792 systemd-logind[1470]: Removed session 8.
Dec 13 01:36:53.464770 systemd[1]: Started sshd@8-78.46.150.92:22-147.75.109.163:40704.service - OpenSSH per-connection server daemon (147.75.109.163:40704).
Dec 13 01:36:54.452753 sshd[4769]: Accepted publickey for core from 147.75.109.163 port 40704 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:36:54.454753 sshd[4769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:36:54.461082 systemd-logind[1470]: New session 9 of user core.
Dec 13 01:36:54.464432 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 01:36:55.215864 sshd[4769]: pam_unix(sshd:session): session closed for user core
Dec 13 01:36:55.219633 systemd-logind[1470]: Session 9 logged out. Waiting for processes to exit.
Dec 13 01:36:55.220155 systemd[1]: sshd@8-78.46.150.92:22-147.75.109.163:40704.service: Deactivated successfully.
Dec 13 01:36:55.222024 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 01:36:55.223079 systemd-logind[1470]: Removed session 9.
Dec 13 01:36:55.387436 systemd[1]: Started sshd@9-78.46.150.92:22-147.75.109.163:40714.service - OpenSSH per-connection server daemon (147.75.109.163:40714).
Dec 13 01:36:56.379763 sshd[4781]: Accepted publickey for core from 147.75.109.163 port 40714 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:36:56.381501 sshd[4781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:36:56.386508 systemd-logind[1470]: New session 10 of user core.
Dec 13 01:36:56.393377 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 01:36:57.120985 sshd[4781]: pam_unix(sshd:session): session closed for user core
Dec 13 01:36:57.125140 systemd[1]: sshd@9-78.46.150.92:22-147.75.109.163:40714.service: Deactivated successfully.
Dec 13 01:36:57.127153 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 01:36:57.127849 systemd-logind[1470]: Session 10 logged out. Waiting for processes to exit.
Dec 13 01:36:57.129194 systemd-logind[1470]: Removed session 10.
Dec 13 01:37:00.382471 update_engine[1472]: I20241213 01:37:00.382364  1472 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 01:37:00.382964 update_engine[1472]: I20241213 01:37:00.382766  1472 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 01:37:00.383122 update_engine[1472]: I20241213 01:37:00.383078  1472 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 01:37:00.383916 update_engine[1472]: E20241213 01:37:00.383853  1472 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 01:37:00.384019 update_engine[1472]: I20241213 01:37:00.383931  1472 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 13 01:37:00.384019 update_engine[1472]: I20241213 01:37:00.383949  1472 omaha_request_action.cc:617] Omaha request response:
Dec 13 01:37:00.384083 update_engine[1472]: E20241213 01:37:00.384063  1472 omaha_request_action.cc:636] Omaha request network transfer failed.
Dec 13 01:37:00.384111 update_engine[1472]: I20241213 01:37:00.384092  1472 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Dec 13 01:37:00.384138 update_engine[1472]: I20241213 01:37:00.384104  1472 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 01:37:00.384138 update_engine[1472]: I20241213 01:37:00.384116  1472 update_attempter.cc:306] Processing Done.
Dec 13 01:37:00.384200 update_engine[1472]: E20241213 01:37:00.384140  1472 update_attempter.cc:619] Update failed.
Dec 13 01:37:00.384200 update_engine[1472]: I20241213 01:37:00.384151  1472 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Dec 13 01:37:00.384200 update_engine[1472]: I20241213 01:37:00.384163  1472 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Dec 13 01:37:00.384200 update_engine[1472]: I20241213 01:37:00.384176  1472 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Dec 13 01:37:00.384329 update_engine[1472]: I20241213 01:37:00.384305  1472 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 13 01:37:00.384356 update_engine[1472]: I20241213 01:37:00.384341  1472 omaha_request_action.cc:271] Posting an Omaha request to disabled
Dec 13 01:37:00.384382 update_engine[1472]: I20241213 01:37:00.384356  1472 omaha_request_action.cc:272] Request:
Dec 13 01:37:00.384382 update_engine[1472]:
Dec 13 01:37:00.384382 update_engine[1472]:
Dec 13 01:37:00.384382 update_engine[1472]:
Dec 13 01:37:00.384382 update_engine[1472]:
Dec 13 01:37:00.384382 update_engine[1472]:
Dec 13 01:37:00.384382 update_engine[1472]:
Dec 13 01:37:00.384382 update_engine[1472]: I20241213 01:37:00.384367  1472 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 01:37:00.384754 update_engine[1472]: I20241213 01:37:00.384596  1472 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 01:37:00.384888 update_engine[1472]: I20241213 01:37:00.384845  1472 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 01:37:00.385163 locksmithd[1513]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Dec 13 01:37:00.385568 update_engine[1472]: E20241213 01:37:00.385533  1472 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 01:37:00.385652 update_engine[1472]: I20241213 01:37:00.385619  1472 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 13 01:37:00.385652 update_engine[1472]: I20241213 01:37:00.385643  1472 omaha_request_action.cc:617] Omaha request response:
Dec 13 01:37:00.385720 update_engine[1472]: I20241213 01:37:00.385659  1472 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 01:37:00.385720 update_engine[1472]: I20241213 01:37:00.385682  1472 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 01:37:00.385787 update_engine[1472]: I20241213 01:37:00.385710  1472 update_attempter.cc:306] Processing Done.
Dec 13 01:37:00.385787 update_engine[1472]: I20241213 01:37:00.385735  1472 update_attempter.cc:310] Error event sent.
Dec 13 01:37:00.385787 update_engine[1472]: I20241213 01:37:00.385763  1472 update_check_scheduler.cc:74] Next update check in 43m36s
Dec 13 01:37:00.386371 locksmithd[1513]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Dec 13 01:37:02.288457 systemd[1]: Started sshd@10-78.46.150.92:22-147.75.109.163:37996.service - OpenSSH per-connection server daemon (147.75.109.163:37996).
Dec 13 01:37:03.265711 sshd[4819]: Accepted publickey for core from 147.75.109.163 port 37996 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:03.267408 sshd[4819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:03.272499 systemd-logind[1470]: New session 11 of user core.
Dec 13 01:37:03.277391 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 01:37:04.007588 sshd[4819]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:04.012071 systemd-logind[1470]: Session 11 logged out. Waiting for processes to exit.
Dec 13 01:37:04.012974 systemd[1]: sshd@10-78.46.150.92:22-147.75.109.163:37996.service: Deactivated successfully.
Dec 13 01:37:04.015806 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 01:37:04.017058 systemd-logind[1470]: Removed session 11.
Dec 13 01:37:04.188497 systemd[1]: Started sshd@11-78.46.150.92:22-147.75.109.163:38004.service - OpenSSH per-connection server daemon (147.75.109.163:38004).
Dec 13 01:37:05.176046 sshd[4853]: Accepted publickey for core from 147.75.109.163 port 38004 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:05.177596 sshd[4853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:05.181821 systemd-logind[1470]: New session 12 of user core.
Dec 13 01:37:05.189513 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 01:37:06.039229 sshd[4853]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:06.043556 systemd-logind[1470]: Session 12 logged out. Waiting for processes to exit.
Dec 13 01:37:06.043899 systemd[1]: sshd@11-78.46.150.92:22-147.75.109.163:38004.service: Deactivated successfully.
Dec 13 01:37:06.046172 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 01:37:06.047167 systemd-logind[1470]: Removed session 12.
Dec 13 01:37:06.211608 systemd[1]: Started sshd@12-78.46.150.92:22-147.75.109.163:38010.service - OpenSSH per-connection server daemon (147.75.109.163:38010).
Dec 13 01:37:07.202290 sshd[4865]: Accepted publickey for core from 147.75.109.163 port 38010 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:07.203987 sshd[4865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:07.209018 systemd-logind[1470]: New session 13 of user core.
Dec 13 01:37:07.216393 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 01:37:09.312654 sshd[4865]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:09.315741 systemd[1]: sshd@12-78.46.150.92:22-147.75.109.163:38010.service: Deactivated successfully.
Dec 13 01:37:09.318001 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 01:37:09.319824 systemd-logind[1470]: Session 13 logged out. Waiting for processes to exit.
Dec 13 01:37:09.321173 systemd-logind[1470]: Removed session 13.
Dec 13 01:37:09.487646 systemd[1]: Started sshd@13-78.46.150.92:22-147.75.109.163:59340.service - OpenSSH per-connection server daemon (147.75.109.163:59340).
Dec 13 01:37:10.458615 sshd[4905]: Accepted publickey for core from 147.75.109.163 port 59340 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:10.461538 sshd[4905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:10.469397 systemd-logind[1470]: New session 14 of user core.
Dec 13 01:37:10.478466 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 01:37:11.318002 sshd[4905]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:11.322075 systemd[1]: sshd@13-78.46.150.92:22-147.75.109.163:59340.service: Deactivated successfully.
Dec 13 01:37:11.323972 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 01:37:11.324743 systemd-logind[1470]: Session 14 logged out. Waiting for processes to exit.
Dec 13 01:37:11.325879 systemd-logind[1470]: Removed session 14.
Dec 13 01:37:11.492563 systemd[1]: Started sshd@14-78.46.150.92:22-147.75.109.163:59342.service - OpenSSH per-connection server daemon (147.75.109.163:59342).
Dec 13 01:37:12.467520 sshd[4916]: Accepted publickey for core from 147.75.109.163 port 59342 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:12.469268 sshd[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:12.473731 systemd-logind[1470]: New session 15 of user core.
Dec 13 01:37:12.481403 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 01:37:13.192721 sshd[4916]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:13.195926 systemd[1]: sshd@14-78.46.150.92:22-147.75.109.163:59342.service: Deactivated successfully.
Dec 13 01:37:13.198285 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 01:37:13.199828 systemd-logind[1470]: Session 15 logged out. Waiting for processes to exit.
Dec 13 01:37:13.201430 systemd-logind[1470]: Removed session 15.
Dec 13 01:37:18.364483 systemd[1]: Started sshd@15-78.46.150.92:22-147.75.109.163:39844.service - OpenSSH per-connection server daemon (147.75.109.163:39844).
Dec 13 01:37:19.340800 sshd[4974]: Accepted publickey for core from 147.75.109.163 port 39844 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:19.342880 sshd[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:19.347802 systemd-logind[1470]: New session 16 of user core.
Dec 13 01:37:19.354403 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 01:37:20.080334 sshd[4974]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:20.083919 systemd-logind[1470]: Session 16 logged out. Waiting for processes to exit.
Dec 13 01:37:20.084847 systemd[1]: sshd@15-78.46.150.92:22-147.75.109.163:39844.service: Deactivated successfully.
Dec 13 01:37:20.087139 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 01:37:20.088540 systemd-logind[1470]: Removed session 16.
Dec 13 01:37:25.256773 systemd[1]: Started sshd@16-78.46.150.92:22-147.75.109.163:39854.service - OpenSSH per-connection server daemon (147.75.109.163:39854).
Dec 13 01:37:26.251155 sshd[5009]: Accepted publickey for core from 147.75.109.163 port 39854 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:26.252939 sshd[5009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:26.257675 systemd-logind[1470]: New session 17 of user core.
Dec 13 01:37:26.262396 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 01:37:26.988638 sshd[5009]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:26.991371 systemd[1]: sshd@16-78.46.150.92:22-147.75.109.163:39854.service: Deactivated successfully.
Dec 13 01:37:26.993291 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 01:37:26.994791 systemd-logind[1470]: Session 17 logged out. Waiting for processes to exit.
Dec 13 01:37:26.995893 systemd-logind[1470]: Removed session 17.
Dec 13 01:37:42.988105 kubelet[2673]: E1213 01:37:42.988044    2673 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56488->10.0.0.2:2379: read: connection timed out"
Dec 13 01:37:42.995751 systemd[1]: cri-containerd-453037c902ada53ddceb0797aa37b4e2b658c4f8031a7da6c4c2bbb875520cd2.scope: Deactivated successfully.
Dec 13 01:37:42.996321 systemd[1]: cri-containerd-453037c902ada53ddceb0797aa37b4e2b658c4f8031a7da6c4c2bbb875520cd2.scope: Consumed 1.278s CPU time, 13.6M memory peak, 0B memory swap peak.
Dec 13 01:37:43.018203 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-453037c902ada53ddceb0797aa37b4e2b658c4f8031a7da6c4c2bbb875520cd2-rootfs.mount: Deactivated successfully.
Dec 13 01:37:43.023168 containerd[1489]: time="2024-12-13T01:37:43.023105144Z" level=info msg="shim disconnected" id=453037c902ada53ddceb0797aa37b4e2b658c4f8031a7da6c4c2bbb875520cd2 namespace=k8s.io
Dec 13 01:37:43.023737 containerd[1489]: time="2024-12-13T01:37:43.023169978Z" level=warning msg="cleaning up after shim disconnected" id=453037c902ada53ddceb0797aa37b4e2b658c4f8031a7da6c4c2bbb875520cd2 namespace=k8s.io
Dec 13 01:37:43.023737 containerd[1489]: time="2024-12-13T01:37:43.023181319Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:37:43.464692 systemd[1]: cri-containerd-4ac3835f87d1df51f90f60e62ba494430ca7b14fee5b915993c577f80b627b30.scope: Deactivated successfully.
Dec 13 01:37:43.466284 systemd[1]: cri-containerd-4ac3835f87d1df51f90f60e62ba494430ca7b14fee5b915993c577f80b627b30.scope: Consumed 4.556s CPU time, 17.4M memory peak, 0B memory swap peak.
Dec 13 01:37:43.488556 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ac3835f87d1df51f90f60e62ba494430ca7b14fee5b915993c577f80b627b30-rootfs.mount: Deactivated successfully.
Dec 13 01:37:43.491844 containerd[1489]: time="2024-12-13T01:37:43.491768464Z" level=info msg="shim disconnected" id=4ac3835f87d1df51f90f60e62ba494430ca7b14fee5b915993c577f80b627b30 namespace=k8s.io
Dec 13 01:37:43.491844 containerd[1489]: time="2024-12-13T01:37:43.491819431Z" level=warning msg="cleaning up after shim disconnected" id=4ac3835f87d1df51f90f60e62ba494430ca7b14fee5b915993c577f80b627b30 namespace=k8s.io
Dec 13 01:37:43.491844 containerd[1489]: time="2024-12-13T01:37:43.491828688Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:37:43.989796 kubelet[2673]: I1213 01:37:43.989669    2673 scope.go:117] "RemoveContainer" containerID="4ac3835f87d1df51f90f60e62ba494430ca7b14fee5b915993c577f80b627b30"
Dec 13 01:37:43.992348 containerd[1489]: time="2024-12-13T01:37:43.992178052Z" level=info msg="CreateContainer within sandbox \"905d5080ba0ad296fb672e31a10e00584411cbc45767b257e98d28abb8658a05\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 13 01:37:43.992527 kubelet[2673]: I1213 01:37:43.992503    2673 scope.go:117] "RemoveContainer" containerID="453037c902ada53ddceb0797aa37b4e2b658c4f8031a7da6c4c2bbb875520cd2"
Dec 13 01:37:43.994428 containerd[1489]: time="2024-12-13T01:37:43.994403056Z" level=info msg="CreateContainer within sandbox \"d4000a19fdde7a36374579dacd670f9b9711b5e3c4af5b98cdd9bf48042838a7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 13 01:37:44.007758 containerd[1489]: time="2024-12-13T01:37:44.007722618Z" level=info msg="CreateContainer within sandbox \"905d5080ba0ad296fb672e31a10e00584411cbc45767b257e98d28abb8658a05\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3b2ac2f2e98c487bbe08547ee13d14c59bdd405a8ba479640ab4262de584bd65\""
Dec 13 01:37:44.008331 containerd[1489]: time="2024-12-13T01:37:44.008299000Z" level=info msg="StartContainer for \"3b2ac2f2e98c487bbe08547ee13d14c59bdd405a8ba479640ab4262de584bd65\""
Dec 13 01:37:44.010922 containerd[1489]: time="2024-12-13T01:37:44.010896657Z" level=info msg="CreateContainer within sandbox \"d4000a19fdde7a36374579dacd670f9b9711b5e3c4af5b98cdd9bf48042838a7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"f4948172172dd65d10a5cafcdff6583cf982070f8537028076a86123e909ff00\""
Dec 13 01:37:44.011618 containerd[1489]: time="2024-12-13T01:37:44.011566288Z" level=info msg="StartContainer for \"f4948172172dd65d10a5cafcdff6583cf982070f8537028076a86123e909ff00\""
Dec 13 01:37:44.041470 systemd[1]: Started cri-containerd-3b2ac2f2e98c487bbe08547ee13d14c59bdd405a8ba479640ab4262de584bd65.scope - libcontainer container 3b2ac2f2e98c487bbe08547ee13d14c59bdd405a8ba479640ab4262de584bd65.
Dec 13 01:37:44.065380 systemd[1]: Started cri-containerd-f4948172172dd65d10a5cafcdff6583cf982070f8537028076a86123e909ff00.scope - libcontainer container f4948172172dd65d10a5cafcdff6583cf982070f8537028076a86123e909ff00.
Dec 13 01:37:44.101182 containerd[1489]: time="2024-12-13T01:37:44.100890160Z" level=info msg="StartContainer for \"3b2ac2f2e98c487bbe08547ee13d14c59bdd405a8ba479640ab4262de584bd65\" returns successfully"
Dec 13 01:37:44.109401 containerd[1489]: time="2024-12-13T01:37:44.109370767Z" level=info msg="StartContainer for \"f4948172172dd65d10a5cafcdff6583cf982070f8537028076a86123e909ff00\" returns successfully"
Dec 13 01:37:48.304184 kubelet[2673]: E1213 01:37:48.300051    2673 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56304->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-2-1-2-9d1d8cb240.181098cbdb48140d  kube-system    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-2-1-2-9d1d8cb240,UID:1dd77ea92913c122cdd7e2a4a0cc95ae,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-2-9d1d8cb240,},FirstTimestamp:2024-12-13 01:37:37.862358029 +0000 UTC m=+343.644754741,LastTimestamp:2024-12-13 01:37:37.862358029 +0000 UTC m=+343.644754741,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-2-9d1d8cb240,}"