Dec 13 01:29:17.920945 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024 Dec 13 01:29:17.920968 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:29:17.920975 kernel: BIOS-provided physical RAM map: Dec 13 01:29:17.920981 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 01:29:17.920986 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 01:29:17.920991 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 01:29:17.920997 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable Dec 13 01:29:17.921002 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved Dec 13 01:29:17.921010 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 01:29:17.921015 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 13 01:29:17.921020 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 01:29:17.921025 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 01:29:17.921030 kernel: NX (Execute Disable) protection: active Dec 13 01:29:17.921035 kernel: APIC: Static calls initialized Dec 13 01:29:17.921044 kernel: SMBIOS 2.8 present. Dec 13 01:29:17.921049 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 Dec 13 01:29:17.921055 kernel: Hypervisor detected: KVM Dec 13 01:29:17.921060 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 01:29:17.921065 kernel: kvm-clock: using sched offset of 2950093468 cycles Dec 13 01:29:17.921071 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 01:29:17.921077 kernel: tsc: Detected 2445.406 MHz processor Dec 13 01:29:17.921083 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 01:29:17.921089 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 01:29:17.921096 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000 Dec 13 01:29:17.921102 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 13 01:29:17.921108 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 01:29:17.921113 kernel: Using GB pages for direct mapping Dec 13 01:29:17.921119 kernel: ACPI: Early table checksum verification disabled Dec 13 01:29:17.921124 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS ) Dec 13 01:29:17.921130 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:29:17.921135 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:29:17.921141 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:29:17.921149 kernel: ACPI: FACS 0x000000007CFE0000 000040 Dec 13 01:29:17.921154 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:29:17.921160 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) 
Dec 13 01:29:17.921166 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:29:17.921171 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:29:17.921177 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540] Dec 13 01:29:17.921182 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c] Dec 13 01:29:17.921188 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f] Dec 13 01:29:17.921199 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0] Dec 13 01:29:17.921204 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8] Dec 13 01:29:17.921210 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634] Dec 13 01:29:17.921216 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c] Dec 13 01:29:17.921222 kernel: No NUMA configuration found Dec 13 01:29:17.921228 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff] Dec 13 01:29:17.921235 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff] Dec 13 01:29:17.921241 kernel: Zone ranges: Dec 13 01:29:17.921247 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 01:29:17.921253 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff] Dec 13 01:29:17.921259 kernel: Normal empty Dec 13 01:29:17.921264 kernel: Movable zone start for each node Dec 13 01:29:17.921270 kernel: Early memory node ranges Dec 13 01:29:17.921276 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 01:29:17.921281 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff] Dec 13 01:29:17.921287 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff] Dec 13 01:29:17.921295 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:29:17.921301 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 01:29:17.921307 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Dec 13 01:29:17.921312 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 01:29:17.921318 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 01:29:17.921324 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 01:29:17.921330 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 01:29:17.921335 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 01:29:17.921341 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 01:29:17.921349 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 01:29:17.921355 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 01:29:17.921360 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 01:29:17.921366 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 01:29:17.921372 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 01:29:17.921378 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 13 01:29:17.921384 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 13 01:29:17.921389 kernel: Booting paravirtualized kernel on KVM Dec 13 01:29:17.921395 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 01:29:17.921403 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 13 01:29:17.921409 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Dec 13 01:29:17.921415 kernel: pcpu-alloc: 
s197032 r8192 d32344 u1048576 alloc=1*2097152 Dec 13 01:29:17.921420 kernel: pcpu-alloc: [0] 0 1 Dec 13 01:29:17.921426 kernel: kvm-guest: PV spinlocks disabled, no host support Dec 13 01:29:17.921433 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:29:17.921439 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:29:17.921445 kernel: random: crng init done Dec 13 01:29:17.921453 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:29:17.921459 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 01:29:17.921465 kernel: Fallback order for Node 0: 0 Dec 13 01:29:17.921470 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708 Dec 13 01:29:17.921476 kernel: Policy zone: DMA32 Dec 13 01:29:17.921482 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:29:17.921488 kernel: Memory: 1922056K/2047464K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved) Dec 13 01:29:17.921494 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 01:29:17.921500 kernel: ftrace: allocating 37902 entries in 149 pages Dec 13 01:29:17.921535 kernel: ftrace: allocated 149 pages with 4 groups Dec 13 01:29:17.921542 kernel: Dynamic Preempt: voluntary Dec 13 01:29:17.921548 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:29:17.921554 kernel: rcu: RCU event tracing is enabled. Dec 13 01:29:17.921560 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 01:29:17.921567 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:29:17.921573 kernel: Rude variant of Tasks RCU enabled. Dec 13 01:29:17.921579 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:29:17.921585 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:29:17.921591 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 01:29:17.921599 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 01:29:17.921605 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:29:17.921610 kernel: Console: colour VGA+ 80x25 Dec 13 01:29:17.921616 kernel: printk: console [tty0] enabled Dec 13 01:29:17.921622 kernel: printk: console [ttyS0] enabled Dec 13 01:29:17.921628 kernel: ACPI: Core revision 20230628 Dec 13 01:29:17.921634 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 13 01:29:17.921640 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 01:29:17.921646 kernel: x2apic enabled Dec 13 01:29:17.921654 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 01:29:17.921660 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 01:29:17.921666 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Dec 13 01:29:17.921671 kernel: Calibrating delay loop (skipped) preset value.. 
4890.81 BogoMIPS (lpj=2445406) Dec 13 01:29:17.921677 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 01:29:17.921683 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 13 01:29:17.921689 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 13 01:29:17.921695 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 01:29:17.921709 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 01:29:17.921716 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 01:29:17.921722 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 01:29:17.921730 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Dec 13 01:29:17.921737 kernel: RETBleed: Mitigation: untrained return thunk Dec 13 01:29:17.921743 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 01:29:17.921749 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 01:29:17.921755 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Dec 13 01:29:17.921762 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Dec 13 01:29:17.921768 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Dec 13 01:29:17.921775 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 01:29:17.921783 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 01:29:17.921789 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 01:29:17.921795 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 01:29:17.921801 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Dec 13 01:29:17.921808 kernel: Freeing SMP alternatives memory: 32K Dec 13 01:29:17.921816 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:29:17.921822 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:29:17.921828 kernel: landlock: Up and running. Dec 13 01:29:17.921834 kernel: SELinux: Initializing. Dec 13 01:29:17.921840 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 01:29:17.921847 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 01:29:17.921853 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0) Dec 13 01:29:17.921859 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:29:17.921865 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:29:17.921873 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:29:17.921879 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 13 01:29:17.921886 kernel: ... version: 0 Dec 13 01:29:17.921892 kernel: ... bit width: 48 Dec 13 01:29:17.921898 kernel: ... generic registers: 6 Dec 13 01:29:17.921904 kernel: ... value mask: 0000ffffffffffff Dec 13 01:29:17.921910 kernel: ... max period: 00007fffffffffff Dec 13 01:29:17.921916 kernel: ... fixed-purpose events: 0 Dec 13 01:29:17.921933 kernel: ... 
event mask: 000000000000003f Dec 13 01:29:17.921941 kernel: signal: max sigframe size: 1776 Dec 13 01:29:17.921947 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:29:17.921954 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:29:17.921960 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:29:17.921966 kernel: smpboot: x86: Booting SMP configuration: Dec 13 01:29:17.921972 kernel: .... node #0, CPUs: #1 Dec 13 01:29:17.921978 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:29:17.921984 kernel: smpboot: Max logical packages: 1 Dec 13 01:29:17.921990 kernel: smpboot: Total of 2 processors activated (9781.62 BogoMIPS) Dec 13 01:29:17.921996 kernel: devtmpfs: initialized Dec 13 01:29:17.922005 kernel: x86/mm: Memory block size: 128MB Dec 13 01:29:17.922011 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:29:17.922018 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 01:29:17.922024 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:29:17.922030 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:29:17.922036 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:29:17.922042 kernel: audit: type=2000 audit(1734053356.546:1): state=initialized audit_enabled=0 res=1 Dec 13 01:29:17.922048 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:29:17.922054 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 01:29:17.922062 kernel: cpuidle: using governor menu Dec 13 01:29:17.922068 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:29:17.922075 kernel: dca service started, version 1.12.1 Dec 13 01:29:17.922081 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Dec 13 01:29:17.922087 kernel: PCI: Using configuration type 1 for base access Dec 13 01:29:17.922093 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 01:29:17.922099 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:29:17.922105 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:29:17.922114 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:29:17.922120 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:29:17.922128 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:29:17.922140 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:29:17.922149 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:29:17.922155 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:29:17.922161 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:29:17.922167 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 01:29:17.922174 kernel: ACPI: Interpreter enabled Dec 13 01:29:17.922180 kernel: ACPI: PM: (supports S0 S5) Dec 13 01:29:17.922188 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 01:29:17.922195 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 01:29:17.922201 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 01:29:17.922207 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 01:29:17.922214 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 01:29:17.922377 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:29:17.922531 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 13 01:29:17.922655 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 13 01:29:17.922665 kernel: PCI host bridge to bus 0000:00 Dec 13 01:29:17.922780 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 01:29:17.922876 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 01:29:17.922989 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 01:29:17.923084 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window] Dec 13 01:29:17.923177 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 01:29:17.923275 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Dec 13 01:29:17.923370 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 01:29:17.923492 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Dec 13 01:29:17.923643 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 Dec 13 01:29:17.923754 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref] Dec 13 01:29:17.923857 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref] Dec 13 01:29:17.923982 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff] Dec 13 01:29:17.924093 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref] Dec 13 01:29:17.924195 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 01:29:17.924306 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Dec 13 01:29:17.924409 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff] Dec 13 01:29:17.924546 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Dec 13 01:29:17.924654 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff] Dec 13 01:29:17.924770 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Dec 13 01:29:17.924874 kernel: pci 0000:00:02.2: reg 0x10: [mem 
0xfea13000-0xfea13fff] Dec 13 01:29:17.925000 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Dec 13 01:29:17.925105 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff] Dec 13 01:29:17.925219 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Dec 13 01:29:17.925322 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff] Dec 13 01:29:17.925440 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Dec 13 01:29:17.925596 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff] Dec 13 01:29:17.925712 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Dec 13 01:29:17.925816 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff] Dec 13 01:29:17.925945 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Dec 13 01:29:17.926052 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff] Dec 13 01:29:17.926169 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Dec 13 01:29:17.926273 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff] Dec 13 01:29:17.926385 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Dec 13 01:29:17.926488 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 01:29:17.926681 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Dec 13 01:29:17.926786 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f] Dec 13 01:29:17.926893 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff] Dec 13 01:29:17.927020 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Dec 13 01:29:17.927154 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Dec 13 01:29:17.927286 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Dec 13 01:29:17.927415 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff] Dec 13 01:29:17.927612 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Dec 13 01:29:17.927726 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref] Dec 13 01:29:17.927838 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Dec 13 01:29:17.927955 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Dec 13 01:29:17.928059 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Dec 13 01:29:17.928176 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Dec 13 01:29:17.928285 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit] Dec 13 01:29:17.928388 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Dec 13 01:29:17.928497 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Dec 13 01:29:17.928634 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 13 01:29:17.928752 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Dec 13 01:29:17.928862 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff] Dec 13 01:29:17.928987 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref] Dec 13 01:29:17.929092 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Dec 13 01:29:17.929194 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Dec 13 01:29:17.929295 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 13 01:29:17.929416 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Dec 13 01:29:17.929572 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Dec 13 01:29:17.929678 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Dec 13 01:29:17.929780 kernel: pci 0000:00:02.3: 
bridge window [mem 0xfe200000-0xfe3fffff] Dec 13 01:29:17.929882 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 13 01:29:17.930009 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Dec 13 01:29:17.930119 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref] Dec 13 01:29:17.930227 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Dec 13 01:29:17.930329 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Dec 13 01:29:17.930432 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 13 01:29:17.930581 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Dec 13 01:29:17.930693 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff] Dec 13 01:29:17.930801 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref] Dec 13 01:29:17.930903 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Dec 13 01:29:17.931029 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Dec 13 01:29:17.931134 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 13 01:29:17.931143 kernel: acpiphp: Slot [0] registered Dec 13 01:29:17.931257 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Dec 13 01:29:17.931364 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff] Dec 13 01:29:17.931472 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref] Dec 13 01:29:17.931638 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref] Dec 13 01:29:17.931769 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Dec 13 01:29:17.931881 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Dec 13 01:29:17.932002 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 13 01:29:17.932012 kernel: acpiphp: Slot [0-2] registered Dec 13 01:29:17.932134 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Dec 13 01:29:17.932238 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Dec 13 01:29:17.932339 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 13 01:29:17.932348 kernel: acpiphp: Slot [0-3] registered Dec 13 01:29:17.932448 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Dec 13 01:29:17.932618 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 13 01:29:17.932722 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 13 01:29:17.932731 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 01:29:17.932738 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 01:29:17.932744 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 01:29:17.932751 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 01:29:17.932757 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 01:29:17.932763 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 01:29:17.932769 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 01:29:17.932780 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 01:29:17.932786 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 01:29:17.932792 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 01:29:17.932798 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 13 01:29:17.932804 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 01:29:17.932811 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 
20 Dec 13 01:29:17.932817 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 01:29:17.932823 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 01:29:17.932829 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 01:29:17.932838 kernel: iommu: Default domain type: Translated Dec 13 01:29:17.932845 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 01:29:17.932851 kernel: PCI: Using ACPI for IRQ routing Dec 13 01:29:17.932857 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 01:29:17.932864 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 01:29:17.932870 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff] Dec 13 01:29:17.932990 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 13 01:29:17.933093 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 13 01:29:17.933194 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 01:29:17.933206 kernel: vgaarb: loaded Dec 13 01:29:17.933213 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 13 01:29:17.933219 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 13 01:29:17.933225 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 01:29:17.933232 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:29:17.933238 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:29:17.933244 kernel: pnp: PnP ACPI init Dec 13 01:29:17.933380 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 13 01:29:17.933403 kernel: pnp: PnP ACPI: found 5 devices Dec 13 01:29:17.933410 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 01:29:17.933417 kernel: NET: Registered PF_INET protocol family Dec 13 01:29:17.933423 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:29:17.933429 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 13 01:29:17.933436 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:29:17.933442 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 01:29:17.933449 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 13 01:29:17.933461 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 13 01:29:17.933477 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 01:29:17.933489 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 01:29:17.933498 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:29:17.933547 kernel: NET: Registered PF_XDP protocol family Dec 13 01:29:17.933664 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Dec 13 01:29:17.933768 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Dec 13 01:29:17.933876 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Dec 13 01:29:17.934290 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff] Dec 13 01:29:17.934407 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff] Dec 13 01:29:17.934527 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff] Dec 13 01:29:17.934634 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Dec 13 01:29:17.934737 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Dec 13 01:29:17.934839 kernel: pci 
0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Dec 13 01:29:17.934955 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Dec 13 01:29:17.935060 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Dec 13 01:29:17.935168 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 13 01:29:17.935271 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Dec 13 01:29:17.935372 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Dec 13 01:29:17.935497 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 13 01:29:17.936239 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Dec 13 01:29:17.936360 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Dec 13 01:29:17.937679 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 13 01:29:17.937814 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Dec 13 01:29:17.937969 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Dec 13 01:29:17.938088 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 13 01:29:17.938201 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Dec 13 01:29:17.938320 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Dec 13 01:29:17.938434 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 13 01:29:17.939650 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Dec 13 01:29:17.939775 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Dec 13 01:29:17.939881 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Dec 13 01:29:17.939999 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 13 01:29:17.940109 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Dec 13 01:29:17.940211 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Dec 13 01:29:17.940312 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Dec 13 01:29:17.940417 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 13 01:29:17.941580 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Dec 13 01:29:17.941708 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Dec 13 01:29:17.941822 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 13 01:29:17.941955 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 13 01:29:17.942061 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 01:29:17.942157 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 01:29:17.942257 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 01:29:17.942359 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window] Dec 13 01:29:17.942455 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 13 01:29:17.943600 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Dec 13 01:29:17.943720 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff] Dec 13 01:29:17.943823 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref] Dec 13 01:29:17.943946 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff] Dec 13 01:29:17.944055 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Dec 13 01:29:17.944162 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff] Dec 13 01:29:17.944261 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 13 
01:29:17.944492 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff] Dec 13 01:29:17.945178 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 13 01:29:17.947573 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff] Dec 13 01:29:17.947806 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 13 01:29:17.947939 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff] Dec 13 01:29:17.948044 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 13 01:29:17.948185 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Dec 13 01:29:17.948289 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff] Dec 13 01:29:17.948387 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 13 01:29:17.948498 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Dec 13 01:29:17.948638 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff] Dec 13 01:29:17.948738 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 13 01:29:17.948843 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Dec 13 01:29:17.948996 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff] Dec 13 01:29:17.949133 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 13 01:29:17.949145 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 01:29:17.949153 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:29:17.949164 kernel: Initialise system trusted keyrings Dec 13 01:29:17.949171 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 01:29:17.949178 kernel: Key type asymmetric registered Dec 13 01:29:17.949185 kernel: Asymmetric key parser 'x509' registered Dec 13 01:29:17.949191 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 01:29:17.949198 kernel: io scheduler mq-deadline registered Dec 13 01:29:17.949204 kernel: io scheduler kyber registered Dec 13 01:29:17.949211 kernel: io scheduler bfq registered Dec 13 01:29:17.949334 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 13 01:29:17.949448 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 13 01:29:17.949603 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 13 01:29:17.949713 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 13 01:29:17.949818 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 13 01:29:17.949935 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 13 01:29:17.950042 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 13 01:29:17.950150 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 13 01:29:17.950258 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 13 01:29:17.950369 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 13 01:29:17.950474 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 13 01:29:17.950611 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 13 01:29:17.950716 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 13 01:29:17.950820 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 13 01:29:17.950935 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 13 01:29:17.951047 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 13 01:29:17.951057 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 01:29:17.951161 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Dec 13 01:29:17.951268 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Dec 13 
01:29:17.951277 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:29:17.951284 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Dec 13 01:29:17.951294 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:29:17.951301 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:29:17.951307 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 01:29:17.951314 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:29:17.951321 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:29:17.951327 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:29:17.955721 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 01:29:17.955893 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 01:29:17.956087 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T01:29:17 UTC (1734053357) Dec 13 01:29:17.956259 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 13 01:29:17.956281 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Dec 13 01:29:17.956294 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:29:17.956306 kernel: Segment Routing with IPv6 Dec 13 01:29:17.956325 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:29:17.956338 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:29:17.956350 kernel: Key type dns_resolver registered Dec 13 01:29:17.956362 kernel: IPI shorthand broadcast: enabled Dec 13 01:29:17.956373 kernel: sched_clock: Marking stable (1153013602, 141904952)->(1305838651, -10920097) Dec 13 01:29:17.956385 kernel: registered taskstats version 1 Dec 13 01:29:17.956397 kernel: Loading compiled-in X.509 certificates Dec 13 01:29:17.956409 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 01:29:17.956421 kernel: Key type .fscrypt registered Dec 13 01:29:17.956439 kernel: Key type fscrypt-provisioning registered Dec 13 01:29:17.956451 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 01:29:17.956462 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:29:17.956473 kernel: ima: No architecture policies found Dec 13 01:29:17.956485 kernel: clk: Disabling unused clocks Dec 13 01:29:17.956497 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 01:29:17.956527 kernel: Write protecting the kernel read-only data: 36864k Dec 13 01:29:17.956541 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 01:29:17.956554 kernel: Run /init as init process Dec 13 01:29:17.956573 kernel: with arguments: Dec 13 01:29:17.956585 kernel: /init Dec 13 01:29:17.956597 kernel: with environment: Dec 13 01:29:17.956609 kernel: HOME=/ Dec 13 01:29:17.956621 kernel: TERM=linux Dec 13 01:29:17.956633 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:29:17.956648 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:29:17.956663 systemd[1]: Detected virtualization kvm. Dec 13 01:29:17.956682 systemd[1]: Detected architecture x86-64. Dec 13 01:29:17.956694 systemd[1]: Running in initrd. Dec 13 01:29:17.956707 systemd[1]: No hostname configured, using default hostname. 
Dec 13 01:29:17.956719 systemd[1]: Hostname set to . Dec 13 01:29:17.956733 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:29:17.956747 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:29:17.956761 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:29:17.956774 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:29:17.956794 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:29:17.956807 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:29:17.956821 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:29:17.956834 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:29:17.956850 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:29:17.956863 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:29:17.956876 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:29:17.956893 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:29:17.956906 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:29:17.956932 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:29:17.956946 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:29:17.956958 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:29:17.956972 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:29:17.956984 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:29:17.956998 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:29:17.957014 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:29:17.957026 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:29:17.957039 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:29:17.957052 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:29:17.957064 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:29:17.957076 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:29:17.957089 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:29:17.957101 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:29:17.957113 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:29:17.957129 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:29:17.957142 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:29:17.957155 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:29:17.957202 systemd-journald[186]: Collecting audit messages is disabled. Dec 13 01:29:17.957240 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:29:17.957254 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:29:17.957267 systemd[1]: Finished systemd-fsck-usr.service. 
Dec 13 01:29:17.957281 systemd-journald[186]: Journal started Dec 13 01:29:17.957312 systemd-journald[186]: Runtime Journal (/run/log/journal/182970e442834aa9a794dbd0945f9f0c) is 4.8M, max 38.4M, 33.6M free. Dec 13 01:29:17.959550 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:29:17.947816 systemd-modules-load[188]: Inserted module 'overlay' Dec 13 01:29:18.012557 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:29:18.012580 kernel: Bridge firewalling registered Dec 13 01:29:18.012590 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:29:17.973258 systemd-modules-load[188]: Inserted module 'br_netfilter' Dec 13 01:29:18.022078 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:29:18.023567 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:18.031649 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:29:18.033832 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:29:18.035666 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:29:18.038260 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:29:18.053662 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:29:18.055702 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:29:18.059681 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:29:18.061025 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:29:18.065812 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:29:18.074761 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:29:18.077181 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:29:18.088530 dracut-cmdline[219]: dracut-dracut-053 Dec 13 01:29:18.088530 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:29:18.105708 systemd-resolved[221]: Positive Trust Anchors: Dec 13 01:29:18.109772 systemd-resolved[221]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:29:18.109814 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:29:18.112542 systemd-resolved[221]: Defaulting to hostname 'linux'. Dec 13 01:29:18.113836 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:29:18.114461 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:29:18.162528 kernel: SCSI subsystem initialized Dec 13 01:29:18.171543 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:29:18.183538 kernel: iscsi: registered transport (tcp) Dec 13 01:29:18.204651 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:29:18.204699 kernel: QLogic iSCSI HBA Driver Dec 13 01:29:18.246497 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:29:18.251644 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:29:18.274762 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:29:18.274831 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:29:18.274843 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:29:18.316538 kernel: raid6: avx2x4 gen() 31602 MB/s Dec 13 01:29:18.333528 kernel: raid6: avx2x2 gen() 28999 MB/s Dec 13 01:29:18.351712 kernel: raid6: avx2x1 gen() 22486 MB/s Dec 13 01:29:18.351798 kernel: raid6: using algorithm avx2x4 gen() 31602 MB/s Dec 13 01:29:18.369530 kernel: raid6: .... xor() 4521 MB/s, rmw enabled Dec 13 01:29:18.369594 kernel: raid6: using avx2x2 recovery algorithm Dec 13 01:29:18.389584 kernel: xor: automatically using best checksumming function avx Dec 13 01:29:18.532552 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:29:18.544551 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:29:18.549659 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:29:18.575323 systemd-udevd[405]: Using default interface naming scheme 'v255'. Dec 13 01:29:18.579271 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:29:18.588124 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:29:18.601185 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation Dec 13 01:29:18.629072 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:29:18.633632 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:29:18.700547 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:29:18.710857 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:29:18.726259 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:29:18.728341 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Dec 13 01:29:18.729972 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:29:18.731350 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:29:18.736670 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:29:18.748654 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:29:18.784844 kernel: scsi host0: Virtio SCSI HBA Dec 13 01:29:18.797584 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Dec 13 01:29:18.800669 kernel: libata version 3.00 loaded. Dec 13 01:29:18.832542 kernel: ACPI: bus type USB registered Dec 13 01:29:18.835205 kernel: usbcore: registered new interface driver usbfs Dec 13 01:29:18.835243 kernel: usbcore: registered new interface driver hub Dec 13 01:29:18.836475 kernel: usbcore: registered new device driver usb Dec 13 01:29:18.839700 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:29:18.839863 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:29:18.842906 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:29:18.842723 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:29:18.844425 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:29:18.844626 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:18.846452 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:29:18.853711 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:29:18.868938 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 01:29:18.875236 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 01:29:18.899201 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 01:29:18.899242 kernel: AES CTR mode by8 optimization enabled Dec 13 01:29:18.899267 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 01:29:18.899488 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 01:29:18.899716 kernel: scsi host1: ahci Dec 13 01:29:18.899962 kernel: scsi host2: ahci Dec 13 01:29:18.900186 kernel: scsi host3: ahci Dec 13 01:29:18.900397 kernel: scsi host4: ahci Dec 13 01:29:18.901040 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 13 01:29:18.907763 kernel: scsi host5: ahci Dec 13 01:29:18.907965 kernel: scsi host6: ahci Dec 13 01:29:18.908105 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 46 Dec 13 01:29:18.908116 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 46 Dec 13 01:29:18.908130 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 46 Dec 13 01:29:18.908138 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 46 Dec 13 01:29:18.908146 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 46 Dec 13 01:29:18.908154 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 46 Dec 13 01:29:18.908162 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Dec 13 01:29:18.908298 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 13 01:29:18.908422 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 13 01:29:18.909844 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Dec 13 01:29:18.910061 kernel: xhci_hcd 0000:02:00.0: Host supports USB 
3.0 SuperSpeed Dec 13 01:29:18.910216 kernel: hub 1-0:1.0: USB hub found Dec 13 01:29:18.910362 kernel: hub 1-0:1.0: 4 ports detected Dec 13 01:29:18.910489 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Dec 13 01:29:18.910697 kernel: hub 2-0:1.0: USB hub found Dec 13 01:29:18.910849 kernel: hub 2-0:1.0: 4 ports detected Dec 13 01:29:18.960054 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:18.965660 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:29:18.981763 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:29:19.146569 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 01:29:19.208144 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 01:29:19.208223 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 01:29:19.208236 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 01:29:19.208246 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 01:29:19.209684 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 01:29:19.212030 kernel: ata1.00: applying bridge limits Dec 13 01:29:19.213172 kernel: ata1.00: configured for UDMA/100 Dec 13 01:29:19.217477 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 01:29:19.217570 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 13 01:29:19.217583 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 01:29:19.249814 kernel: sd 0:0:0:0: Power-on or device reset occurred Dec 13 01:29:19.269148 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Dec 13 01:29:19.269306 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 01:29:19.269441 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Dec 13 01:29:19.269597 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 01:29:19.269730 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:29:19.269747 kernel: GPT:17805311 != 80003071 Dec 13 01:29:19.269756 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:29:19.269764 kernel: GPT:17805311 != 80003071 Dec 13 01:29:19.269771 kernel: GPT: Use GNU Parted to correct GPT errors. 
Dec 13 01:29:19.269780 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:29:19.269788 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 01:29:19.280641 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 01:29:19.296377 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:29:19.296400 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Dec 13 01:29:19.300530 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 01:29:19.306520 kernel: usbcore: registered new interface driver usbhid Dec 13 01:29:19.306543 kernel: usbhid: USB HID core driver Dec 13 01:29:19.309534 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Dec 13 01:29:19.309557 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (458) Dec 13 01:29:19.316060 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Dec 13 01:29:19.322638 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (453) Dec 13 01:29:19.321230 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Dec 13 01:29:19.332614 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Dec 13 01:29:19.340800 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 13 01:29:19.344559 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Dec 13 01:29:19.345104 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Dec 13 01:29:19.351633 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:29:19.357145 disk-uuid[574]: Primary Header is updated. Dec 13 01:29:19.357145 disk-uuid[574]: Secondary Entries is updated. Dec 13 01:29:19.357145 disk-uuid[574]: Secondary Header is updated. Dec 13 01:29:19.375528 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:29:19.390597 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:29:19.402547 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:29:20.399540 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:29:20.400272 disk-uuid[575]: The operation has completed successfully. Dec 13 01:29:20.459564 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:29:20.459720 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:29:20.480678 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:29:20.485068 sh[595]: Success Dec 13 01:29:20.498609 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 01:29:20.556691 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:29:20.571669 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:29:20.572457 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 13 01:29:20.600541 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 01:29:20.600599 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:29:20.603337 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:29:20.603370 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:29:20.605693 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:29:20.613543 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 01:29:20.615429 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:29:20.616728 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:29:20.622678 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:29:20.625646 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:29:20.638015 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:29:20.638047 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:29:20.639749 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:29:20.646564 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:29:20.646626 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:29:20.658167 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:29:20.663191 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:29:20.665759 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:29:20.671692 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:29:20.751950 ignition[691]: Ignition 2.19.0 Dec 13 01:29:20.752817 ignition[691]: Stage: fetch-offline Dec 13 01:29:20.752864 ignition[691]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:20.752874 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 01:29:20.752971 ignition[691]: parsed url from cmdline: "" Dec 13 01:29:20.752975 ignition[691]: no config URL provided Dec 13 01:29:20.752980 ignition[691]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:29:20.758694 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:29:20.752989 ignition[691]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:29:20.759773 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:29:20.752994 ignition[691]: failed to fetch config: resource requires networking Dec 13 01:29:20.753402 ignition[691]: Ignition finished successfully Dec 13 01:29:20.765746 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:29:20.790765 systemd-networkd[782]: lo: Link UP Dec 13 01:29:20.790776 systemd-networkd[782]: lo: Gained carrier Dec 13 01:29:20.793710 systemd-networkd[782]: Enumeration completed Dec 13 01:29:20.794046 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:29:20.794927 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 01:29:20.794945 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:29:20.795019 systemd[1]: Reached target network.target - Network. Dec 13 01:29:20.795952 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:20.795956 systemd-networkd[782]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:29:20.797462 systemd-networkd[782]: eth0: Link UP Dec 13 01:29:20.797466 systemd-networkd[782]: eth0: Gained carrier Dec 13 01:29:20.797475 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:20.801681 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 01:29:20.801866 systemd-networkd[782]: eth1: Link UP Dec 13 01:29:20.801870 systemd-networkd[782]: eth1: Gained carrier Dec 13 01:29:20.801879 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:20.817116 ignition[784]: Ignition 2.19.0 Dec 13 01:29:20.817794 ignition[784]: Stage: fetch Dec 13 01:29:20.817974 ignition[784]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:20.817986 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 01:29:20.818079 ignition[784]: parsed url from cmdline: "" Dec 13 01:29:20.818083 ignition[784]: no config URL provided Dec 13 01:29:20.818088 ignition[784]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:29:20.818096 ignition[784]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:29:20.818113 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Dec 13 01:29:20.818302 ignition[784]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Dec 13 01:29:20.827558 systemd-networkd[782]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:29:20.865592 systemd-networkd[782]: eth0: DHCPv4 address 78.47.95.198/32, gateway 172.31.1.1 acquired from 172.31.1.1 Dec 13 01:29:21.018487 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Dec 13 01:29:21.026271 ignition[784]: GET result: OK Dec 13 01:29:21.026358 ignition[784]: parsing config with SHA512: 81bf4be6fbcebcc28c8ffd52ec0c916288a7b14ea99e19f61b2cfe87305cdc5f61bc610a67913f160c4ca720c4f5efb506a96cc080bd3b1f40d00beee787e83a Dec 13 01:29:21.031113 unknown[784]: fetched base config from "system" Dec 13 01:29:21.031598 ignition[784]: fetch: fetch complete Dec 13 01:29:21.031125 unknown[784]: fetched base config from "system" Dec 13 01:29:21.031606 ignition[784]: fetch: fetch passed Dec 13 01:29:21.031136 unknown[784]: fetched user config from "hetzner" Dec 13 01:29:21.032287 ignition[784]: Ignition finished successfully Dec 13 01:29:21.035555 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 01:29:21.043698 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:29:21.059907 ignition[791]: Ignition 2.19.0 Dec 13 01:29:21.059919 ignition[791]: Stage: kargs Dec 13 01:29:21.060140 ignition[791]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:21.060154 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 01:29:21.062783 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Dec 13 01:29:21.061011 ignition[791]: kargs: kargs passed Dec 13 01:29:21.061059 ignition[791]: Ignition finished successfully Dec 13 01:29:21.074743 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:29:21.088256 ignition[797]: Ignition 2.19.0 Dec 13 01:29:21.088274 ignition[797]: Stage: disks Dec 13 01:29:21.088474 ignition[797]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:21.088491 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 01:29:21.090806 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:29:21.089293 ignition[797]: disks: disks passed Dec 13 01:29:21.092661 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:29:21.089336 ignition[797]: Ignition finished successfully Dec 13 01:29:21.094832 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:29:21.095793 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:29:21.097024 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:29:21.098090 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:29:21.104640 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:29:21.121687 systemd-fsck[806]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Dec 13 01:29:21.124382 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:29:21.128612 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:29:21.212541 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 01:29:21.212686 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:29:21.213680 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:29:21.219580 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:29:21.222626 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:29:21.224665 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 13 01:29:21.226901 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:29:21.228077 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:29:21.234530 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (814) Dec 13 01:29:21.238708 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:29:21.238747 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:29:21.238763 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:29:21.238042 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:29:21.245378 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:29:21.245419 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:29:21.250122 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:29:21.252909 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:29:21.299781 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:29:21.302481 coreos-metadata[816]: Dec 13 01:29:21.302 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Dec 13 01:29:21.304119 coreos-metadata[816]: Dec 13 01:29:21.304 INFO Fetch successful Dec 13 01:29:21.304119 coreos-metadata[816]: Dec 13 01:29:21.304 INFO wrote hostname ci-4081-2-1-8-09b5250c1f to /sysroot/etc/hostname Dec 13 01:29:21.306415 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 01:29:21.309255 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:29:21.312574 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:29:21.317336 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:29:21.412229 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:29:21.417597 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:29:21.419638 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:29:21.431552 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:29:21.447780 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:29:21.452918 ignition[932]: INFO : Ignition 2.19.0 Dec 13 01:29:21.452918 ignition[932]: INFO : Stage: mount Dec 13 01:29:21.454088 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:21.454088 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 01:29:21.454088 ignition[932]: INFO : mount: mount passed Dec 13 01:29:21.454088 ignition[932]: INFO : Ignition finished successfully Dec 13 01:29:21.455132 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:29:21.464626 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:29:21.598569 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:29:21.603670 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:29:21.616532 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (944) Dec 13 01:29:21.620545 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:29:21.620583 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:29:21.620595 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:29:21.627038 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:29:21.627083 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:29:21.629666 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:29:21.651699 ignition[960]: INFO : Ignition 2.19.0 Dec 13 01:29:21.651699 ignition[960]: INFO : Stage: files Dec 13 01:29:21.653639 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:21.653639 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 01:29:21.653639 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:29:21.656863 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:29:21.656863 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:29:21.659100 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:29:21.659100 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:29:21.660842 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:29:21.659100 unknown[960]: wrote ssh authorized keys file for user: core Dec 13 01:29:21.662520 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:29:21.662520 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:29:21.662520 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:29:21.662520 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:29:21.766027 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 01:29:21.940255 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:29:21.940255 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:29:21.944466 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:29:21.944466 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:29:21.944466 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:29:21.944466 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:29:21.944466 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:29:21.944466 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:29:21.944466 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:29:21.944466 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:29:21.944466 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:29:21.944466 ignition[960]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:29:21.944466 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:29:21.944466 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:29:21.944466 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 01:29:22.197678 systemd-networkd[782]: eth1: Gained IPv6LL Dec 13 01:29:22.473823 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 01:29:22.581770 systemd-networkd[782]: eth0: Gained IPv6LL Dec 13 01:29:22.726052 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:29:22.726052 ignition[960]: INFO : files: op(c): [started] processing unit "containerd.service" Dec 13 01:29:22.729257 ignition[960]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:29:22.729257 ignition[960]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:29:22.729257 ignition[960]: INFO : files: op(c): [finished] processing unit "containerd.service" Dec 13 01:29:22.729257 ignition[960]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Dec 13 01:29:22.729257 ignition[960]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:29:22.729257 ignition[960]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:29:22.729257 ignition[960]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Dec 13 01:29:22.729257 ignition[960]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Dec 13 01:29:22.729257 ignition[960]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 13 01:29:22.729257 ignition[960]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 13 01:29:22.729257 ignition[960]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Dec 13 01:29:22.729257 ignition[960]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:29:22.729257 ignition[960]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:29:22.729257 ignition[960]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:29:22.729257 ignition[960]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file 
"/sysroot/etc/.ignition-result.json" Dec 13 01:29:22.729257 ignition[960]: INFO : files: files passed Dec 13 01:29:22.729257 ignition[960]: INFO : Ignition finished successfully Dec 13 01:29:22.733095 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:29:22.741670 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:29:22.746667 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:29:22.748241 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:29:22.748864 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:29:22.774570 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:29:22.775533 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:29:22.775533 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:29:22.777350 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:29:22.778122 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:29:22.782700 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:29:22.804358 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:29:22.804469 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:29:22.805608 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:29:22.806577 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:29:22.807712 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:29:22.812657 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:29:22.823315 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:29:22.828614 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:29:22.838073 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:29:22.838691 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:29:22.839728 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:29:22.840705 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:29:22.840798 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:29:22.841962 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:29:22.842590 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:29:22.843623 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:29:22.844633 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:29:22.845499 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:29:22.846545 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:29:22.847577 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:29:22.848678 systemd[1]: Stopped target sysinit.target - System Initialization. 
Dec 13 01:29:22.849707 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:29:22.850787 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:29:22.851735 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:29:22.851829 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:29:22.852920 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:29:22.853575 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:29:22.854480 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:29:22.854586 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:29:22.855580 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:29:22.855671 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:29:22.857118 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:29:22.857220 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:29:22.857873 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:29:22.858016 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:29:22.858714 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 01:29:22.858802 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 01:29:22.865908 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:29:22.868683 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:29:22.869150 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:29:22.869293 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:29:22.875683 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:29:22.875780 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:29:22.879928 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:29:22.880535 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:29:22.886751 ignition[1013]: INFO : Ignition 2.19.0 Dec 13 01:29:22.886751 ignition[1013]: INFO : Stage: umount Dec 13 01:29:22.887858 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:22.887858 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 01:29:22.889536 ignition[1013]: INFO : umount: umount passed Dec 13 01:29:22.889536 ignition[1013]: INFO : Ignition finished successfully Dec 13 01:29:22.891646 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:29:22.892442 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:29:22.893293 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:29:22.893370 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:29:22.895660 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:29:22.895719 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:29:22.896270 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:29:22.896312 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 01:29:22.896921 systemd[1]: Stopped target network.target - Network. 
Dec 13 01:29:22.897319 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:29:22.897366 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:29:22.898908 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:29:22.899495 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:29:22.904883 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:29:22.906441 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:29:22.907756 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:29:22.910218 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:29:22.910277 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:29:22.910886 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:29:22.910962 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:29:22.912557 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:29:22.912611 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:29:22.913830 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:29:22.913883 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:29:22.918101 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:29:22.921071 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:29:22.923633 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:29:22.924207 systemd-networkd[782]: eth0: DHCPv6 lease lost Dec 13 01:29:22.927644 systemd-networkd[782]: eth1: DHCPv6 lease lost Dec 13 01:29:22.930111 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:29:22.930224 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:29:22.933921 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:29:22.934061 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:29:22.938411 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:29:22.938541 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:29:22.941921 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:29:22.941997 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:29:22.943315 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:29:22.943377 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:29:22.957613 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:29:22.958233 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:29:22.958317 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:29:22.959091 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:29:22.959177 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:29:22.959842 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:29:22.959907 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:29:22.960953 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:29:22.961024 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Dec 13 01:29:22.962324 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:29:22.978845 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:29:22.979021 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:29:22.980903 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:29:22.981077 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:29:22.982780 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:29:22.982890 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:29:22.983728 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:29:22.983803 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:29:22.984969 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:29:22.985036 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:29:22.986640 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:29:22.986687 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:29:22.987647 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:29:22.987695 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:29:22.998657 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:29:22.999871 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:29:22.999923 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:29:23.001086 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:29:23.001131 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:29:23.002714 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:29:23.002763 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:29:23.003643 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:29:23.003687 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:23.004766 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:29:23.004893 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:29:23.006296 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:29:23.013042 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:29:23.020815 systemd[1]: Switching root. Dec 13 01:29:23.047596 systemd-journald[186]: Journal stopped Dec 13 01:29:24.054129 systemd-journald[186]: Received SIGTERM from PID 1 (systemd). 
Dec 13 01:29:24.054197 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:29:24.054210 kernel: SELinux: policy capability open_perms=1 Dec 13 01:29:24.054220 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:29:24.054234 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:29:24.054248 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:29:24.054257 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:29:24.054270 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:29:24.054279 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:29:24.054288 kernel: audit: type=1403 audit(1734053363.217:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:29:24.054298 systemd[1]: Successfully loaded SELinux policy in 46.522ms. Dec 13 01:29:24.054320 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.182ms. Dec 13 01:29:24.054331 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:29:24.054341 systemd[1]: Detected virtualization kvm. Dec 13 01:29:24.054351 systemd[1]: Detected architecture x86-64. Dec 13 01:29:24.054364 systemd[1]: Detected first boot. Dec 13 01:29:24.054378 systemd[1]: Hostname set to . Dec 13 01:29:24.054388 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:29:24.054398 zram_generator::config[1073]: No configuration found. Dec 13 01:29:24.054414 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:29:24.054424 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:29:24.054434 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 13 01:29:24.054445 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:29:24.054457 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:29:24.054467 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:29:24.054477 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:29:24.054488 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:29:24.054498 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:29:24.057563 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:29:24.057579 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:29:24.057590 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:29:24.057601 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:29:24.057616 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:29:24.057627 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:29:24.057637 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:29:24.057648 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Dec 13 01:29:24.057658 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 01:29:24.057668 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:29:24.057679 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:29:24.057688 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:29:24.057701 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:29:24.057711 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:29:24.057721 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:29:24.057731 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:29:24.057741 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:29:24.057756 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:29:24.057766 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:29:24.057778 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:29:24.057791 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:29:24.057806 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:29:24.057816 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:29:24.057826 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:29:24.057836 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:29:24.057848 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:29:24.057858 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:24.057873 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:29:24.057883 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:29:24.057893 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:29:24.057904 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:29:24.057915 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:29:24.057925 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:29:24.057935 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:29:24.057961 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:29:24.057972 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:29:24.057983 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:29:24.057993 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:29:24.058003 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:29:24.058013 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:29:24.058024 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Dec 13 01:29:24.058035 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Dec 13 01:29:24.058047 kernel: fuse: init (API version 7.39) Dec 13 01:29:24.058058 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:29:24.058068 kernel: loop: module loaded Dec 13 01:29:24.058078 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:29:24.058088 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:29:24.058098 kernel: ACPI: bus type drm_connector registered Dec 13 01:29:24.058131 systemd-journald[1167]: Collecting audit messages is disabled. Dec 13 01:29:24.058153 systemd-journald[1167]: Journal started Dec 13 01:29:24.058174 systemd-journald[1167]: Runtime Journal (/run/log/journal/182970e442834aa9a794dbd0945f9f0c) is 4.8M, max 38.4M, 33.6M free. Dec 13 01:29:24.067636 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:29:24.076601 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:29:24.080552 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:24.083590 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:29:24.085000 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:29:24.085849 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:29:24.086405 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:29:24.087117 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:29:24.087821 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:29:24.088387 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:29:24.089345 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:29:24.090375 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:29:24.091273 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:29:24.091624 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:29:24.092449 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:29:24.092755 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:29:24.093808 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:29:24.094104 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:29:24.095116 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:29:24.095367 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:29:24.096852 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:29:24.097054 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:29:24.098021 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:29:24.098335 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:29:24.099328 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:29:24.100395 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Dec 13 01:29:24.102381 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:29:24.119375 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:29:24.125621 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:29:24.131615 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:29:24.134054 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:29:24.146663 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:29:24.149828 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:29:24.151602 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:29:24.156644 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:29:24.157613 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:29:24.164843 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:29:24.171323 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:29:24.178359 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:29:24.184723 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:29:24.190716 systemd-journald[1167]: Time spent on flushing to /var/log/journal/182970e442834aa9a794dbd0945f9f0c is 28.966ms for 1124 entries. Dec 13 01:29:24.190716 systemd-journald[1167]: System Journal (/var/log/journal/182970e442834aa9a794dbd0945f9f0c) is 8.0M, max 584.8M, 576.8M free. Dec 13 01:29:24.247678 systemd-journald[1167]: Received client request to flush runtime journal. Dec 13 01:29:24.205031 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:29:24.207918 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:29:24.211844 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:29:24.221735 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:29:24.238275 udevadm[1224]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 01:29:24.251991 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:29:24.254194 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:29:24.261895 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Dec 13 01:29:24.261909 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Dec 13 01:29:24.267689 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:29:24.276678 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:29:24.306839 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:29:24.319705 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:29:24.340355 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. 
Dec 13 01:29:24.340378 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Dec 13 01:29:24.352484 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:29:24.721116 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:29:24.728684 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:29:24.752799 systemd-udevd[1244]: Using default interface naming scheme 'v255'. Dec 13 01:29:24.773138 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:29:24.783015 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:29:24.804697 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:29:24.836920 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Dec 13 01:29:24.869539 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1245) Dec 13 01:29:24.871249 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:29:24.900554 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1245) Dec 13 01:29:24.930235 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 01:29:24.946536 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:29:24.961556 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:29:24.969532 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Dec 13 01:29:24.972155 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Dec 13 01:29:24.981336 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:24.981700 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:29:24.988295 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:29:24.994234 systemd-networkd[1248]: lo: Link UP Dec 13 01:29:24.994546 systemd-networkd[1248]: lo: Gained carrier Dec 13 01:29:24.998676 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:29:24.999743 systemd-networkd[1248]: Enumeration completed Dec 13 01:29:24.999917 kernel: Console: switching to colour dummy device 80x25 Dec 13 01:29:25.000728 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 13 01:29:25.000756 kernel: [drm] features: -context_init Dec 13 01:29:25.000878 systemd-networkd[1248]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:25.000932 systemd-networkd[1248]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:29:25.002883 kernel: [drm] number of scanouts: 1 Dec 13 01:29:25.002912 kernel: [drm] number of cap sets: 0 Dec 13 01:29:25.003919 systemd-networkd[1248]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:25.005259 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Dec 13 01:29:25.005362 systemd-networkd[1248]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 01:29:25.006396 systemd-networkd[1248]: eth0: Link UP Dec 13 01:29:25.006447 systemd-networkd[1248]: eth0: Gained carrier Dec 13 01:29:25.006494 systemd-networkd[1248]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:25.018446 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Dec 13 01:29:25.018493 kernel: Console: switching to colour frame buffer device 160x50 Dec 13 01:29:25.011372 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:29:25.011696 systemd-networkd[1248]: eth1: Link UP Dec 13 01:29:25.011700 systemd-networkd[1248]: eth1: Gained carrier Dec 13 01:29:25.011714 systemd-networkd[1248]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:25.018178 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:29:25.018220 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:29:25.018261 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:25.021318 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:29:25.030295 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 13 01:29:25.035130 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:29:25.036427 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:29:25.036931 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:29:25.037175 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:29:25.037829 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:29:25.038124 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:29:25.051778 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:29:25.055574 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:29:25.055748 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:29:25.063567 systemd-networkd[1248]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:29:25.067850 systemd-networkd[1248]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 01:29:25.091015 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 01:29:25.100038 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1254) Dec 13 01:29:25.100057 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 01:29:25.100229 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 01:29:25.101372 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 01:29:25.099679 systemd-networkd[1248]: eth0: DHCPv4 address 78.47.95.198/32, gateway 172.31.1.1 acquired from 172.31.1.1 Dec 13 01:29:25.112555 kernel: EDAC MC: Ver: 3.0.0 Dec 13 01:29:25.117491 systemd-networkd[1248]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:25.170746 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:29:25.178936 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 13 01:29:25.188439 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:29:25.188800 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:25.201711 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:29:25.258141 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:25.323174 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:29:25.331746 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:29:25.350875 lvm[1312]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:29:25.381710 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:29:25.381997 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:29:25.388744 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:29:25.394257 lvm[1315]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:29:25.432978 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:29:25.435170 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:29:25.438128 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:29:25.438293 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:29:25.438987 systemd[1]: Reached target machines.target - Containers. Dec 13 01:29:25.441066 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:29:25.447710 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:29:25.451076 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:29:25.457196 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:29:25.459666 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:29:25.465853 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Dec 13 01:29:25.477494 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:29:25.482689 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:29:25.499590 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 01:29:25.497867 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:29:25.506487 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:29:25.507283 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:29:25.532579 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:29:25.553541 kernel: loop1: detected capacity change from 0 to 8 Dec 13 01:29:25.572547 kernel: loop2: detected capacity change from 0 to 140768 Dec 13 01:29:25.609537 kernel: loop3: detected capacity change from 0 to 142488 Dec 13 01:29:25.655541 kernel: loop4: detected capacity change from 0 to 211296 Dec 13 01:29:25.676307 kernel: loop5: detected capacity change from 0 to 8 Dec 13 01:29:25.680628 kernel: loop6: detected capacity change from 0 to 140768 Dec 13 01:29:25.698677 kernel: loop7: detected capacity change from 0 to 142488 Dec 13 01:29:25.717468 (sd-merge)[1336]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Dec 13 01:29:25.718102 (sd-merge)[1336]: Merged extensions into '/usr'. Dec 13 01:29:25.721995 systemd[1]: Reloading requested from client PID 1323 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:29:25.722091 systemd[1]: Reloading... Dec 13 01:29:25.789564 zram_generator::config[1362]: No configuration found. Dec 13 01:29:25.869558 ldconfig[1319]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:29:25.919678 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:29:25.975650 systemd[1]: Reloading finished in 252 ms. Dec 13 01:29:25.995611 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:29:26.000322 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:29:26.011636 systemd[1]: Starting ensure-sysext.service... Dec 13 01:29:26.018130 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:29:26.025586 systemd[1]: Reloading requested from client PID 1414 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:29:26.025606 systemd[1]: Reloading... Dec 13 01:29:26.040943 systemd-tmpfiles[1415]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:29:26.041632 systemd-tmpfiles[1415]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:29:26.042462 systemd-tmpfiles[1415]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:29:26.042854 systemd-tmpfiles[1415]: ACLs are not supported, ignoring. Dec 13 01:29:26.042998 systemd-tmpfiles[1415]: ACLs are not supported, ignoring. Dec 13 01:29:26.049469 systemd-tmpfiles[1415]: Detected autofs mount point /boot during canonicalization of boot. 
Dec 13 01:29:26.049479 systemd-tmpfiles[1415]: Skipping /boot Dec 13 01:29:26.065009 systemd-tmpfiles[1415]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:29:26.065021 systemd-tmpfiles[1415]: Skipping /boot Dec 13 01:29:26.095924 zram_generator::config[1445]: No configuration found. Dec 13 01:29:26.211491 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:29:26.230672 systemd-networkd[1248]: eth0: Gained IPv6LL Dec 13 01:29:26.271356 systemd[1]: Reloading finished in 245 ms. Dec 13 01:29:26.289571 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:29:26.300391 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:29:26.316291 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:29:26.334689 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:29:26.340560 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:29:26.349689 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:29:26.362173 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:29:26.374828 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:26.375046 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:29:26.386563 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:29:26.392575 augenrules[1520]: No rules Dec 13 01:29:26.394054 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:29:26.401690 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:29:26.404481 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:29:26.404662 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:26.407479 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:29:26.412499 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:29:26.420778 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:29:26.420995 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:29:26.424148 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:29:26.427014 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:29:26.427209 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:29:26.430341 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:29:26.430559 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:29:26.445282 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
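The docker.socket warning repeated above says that ListenStream= still points at the legacy /var/run/docker.sock path and that systemd rewrote it to /run/docker.sock at load time. A sketch of a drop-in that makes the corrected path explicit instead of relying on the automatic rewrite (the drop-in file name is arbitrary):

    # /etc/systemd/system/docker.socket.d/10-run-path.conf
    [Socket]
    # clear the inherited value, then set the non-legacy path
    ListenStream=
    ListenStream=/run/docker.sock

A systemctl daemon-reload plus a restart of docker.socket would be needed for the drop-in to take effect.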
Dec 13 01:29:26.445442 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:29:26.450987 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:29:26.457742 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:29:26.463425 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:29:26.473826 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:29:26.481779 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:29:26.482664 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:26.484987 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:29:26.485188 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:29:26.487773 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:29:26.488870 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:29:26.490485 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:29:26.490977 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:29:26.499098 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:29:26.507951 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:26.508194 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:29:26.508929 systemd-resolved[1507]: Positive Trust Anchors: Dec 13 01:29:26.508943 systemd-resolved[1507]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:29:26.508978 systemd-resolved[1507]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:29:26.512483 systemd-resolved[1507]: Using system hostname 'ci-4081-2-1-8-09b5250c1f'. Dec 13 01:29:26.518157 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:29:26.524804 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:29:26.530932 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:29:26.534794 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:29:26.537114 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:29:26.537240 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:26.540438 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
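systemd-resolved loads its built-in DNSSEC trust anchors (the root DS record plus the negative anchors for private zones) and adopts the hostname ci-4081-2-1-8-09b5250c1f. Once the service is running, the per-link DNS state it manages can be inspected from a shell; a small sketch, assuming resolvectl is available on the image:

    resolvectl status      # per-link DNS servers, search domains, DNSSEC mode
    resolvectl query ci-4081-2-1-8-09b5250c1f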
Dec 13 01:29:26.544150 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:29:26.546219 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:29:26.546592 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:29:26.547682 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:29:26.547949 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:29:26.549010 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:29:26.549205 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:29:26.550249 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:29:26.550663 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:29:26.553553 systemd[1]: Finished ensure-sysext.service. Dec 13 01:29:26.562745 systemd[1]: Reached target network.target - Network. Dec 13 01:29:26.564009 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:29:26.564771 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:29:26.565346 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:29:26.565478 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:29:26.571646 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 01:29:26.576943 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:29:26.640025 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 01:29:26.641123 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:29:26.641814 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:29:26.642494 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:29:26.644119 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:29:26.644852 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:29:26.644984 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:29:26.645589 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:29:26.646287 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:29:26.647036 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:29:26.647562 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:29:26.653723 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:29:26.656594 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:29:26.664127 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:29:26.665360 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:29:26.669691 systemd[1]: Reached target sockets.target - Socket Units. 
Dec 13 01:29:26.670241 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:29:26.670921 systemd[1]: System is tainted: cgroupsv1 Dec 13 01:29:26.670980 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:29:26.671006 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:29:26.671803 systemd-timesyncd[1569]: Contacted time server 89.58.6.143:123 (0.flatcar.pool.ntp.org). Dec 13 01:29:26.671860 systemd-timesyncd[1569]: Initial clock synchronization to Fri 2024-12-13 01:29:26.770194 UTC. Dec 13 01:29:26.676629 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:29:26.685729 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 01:29:26.697747 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:29:26.703132 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:29:26.707753 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:29:26.708284 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:29:26.713610 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:26.720571 coreos-metadata[1574]: Dec 13 01:29:26.719 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Dec 13 01:29:26.724075 coreos-metadata[1574]: Dec 13 01:29:26.723 INFO Fetch successful Dec 13 01:29:26.724075 coreos-metadata[1574]: Dec 13 01:29:26.723 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Dec 13 01:29:26.726073 coreos-metadata[1574]: Dec 13 01:29:26.725 INFO Fetch successful Dec 13 01:29:26.733574 jq[1579]: false Dec 13 01:29:26.736672 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:29:26.746224 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:29:26.750786 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:29:26.761195 extend-filesystems[1580]: Found loop4 Dec 13 01:29:26.764397 extend-filesystems[1580]: Found loop5 Dec 13 01:29:26.764397 extend-filesystems[1580]: Found loop6 Dec 13 01:29:26.764397 extend-filesystems[1580]: Found loop7 Dec 13 01:29:26.764397 extend-filesystems[1580]: Found sda Dec 13 01:29:26.764397 extend-filesystems[1580]: Found sda1 Dec 13 01:29:26.764397 extend-filesystems[1580]: Found sda2 Dec 13 01:29:26.764397 extend-filesystems[1580]: Found sda3 Dec 13 01:29:26.764397 extend-filesystems[1580]: Found usr Dec 13 01:29:26.764397 extend-filesystems[1580]: Found sda4 Dec 13 01:29:26.764397 extend-filesystems[1580]: Found sda6 Dec 13 01:29:26.764397 extend-filesystems[1580]: Found sda7 Dec 13 01:29:26.764397 extend-filesystems[1580]: Found sda9 Dec 13 01:29:26.764397 extend-filesystems[1580]: Checking size of /dev/sda9 Dec 13 01:29:26.761691 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Dec 13 01:29:26.772890 dbus-daemon[1577]: [system] SELinux support is enabled Dec 13 01:29:26.769688 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:29:26.791647 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:29:26.799684 systemd[1]: Starting systemd-logind.service - User Login Management... 
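coreos-metadata reports two successful fetches from the Hetzner metadata service. The same endpoints can be queried by hand when debugging metadata problems; a sketch, assuming curl is present (the URLs are exactly those shown in the log):

    curl -s http://169.254.169.254/hetzner/v1/metadata
    curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks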
Dec 13 01:29:26.813837 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:29:26.823951 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:29:26.830623 extend-filesystems[1580]: Resized partition /dev/sda9 Dec 13 01:29:26.836638 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:29:26.842722 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:29:26.850277 extend-filesystems[1617]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:29:26.857158 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Dec 13 01:29:26.857220 jq[1616]: true Dec 13 01:29:26.861997 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:29:26.862287 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:29:26.878793 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:29:26.879215 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:29:26.883440 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:29:26.888010 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:29:26.888284 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:29:26.925304 jq[1625]: true Dec 13 01:29:26.931038 (ntainerd)[1629]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:29:26.939021 systemd-networkd[1248]: eth1: Gained IPv6LL Dec 13 01:29:26.944212 update_engine[1611]: I20241213 01:29:26.942183 1611 main.cc:92] Flatcar Update Engine starting Dec 13 01:29:26.960537 update_engine[1611]: I20241213 01:29:26.959582 1611 update_check_scheduler.cc:74] Next update check in 7m26s Dec 13 01:29:26.962256 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:29:26.962298 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:29:26.962852 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:29:26.962875 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:29:26.963300 systemd-logind[1606]: New seat seat0. Dec 13 01:29:26.965945 systemd-logind[1606]: Watching system buttons on /dev/input/event2 (Power Button) Dec 13 01:29:26.965980 systemd-logind[1606]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:29:26.972826 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:29:26.977335 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:29:26.978812 tar[1624]: linux-amd64/helm Dec 13 01:29:26.988799 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1249) Dec 13 01:29:26.989237 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:29:26.991060 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
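The kernel line above shows the root filesystem being grown online: resize2fs takes the ext4 filesystem on /dev/sda9 from 1,617,920 to 9,393,147 4k blocks, about 6.2 GiB to about 35.8 GiB, while it stays mounted on /; extend-filesystems logs the completion just below. The equivalent manual step, assuming the same device naming as in this log, would be:

    resize2fs /dev/sda9    # online grow to fill the already-enlarged partition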
Dec 13 01:29:27.050997 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 01:29:27.065742 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:29:27.127085 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Dec 13 01:29:27.127165 bash[1666]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:29:27.133985 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:29:27.151792 systemd[1]: Starting sshkeys.service... Dec 13 01:29:27.168553 extend-filesystems[1617]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 01:29:27.168553 extend-filesystems[1617]: old_desc_blocks = 1, new_desc_blocks = 5 Dec 13 01:29:27.168553 extend-filesystems[1617]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Dec 13 01:29:27.191611 extend-filesystems[1580]: Resized filesystem in /dev/sda9 Dec 13 01:29:27.191611 extend-filesystems[1580]: Found sr0 Dec 13 01:29:27.179615 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:29:27.179976 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:29:27.211084 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 01:29:27.217872 sshd_keygen[1618]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:29:27.223809 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 01:29:27.254140 coreos-metadata[1684]: Dec 13 01:29:27.253 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Dec 13 01:29:27.256026 coreos-metadata[1684]: Dec 13 01:29:27.255 INFO Fetch successful Dec 13 01:29:27.257320 locksmithd[1650]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:29:27.258229 unknown[1684]: wrote ssh authorized keys file for user: core Dec 13 01:29:27.290696 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:29:27.299498 update-ssh-keys[1694]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:29:27.303872 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:29:27.306953 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 01:29:27.321837 systemd[1]: Finished sshkeys.service. Dec 13 01:29:27.344219 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:29:27.344631 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:29:27.360557 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:29:27.392276 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:29:27.403326 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:29:27.416547 containerd[1629]: time="2024-12-13T01:29:27.415910202Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:29:27.418794 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:29:27.419654 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:29:27.453551 containerd[1629]: time="2024-12-13T01:29:27.453039099Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 01:29:27.454904 containerd[1629]: time="2024-12-13T01:29:27.454872617Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:27.455075 containerd[1629]: time="2024-12-13T01:29:27.454995570Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:29:27.455543 containerd[1629]: time="2024-12-13T01:29:27.455132499Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:29:27.455543 containerd[1629]: time="2024-12-13T01:29:27.455306032Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:29:27.455543 containerd[1629]: time="2024-12-13T01:29:27.455320910Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:27.455543 containerd[1629]: time="2024-12-13T01:29:27.455387666Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:27.455543 containerd[1629]: time="2024-12-13T01:29:27.455401286Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:27.456168 containerd[1629]: time="2024-12-13T01:29:27.456148956Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:27.456235 containerd[1629]: time="2024-12-13T01:29:27.456221837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:27.456490 containerd[1629]: time="2024-12-13T01:29:27.456470351Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:27.456571 containerd[1629]: time="2024-12-13T01:29:27.456557440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:27.457343 containerd[1629]: time="2024-12-13T01:29:27.456700800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:27.457343 containerd[1629]: time="2024-12-13T01:29:27.456983087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:27.457343 containerd[1629]: time="2024-12-13T01:29:27.457162137Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:27.457343 containerd[1629]: time="2024-12-13T01:29:27.457174743Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Dec 13 01:29:27.457343 containerd[1629]: time="2024-12-13T01:29:27.457262817Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:29:27.457343 containerd[1629]: time="2024-12-13T01:29:27.457314431Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:29:27.463452 containerd[1629]: time="2024-12-13T01:29:27.463425016Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:29:27.463611 containerd[1629]: time="2024-12-13T01:29:27.463580952Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:29:27.463731 containerd[1629]: time="2024-12-13T01:29:27.463708125Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:29:27.465536 containerd[1629]: time="2024-12-13T01:29:27.464790759Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:29:27.465536 containerd[1629]: time="2024-12-13T01:29:27.464813711Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:29:27.465536 containerd[1629]: time="2024-12-13T01:29:27.464956817Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:29:27.465536 containerd[1629]: time="2024-12-13T01:29:27.465207490Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:29:27.465536 containerd[1629]: time="2024-12-13T01:29:27.465314145Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:29:27.465536 containerd[1629]: time="2024-12-13T01:29:27.465328313Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:29:27.465536 containerd[1629]: time="2024-12-13T01:29:27.465339570Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:29:27.465536 containerd[1629]: time="2024-12-13T01:29:27.465352411Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:29:27.465536 containerd[1629]: time="2024-12-13T01:29:27.465370860Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:29:27.465536 containerd[1629]: time="2024-12-13T01:29:27.465386002Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:29:27.465536 containerd[1629]: time="2024-12-13T01:29:27.465398841Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:29:27.465536 containerd[1629]: time="2024-12-13T01:29:27.465414298Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:29:27.465536 containerd[1629]: time="2024-12-13T01:29:27.465425900Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:29:27.465536 containerd[1629]: time="2024-12-13T01:29:27.465436925Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Dec 13 01:29:27.465830 containerd[1629]: time="2024-12-13T01:29:27.465447545Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:29:27.465830 containerd[1629]: time="2024-12-13T01:29:27.465466277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:29:27.465830 containerd[1629]: time="2024-12-13T01:29:27.465485101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:29:27.465830 containerd[1629]: time="2024-12-13T01:29:27.465496490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:29:27.465942 containerd[1629]: time="2024-12-13T01:29:27.465507951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:29:27.466006 containerd[1629]: time="2024-12-13T01:29:27.465991995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:29:27.466183 containerd[1629]: time="2024-12-13T01:29:27.466168205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:29:27.466243 containerd[1629]: time="2024-12-13T01:29:27.466231523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:29:27.466290 containerd[1629]: time="2024-12-13T01:29:27.466279870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:29:27.466332 containerd[1629]: time="2024-12-13T01:29:27.466322792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:29:27.466377 containerd[1629]: time="2024-12-13T01:29:27.466367072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:29:27.466430 containerd[1629]: time="2024-12-13T01:29:27.466418676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:29:27.466480 containerd[1629]: time="2024-12-13T01:29:27.466469802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:29:27.467538 containerd[1629]: time="2024-12-13T01:29:27.466530665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:29:27.467538 containerd[1629]: time="2024-12-13T01:29:27.466551345Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:29:27.467538 containerd[1629]: time="2024-12-13T01:29:27.466583455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:29:27.467538 containerd[1629]: time="2024-12-13T01:29:27.466594682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:29:27.467538 containerd[1629]: time="2024-12-13T01:29:27.466604216Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:29:27.467538 containerd[1629]: time="2024-12-13T01:29:27.466648709Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Dec 13 01:29:27.467538 containerd[1629]: time="2024-12-13T01:29:27.466663770Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:29:27.467538 containerd[1629]: time="2024-12-13T01:29:27.466673223Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:29:27.467538 containerd[1629]: time="2024-12-13T01:29:27.466682889Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:29:27.467538 containerd[1629]: time="2024-12-13T01:29:27.466691154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:29:27.467538 containerd[1629]: time="2024-12-13T01:29:27.466701713Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:29:27.467538 containerd[1629]: time="2024-12-13T01:29:27.466710283Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:29:27.467538 containerd[1629]: time="2024-12-13T01:29:27.466718874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:29:27.467783 containerd[1629]: time="2024-12-13T01:29:27.466959475Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:29:27.467783 containerd[1629]: time="2024-12-13T01:29:27.467045938Z" level=info msg="Connect containerd service" Dec 13 01:29:27.467783 containerd[1629]: time="2024-12-13T01:29:27.467081922Z" level=info msg="using legacy CRI server" Dec 13 01:29:27.467783 containerd[1629]: time="2024-12-13T01:29:27.467089103Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:29:27.467783 containerd[1629]: time="2024-12-13T01:29:27.467168860Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:29:27.468583 containerd[1629]: time="2024-12-13T01:29:27.468563396Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:29:27.469821 containerd[1629]: time="2024-12-13T01:29:27.469803943Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:29:27.469921 containerd[1629]: time="2024-12-13T01:29:27.469908569Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:29:27.470316 containerd[1629]: time="2024-12-13T01:29:27.470283951Z" level=info msg="Start subscribing containerd event" Dec 13 01:29:27.471016 containerd[1629]: time="2024-12-13T01:29:27.471001376Z" level=info msg="Start recovering state" Dec 13 01:29:27.471195 containerd[1629]: time="2024-12-13T01:29:27.471113234Z" level=info msg="Start event monitor" Dec 13 01:29:27.471195 containerd[1629]: time="2024-12-13T01:29:27.471140244Z" level=info msg="Start snapshots syncer" Dec 13 01:29:27.471195 containerd[1629]: time="2024-12-13T01:29:27.471149938Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:29:27.471195 containerd[1629]: time="2024-12-13T01:29:27.471158215Z" level=info msg="Start streaming server" Dec 13 01:29:27.471430 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:29:27.474124 containerd[1629]: time="2024-12-13T01:29:27.472177724Z" level=info msg="containerd successfully booted in 0.058999s" Dec 13 01:29:27.705812 tar[1624]: linux-amd64/LICENSE Dec 13 01:29:27.705812 tar[1624]: linux-amd64/README.md Dec 13 01:29:27.717851 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:29:28.030180 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:28.040724 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:29:28.041013 (kubelet)[1732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:29:28.042291 systemd[1]: Startup finished in 6.790s (kernel) + 4.869s (userspace) = 11.659s. 
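containerd finishes booting with the CRI plugin enabled but logs that no CNI network config was found in /etc/cni/net.d, so pod networking is not usable yet; on a Kubernetes node this is normal until a network add-on installs its config. For illustration only, a minimal bridge-based conflist of the kind that would satisfy that check, which would live at a path like /etc/cni/net.d/10-example.conflist (name, subnet and plugin choice are placeholders, not what this cluster will actually install):

    {
      "cniVersion": "0.4.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.85.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }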
Dec 13 01:29:28.674771 kubelet[1732]: E1213 01:29:28.674666 1732 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:29:28.679076 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:29:28.679428 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:29:38.929998 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:29:38.937990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:39.079824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:39.082590 (kubelet)[1757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:29:39.143259 kubelet[1757]: E1213 01:29:39.143194 1757 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:29:39.150654 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:29:39.150948 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:29:49.392032 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:29:49.408710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:49.539635 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:49.549165 (kubelet)[1778]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:29:49.590152 kubelet[1778]: E1213 01:29:49.590086 1778 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:29:49.593180 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:29:49.593383 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:29:59.642215 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 01:29:59.648770 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:59.805710 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
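From here on the log is dominated by a kubelet crash loop: roughly every ten seconds systemd schedules a restart of kubelet.service, and every attempt fails because /var/lib/kubelet/config.yaml does not exist yet. On kubeadm-provisioned nodes that file is written during kubeadm init or kubeadm join, so the failures are expected until the node joins a cluster. For orientation, a minimal sketch of the kind of KubeletConfiguration that ends up in that file; the field values are illustrative, not recovered from this host:

    # /var/lib/kubelet/config.yaml (normally generated during cluster join)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      anonymous:
        enabled: false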
Dec 13 01:29:59.811002 (kubelet)[1799]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:29:59.856619 kubelet[1799]: E1213 01:29:59.856500 1799 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:29:59.860334 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:29:59.861316 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:30:09.892390 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 01:30:09.899815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:30:10.042665 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:30:10.053954 (kubelet)[1821]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:30:10.095570 kubelet[1821]: E1213 01:30:10.095471 1821 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:30:10.100244 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:30:10.100624 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:30:12.411454 update_engine[1611]: I20241213 01:30:12.411360 1611 update_attempter.cc:509] Updating boot flags... Dec 13 01:30:12.475532 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1839) Dec 13 01:30:12.520853 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1841) Dec 13 01:30:12.565528 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1841) Dec 13 01:30:20.142052 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 01:30:20.154787 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:30:20.287818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:30:20.291124 (kubelet)[1863]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:30:20.332009 kubelet[1863]: E1213 01:30:20.331941 1863 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:30:20.336232 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:30:20.336552 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:30:30.391981 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Dec 13 01:30:30.405102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
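Between two kubelet restarts, update_engine wakes up to update the boot flags for the A/B update scheme, and udev briefly rescans the disks, which produces another round of the BTRFS duplicate-device warnings for /dev/sda3 seen earlier. The update and reboot-lock state that update_engine and locksmithd maintain can be checked with the client tools that ship on Flatcar; a sketch, assuming the stock binaries:

    update_engine_client -status    # current update state and last check time
    locksmithctl status             # reboot-lock semaphore and holders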
Dec 13 01:30:30.527883 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:30:30.538922 (kubelet)[1884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:30:30.580491 kubelet[1884]: E1213 01:30:30.580419 1884 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:30:30.584559 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:30:30.584800 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:30:40.642178 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Dec 13 01:30:40.654716 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:30:40.802632 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:30:40.813985 (kubelet)[1905]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:30:40.860840 kubelet[1905]: E1213 01:30:40.860777 1905 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:30:40.865272 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:30:40.865613 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:30:50.892450 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Dec 13 01:30:50.899916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:30:51.078690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:30:51.078947 (kubelet)[1927]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:30:51.117277 kubelet[1927]: E1213 01:30:51.117210 1927 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:30:51.121273 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:30:51.122382 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:31:01.141962 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Dec 13 01:31:01.147679 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:01.318656 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:31:01.323680 (kubelet)[1948]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:31:01.375647 kubelet[1948]: E1213 01:31:01.375576 1948 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:31:01.379892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:31:01.380142 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:31:11.392008 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Dec 13 01:31:11.398973 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:11.556744 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:11.557018 (kubelet)[1970]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:31:11.606127 kubelet[1970]: E1213 01:31:11.605990 1970 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:31:11.611058 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:31:11.611296 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:31:21.641982 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Dec 13 01:31:21.654982 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:21.780679 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:21.782365 (kubelet)[1991]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:31:21.823049 kubelet[1991]: E1213 01:31:21.822995 1991 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:31:21.826966 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:31:21.827201 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:31:25.991369 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:31:25.997744 systemd[1]: Started sshd@0-78.47.95.198:22-147.75.109.163:47670.service - OpenSSH per-connection server daemon (147.75.109.163:47670). Dec 13 01:31:26.984998 sshd[2001]: Accepted publickey for core from 147.75.109.163 port 47670 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:31:26.987304 sshd[2001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:26.996053 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:31:27.000723 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Dec 13 01:31:27.002926 systemd-logind[1606]: New session 1 of user core. Dec 13 01:31:27.019761 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:31:27.027039 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:31:27.037575 (systemd)[2007]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:31:27.138161 systemd[2007]: Queued start job for default target default.target. Dec 13 01:31:27.138539 systemd[2007]: Created slice app.slice - User Application Slice. Dec 13 01:31:27.138558 systemd[2007]: Reached target paths.target - Paths. Dec 13 01:31:27.138569 systemd[2007]: Reached target timers.target - Timers. Dec 13 01:31:27.143690 systemd[2007]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:31:27.152603 systemd[2007]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:31:27.152736 systemd[2007]: Reached target sockets.target - Sockets. Dec 13 01:31:27.152763 systemd[2007]: Reached target basic.target - Basic System. Dec 13 01:31:27.153432 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:31:27.154377 systemd[2007]: Reached target default.target - Main User Target. Dec 13 01:31:27.154873 systemd[2007]: Startup finished in 108ms. Dec 13 01:31:27.167864 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:31:27.856886 systemd[1]: Started sshd@1-78.47.95.198:22-147.75.109.163:45856.service - OpenSSH per-connection server daemon (147.75.109.163:45856). Dec 13 01:31:28.854811 sshd[2019]: Accepted publickey for core from 147.75.109.163 port 45856 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:31:28.856487 sshd[2019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:28.861982 systemd-logind[1606]: New session 2 of user core. Dec 13 01:31:28.867820 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:31:29.537461 sshd[2019]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:29.540789 systemd[1]: sshd@1-78.47.95.198:22-147.75.109.163:45856.service: Deactivated successfully. Dec 13 01:31:29.544163 systemd-logind[1606]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:31:29.544860 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:31:29.547658 systemd-logind[1606]: Removed session 2. Dec 13 01:31:29.711104 systemd[1]: Started sshd@2-78.47.95.198:22-147.75.109.163:45862.service - OpenSSH per-connection server daemon (147.75.109.163:45862). Dec 13 01:31:30.693146 sshd[2027]: Accepted publickey for core from 147.75.109.163 port 45862 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:31:30.694940 sshd[2027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:30.699938 systemd-logind[1606]: New session 3 of user core. Dec 13 01:31:30.709853 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:31:31.376126 sshd[2027]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:31.383756 systemd[1]: sshd@2-78.47.95.198:22-147.75.109.163:45862.service: Deactivated successfully. Dec 13 01:31:31.389005 systemd-logind[1606]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:31:31.390248 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:31:31.391763 systemd-logind[1606]: Removed session 3. 
Dec 13 01:31:31.540930 systemd[1]: Started sshd@3-78.47.95.198:22-147.75.109.163:45866.service - OpenSSH per-connection server daemon (147.75.109.163:45866). Dec 13 01:31:31.892132 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Dec 13 01:31:31.899139 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:32.042311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:32.044857 (kubelet)[2049]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:31:32.084205 kubelet[2049]: E1213 01:31:32.084139 2049 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:31:32.088080 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:31:32.088332 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:31:32.523125 sshd[2035]: Accepted publickey for core from 147.75.109.163 port 45866 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:31:32.524859 sshd[2035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:32.529298 systemd-logind[1606]: New session 4 of user core. Dec 13 01:31:32.535781 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:31:33.197845 sshd[2035]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:33.201154 systemd[1]: sshd@3-78.47.95.198:22-147.75.109.163:45866.service: Deactivated successfully. Dec 13 01:31:33.205760 systemd-logind[1606]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:31:33.206532 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:31:33.208473 systemd-logind[1606]: Removed session 4. Dec 13 01:31:33.359942 systemd[1]: Started sshd@4-78.47.95.198:22-147.75.109.163:45880.service - OpenSSH per-connection server daemon (147.75.109.163:45880). Dec 13 01:31:34.326148 sshd[2064]: Accepted publickey for core from 147.75.109.163 port 45880 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:31:34.327880 sshd[2064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:34.333314 systemd-logind[1606]: New session 5 of user core. Dec 13 01:31:34.340811 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:31:34.852590 sudo[2068]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:31:34.852978 sudo[2068]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:31:35.102799 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:31:35.105797 (dockerd)[2083]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:31:35.355001 dockerd[2083]: time="2024-12-13T01:31:35.354880110Z" level=info msg="Starting up" Dec 13 01:31:35.450244 dockerd[2083]: time="2024-12-13T01:31:35.450211867Z" level=info msg="Loading containers: start." 
Dec 13 01:31:35.548554 kernel: Initializing XFRM netlink socket Dec 13 01:31:35.635483 systemd-networkd[1248]: docker0: Link UP Dec 13 01:31:35.656873 dockerd[2083]: time="2024-12-13T01:31:35.656827516Z" level=info msg="Loading containers: done." Dec 13 01:31:35.673394 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3651902012-merged.mount: Deactivated successfully. Dec 13 01:31:35.674880 dockerd[2083]: time="2024-12-13T01:31:35.674836927Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:31:35.675787 dockerd[2083]: time="2024-12-13T01:31:35.675187844Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:31:35.675787 dockerd[2083]: time="2024-12-13T01:31:35.675386565Z" level=info msg="Daemon has completed initialization" Dec 13 01:31:35.704971 dockerd[2083]: time="2024-12-13T01:31:35.704699831Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:31:35.704901 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:31:36.799369 containerd[1629]: time="2024-12-13T01:31:36.799318555Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 01:31:37.391960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3075280535.mount: Deactivated successfully. Dec 13 01:31:40.128501 containerd[1629]: time="2024-12-13T01:31:40.128440918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:40.129575 containerd[1629]: time="2024-12-13T01:31:40.129536492Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139346" Dec 13 01:31:40.130298 containerd[1629]: time="2024-12-13T01:31:40.130260189Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:40.132564 containerd[1629]: time="2024-12-13T01:31:40.132530083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:40.133900 containerd[1629]: time="2024-12-13T01:31:40.133521381Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 3.33414638s" Dec 13 01:31:40.133900 containerd[1629]: time="2024-12-13T01:31:40.133562338Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 01:31:40.154307 containerd[1629]: time="2024-12-13T01:31:40.154235886Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 01:31:42.141991 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Dec 13 01:31:42.147958 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 13 01:31:42.337691 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:42.343349 (kubelet)[2298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:31:42.394487 kubelet[2298]: E1213 01:31:42.394355 2298 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:31:42.398454 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:31:42.398816 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:31:43.011997 containerd[1629]: time="2024-12-13T01:31:43.011928580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:43.013474 containerd[1629]: time="2024-12-13T01:31:43.013400359Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217752" Dec 13 01:31:43.014115 containerd[1629]: time="2024-12-13T01:31:43.014071318Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:43.017206 containerd[1629]: time="2024-12-13T01:31:43.017159219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:43.018282 containerd[1629]: time="2024-12-13T01:31:43.018171016Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.863715678s" Dec 13 01:31:43.018282 containerd[1629]: time="2024-12-13T01:31:43.018200351Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 01:31:43.041361 containerd[1629]: time="2024-12-13T01:31:43.041321120Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 01:31:44.773466 containerd[1629]: time="2024-12-13T01:31:44.773394935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:44.774448 containerd[1629]: time="2024-12-13T01:31:44.774396683Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332842" Dec 13 01:31:44.775326 containerd[1629]: time="2024-12-13T01:31:44.775285450Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:44.777771 containerd[1629]: time="2024-12-13T01:31:44.777711381Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:44.778779 containerd[1629]: time="2024-12-13T01:31:44.778642056Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.737149125s" Dec 13 01:31:44.778779 containerd[1629]: time="2024-12-13T01:31:44.778671993Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 01:31:44.800148 containerd[1629]: time="2024-12-13T01:31:44.800113108Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:31:45.806186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount881452521.mount: Deactivated successfully. Dec 13 01:31:46.367615 containerd[1629]: time="2024-12-13T01:31:46.367566175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:46.368635 containerd[1629]: time="2024-12-13T01:31:46.368476643Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619984" Dec 13 01:31:46.369545 containerd[1629]: time="2024-12-13T01:31:46.369492239Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:46.371409 containerd[1629]: time="2024-12-13T01:31:46.371369030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:46.372160 containerd[1629]: time="2024-12-13T01:31:46.372052932Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.57169208s" Dec 13 01:31:46.372160 containerd[1629]: time="2024-12-13T01:31:46.372080354Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 01:31:46.390989 containerd[1629]: time="2024-12-13T01:31:46.390959436Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:31:46.939039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1582801528.mount: Deactivated successfully. 
Dec 13 01:31:47.567846 containerd[1629]: time="2024-12-13T01:31:47.567789391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:47.569000 containerd[1629]: time="2024-12-13T01:31:47.568961121Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185841" Dec 13 01:31:47.569976 containerd[1629]: time="2024-12-13T01:31:47.569937332Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:47.574929 containerd[1629]: time="2024-12-13T01:31:47.574821377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:47.576610 containerd[1629]: time="2024-12-13T01:31:47.576580839Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.185435344s" Dec 13 01:31:47.576680 containerd[1629]: time="2024-12-13T01:31:47.576610756Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:31:47.596242 containerd[1629]: time="2024-12-13T01:31:47.596209263Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:31:48.138121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount678434564.mount: Deactivated successfully. 
Dec 13 01:31:48.142108 containerd[1629]: time="2024-12-13T01:31:48.142061753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:48.143006 containerd[1629]: time="2024-12-13T01:31:48.142956793Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322310" Dec 13 01:31:48.144164 containerd[1629]: time="2024-12-13T01:31:48.144118453Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:48.146731 containerd[1629]: time="2024-12-13T01:31:48.146688376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:48.147847 containerd[1629]: time="2024-12-13T01:31:48.147399160Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 551.157837ms" Dec 13 01:31:48.147847 containerd[1629]: time="2024-12-13T01:31:48.147430468Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 01:31:48.174102 containerd[1629]: time="2024-12-13T01:31:48.174041252Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 01:31:48.702095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3751006504.mount: Deactivated successfully. Dec 13 01:31:50.063810 containerd[1629]: time="2024-12-13T01:31:50.063716069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:50.067530 containerd[1629]: time="2024-12-13T01:31:50.065929804Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651705" Dec 13 01:31:50.067530 containerd[1629]: time="2024-12-13T01:31:50.066621694Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:50.071825 containerd[1629]: time="2024-12-13T01:31:50.071793482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:50.072624 containerd[1629]: time="2024-12-13T01:31:50.072590760Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 1.898509683s" Dec 13 01:31:50.072687 containerd[1629]: time="2024-12-13T01:31:50.072628040Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 01:31:52.641923 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. 
Dec 13 01:31:52.651781 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:52.825693 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:52.826834 (kubelet)[2510]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:31:52.875841 kubelet[2510]: E1213 01:31:52.875796 2510 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:31:52.881659 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:31:52.881863 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:31:52.958781 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:52.967804 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:53.008762 systemd[1]: Reloading requested from client PID 2526 ('systemctl') (unit session-5.scope)... Dec 13 01:31:53.008782 systemd[1]: Reloading... Dec 13 01:31:53.129562 zram_generator::config[2570]: No configuration found. Dec 13 01:31:53.254114 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:31:53.331376 systemd[1]: Reloading finished in 322 ms. Dec 13 01:31:53.384897 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:31:53.385033 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:31:53.385608 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:53.393272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:53.530724 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:53.533419 (kubelet)[2629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:31:53.577228 kubelet[2629]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:31:53.577228 kubelet[2629]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:31:53.577228 kubelet[2629]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 01:31:53.578671 kubelet[2629]: I1213 01:31:53.578599 2629 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:31:54.007149 kubelet[2629]: I1213 01:31:54.007095 2629 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:31:54.007149 kubelet[2629]: I1213 01:31:54.007138 2629 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:31:54.007420 kubelet[2629]: I1213 01:31:54.007393 2629 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:31:54.030788 kubelet[2629]: E1213 01:31:54.030725 2629 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://78.47.95.198:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 78.47.95.198:6443: connect: connection refused Dec 13 01:31:54.031055 kubelet[2629]: I1213 01:31:54.030943 2629 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:31:54.043497 kubelet[2629]: I1213 01:31:54.043449 2629 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:31:54.047987 kubelet[2629]: I1213 01:31:54.047955 2629 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:31:54.049054 kubelet[2629]: I1213 01:31:54.049015 2629 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:31:54.049564 kubelet[2629]: I1213 01:31:54.049536 2629 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:31:54.049564 kubelet[2629]: I1213 01:31:54.049557 2629 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:31:54.049713 kubelet[2629]: I1213 01:31:54.049677 2629 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:31:54.051004 kubelet[2629]: I1213 01:31:54.049806 2629 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:31:54.051004 
kubelet[2629]: I1213 01:31:54.049868 2629 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:31:54.051004 kubelet[2629]: I1213 01:31:54.049901 2629 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:31:54.051004 kubelet[2629]: I1213 01:31:54.049913 2629 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:31:54.051004 kubelet[2629]: W1213 01:31:54.050365 2629 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://78.47.95.198:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-8-09b5250c1f&limit=500&resourceVersion=0": dial tcp 78.47.95.198:6443: connect: connection refused Dec 13 01:31:54.051004 kubelet[2629]: E1213 01:31:54.050429 2629 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://78.47.95.198:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-8-09b5250c1f&limit=500&resourceVersion=0": dial tcp 78.47.95.198:6443: connect: connection refused Dec 13 01:31:54.052052 kubelet[2629]: W1213 01:31:54.051929 2629 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://78.47.95.198:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.95.198:6443: connect: connection refused Dec 13 01:31:54.052052 kubelet[2629]: E1213 01:31:54.051972 2629 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://78.47.95.198:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.95.198:6443: connect: connection refused Dec 13 01:31:54.052563 kubelet[2629]: I1213 01:31:54.052418 2629 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:31:54.056978 kubelet[2629]: I1213 01:31:54.056962 2629 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:31:54.058058 kubelet[2629]: W1213 01:31:54.058017 2629 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 01:31:54.058952 kubelet[2629]: I1213 01:31:54.058794 2629 server.go:1256] "Started kubelet" Dec 13 01:31:54.059008 kubelet[2629]: I1213 01:31:54.058965 2629 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:31:54.059972 kubelet[2629]: I1213 01:31:54.059942 2629 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:31:54.062894 kubelet[2629]: I1213 01:31:54.062564 2629 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:31:54.062894 kubelet[2629]: I1213 01:31:54.062796 2629 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:31:54.065300 kubelet[2629]: E1213 01:31:54.065269 2629 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://78.47.95.198:6443/api/v1/namespaces/default/events\": dial tcp 78.47.95.198:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-8-09b5250c1f.1810987bcefdb1d7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-8-09b5250c1f,UID:ci-4081-2-1-8-09b5250c1f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-8-09b5250c1f,},FirstTimestamp:2024-12-13 01:31:54.058772951 +0000 UTC m=+0.521029128,LastTimestamp:2024-12-13 01:31:54.058772951 +0000 UTC m=+0.521029128,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-8-09b5250c1f,}" Dec 13 01:31:54.066729 kubelet[2629]: I1213 01:31:54.066577 2629 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:31:54.070207 kubelet[2629]: E1213 01:31:54.070178 2629 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-8-09b5250c1f\" not found" Dec 13 01:31:54.070266 kubelet[2629]: I1213 01:31:54.070225 2629 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:31:54.070534 kubelet[2629]: I1213 01:31:54.070312 2629 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:31:54.070534 kubelet[2629]: I1213 01:31:54.070372 2629 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:31:54.070710 kubelet[2629]: W1213 01:31:54.070660 2629 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://78.47.95.198:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.95.198:6443: connect: connection refused Dec 13 01:31:54.070751 kubelet[2629]: E1213 01:31:54.070715 2629 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://78.47.95.198:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.95.198:6443: connect: connection refused Dec 13 01:31:54.071031 kubelet[2629]: E1213 01:31:54.070886 2629 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.95.198:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-8-09b5250c1f?timeout=10s\": dial tcp 78.47.95.198:6443: connect: connection refused" interval="200ms" Dec 13 01:31:54.073004 kubelet[2629]: I1213 01:31:54.072944 2629 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:31:54.073052 kubelet[2629]: I1213 
01:31:54.073020 2629 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:31:54.074839 kubelet[2629]: I1213 01:31:54.074816 2629 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:31:54.086960 kubelet[2629]: I1213 01:31:54.086860 2629 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:31:54.088315 kubelet[2629]: I1213 01:31:54.088011 2629 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:31:54.088315 kubelet[2629]: I1213 01:31:54.088035 2629 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:31:54.088315 kubelet[2629]: I1213 01:31:54.088058 2629 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:31:54.088315 kubelet[2629]: E1213 01:31:54.088102 2629 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:31:54.092900 kubelet[2629]: W1213 01:31:54.092864 2629 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://78.47.95.198:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.95.198:6443: connect: connection refused Dec 13 01:31:54.093007 kubelet[2629]: E1213 01:31:54.092998 2629 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://78.47.95.198:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.95.198:6443: connect: connection refused Dec 13 01:31:54.100398 kubelet[2629]: E1213 01:31:54.099817 2629 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:31:54.108177 kubelet[2629]: I1213 01:31:54.108059 2629 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:31:54.108177 kubelet[2629]: I1213 01:31:54.108081 2629 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:31:54.108177 kubelet[2629]: I1213 01:31:54.108133 2629 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:31:54.110274 kubelet[2629]: I1213 01:31:54.110254 2629 policy_none.go:49] "None policy: Start" Dec 13 01:31:54.111057 kubelet[2629]: I1213 01:31:54.110998 2629 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:31:54.111057 kubelet[2629]: I1213 01:31:54.111023 2629 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:31:54.119023 kubelet[2629]: I1213 01:31:54.118536 2629 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:31:54.119023 kubelet[2629]: I1213 01:31:54.118789 2629 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:31:54.121918 kubelet[2629]: E1213 01:31:54.121889 2629 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-1-8-09b5250c1f\" not found" Dec 13 01:31:54.173419 kubelet[2629]: I1213 01:31:54.173393 2629 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-8-09b5250c1f" Dec 13 01:31:54.173991 kubelet[2629]: E1213 01:31:54.173959 2629 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.95.198:6443/api/v1/nodes\": dial tcp 78.47.95.198:6443: connect: connection refused" node="ci-4081-2-1-8-09b5250c1f" Dec 13 01:31:54.189338 kubelet[2629]: I1213 01:31:54.189287 2629 topology_manager.go:215] "Topology Admit Handler" podUID="8260eff0f7b5a97cfcb883df35025561" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-8-09b5250c1f" Dec 13 01:31:54.191171 kubelet[2629]: I1213 01:31:54.191152 2629 topology_manager.go:215] "Topology Admit Handler" podUID="965f8135f8d6ebd6042442e74475e4f2" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-8-09b5250c1f" Dec 13 01:31:54.192832 kubelet[2629]: I1213 01:31:54.192610 2629 topology_manager.go:215] "Topology Admit Handler" podUID="7a3255923c842e606904472dafaeb7b3" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-8-09b5250c1f" Dec 13 01:31:54.271913 kubelet[2629]: E1213 01:31:54.271778 2629 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.95.198:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-8-09b5250c1f?timeout=10s\": dial tcp 78.47.95.198:6443: connect: connection refused" interval="400ms" Dec 13 01:31:54.372528 kubelet[2629]: I1213 01:31:54.372452 2629 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/965f8135f8d6ebd6042442e74475e4f2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-8-09b5250c1f\" (UID: \"965f8135f8d6ebd6042442e74475e4f2\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-8-09b5250c1f" Dec 13 01:31:54.372730 kubelet[2629]: I1213 01:31:54.372557 2629 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8260eff0f7b5a97cfcb883df35025561-k8s-certs\") pod 
\"kube-apiserver-ci-4081-2-1-8-09b5250c1f\" (UID: \"8260eff0f7b5a97cfcb883df35025561\") " pod="kube-system/kube-apiserver-ci-4081-2-1-8-09b5250c1f" Dec 13 01:31:54.372730 kubelet[2629]: I1213 01:31:54.372599 2629 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/965f8135f8d6ebd6042442e74475e4f2-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-8-09b5250c1f\" (UID: \"965f8135f8d6ebd6042442e74475e4f2\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-8-09b5250c1f" Dec 13 01:31:54.372730 kubelet[2629]: I1213 01:31:54.372640 2629 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/965f8135f8d6ebd6042442e74475e4f2-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-8-09b5250c1f\" (UID: \"965f8135f8d6ebd6042442e74475e4f2\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-8-09b5250c1f" Dec 13 01:31:54.372730 kubelet[2629]: I1213 01:31:54.372682 2629 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/965f8135f8d6ebd6042442e74475e4f2-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-8-09b5250c1f\" (UID: \"965f8135f8d6ebd6042442e74475e4f2\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-8-09b5250c1f" Dec 13 01:31:54.372908 kubelet[2629]: I1213 01:31:54.372744 2629 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7a3255923c842e606904472dafaeb7b3-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-8-09b5250c1f\" (UID: \"7a3255923c842e606904472dafaeb7b3\") " pod="kube-system/kube-scheduler-ci-4081-2-1-8-09b5250c1f" Dec 13 01:31:54.372908 kubelet[2629]: I1213 01:31:54.372806 2629 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8260eff0f7b5a97cfcb883df35025561-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-8-09b5250c1f\" (UID: \"8260eff0f7b5a97cfcb883df35025561\") " pod="kube-system/kube-apiserver-ci-4081-2-1-8-09b5250c1f" Dec 13 01:31:54.372908 kubelet[2629]: I1213 01:31:54.372837 2629 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8260eff0f7b5a97cfcb883df35025561-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-8-09b5250c1f\" (UID: \"8260eff0f7b5a97cfcb883df35025561\") " pod="kube-system/kube-apiserver-ci-4081-2-1-8-09b5250c1f" Dec 13 01:31:54.372908 kubelet[2629]: I1213 01:31:54.372867 2629 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/965f8135f8d6ebd6042442e74475e4f2-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-8-09b5250c1f\" (UID: \"965f8135f8d6ebd6042442e74475e4f2\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-8-09b5250c1f" Dec 13 01:31:54.376283 kubelet[2629]: I1213 01:31:54.376257 2629 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-8-09b5250c1f" Dec 13 01:31:54.376629 kubelet[2629]: E1213 01:31:54.376575 2629 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.95.198:6443/api/v1/nodes\": dial tcp 78.47.95.198:6443: connect: connection refused" 
node="ci-4081-2-1-8-09b5250c1f" Dec 13 01:31:54.499704 containerd[1629]: time="2024-12-13T01:31:54.499416324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-8-09b5250c1f,Uid:8260eff0f7b5a97cfcb883df35025561,Namespace:kube-system,Attempt:0,}" Dec 13 01:31:54.499704 containerd[1629]: time="2024-12-13T01:31:54.499455597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-8-09b5250c1f,Uid:7a3255923c842e606904472dafaeb7b3,Namespace:kube-system,Attempt:0,}" Dec 13 01:31:54.502836 containerd[1629]: time="2024-12-13T01:31:54.502655838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-8-09b5250c1f,Uid:965f8135f8d6ebd6042442e74475e4f2,Namespace:kube-system,Attempt:0,}" Dec 13 01:31:54.672856 kubelet[2629]: E1213 01:31:54.672782 2629 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.95.198:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-8-09b5250c1f?timeout=10s\": dial tcp 78.47.95.198:6443: connect: connection refused" interval="800ms" Dec 13 01:31:54.779383 kubelet[2629]: I1213 01:31:54.779348 2629 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-8-09b5250c1f" Dec 13 01:31:54.779683 kubelet[2629]: E1213 01:31:54.779656 2629 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.95.198:6443/api/v1/nodes\": dial tcp 78.47.95.198:6443: connect: connection refused" node="ci-4081-2-1-8-09b5250c1f" Dec 13 01:31:54.861478 kubelet[2629]: W1213 01:31:54.861379 2629 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://78.47.95.198:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.95.198:6443: connect: connection refused Dec 13 01:31:54.861478 kubelet[2629]: E1213 01:31:54.861460 2629 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://78.47.95.198:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.95.198:6443: connect: connection refused Dec 13 01:31:54.999986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3490350229.mount: Deactivated successfully. 
Dec 13 01:31:55.004629 containerd[1629]: time="2024-12-13T01:31:55.004564948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:31:55.006310 containerd[1629]: time="2024-12-13T01:31:55.006248640Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:31:55.007091 containerd[1629]: time="2024-12-13T01:31:55.007030600Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:31:55.008285 containerd[1629]: time="2024-12-13T01:31:55.008213080Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:31:55.009883 containerd[1629]: time="2024-12-13T01:31:55.009747774Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076" Dec 13 01:31:55.010918 containerd[1629]: time="2024-12-13T01:31:55.010758071Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:31:55.010918 containerd[1629]: time="2024-12-13T01:31:55.010849914Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:31:55.014804 containerd[1629]: time="2024-12-13T01:31:55.014765891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:31:55.016925 containerd[1629]: time="2024-12-13T01:31:55.016754276Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 517.24081ms" Dec 13 01:31:55.019311 containerd[1629]: time="2024-12-13T01:31:55.019218425Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 516.482998ms" Dec 13 01:31:55.020223 containerd[1629]: time="2024-12-13T01:31:55.020113136Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 520.613305ms" Dec 13 01:31:55.076141 kubelet[2629]: W1213 01:31:55.076072 2629 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://78.47.95.198:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.95.198:6443: connect: connection refused Dec 13 01:31:55.076319 
kubelet[2629]: E1213 01:31:55.076152 2629 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://78.47.95.198:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.95.198:6443: connect: connection refused Dec 13 01:31:55.143268 containerd[1629]: time="2024-12-13T01:31:55.142415978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:31:55.143268 containerd[1629]: time="2024-12-13T01:31:55.142483284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:31:55.143268 containerd[1629]: time="2024-12-13T01:31:55.142524341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:55.143268 containerd[1629]: time="2024-12-13T01:31:55.142660777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:55.150671 containerd[1629]: time="2024-12-13T01:31:55.150351315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:31:55.150671 containerd[1629]: time="2024-12-13T01:31:55.150397372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:31:55.150671 containerd[1629]: time="2024-12-13T01:31:55.150415926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:55.151098 containerd[1629]: time="2024-12-13T01:31:55.151065807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:55.156057 containerd[1629]: time="2024-12-13T01:31:55.155798387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:31:55.156057 containerd[1629]: time="2024-12-13T01:31:55.155846728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:31:55.156057 containerd[1629]: time="2024-12-13T01:31:55.155870914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:55.156057 containerd[1629]: time="2024-12-13T01:31:55.155981471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:55.226981 containerd[1629]: time="2024-12-13T01:31:55.226948226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-8-09b5250c1f,Uid:7a3255923c842e606904472dafaeb7b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"86fba9cd6c80a99f54bb0337600c8ecc0df3266319c9fea5323e249a75bb0fb8\"" Dec 13 01:31:55.235241 containerd[1629]: time="2024-12-13T01:31:55.235078248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-8-09b5250c1f,Uid:965f8135f8d6ebd6042442e74475e4f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3fa21c186d9b1b82a71790bb8ace29af51f6a97035b9ba4a1cc3bc49003ff79\"" Dec 13 01:31:55.236528 containerd[1629]: time="2024-12-13T01:31:55.236369013Z" level=info msg="CreateContainer within sandbox \"86fba9cd6c80a99f54bb0337600c8ecc0df3266319c9fea5323e249a75bb0fb8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:31:55.240459 containerd[1629]: time="2024-12-13T01:31:55.240349400Z" level=info msg="CreateContainer within sandbox \"a3fa21c186d9b1b82a71790bb8ace29af51f6a97035b9ba4a1cc3bc49003ff79\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:31:55.251328 containerd[1629]: time="2024-12-13T01:31:55.250598925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-8-09b5250c1f,Uid:8260eff0f7b5a97cfcb883df35025561,Namespace:kube-system,Attempt:0,} returns sandbox id \"a150076ff973919424d76a82ab54ba29f19f50b8966e1674e3ec6a100583003e\"" Dec 13 01:31:55.254453 containerd[1629]: time="2024-12-13T01:31:55.254375810Z" level=info msg="CreateContainer within sandbox \"a150076ff973919424d76a82ab54ba29f19f50b8966e1674e3ec6a100583003e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:31:55.257435 containerd[1629]: time="2024-12-13T01:31:55.257415209Z" level=info msg="CreateContainer within sandbox \"86fba9cd6c80a99f54bb0337600c8ecc0df3266319c9fea5323e249a75bb0fb8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c8ddac9656e2d39328035ad87ffb9047d11115aba5713277d3632c5df393c559\"" Dec 13 01:31:55.258140 containerd[1629]: time="2024-12-13T01:31:55.258000780Z" level=info msg="StartContainer for \"c8ddac9656e2d39328035ad87ffb9047d11115aba5713277d3632c5df393c559\"" Dec 13 01:31:55.263891 containerd[1629]: time="2024-12-13T01:31:55.263666032Z" level=info msg="CreateContainer within sandbox \"a3fa21c186d9b1b82a71790bb8ace29af51f6a97035b9ba4a1cc3bc49003ff79\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f813a95dd390f06de23c84fce9d10e906132509aee284d21234f2f245f2078e7\"" Dec 13 01:31:55.264327 containerd[1629]: time="2024-12-13T01:31:55.264261691Z" level=info msg="StartContainer for \"f813a95dd390f06de23c84fce9d10e906132509aee284d21234f2f245f2078e7\"" Dec 13 01:31:55.269418 containerd[1629]: time="2024-12-13T01:31:55.269363675Z" level=info msg="CreateContainer within sandbox \"a150076ff973919424d76a82ab54ba29f19f50b8966e1674e3ec6a100583003e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ca8e00f96615f383bf0309d7a2656fdaf25c177780b2ed6a8dab689bb46eacfa\"" Dec 13 01:31:55.271036 containerd[1629]: time="2024-12-13T01:31:55.270559792Z" level=info msg="StartContainer for \"ca8e00f96615f383bf0309d7a2656fdaf25c177780b2ed6a8dab689bb46eacfa\"" Dec 13 01:31:55.375768 containerd[1629]: time="2024-12-13T01:31:55.375687109Z" level=info 
msg="StartContainer for \"f813a95dd390f06de23c84fce9d10e906132509aee284d21234f2f245f2078e7\" returns successfully" Dec 13 01:31:55.376552 containerd[1629]: time="2024-12-13T01:31:55.375903106Z" level=info msg="StartContainer for \"c8ddac9656e2d39328035ad87ffb9047d11115aba5713277d3632c5df393c559\" returns successfully" Dec 13 01:31:55.381595 containerd[1629]: time="2024-12-13T01:31:55.381572785Z" level=info msg="StartContainer for \"ca8e00f96615f383bf0309d7a2656fdaf25c177780b2ed6a8dab689bb46eacfa\" returns successfully" Dec 13 01:31:55.474497 kubelet[2629]: E1213 01:31:55.474464 2629 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.95.198:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-8-09b5250c1f?timeout=10s\": dial tcp 78.47.95.198:6443: connect: connection refused" interval="1.6s" Dec 13 01:31:55.575947 kubelet[2629]: W1213 01:31:55.575346 2629 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://78.47.95.198:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.95.198:6443: connect: connection refused Dec 13 01:31:55.575947 kubelet[2629]: E1213 01:31:55.575401 2629 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://78.47.95.198:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.95.198:6443: connect: connection refused Dec 13 01:31:55.584694 kubelet[2629]: I1213 01:31:55.584151 2629 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-8-09b5250c1f" Dec 13 01:31:55.584923 kubelet[2629]: E1213 01:31:55.584910 2629 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.95.198:6443/api/v1/nodes\": dial tcp 78.47.95.198:6443: connect: connection refused" node="ci-4081-2-1-8-09b5250c1f" Dec 13 01:31:55.616393 kubelet[2629]: W1213 01:31:55.616327 2629 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://78.47.95.198:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-8-09b5250c1f&limit=500&resourceVersion=0": dial tcp 78.47.95.198:6443: connect: connection refused Dec 13 01:31:55.616556 kubelet[2629]: E1213 01:31:55.616545 2629 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://78.47.95.198:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-8-09b5250c1f&limit=500&resourceVersion=0": dial tcp 78.47.95.198:6443: connect: connection refused Dec 13 01:31:56.967378 kubelet[2629]: E1213 01:31:56.967331 2629 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081-2-1-8-09b5250c1f" not found Dec 13 01:31:57.077891 kubelet[2629]: E1213 01:31:57.077834 2629 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-2-1-8-09b5250c1f\" not found" node="ci-4081-2-1-8-09b5250c1f" Dec 13 01:31:57.187462 kubelet[2629]: I1213 01:31:57.187416 2629 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-8-09b5250c1f" Dec 13 01:31:57.198428 kubelet[2629]: I1213 01:31:57.198398 2629 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-8-09b5250c1f" Dec 13 01:31:57.206168 kubelet[2629]: E1213 01:31:57.206141 2629 kubelet_node_status.go:462] 
"Error getting the current node from lister" err="node \"ci-4081-2-1-8-09b5250c1f\" not found" Dec 13 01:31:57.306706 kubelet[2629]: E1213 01:31:57.306574 2629 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-8-09b5250c1f\" not found" Dec 13 01:31:57.407459 kubelet[2629]: E1213 01:31:57.407417 2629 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-8-09b5250c1f\" not found" Dec 13 01:31:57.508061 kubelet[2629]: E1213 01:31:57.508007 2629 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-8-09b5250c1f\" not found" Dec 13 01:31:57.608731 kubelet[2629]: E1213 01:31:57.608448 2629 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-8-09b5250c1f\" not found" Dec 13 01:31:57.709163 kubelet[2629]: E1213 01:31:57.709107 2629 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-8-09b5250c1f\" not found" Dec 13 01:31:57.809947 kubelet[2629]: E1213 01:31:57.809820 2629 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-8-09b5250c1f\" not found" Dec 13 01:31:57.910688 kubelet[2629]: E1213 01:31:57.910647 2629 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-8-09b5250c1f\" not found" Dec 13 01:31:58.011658 kubelet[2629]: E1213 01:31:58.011602 2629 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-8-09b5250c1f\" not found" Dec 13 01:31:58.111758 kubelet[2629]: E1213 01:31:58.111689 2629 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-8-09b5250c1f\" not found" Dec 13 01:31:58.212891 kubelet[2629]: E1213 01:31:58.212760 2629 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-8-09b5250c1f\" not found" Dec 13 01:31:58.313359 kubelet[2629]: E1213 01:31:58.313304 2629 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-8-09b5250c1f\" not found" Dec 13 01:31:58.414001 kubelet[2629]: E1213 01:31:58.413936 2629 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-8-09b5250c1f\" not found" Dec 13 01:31:58.514820 kubelet[2629]: E1213 01:31:58.514683 2629 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-8-09b5250c1f\" not found" Dec 13 01:31:59.052861 kubelet[2629]: I1213 01:31:59.052810 2629 apiserver.go:52] "Watching apiserver" Dec 13 01:31:59.070707 kubelet[2629]: I1213 01:31:59.070680 2629 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:31:59.141481 systemd[1]: Reloading requested from client PID 2904 ('systemctl') (unit session-5.scope)... Dec 13 01:31:59.141549 systemd[1]: Reloading... Dec 13 01:31:59.214544 zram_generator::config[2944]: No configuration found. Dec 13 01:31:59.337143 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:31:59.406170 systemd[1]: Reloading finished in 264 ms. Dec 13 01:31:59.442185 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 13 01:31:59.442442 kubelet[2629]: I1213 01:31:59.442202 2629 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:31:59.463608 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:31:59.464158 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:59.469803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:59.591188 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:59.594883 (kubelet)[3005]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:31:59.652615 kubelet[3005]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:31:59.652615 kubelet[3005]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:31:59.652615 kubelet[3005]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:31:59.654170 kubelet[3005]: I1213 01:31:59.654050 3005 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:31:59.658378 kubelet[3005]: I1213 01:31:59.658243 3005 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:31:59.658378 kubelet[3005]: I1213 01:31:59.658258 3005 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:31:59.658480 kubelet[3005]: I1213 01:31:59.658465 3005 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:31:59.659697 kubelet[3005]: I1213 01:31:59.659672 3005 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:31:59.664030 kubelet[3005]: I1213 01:31:59.663550 3005 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:31:59.682311 kubelet[3005]: I1213 01:31:59.682279 3005 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:31:59.682967 kubelet[3005]: I1213 01:31:59.682950 3005 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:31:59.683114 kubelet[3005]: I1213 01:31:59.683094 3005 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:31:59.683203 kubelet[3005]: I1213 01:31:59.683122 3005 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:31:59.683203 kubelet[3005]: I1213 01:31:59.683131 3005 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:31:59.683203 kubelet[3005]: I1213 01:31:59.683175 3005 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:31:59.683303 kubelet[3005]: I1213 01:31:59.683282 3005 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:31:59.683303 kubelet[3005]: I1213 01:31:59.683299 3005 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:31:59.685302 kubelet[3005]: I1213 01:31:59.683840 3005 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:31:59.685302 kubelet[3005]: I1213 01:31:59.683856 3005 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:31:59.686703 kubelet[3005]: I1213 01:31:59.686655 3005 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:31:59.687265 kubelet[3005]: I1213 01:31:59.686999 3005 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:31:59.687982 kubelet[3005]: I1213 01:31:59.687966 3005 server.go:1256] "Started kubelet" Dec 13 01:31:59.698144 kubelet[3005]: I1213 01:31:59.698125 3005 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:31:59.698569 kubelet[3005]: I1213 01:31:59.698557 3005 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:31:59.700497 kubelet[3005]: I1213 01:31:59.700236 3005 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:31:59.700797 kubelet[3005]: 
I1213 01:31:59.700778 3005 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:31:59.703540 kubelet[3005]: I1213 01:31:59.703501 3005 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:31:59.707306 kubelet[3005]: I1213 01:31:59.707293 3005 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:31:59.709863 kubelet[3005]: I1213 01:31:59.709850 3005 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:31:59.710484 kubelet[3005]: I1213 01:31:59.710029 3005 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:31:59.711547 kubelet[3005]: E1213 01:31:59.711535 3005 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:31:59.712617 kubelet[3005]: I1213 01:31:59.712586 3005 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:31:59.712831 kubelet[3005]: I1213 01:31:59.712814 3005 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:31:59.717549 kubelet[3005]: I1213 01:31:59.715866 3005 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:31:59.718675 kubelet[3005]: I1213 01:31:59.718656 3005 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:31:59.719861 kubelet[3005]: I1213 01:31:59.719710 3005 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:31:59.719861 kubelet[3005]: I1213 01:31:59.719746 3005 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:31:59.719861 kubelet[3005]: I1213 01:31:59.719762 3005 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:31:59.719861 kubelet[3005]: E1213 01:31:59.719807 3005 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:31:59.798415 kubelet[3005]: I1213 01:31:59.798385 3005 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:31:59.798415 kubelet[3005]: I1213 01:31:59.798406 3005 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:31:59.798592 kubelet[3005]: I1213 01:31:59.798433 3005 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:31:59.799174 kubelet[3005]: I1213 01:31:59.799156 3005 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:31:59.799277 kubelet[3005]: I1213 01:31:59.799264 3005 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:31:59.799365 kubelet[3005]: I1213 01:31:59.799352 3005 policy_none.go:49] "None policy: Start" Dec 13 01:31:59.800528 kubelet[3005]: I1213 01:31:59.800496 3005 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:31:59.800615 kubelet[3005]: I1213 01:31:59.800606 3005 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:31:59.800869 kubelet[3005]: I1213 01:31:59.800854 3005 state_mem.go:75] "Updated machine memory state" Dec 13 01:31:59.802359 kubelet[3005]: I1213 01:31:59.802340 3005 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:31:59.806075 kubelet[3005]: I1213 01:31:59.806050 3005 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:31:59.820310 kubelet[3005]: I1213 
01:31:59.820196 3005 topology_manager.go:215] "Topology Admit Handler" podUID="8260eff0f7b5a97cfcb883df35025561" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-8-09b5250c1f" Dec 13 01:31:59.820432 kubelet[3005]: I1213 01:31:59.820351 3005 topology_manager.go:215] "Topology Admit Handler" podUID="965f8135f8d6ebd6042442e74475e4f2" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-8-09b5250c1f" Dec 13 01:31:59.820432 kubelet[3005]: I1213 01:31:59.820417 3005 topology_manager.go:215] "Topology Admit Handler" podUID="7a3255923c842e606904472dafaeb7b3" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-8-09b5250c1f" Dec 13 01:31:59.911197 kubelet[3005]: I1213 01:31:59.910811 3005 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-8-09b5250c1f" Dec 13 01:31:59.917759 kubelet[3005]: I1213 01:31:59.917702 3005 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-2-1-8-09b5250c1f" Dec 13 01:31:59.917889 kubelet[3005]: I1213 01:31:59.917771 3005 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-8-09b5250c1f" Dec 13 01:32:00.010766 kubelet[3005]: I1213 01:32:00.010694 3005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8260eff0f7b5a97cfcb883df35025561-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-8-09b5250c1f\" (UID: \"8260eff0f7b5a97cfcb883df35025561\") " pod="kube-system/kube-apiserver-ci-4081-2-1-8-09b5250c1f" Dec 13 01:32:00.011014 kubelet[3005]: I1213 01:32:00.010792 3005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8260eff0f7b5a97cfcb883df35025561-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-8-09b5250c1f\" (UID: \"8260eff0f7b5a97cfcb883df35025561\") " pod="kube-system/kube-apiserver-ci-4081-2-1-8-09b5250c1f" Dec 13 01:32:00.011014 kubelet[3005]: I1213 01:32:00.010831 3005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/965f8135f8d6ebd6042442e74475e4f2-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-8-09b5250c1f\" (UID: \"965f8135f8d6ebd6042442e74475e4f2\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-8-09b5250c1f" Dec 13 01:32:00.011014 kubelet[3005]: I1213 01:32:00.010873 3005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7a3255923c842e606904472dafaeb7b3-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-8-09b5250c1f\" (UID: \"7a3255923c842e606904472dafaeb7b3\") " pod="kube-system/kube-scheduler-ci-4081-2-1-8-09b5250c1f" Dec 13 01:32:00.011105 kubelet[3005]: I1213 01:32:00.011087 3005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8260eff0f7b5a97cfcb883df35025561-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-8-09b5250c1f\" (UID: \"8260eff0f7b5a97cfcb883df35025561\") " pod="kube-system/kube-apiserver-ci-4081-2-1-8-09b5250c1f" Dec 13 01:32:00.011491 kubelet[3005]: I1213 01:32:00.011156 3005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/965f8135f8d6ebd6042442e74475e4f2-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4081-2-1-8-09b5250c1f\" (UID: \"965f8135f8d6ebd6042442e74475e4f2\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-8-09b5250c1f" Dec 13 01:32:00.011491 kubelet[3005]: I1213 01:32:00.011198 3005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/965f8135f8d6ebd6042442e74475e4f2-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-8-09b5250c1f\" (UID: \"965f8135f8d6ebd6042442e74475e4f2\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-8-09b5250c1f" Dec 13 01:32:00.011491 kubelet[3005]: I1213 01:32:00.011278 3005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/965f8135f8d6ebd6042442e74475e4f2-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-8-09b5250c1f\" (UID: \"965f8135f8d6ebd6042442e74475e4f2\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-8-09b5250c1f" Dec 13 01:32:00.011491 kubelet[3005]: I1213 01:32:00.011348 3005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/965f8135f8d6ebd6042442e74475e4f2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-8-09b5250c1f\" (UID: \"965f8135f8d6ebd6042442e74475e4f2\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-8-09b5250c1f" Dec 13 01:32:00.510313 sudo[2068]: pam_unix(sudo:session): session closed for user root Dec 13 01:32:00.669899 sshd[2064]: pam_unix(sshd:session): session closed for user core Dec 13 01:32:00.676280 systemd[1]: sshd@4-78.47.95.198:22-147.75.109.163:45880.service: Deactivated successfully. Dec 13 01:32:00.677853 systemd-logind[1606]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:32:00.680498 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:32:00.682098 systemd-logind[1606]: Removed session 5. 
Dec 13 01:32:00.686209 kubelet[3005]: I1213 01:32:00.686175 3005 apiserver.go:52] "Watching apiserver" Dec 13 01:32:00.710109 kubelet[3005]: I1213 01:32:00.710034 3005 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:32:00.776103 kubelet[3005]: I1213 01:32:00.774979 3005 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-2-1-8-09b5250c1f" podStartSLOduration=1.774923617 podStartE2EDuration="1.774923617s" podCreationTimestamp="2024-12-13 01:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:00.76585445 +0000 UTC m=+1.165356048" watchObservedRunningTime="2024-12-13 01:32:00.774923617 +0000 UTC m=+1.174425194" Dec 13 01:32:00.783730 kubelet[3005]: I1213 01:32:00.783667 3005 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-2-1-8-09b5250c1f" podStartSLOduration=1.7836312539999999 podStartE2EDuration="1.783631254s" podCreationTimestamp="2024-12-13 01:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:00.776581463 +0000 UTC m=+1.176083029" watchObservedRunningTime="2024-12-13 01:32:00.783631254 +0000 UTC m=+1.183132821" Dec 13 01:32:00.796407 kubelet[3005]: I1213 01:32:00.796175 3005 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-2-1-8-09b5250c1f" podStartSLOduration=1.796136749 podStartE2EDuration="1.796136749s" podCreationTimestamp="2024-12-13 01:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:00.784391121 +0000 UTC m=+1.183892689" watchObservedRunningTime="2024-12-13 01:32:00.796136749 +0000 UTC m=+1.195638416" Dec 13 01:32:13.497383 kubelet[3005]: I1213 01:32:13.497241 3005 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:32:13.500944 containerd[1629]: time="2024-12-13T01:32:13.498256968Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
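The pod_startup_latency_tracker entries above print podCreationTimestamp, observedRunningTime, and a derived podStartE2EDuration. A small sketch of that arithmetic, using only the wall-clock portions of the logged timestamps (the "m=+..." monotonic suffix is dropped), is shown below; the result only approximates the logged SLO duration, which the kubelet computes from its own internal reference points.

```go
// Illustrative sketch: difference between the wall-clock timestamps printed by
// the pod startup latency tracker above. The monotonic "m=+..." suffix is
// ignored, so the result only approximates the logged podStartE2EDuration.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching "2024-12-13 01:31:59 +0000 UTC"; Go accepts a fractional
	// second in the input even though the layout does not declare one.
	const layout = "2006-01-02 15:04:05 -0700 MST"

	created, err := time.Parse(layout, "2024-12-13 01:31:59 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observedRunning, err := time.Parse(layout, "2024-12-13 01:32:00.76585445 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// Prints ~1.766s, close to the ~1.77s podStartE2EDuration reported above.
	fmt.Println("observed startup duration:", observedRunning.Sub(created))
}
```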
Dec 13 01:32:13.501295 kubelet[3005]: I1213 01:32:13.500382 3005 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:32:14.147727 kubelet[3005]: I1213 01:32:14.147658 3005 topology_manager.go:215] "Topology Admit Handler" podUID="7cf7b406-eae4-44de-b24d-ae5ec7a7220f" podNamespace="kube-system" podName="kube-proxy-lwmfd" Dec 13 01:32:14.151662 kubelet[3005]: I1213 01:32:14.150433 3005 topology_manager.go:215] "Topology Admit Handler" podUID="a675b5ba-1070-467b-bcc0-b27022a88d2e" podNamespace="kube-flannel" podName="kube-flannel-ds-nnhm8" Dec 13 01:32:14.205886 kubelet[3005]: I1213 01:32:14.205839 3005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a675b5ba-1070-467b-bcc0-b27022a88d2e-run\") pod \"kube-flannel-ds-nnhm8\" (UID: \"a675b5ba-1070-467b-bcc0-b27022a88d2e\") " pod="kube-flannel/kube-flannel-ds-nnhm8" Dec 13 01:32:14.205886 kubelet[3005]: I1213 01:32:14.205896 3005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7cf7b406-eae4-44de-b24d-ae5ec7a7220f-kube-proxy\") pod \"kube-proxy-lwmfd\" (UID: \"7cf7b406-eae4-44de-b24d-ae5ec7a7220f\") " pod="kube-system/kube-proxy-lwmfd" Dec 13 01:32:14.206054 kubelet[3005]: I1213 01:32:14.205920 3005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cf7b406-eae4-44de-b24d-ae5ec7a7220f-xtables-lock\") pod \"kube-proxy-lwmfd\" (UID: \"7cf7b406-eae4-44de-b24d-ae5ec7a7220f\") " pod="kube-system/kube-proxy-lwmfd" Dec 13 01:32:14.206054 kubelet[3005]: I1213 01:32:14.205940 3005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cf7b406-eae4-44de-b24d-ae5ec7a7220f-lib-modules\") pod \"kube-proxy-lwmfd\" (UID: \"7cf7b406-eae4-44de-b24d-ae5ec7a7220f\") " pod="kube-system/kube-proxy-lwmfd" Dec 13 01:32:14.206054 kubelet[3005]: I1213 01:32:14.205959 3005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdwms\" (UniqueName: \"kubernetes.io/projected/7cf7b406-eae4-44de-b24d-ae5ec7a7220f-kube-api-access-sdwms\") pod \"kube-proxy-lwmfd\" (UID: \"7cf7b406-eae4-44de-b24d-ae5ec7a7220f\") " pod="kube-system/kube-proxy-lwmfd" Dec 13 01:32:14.206054 kubelet[3005]: I1213 01:32:14.205976 3005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/a675b5ba-1070-467b-bcc0-b27022a88d2e-cni-plugin\") pod \"kube-flannel-ds-nnhm8\" (UID: \"a675b5ba-1070-467b-bcc0-b27022a88d2e\") " pod="kube-flannel/kube-flannel-ds-nnhm8" Dec 13 01:32:14.206054 kubelet[3005]: I1213 01:32:14.206011 3005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/a675b5ba-1070-467b-bcc0-b27022a88d2e-cni\") pod \"kube-flannel-ds-nnhm8\" (UID: \"a675b5ba-1070-467b-bcc0-b27022a88d2e\") " pod="kube-flannel/kube-flannel-ds-nnhm8" Dec 13 01:32:14.206156 kubelet[3005]: I1213 01:32:14.206028 3005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/a675b5ba-1070-467b-bcc0-b27022a88d2e-flannel-cfg\") pod \"kube-flannel-ds-nnhm8\" (UID: 
\"a675b5ba-1070-467b-bcc0-b27022a88d2e\") " pod="kube-flannel/kube-flannel-ds-nnhm8" Dec 13 01:32:14.206156 kubelet[3005]: I1213 01:32:14.206044 3005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a675b5ba-1070-467b-bcc0-b27022a88d2e-xtables-lock\") pod \"kube-flannel-ds-nnhm8\" (UID: \"a675b5ba-1070-467b-bcc0-b27022a88d2e\") " pod="kube-flannel/kube-flannel-ds-nnhm8" Dec 13 01:32:14.206156 kubelet[3005]: I1213 01:32:14.206061 3005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbgbh\" (UniqueName: \"kubernetes.io/projected/a675b5ba-1070-467b-bcc0-b27022a88d2e-kube-api-access-hbgbh\") pod \"kube-flannel-ds-nnhm8\" (UID: \"a675b5ba-1070-467b-bcc0-b27022a88d2e\") " pod="kube-flannel/kube-flannel-ds-nnhm8" Dec 13 01:32:14.313661 kubelet[3005]: E1213 01:32:14.313611 3005 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 01:32:14.313661 kubelet[3005]: E1213 01:32:14.313665 3005 projected.go:200] Error preparing data for projected volume kube-api-access-sdwms for pod kube-system/kube-proxy-lwmfd: configmap "kube-root-ca.crt" not found Dec 13 01:32:14.313815 kubelet[3005]: E1213 01:32:14.313721 3005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7cf7b406-eae4-44de-b24d-ae5ec7a7220f-kube-api-access-sdwms podName:7cf7b406-eae4-44de-b24d-ae5ec7a7220f nodeName:}" failed. No retries permitted until 2024-12-13 01:32:14.813701004 +0000 UTC m=+15.213202571 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-sdwms" (UniqueName: "kubernetes.io/projected/7cf7b406-eae4-44de-b24d-ae5ec7a7220f-kube-api-access-sdwms") pod "kube-proxy-lwmfd" (UID: "7cf7b406-eae4-44de-b24d-ae5ec7a7220f") : configmap "kube-root-ca.crt" not found Dec 13 01:32:14.314054 kubelet[3005]: E1213 01:32:14.313607 3005 projected.go:294] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 01:32:14.314054 kubelet[3005]: E1213 01:32:14.313991 3005 projected.go:200] Error preparing data for projected volume kube-api-access-hbgbh for pod kube-flannel/kube-flannel-ds-nnhm8: configmap "kube-root-ca.crt" not found Dec 13 01:32:14.314054 kubelet[3005]: E1213 01:32:14.314034 3005 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a675b5ba-1070-467b-bcc0-b27022a88d2e-kube-api-access-hbgbh podName:a675b5ba-1070-467b-bcc0-b27022a88d2e nodeName:}" failed. No retries permitted until 2024-12-13 01:32:14.814021157 +0000 UTC m=+15.213522734 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hbgbh" (UniqueName: "kubernetes.io/projected/a675b5ba-1070-467b-bcc0-b27022a88d2e-kube-api-access-hbgbh") pod "kube-flannel-ds-nnhm8" (UID: "a675b5ba-1070-467b-bcc0-b27022a88d2e") : configmap "kube-root-ca.crt" not found Dec 13 01:32:15.064976 containerd[1629]: time="2024-12-13T01:32:15.064876116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lwmfd,Uid:7cf7b406-eae4-44de-b24d-ae5ec7a7220f,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:15.066771 containerd[1629]: time="2024-12-13T01:32:15.066697282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-nnhm8,Uid:a675b5ba-1070-467b-bcc0-b27022a88d2e,Namespace:kube-flannel,Attempt:0,}" Dec 13 01:32:15.103616 containerd[1629]: time="2024-12-13T01:32:15.103490722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:15.103616 containerd[1629]: time="2024-12-13T01:32:15.103570793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:15.103616 containerd[1629]: time="2024-12-13T01:32:15.103585250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:15.103852 containerd[1629]: time="2024-12-13T01:32:15.103683976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:15.113467 containerd[1629]: time="2024-12-13T01:32:15.113175106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:15.113467 containerd[1629]: time="2024-12-13T01:32:15.113234719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:15.113467 containerd[1629]: time="2024-12-13T01:32:15.113248925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:15.113672 containerd[1629]: time="2024-12-13T01:32:15.113363601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:15.171811 containerd[1629]: time="2024-12-13T01:32:15.171529109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lwmfd,Uid:7cf7b406-eae4-44de-b24d-ae5ec7a7220f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8569f74c4736645eef25d1e86b6ad953dce51b5b793c05f50e9c532a69ed6fd\"" Dec 13 01:32:15.179841 containerd[1629]: time="2024-12-13T01:32:15.179492405Z" level=info msg="CreateContainer within sandbox \"d8569f74c4736645eef25d1e86b6ad953dce51b5b793c05f50e9c532a69ed6fd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:32:15.186935 containerd[1629]: time="2024-12-13T01:32:15.186754380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-nnhm8,Uid:a675b5ba-1070-467b-bcc0-b27022a88d2e,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"2f3d6cee03572bdbbce5ee6a87014c2b05f7af270f18f06acdb83feaaf3b96b4\"" Dec 13 01:32:15.189554 containerd[1629]: time="2024-12-13T01:32:15.189469990Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Dec 13 01:32:15.197740 containerd[1629]: time="2024-12-13T01:32:15.197701370Z" level=info msg="CreateContainer within sandbox \"d8569f74c4736645eef25d1e86b6ad953dce51b5b793c05f50e9c532a69ed6fd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"61feceada10ad6f230dd17832298852c42335a95dc371078dc58eb6f607c2ba1\"" Dec 13 01:32:15.199548 containerd[1629]: time="2024-12-13T01:32:15.198258489Z" level=info msg="StartContainer for \"61feceada10ad6f230dd17832298852c42335a95dc371078dc58eb6f607c2ba1\"" Dec 13 01:32:15.267783 containerd[1629]: time="2024-12-13T01:32:15.267738358Z" level=info msg="StartContainer for \"61feceada10ad6f230dd17832298852c42335a95dc371078dc58eb6f607c2ba1\" returns successfully" Dec 13 01:32:15.807174 kubelet[3005]: I1213 01:32:15.807091 3005 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lwmfd" podStartSLOduration=1.8069477379999999 podStartE2EDuration="1.806947738s" podCreationTimestamp="2024-12-13 01:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:15.805841956 +0000 UTC m=+16.205343533" watchObservedRunningTime="2024-12-13 01:32:15.806947738 +0000 UTC m=+16.206449315" Dec 13 01:32:17.664207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3021267211.mount: Deactivated successfully. 
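A few entries above, the projected kube-api-access volumes fail to mount because the "kube-root-ca.crt" configmap does not exist yet, and the kubelet schedules another attempt after the logged durationBeforeRetry of 500ms. The sketch below is only a schematic of that retry-with-delay pattern, not the kubelet's actual nestedpendingoperations backoff; the doubling delay, cap, and placeholder file path are assumptions for illustration.

```go
// Schematic only (not the kubelet's backoff implementation): retry an
// operation with a delay that starts at the 500ms "durationBeforeRetry" seen
// above; the doubling and the 30s cap are illustrative assumptions.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// mountProjectedVolume stands in for the MountVolume.SetUp call in the log; it
// fails until the placeholder file (standing in for kube-root-ca.crt) exists.
func mountProjectedVolume(path string) error {
	if _, err := os.Stat(path); err != nil {
		return errors.New("configmap \"kube-root-ca.crt\" not found")
	}
	return nil
}

func main() {
	delay := 500 * time.Millisecond // initial durationBeforeRetry from the log
	const maxDelay = 30 * time.Second

	for {
		if err := mountProjectedVolume("/tmp/kube-root-ca.crt"); err != nil {
			fmt.Printf("mount failed: %v; retrying in %v\n", err, delay)
			time.Sleep(delay)
			delay *= 2
			if delay > maxDelay {
				delay = maxDelay
			}
			continue
		}
		fmt.Println("volume mounted")
		return
	}
}
```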
Dec 13 01:32:17.695568 containerd[1629]: time="2024-12-13T01:32:17.695520304Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:17.696530 containerd[1629]: time="2024-12-13T01:32:17.696365394Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852936" Dec 13 01:32:17.697308 containerd[1629]: time="2024-12-13T01:32:17.697265348Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:17.698964 containerd[1629]: time="2024-12-13T01:32:17.698928788Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:17.700051 containerd[1629]: time="2024-12-13T01:32:17.699652501Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.510021627s" Dec 13 01:32:17.700051 containerd[1629]: time="2024-12-13T01:32:17.699679221Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Dec 13 01:32:17.701841 containerd[1629]: time="2024-12-13T01:32:17.701803338Z" level=info msg="CreateContainer within sandbox \"2f3d6cee03572bdbbce5ee6a87014c2b05f7af270f18f06acdb83feaaf3b96b4\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Dec 13 01:32:17.715615 containerd[1629]: time="2024-12-13T01:32:17.715575818Z" level=info msg="CreateContainer within sandbox \"2f3d6cee03572bdbbce5ee6a87014c2b05f7af270f18f06acdb83feaaf3b96b4\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"19ac670521a10b4a7f2907465f5370145d7ddc2c075b00d9a4c323c18bb60937\"" Dec 13 01:32:17.716596 containerd[1629]: time="2024-12-13T01:32:17.715938070Z" level=info msg="StartContainer for \"19ac670521a10b4a7f2907465f5370145d7ddc2c075b00d9a4c323c18bb60937\"" Dec 13 01:32:17.772195 containerd[1629]: time="2024-12-13T01:32:17.772159442Z" level=info msg="StartContainer for \"19ac670521a10b4a7f2907465f5370145d7ddc2c075b00d9a4c323c18bb60937\" returns successfully" Dec 13 01:32:17.802018 containerd[1629]: time="2024-12-13T01:32:17.801941694Z" level=info msg="shim disconnected" id=19ac670521a10b4a7f2907465f5370145d7ddc2c075b00d9a4c323c18bb60937 namespace=k8s.io Dec 13 01:32:17.802307 containerd[1629]: time="2024-12-13T01:32:17.802287706Z" level=warning msg="cleaning up after shim disconnected" id=19ac670521a10b4a7f2907465f5370145d7ddc2c075b00d9a4c323c18bb60937 namespace=k8s.io Dec 13 01:32:17.802397 containerd[1629]: time="2024-12-13T01:32:17.802382594Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:32:17.816694 containerd[1629]: time="2024-12-13T01:32:17.816645368Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:32:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io 
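The pull record above reports both the bytes read (3852936) and the elapsed pull time (2.510021627s) for the flannel-cni-plugin image, which is enough for a rough transfer-rate estimate. A small sketch of that arithmetic:

```go
// Quick arithmetic on the pull statistics logged above: bytes read divided by
// the reported pull time gives the approximate transfer rate.
package main

import "fmt"

func main() {
	const bytesRead = 3852936       // "bytes read=3852936"
	const pullSeconds = 2.510021627 // "in 2.510021627s"

	rate := bytesRead / pullSeconds // bytes per second, roughly 1.5 MB/s
	fmt.Printf("approx %.2f MiB/s (%.0f bytes/s)\n", rate/(1024*1024), rate)
}
```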
Dec 13 01:32:18.572994 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19ac670521a10b4a7f2907465f5370145d7ddc2c075b00d9a4c323c18bb60937-rootfs.mount: Deactivated successfully. Dec 13 01:32:18.807964 containerd[1629]: time="2024-12-13T01:32:18.807563489Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Dec 13 01:32:21.259854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4129149140.mount: Deactivated successfully. Dec 13 01:32:21.846862 containerd[1629]: time="2024-12-13T01:32:21.846778473Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:21.847954 containerd[1629]: time="2024-12-13T01:32:21.847911977Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Dec 13 01:32:21.849128 containerd[1629]: time="2024-12-13T01:32:21.849059187Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:21.851931 containerd[1629]: time="2024-12-13T01:32:21.851872834Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:21.853748 containerd[1629]: time="2024-12-13T01:32:21.853139929Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 3.04552309s" Dec 13 01:32:21.853748 containerd[1629]: time="2024-12-13T01:32:21.853173061Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Dec 13 01:32:21.855408 containerd[1629]: time="2024-12-13T01:32:21.855378842Z" level=info msg="CreateContainer within sandbox \"2f3d6cee03572bdbbce5ee6a87014c2b05f7af270f18f06acdb83feaaf3b96b4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:32:21.868473 containerd[1629]: time="2024-12-13T01:32:21.868330354Z" level=info msg="CreateContainer within sandbox \"2f3d6cee03572bdbbce5ee6a87014c2b05f7af270f18f06acdb83feaaf3b96b4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"90e17bddd825eb3f71394bb00ececcf0c64d5cb305340f8e697e85de2e595232\"" Dec 13 01:32:21.871500 containerd[1629]: time="2024-12-13T01:32:21.869190734Z" level=info msg="StartContainer for \"90e17bddd825eb3f71394bb00ececcf0c64d5cb305340f8e697e85de2e595232\"" Dec 13 01:32:21.936776 containerd[1629]: time="2024-12-13T01:32:21.936728922Z" level=info msg="StartContainer for \"90e17bddd825eb3f71394bb00ececcf0c64d5cb305340f8e697e85de2e595232\" returns successfully" Dec 13 01:32:21.999886 containerd[1629]: time="2024-12-13T01:32:21.999823657Z" level=info msg="shim disconnected" id=90e17bddd825eb3f71394bb00ececcf0c64d5cb305340f8e697e85de2e595232 namespace=k8s.io Dec 13 01:32:21.999886 containerd[1629]: time="2024-12-13T01:32:21.999875854Z" level=warning msg="cleaning up after shim disconnected" id=90e17bddd825eb3f71394bb00ececcf0c64d5cb305340f8e697e85de2e595232 namespace=k8s.io Dec 13 01:32:21.999886 containerd[1629]: 
time="2024-12-13T01:32:21.999886073Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:32:22.015025 kubelet[3005]: I1213 01:32:22.014976 3005 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:32:22.048183 kubelet[3005]: I1213 01:32:22.048063 3005 topology_manager.go:215] "Topology Admit Handler" podUID="749caec1-da98-47a3-a2b3-58ea55c79ec2" podNamespace="kube-system" podName="coredns-76f75df574-7xtp5" Dec 13 01:32:22.048183 kubelet[3005]: I1213 01:32:22.048180 3005 topology_manager.go:215] "Topology Admit Handler" podUID="6de78929-cf5a-4c90-a681-79b10b2611a2" podNamespace="kube-system" podName="coredns-76f75df574-fmrtm" Dec 13 01:32:22.063749 kubelet[3005]: I1213 01:32:22.063680 3005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/749caec1-da98-47a3-a2b3-58ea55c79ec2-config-volume\") pod \"coredns-76f75df574-7xtp5\" (UID: \"749caec1-da98-47a3-a2b3-58ea55c79ec2\") " pod="kube-system/coredns-76f75df574-7xtp5" Dec 13 01:32:22.063749 kubelet[3005]: I1213 01:32:22.063736 3005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbtt7\" (UniqueName: \"kubernetes.io/projected/6de78929-cf5a-4c90-a681-79b10b2611a2-kube-api-access-qbtt7\") pod \"coredns-76f75df574-fmrtm\" (UID: \"6de78929-cf5a-4c90-a681-79b10b2611a2\") " pod="kube-system/coredns-76f75df574-fmrtm" Dec 13 01:32:22.063942 kubelet[3005]: I1213 01:32:22.063783 3005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57b27\" (UniqueName: \"kubernetes.io/projected/749caec1-da98-47a3-a2b3-58ea55c79ec2-kube-api-access-57b27\") pod \"coredns-76f75df574-7xtp5\" (UID: \"749caec1-da98-47a3-a2b3-58ea55c79ec2\") " pod="kube-system/coredns-76f75df574-7xtp5" Dec 13 01:32:22.063942 kubelet[3005]: I1213 01:32:22.063821 3005 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6de78929-cf5a-4c90-a681-79b10b2611a2-config-volume\") pod \"coredns-76f75df574-fmrtm\" (UID: \"6de78929-cf5a-4c90-a681-79b10b2611a2\") " pod="kube-system/coredns-76f75df574-fmrtm" Dec 13 01:32:22.164291 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90e17bddd825eb3f71394bb00ececcf0c64d5cb305340f8e697e85de2e595232-rootfs.mount: Deactivated successfully. 
Dec 13 01:32:22.355444 containerd[1629]: time="2024-12-13T01:32:22.355372881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fmrtm,Uid:6de78929-cf5a-4c90-a681-79b10b2611a2,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:22.357102 containerd[1629]: time="2024-12-13T01:32:22.357024831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7xtp5,Uid:749caec1-da98-47a3-a2b3-58ea55c79ec2,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:22.395553 containerd[1629]: time="2024-12-13T01:32:22.395453205Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fmrtm,Uid:6de78929-cf5a-4c90-a681-79b10b2611a2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5bcfbe509d6473bd8673fea56d9d2673c2816de9a7663f4930b3962595b99f76\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:32:22.396107 kubelet[3005]: E1213 01:32:22.396079 3005 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bcfbe509d6473bd8673fea56d9d2673c2816de9a7663f4930b3962595b99f76\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:32:22.396210 kubelet[3005]: E1213 01:32:22.396162 3005 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bcfbe509d6473bd8673fea56d9d2673c2816de9a7663f4930b3962595b99f76\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-fmrtm" Dec 13 01:32:22.396210 kubelet[3005]: E1213 01:32:22.396181 3005 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bcfbe509d6473bd8673fea56d9d2673c2816de9a7663f4930b3962595b99f76\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-fmrtm" Dec 13 01:32:22.396278 kubelet[3005]: E1213 01:32:22.396228 3005 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-fmrtm_kube-system(6de78929-cf5a-4c90-a681-79b10b2611a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-fmrtm_kube-system(6de78929-cf5a-4c90-a681-79b10b2611a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5bcfbe509d6473bd8673fea56d9d2673c2816de9a7663f4930b3962595b99f76\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-fmrtm" podUID="6de78929-cf5a-4c90-a681-79b10b2611a2" Dec 13 01:32:22.396735 containerd[1629]: time="2024-12-13T01:32:22.396435184Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7xtp5,Uid:749caec1-da98-47a3-a2b3-58ea55c79ec2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bc16742e729b319c637ce51298d9d8bbf65de56a7f0a1284c2f1c5edc18517a6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:32:22.397857 kubelet[3005]: E1213 01:32:22.397683 3005 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc16742e729b319c637ce51298d9d8bbf65de56a7f0a1284c2f1c5edc18517a6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:32:22.397857 kubelet[3005]: E1213 01:32:22.397725 3005 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc16742e729b319c637ce51298d9d8bbf65de56a7f0a1284c2f1c5edc18517a6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-7xtp5" Dec 13 01:32:22.397857 kubelet[3005]: E1213 01:32:22.397760 3005 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc16742e729b319c637ce51298d9d8bbf65de56a7f0a1284c2f1c5edc18517a6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-7xtp5" Dec 13 01:32:22.397857 kubelet[3005]: E1213 01:32:22.397818 3005 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-7xtp5_kube-system(749caec1-da98-47a3-a2b3-58ea55c79ec2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-7xtp5_kube-system(749caec1-da98-47a3-a2b3-58ea55c79ec2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc16742e729b319c637ce51298d9d8bbf65de56a7f0a1284c2f1c5edc18517a6\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-7xtp5" podUID="749caec1-da98-47a3-a2b3-58ea55c79ec2" Dec 13 01:32:22.845281 containerd[1629]: time="2024-12-13T01:32:22.845018301Z" level=info msg="CreateContainer within sandbox \"2f3d6cee03572bdbbce5ee6a87014c2b05f7af270f18f06acdb83feaaf3b96b4\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Dec 13 01:32:22.870188 containerd[1629]: time="2024-12-13T01:32:22.870037070Z" level=info msg="CreateContainer within sandbox \"2f3d6cee03572bdbbce5ee6a87014c2b05f7af270f18f06acdb83feaaf3b96b4\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"523b0b0ee8d9582f7b2e138298649ec93208759dc456ea6dfab8766875088e2f\"" Dec 13 01:32:22.872369 containerd[1629]: time="2024-12-13T01:32:22.871826959Z" level=info msg="StartContainer for \"523b0b0ee8d9582f7b2e138298649ec93208759dc456ea6dfab8766875088e2f\"" Dec 13 01:32:22.924134 containerd[1629]: time="2024-12-13T01:32:22.924082434Z" level=info msg="StartContainer for \"523b0b0ee8d9582f7b2e138298649ec93208759dc456ea6dfab8766875088e2f\" returns successfully" Dec 13 01:32:23.165140 systemd[1]: run-netns-cni\x2d53df1cba\x2dc355\x2d186c\x2d140b\x2de9be3433073e.mount: Deactivated successfully. Dec 13 01:32:23.165807 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5bcfbe509d6473bd8673fea56d9d2673c2816de9a7663f4930b3962595b99f76-shm.mount: Deactivated successfully. 
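The coredns sandbox failures above all reduce to "loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory": the flannel CNI plugin cannot find the per-node subnet file until the kube-flannel container (started in the last entries above) writes it. The sketch below reads a subnet.env-style KEY=VALUE file; the specific keys used are the ones flannel conventionally writes and are an assumption here, since the file itself never appears in this log.

```go
// Sketch of reading a /run/flannel/subnet.env-style file, whose absence causes
// the loadFlannelSubnetEnv failures above. The keys (FLANNEL_NETWORK,
// FLANNEL_SUBNET, FLANNEL_MTU, FLANNEL_IPMASQ) are the ones flannel
// conventionally writes; they are assumed, not taken from this log.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func loadSubnetEnv(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		// Mirrors "open /run/flannel/subnet.env: no such file or directory".
		return nil, err
	}
	defer f.Close()

	env := make(map[string]string)
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			env[k] = v
		}
	}
	return env, sc.Err()
}

func main() {
	env, err := loadSubnetEnv("/run/flannel/subnet.env")
	if err != nil {
		fmt.Println("loadFlannelSubnetEnv failed:", err)
		return
	}
	fmt.Println("pod subnet for this node:", env["FLANNEL_SUBNET"])
}
```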
Dec 13 01:32:23.980458 systemd-networkd[1248]: flannel.1: Link UP Dec 13 01:32:23.980592 systemd-networkd[1248]: flannel.1: Gained carrier Dec 13 01:32:25.429741 systemd-networkd[1248]: flannel.1: Gained IPv6LL Dec 13 01:32:33.721806 containerd[1629]: time="2024-12-13T01:32:33.721229668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fmrtm,Uid:6de78929-cf5a-4c90-a681-79b10b2611a2,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:33.773588 systemd-networkd[1248]: cni0: Link UP Dec 13 01:32:33.773605 systemd-networkd[1248]: cni0: Gained carrier Dec 13 01:32:33.779366 systemd-networkd[1248]: cni0: Lost carrier Dec 13 01:32:33.788770 systemd-networkd[1248]: veth271d0e8c: Link UP Dec 13 01:32:33.792535 kernel: cni0: port 1(veth271d0e8c) entered blocking state Dec 13 01:32:33.792598 kernel: cni0: port 1(veth271d0e8c) entered disabled state Dec 13 01:32:33.798560 kernel: veth271d0e8c: entered allmulticast mode Dec 13 01:32:33.802617 kernel: veth271d0e8c: entered promiscuous mode Dec 13 01:32:33.802677 kernel: cni0: port 1(veth271d0e8c) entered blocking state Dec 13 01:32:33.802707 kernel: cni0: port 1(veth271d0e8c) entered forwarding state Dec 13 01:32:33.807186 kernel: cni0: port 1(veth271d0e8c) entered disabled state Dec 13 01:32:33.817532 kernel: cni0: port 1(veth271d0e8c) entered blocking state Dec 13 01:32:33.817579 kernel: cni0: port 1(veth271d0e8c) entered forwarding state Dec 13 01:32:33.816998 systemd-networkd[1248]: veth271d0e8c: Gained carrier Dec 13 01:32:33.818295 systemd-networkd[1248]: cni0: Gained carrier Dec 13 01:32:33.826770 containerd[1629]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001e938), "name":"cbr0", "type":"bridge"} Dec 13 01:32:33.826770 containerd[1629]: delegateAdd: netconf sent to delegate plugin: Dec 13 01:32:33.844422 containerd[1629]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T01:32:33.844039321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:33.844422 containerd[1629]: time="2024-12-13T01:32:33.844078896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:33.844422 containerd[1629]: time="2024-12-13T01:32:33.844088225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:33.844422 containerd[1629]: time="2024-12-13T01:32:33.844177341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:33.896738 containerd[1629]: time="2024-12-13T01:32:33.896707058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fmrtm,Uid:6de78929-cf5a-4c90-a681-79b10b2611a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"92333415ebc1dfea7561b5490c2cfa2db811be6ce8a305889be1c7a198b78842\"" Dec 13 01:32:33.900753 containerd[1629]: time="2024-12-13T01:32:33.900638162Z" level=info msg="CreateContainer within sandbox \"92333415ebc1dfea7561b5490c2cfa2db811be6ce8a305889be1c7a198b78842\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:32:33.914449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount412102337.mount: Deactivated successfully. Dec 13 01:32:33.916262 containerd[1629]: time="2024-12-13T01:32:33.916229227Z" level=info msg="CreateContainer within sandbox \"92333415ebc1dfea7561b5490c2cfa2db811be6ce8a305889be1c7a198b78842\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"441c18e0f378ccac01d23cd41c07b5241fed6e8502e2515e198bb3e36323e19b\"" Dec 13 01:32:33.916869 containerd[1629]: time="2024-12-13T01:32:33.916844656Z" level=info msg="StartContainer for \"441c18e0f378ccac01d23cd41c07b5241fed6e8502e2515e198bb3e36323e19b\"" Dec 13 01:32:33.969652 containerd[1629]: time="2024-12-13T01:32:33.969606770Z" level=info msg="StartContainer for \"441c18e0f378ccac01d23cd41c07b5241fed6e8502e2515e198bb3e36323e19b\" returns successfully" Dec 13 01:32:34.866186 kubelet[3005]: I1213 01:32:34.866084 3005 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-nnhm8" podStartSLOduration=14.200779783 podStartE2EDuration="20.866051475s" podCreationTimestamp="2024-12-13 01:32:14 +0000 UTC" firstStartedPulling="2024-12-13 01:32:15.188166267 +0000 UTC m=+15.587667834" lastFinishedPulling="2024-12-13 01:32:21.85343796 +0000 UTC m=+22.252939526" observedRunningTime="2024-12-13 01:32:23.839261376 +0000 UTC m=+24.238762944" watchObservedRunningTime="2024-12-13 01:32:34.866051475 +0000 UTC m=+35.265553042" Dec 13 01:32:34.878172 kubelet[3005]: I1213 01:32:34.877970 3005 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-fmrtm" podStartSLOduration=20.877929207 podStartE2EDuration="20.877929207s" podCreationTimestamp="2024-12-13 01:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:34.866393951 +0000 UTC m=+35.265895518" watchObservedRunningTime="2024-12-13 01:32:34.877929207 +0000 UTC m=+35.277430784" Dec 13 01:32:35.157706 systemd-networkd[1248]: cni0: Gained IPv6LL Dec 13 01:32:35.349802 systemd-networkd[1248]: veth271d0e8c: Gained IPv6LL Dec 13 01:32:35.722566 containerd[1629]: time="2024-12-13T01:32:35.721964887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7xtp5,Uid:749caec1-da98-47a3-a2b3-58ea55c79ec2,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:35.774202 kernel: cni0: port 2(veth475bb809) entered blocking state Dec 13 01:32:35.774295 kernel: cni0: port 2(veth475bb809) entered disabled state Dec 13 01:32:35.770773 systemd-networkd[1248]: veth475bb809: Link UP Dec 13 01:32:35.780162 kernel: veth475bb809: entered allmulticast mode Dec 13 01:32:35.780266 kernel: veth475bb809: entered promiscuous mode Dec 13 01:32:35.788091 kernel: cni0: port 2(veth475bb809) entered blocking state Dec 13 01:32:35.788190 kernel: cni0: port 2(veth475bb809) 
entered forwarding state Dec 13 01:32:35.788353 systemd-networkd[1248]: veth475bb809: Gained carrier Dec 13 01:32:35.790935 containerd[1629]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000018938), "name":"cbr0", "type":"bridge"} Dec 13 01:32:35.790935 containerd[1629]: delegateAdd: netconf sent to delegate plugin: Dec 13 01:32:35.811072 containerd[1629]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T01:32:35.810638550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:35.811072 containerd[1629]: time="2024-12-13T01:32:35.810729862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:35.811072 containerd[1629]: time="2024-12-13T01:32:35.810742987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:35.811072 containerd[1629]: time="2024-12-13T01:32:35.810913908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:35.872202 containerd[1629]: time="2024-12-13T01:32:35.872141495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7xtp5,Uid:749caec1-da98-47a3-a2b3-58ea55c79ec2,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ec4e763a9d58074782f382d86c7e921874f35fdf3cdf6bcb22d30a1bf32e8dd\"" Dec 13 01:32:35.874810 containerd[1629]: time="2024-12-13T01:32:35.874660509Z" level=info msg="CreateContainer within sandbox \"5ec4e763a9d58074782f382d86c7e921874f35fdf3cdf6bcb22d30a1bf32e8dd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:32:35.886857 containerd[1629]: time="2024-12-13T01:32:35.886820984Z" level=info msg="CreateContainer within sandbox \"5ec4e763a9d58074782f382d86c7e921874f35fdf3cdf6bcb22d30a1bf32e8dd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1701a50abe428bbbb0d7bbd95526f082411c22abc08bad291b4ff8731ff466c3\"" Dec 13 01:32:35.888341 containerd[1629]: time="2024-12-13T01:32:35.887725909Z" level=info msg="StartContainer for \"1701a50abe428bbbb0d7bbd95526f082411c22abc08bad291b4ff8731ff466c3\"" Dec 13 01:32:35.940831 containerd[1629]: time="2024-12-13T01:32:35.940792794Z" level=info msg="StartContainer for \"1701a50abe428bbbb0d7bbd95526f082411c22abc08bad291b4ff8731ff466c3\" returns successfully" Dec 13 01:32:36.740934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount826878701.mount: Deactivated successfully. 
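The "delegateAdd: netconf sent to delegate plugin" entries above show the exact configuration flannel hands to the bridge CNI plugin: a cbr0 bridge with MTU 1450 and host-local IPAM over this node's 192.168.0.0/24 subnet, with a route toward 192.168.0.0/17. The sketch below simply rebuilds that JSON from the values shown in the log; it is illustrative, not flannel's own code.

```go
// Sketch that reproduces the delegate netconf printed above: flannel hands the
// bridge plugin a cbr0 config with host-local IPAM for this node's
// 192.168.0.0/24 subnet and a route to the wider 192.168.0.0/17 range.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	netconf := map[string]interface{}{
		"cniVersion":       "0.3.1",
		"name":             "cbr0",
		"type":             "bridge",
		"mtu":              1450,
		"hairpinMode":      true,
		"ipMasq":           false,
		"isGateway":        true,
		"isDefaultGateway": true,
		"ipam": map[string]interface{}{
			"type":   "host-local",
			"ranges": [][]map[string]string{{{"subnet": "192.168.0.0/24"}}},
			"routes": []map[string]string{{"dst": "192.168.0.0/17"}},
		},
	}

	out, err := json.Marshal(netconf)
	if err != nil {
		panic(err)
	}
	// Prints essentially the same JSON containerd logs before invoking the
	// bridge plugin (json.Marshal sorts map keys alphabetically).
	fmt.Println(string(out))
}
```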
Dec 13 01:32:37.077701 systemd-networkd[1248]: veth475bb809: Gained IPv6LL Dec 13 01:32:42.368569 kubelet[3005]: I1213 01:32:42.368405 3005 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-7xtp5" podStartSLOduration=28.368357333 podStartE2EDuration="28.368357333s" podCreationTimestamp="2024-12-13 01:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:36.871349713 +0000 UTC m=+37.270851290" watchObservedRunningTime="2024-12-13 01:32:42.368357333 +0000 UTC m=+42.767858920" Dec 13 01:36:42.829011 systemd[1]: Started sshd@5-78.47.95.198:22-147.75.109.163:36686.service - OpenSSH per-connection server daemon (147.75.109.163:36686). Dec 13 01:36:43.801744 sshd[4940]: Accepted publickey for core from 147.75.109.163 port 36686 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:36:43.803363 sshd[4940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:43.808642 systemd-logind[1606]: New session 6 of user core. Dec 13 01:36:43.815689 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:36:44.536921 sshd[4940]: pam_unix(sshd:session): session closed for user core Dec 13 01:36:44.541278 systemd[1]: sshd@5-78.47.95.198:22-147.75.109.163:36686.service: Deactivated successfully. Dec 13 01:36:44.546499 systemd-logind[1606]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:36:44.547221 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:36:44.548611 systemd-logind[1606]: Removed session 6. Dec 13 01:36:49.708651 systemd[1]: Started sshd@6-78.47.95.198:22-147.75.109.163:46868.service - OpenSSH per-connection server daemon (147.75.109.163:46868). Dec 13 01:36:50.706538 sshd[4979]: Accepted publickey for core from 147.75.109.163 port 46868 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:36:50.707837 sshd[4979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:50.713398 systemd-logind[1606]: New session 7 of user core. Dec 13 01:36:50.718937 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:36:51.451904 sshd[4979]: pam_unix(sshd:session): session closed for user core Dec 13 01:36:51.457918 systemd[1]: sshd@6-78.47.95.198:22-147.75.109.163:46868.service: Deactivated successfully. Dec 13 01:36:51.467713 systemd-logind[1606]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:36:51.468130 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:36:51.471089 systemd-logind[1606]: Removed session 7. Dec 13 01:36:51.617393 systemd[1]: Started sshd@7-78.47.95.198:22-147.75.109.163:46880.service - OpenSSH per-connection server daemon (147.75.109.163:46880). Dec 13 01:36:52.614396 sshd[5015]: Accepted publickey for core from 147.75.109.163 port 46880 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:36:52.616205 sshd[5015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:52.621132 systemd-logind[1606]: New session 8 of user core. Dec 13 01:36:52.626865 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 13 01:36:53.383332 update_engine[1611]: I20241213 01:36:53.383226 1611 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Dec 13 01:36:53.383332 update_engine[1611]: I20241213 01:36:53.383337 1611 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Dec 13 01:36:53.384347 update_engine[1611]: I20241213 01:36:53.383833 1611 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Dec 13 01:36:53.384857 update_engine[1611]: I20241213 01:36:53.384804 1611 omaha_request_params.cc:62] Current group set to stable
Dec 13 01:36:53.388286 update_engine[1611]: I20241213 01:36:53.387686 1611 update_attempter.cc:499] Already updated boot flags. Skipping.
Dec 13 01:36:53.388286 update_engine[1611]: I20241213 01:36:53.387728 1611 update_attempter.cc:643] Scheduling an action processor start.
Dec 13 01:36:53.388286 update_engine[1611]: I20241213 01:36:53.387760 1611 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 13 01:36:53.388286 update_engine[1611]: I20241213 01:36:53.387822 1611 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Dec 13 01:36:53.388286 update_engine[1611]: I20241213 01:36:53.387988 1611 omaha_request_action.cc:271] Posting an Omaha request to disabled
Dec 13 01:36:53.388286 update_engine[1611]: I20241213 01:36:53.388012 1611 omaha_request_action.cc:272] Request:
Dec 13 01:36:53.388286 update_engine[1611]:
Dec 13 01:36:53.388286 update_engine[1611]:
Dec 13 01:36:53.388286 update_engine[1611]:
Dec 13 01:36:53.388286 update_engine[1611]:
Dec 13 01:36:53.388286 update_engine[1611]:
Dec 13 01:36:53.388286 update_engine[1611]:
Dec 13 01:36:53.388286 update_engine[1611]:
Dec 13 01:36:53.388286 update_engine[1611]:
Dec 13 01:36:53.388286 update_engine[1611]: I20241213 01:36:53.388026 1611 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 01:36:53.388971 locksmithd[1650]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Dec 13 01:36:53.394909 update_engine[1611]: I20241213 01:36:53.394821 1611 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 01:36:53.395469 update_engine[1611]: I20241213 01:36:53.395393 1611 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 01:36:53.396611 update_engine[1611]: E20241213 01:36:53.396353 1611 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 01:36:53.396611 update_engine[1611]: I20241213 01:36:53.396461 1611 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Dec 13 01:36:53.397015 sshd[5015]: pam_unix(sshd:session): session closed for user core
Dec 13 01:36:53.405077 systemd-logind[1606]: Session 8 logged out. Waiting for processes to exit.
Dec 13 01:36:53.406003 systemd[1]: sshd@7-78.47.95.198:22-147.75.109.163:46880.service: Deactivated successfully.
Dec 13 01:36:53.413923 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 01:36:53.416125 systemd-logind[1606]: Removed session 8.
Dec 13 01:36:53.563929 systemd[1]: Started sshd@8-78.47.95.198:22-147.75.109.163:46890.service - OpenSSH per-connection server daemon (147.75.109.163:46890).
Dec 13 01:36:54.540161 sshd[5027]: Accepted publickey for core from 147.75.109.163 port 46890 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:36:54.541966 sshd[5027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:36:54.546526 systemd-logind[1606]: New session 9 of user core.
Dec 13 01:36:54.550808 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 01:36:55.287921 sshd[5027]: pam_unix(sshd:session): session closed for user core
Dec 13 01:36:55.292647 systemd[1]: sshd@8-78.47.95.198:22-147.75.109.163:46890.service: Deactivated successfully.
Dec 13 01:36:55.297492 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 01:36:55.298700 systemd-logind[1606]: Session 9 logged out. Waiting for processes to exit.
Dec 13 01:36:55.300017 systemd-logind[1606]: Removed session 9.
Dec 13 01:37:00.454172 systemd[1]: Started sshd@9-78.47.95.198:22-147.75.109.163:38220.service - OpenSSH per-connection server daemon (147.75.109.163:38220).
Dec 13 01:37:01.450946 sshd[5070]: Accepted publickey for core from 147.75.109.163 port 38220 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:01.452982 sshd[5070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:01.458714 systemd-logind[1606]: New session 10 of user core.
Dec 13 01:37:01.463816 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 01:37:02.194738 sshd[5070]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:02.199133 systemd[1]: sshd@9-78.47.95.198:22-147.75.109.163:38220.service: Deactivated successfully.
Dec 13 01:37:02.205324 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 01:37:02.206317 systemd-logind[1606]: Session 10 logged out. Waiting for processes to exit.
Dec 13 01:37:02.207280 systemd-logind[1606]: Removed session 10.
Dec 13 01:37:02.360815 systemd[1]: Started sshd@10-78.47.95.198:22-147.75.109.163:38224.service - OpenSSH per-connection server daemon (147.75.109.163:38224).
Dec 13 01:37:03.344971 sshd[5100]: Accepted publickey for core from 147.75.109.163 port 38224 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:03.348018 sshd[5100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:03.355920 systemd-logind[1606]: New session 11 of user core.
Dec 13 01:37:03.361416 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 01:37:03.383314 update_engine[1611]: I20241213 01:37:03.383200 1611 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 01:37:03.383901 update_engine[1611]: I20241213 01:37:03.383616 1611 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 01:37:03.384046 update_engine[1611]: I20241213 01:37:03.383991 1611 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 01:37:03.384883 update_engine[1611]: E20241213 01:37:03.384830 1611 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 01:37:03.384998 update_engine[1611]: I20241213 01:37:03.384967 1611 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Dec 13 01:37:04.321759 sshd[5100]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:04.328203 systemd[1]: sshd@10-78.47.95.198:22-147.75.109.163:38224.service: Deactivated successfully.
Dec 13 01:37:04.337381 systemd-logind[1606]: Session 11 logged out. Waiting for processes to exit.
Dec 13 01:37:04.339148 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 01:37:04.341440 systemd-logind[1606]: Removed session 11.
Dec 13 01:37:04.489765 systemd[1]: Started sshd@11-78.47.95.198:22-147.75.109.163:38232.service - OpenSSH per-connection server daemon (147.75.109.163:38232).
Dec 13 01:37:05.476063 sshd[5112]: Accepted publickey for core from 147.75.109.163 port 38232 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:05.478044 sshd[5112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:05.483406 systemd-logind[1606]: New session 12 of user core.
Dec 13 01:37:05.494964 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 01:37:07.488782 sshd[5112]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:07.497448 systemd[1]: sshd@11-78.47.95.198:22-147.75.109.163:38232.service: Deactivated successfully.
Dec 13 01:37:07.504460 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 01:37:07.504635 systemd-logind[1606]: Session 12 logged out. Waiting for processes to exit.
Dec 13 01:37:07.509832 systemd-logind[1606]: Removed session 12.
Dec 13 01:37:07.652493 systemd[1]: Started sshd@12-78.47.95.198:22-147.75.109.163:57494.service - OpenSSH per-connection server daemon (147.75.109.163:57494).
Dec 13 01:37:08.649231 sshd[5155]: Accepted publickey for core from 147.75.109.163 port 57494 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:08.650931 sshd[5155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:08.655476 systemd-logind[1606]: New session 13 of user core.
Dec 13 01:37:08.661788 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 01:37:09.482403 sshd[5155]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:09.486425 systemd[1]: sshd@12-78.47.95.198:22-147.75.109.163:57494.service: Deactivated successfully.
Dec 13 01:37:09.491687 systemd-logind[1606]: Session 13 logged out. Waiting for processes to exit.
Dec 13 01:37:09.492899 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 01:37:09.494222 systemd-logind[1606]: Removed session 13.
Dec 13 01:37:09.652838 systemd[1]: Started sshd@13-78.47.95.198:22-147.75.109.163:57506.service - OpenSSH per-connection server daemon (147.75.109.163:57506).
Dec 13 01:37:10.638695 sshd[5167]: Accepted publickey for core from 147.75.109.163 port 57506 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:10.640325 sshd[5167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:10.645088 systemd-logind[1606]: New session 14 of user core.
Dec 13 01:37:10.648813 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 01:37:11.384713 sshd[5167]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:11.389439 systemd-logind[1606]: Session 14 logged out. Waiting for processes to exit.
Dec 13 01:37:11.390766 systemd[1]: sshd@13-78.47.95.198:22-147.75.109.163:57506.service: Deactivated successfully.
Dec 13 01:37:11.395970 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 01:37:11.397768 systemd-logind[1606]: Removed session 14.
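Each SSH connection above follows the same lifecycle: Accepted publickey, pam_unix session opened, New session N, session closed, service Deactivated, Removed session N. A minimal, illustrative Python sketch (not part of the log; it assumes exactly this line format and that all entries fall on one day, since the journal prefix carries no year) that pairs the New/Removed systemd-logind lines to estimate how long each session lasted:

import re
from datetime import datetime

NEW = re.compile(r"^Dec 13 (\S+) systemd-logind\[\d+\]: New session (\d+) of user")
REMOVED = re.compile(r"^Dec 13 (\S+) systemd-logind\[\d+\]: Removed session (\d+)\.")

def session_durations(lines):
    # Map session id -> open timestamp, then yield (id, seconds) when it is removed.
    opened = {}
    for line in lines:
        if m := NEW.match(line):
            opened[m.group(2)] = datetime.strptime(m.group(1), "%H:%M:%S.%f")
        elif (m := REMOVED.match(line)) and m.group(2) in opened:
            start = opened.pop(m.group(2))
            yield m.group(2), (datetime.strptime(m.group(1), "%H:%M:%S.%f") - start).total_seconds()

sample = [
    "Dec 13 01:37:05.483406 systemd-logind[1606]: New session 12 of user core.",
    "Dec 13 01:37:07.509832 systemd-logind[1606]: Removed session 12.",
]
print(dict(session_durations(sample)))  # {'12': 2.026426}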
Dec 13 01:37:13.386529 update_engine[1611]: I20241213 01:37:13.385459 1611 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 01:37:13.386529 update_engine[1611]: I20241213 01:37:13.385792 1611 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 01:37:13.386529 update_engine[1611]: I20241213 01:37:13.386011 1611 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 01:37:13.387554 update_engine[1611]: E20241213 01:37:13.387521 1611 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 01:37:13.387612 update_engine[1611]: I20241213 01:37:13.387580 1611 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Dec 13 01:37:16.555068 systemd[1]: Started sshd@14-78.47.95.198:22-147.75.109.163:45870.service - OpenSSH per-connection server daemon (147.75.109.163:45870).
Dec 13 01:37:17.560664 sshd[5228]: Accepted publickey for core from 147.75.109.163 port 45870 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:17.563774 sshd[5228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:17.571448 systemd-logind[1606]: New session 15 of user core.
Dec 13 01:37:17.578378 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 01:37:18.307113 sshd[5228]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:18.316920 systemd[1]: sshd@14-78.47.95.198:22-147.75.109.163:45870.service: Deactivated successfully.
Dec 13 01:37:18.322699 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 01:37:18.324625 systemd-logind[1606]: Session 15 logged out. Waiting for processes to exit.
Dec 13 01:37:18.326289 systemd-logind[1606]: Removed session 15.
Dec 13 01:37:23.383409 update_engine[1611]: I20241213 01:37:23.383311 1611 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 01:37:23.384166 update_engine[1611]: I20241213 01:37:23.383649 1611 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 01:37:23.384166 update_engine[1611]: I20241213 01:37:23.383862 1611 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 01:37:23.384532 update_engine[1611]: E20241213 01:37:23.384474 1611 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 01:37:23.384659 update_engine[1611]: I20241213 01:37:23.384546 1611 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 13 01:37:23.384659 update_engine[1611]: I20241213 01:37:23.384557 1611 omaha_request_action.cc:617] Omaha request response:
Dec 13 01:37:23.384659 update_engine[1611]: E20241213 01:37:23.384656 1611 omaha_request_action.cc:636] Omaha request network transfer failed.
Dec 13 01:37:23.384761 update_engine[1611]: I20241213 01:37:23.384681 1611 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Dec 13 01:37:23.384761 update_engine[1611]: I20241213 01:37:23.384688 1611 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 01:37:23.384761 update_engine[1611]: I20241213 01:37:23.384696 1611 update_attempter.cc:306] Processing Done.
Dec 13 01:37:23.384761 update_engine[1611]: E20241213 01:37:23.384713 1611 update_attempter.cc:619] Update failed.
Dec 13 01:37:23.384761 update_engine[1611]: I20241213 01:37:23.384721 1611 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Dec 13 01:37:23.384761 update_engine[1611]: I20241213 01:37:23.384729 1611 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Dec 13 01:37:23.384761 update_engine[1611]: I20241213 01:37:23.384737 1611 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Dec 13 01:37:23.385014 update_engine[1611]: I20241213 01:37:23.384814 1611 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 13 01:37:23.385014 update_engine[1611]: I20241213 01:37:23.384837 1611 omaha_request_action.cc:271] Posting an Omaha request to disabled
Dec 13 01:37:23.385014 update_engine[1611]: I20241213 01:37:23.384845 1611 omaha_request_action.cc:272] Request:
Dec 13 01:37:23.385014 update_engine[1611]:
Dec 13 01:37:23.385014 update_engine[1611]:
Dec 13 01:37:23.385014 update_engine[1611]:
Dec 13 01:37:23.385014 update_engine[1611]:
Dec 13 01:37:23.385014 update_engine[1611]:
Dec 13 01:37:23.385014 update_engine[1611]:
Dec 13 01:37:23.385014 update_engine[1611]: I20241213 01:37:23.384853 1611 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 01:37:23.385014 update_engine[1611]: I20241213 01:37:23.385010 1611 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 01:37:23.385302 update_engine[1611]: I20241213 01:37:23.385206 1611 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 01:37:23.385698 locksmithd[1650]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Dec 13 01:37:23.386072 update_engine[1611]: E20241213 01:37:23.385870 1611 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 01:37:23.386072 update_engine[1611]: I20241213 01:37:23.385918 1611 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 13 01:37:23.386072 update_engine[1611]: I20241213 01:37:23.385933 1611 omaha_request_action.cc:617] Omaha request response:
Dec 13 01:37:23.386072 update_engine[1611]: I20241213 01:37:23.385947 1611 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 01:37:23.386072 update_engine[1611]: I20241213 01:37:23.385959 1611 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 01:37:23.386072 update_engine[1611]: I20241213 01:37:23.385970 1611 update_attempter.cc:306] Processing Done.
Dec 13 01:37:23.386072 update_engine[1611]: I20241213 01:37:23.385982 1611 update_attempter.cc:310] Error event sent.
Dec 13 01:37:23.386072 update_engine[1611]: I20241213 01:37:23.385994 1611 update_check_scheduler.cc:74] Next update check in 42m51s
Dec 13 01:37:23.386346 locksmithd[1650]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Dec 13 01:37:23.472793 systemd[1]: Started sshd@15-78.47.95.198:22-147.75.109.163:45880.service - OpenSSH per-connection server daemon (147.75.109.163:45880).
Dec 13 01:37:24.453996 sshd[5263]: Accepted publickey for core from 147.75.109.163 port 45880 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:37:24.456863 sshd[5263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:37:24.465671 systemd-logind[1606]: New session 16 of user core.
Dec 13 01:37:24.470087 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 01:37:25.222543 sshd[5263]: pam_unix(sshd:session): session closed for user core
Dec 13 01:37:25.228771 systemd[1]: sshd@15-78.47.95.198:22-147.75.109.163:45880.service: Deactivated successfully.
Dec 13 01:37:25.237722 systemd-logind[1606]: Session 16 logged out. Waiting for processes to exit.
Dec 13 01:37:25.238601 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 01:37:25.241805 systemd-logind[1606]: Removed session 16.
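The update_engine entries above show an Omaha check posted to the literal host "disabled": DNS resolution fails, the transfer is retried roughly every 10 seconds (retries 1-3), error 2000 is converted to kActionCodeOmahaErrorInHTTPResponse (37), an error event is reported, and the next check is scheduled in 42m51s. This is the expected pattern when automatic updates are switched off (on Flatcar, typically SERVER=disabled in /etc/flatcar/update.conf), not a network outage. An illustrative Python sketch of that retry-then-reschedule sequence, mirroring the cadence visible in the log rather than the actual update_engine (C++) implementation:

import time

def check_for_update(fetch, retries=3, retry_delay=10, next_check_s=42 * 60 + 51):
    # One initial attempt plus `retries` retries, matching the "No HTTP response, retry N" lines above.
    for attempt in range(retries + 1):
        try:
            return fetch()  # would POST the Omaha request to the configured server
        except OSError as err:
            if attempt < retries:
                print(f"No HTTP response, retry {attempt + 1}: {err}")
                time.sleep(retry_delay)
    print(f"Update failed; next update check in {next_check_s}s")
    return None

def fetch_disabled():
    # Stand-in for the unresolvable server name seen in the log.
    raise OSError("Could not resolve host: disabled")

check_for_update(fetch_disabled, retry_delay=0)  # delay shortened for the demo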