Dec 13 01:29:17.071177 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024 Dec 13 01:29:17.071296 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:29:17.071305 kernel: BIOS-provided physical RAM map: Dec 13 01:29:17.071312 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 01:29:17.071318 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 01:29:17.071324 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 01:29:17.071331 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable Dec 13 01:29:17.071338 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved Dec 13 01:29:17.071347 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 01:29:17.071353 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 13 01:29:17.071359 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 01:29:17.071365 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 01:29:17.071372 kernel: NX (Execute Disable) protection: active Dec 13 01:29:17.071378 kernel: APIC: Static calls initialized Dec 13 01:29:17.071388 kernel: SMBIOS 2.8 present. Dec 13 01:29:17.071395 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 Dec 13 01:29:17.071402 kernel: Hypervisor detected: KVM Dec 13 01:29:17.071420 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 01:29:17.071427 kernel: kvm-clock: using sched offset of 2986788951 cycles Dec 13 01:29:17.071434 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 01:29:17.071442 kernel: tsc: Detected 2495.310 MHz processor Dec 13 01:29:17.071449 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 01:29:17.071456 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 01:29:17.071466 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000 Dec 13 01:29:17.071473 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 13 01:29:17.071480 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 01:29:17.071487 kernel: Using GB pages for direct mapping Dec 13 01:29:17.071494 kernel: ACPI: Early table checksum verification disabled Dec 13 01:29:17.071501 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS ) Dec 13 01:29:17.071508 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:29:17.071515 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:29:17.071522 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:29:17.071531 kernel: ACPI: FACS 0x000000007CFE0000 000040 Dec 13 01:29:17.071538 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:29:17.071545 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) 
Dec 13 01:29:17.071552 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:29:17.071559 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:29:17.071567 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540] Dec 13 01:29:17.071574 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c] Dec 13 01:29:17.071581 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f] Dec 13 01:29:17.071593 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0] Dec 13 01:29:17.071601 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8] Dec 13 01:29:17.071608 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634] Dec 13 01:29:17.071615 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c] Dec 13 01:29:17.071622 kernel: No NUMA configuration found Dec 13 01:29:17.071630 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff] Dec 13 01:29:17.071639 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff] Dec 13 01:29:17.071646 kernel: Zone ranges: Dec 13 01:29:17.071654 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 01:29:17.071661 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff] Dec 13 01:29:17.071668 kernel: Normal empty Dec 13 01:29:17.071675 kernel: Movable zone start for each node Dec 13 01:29:17.071682 kernel: Early memory node ranges Dec 13 01:29:17.071690 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 01:29:17.071697 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff] Dec 13 01:29:17.071704 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff] Dec 13 01:29:17.071714 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:29:17.071721 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 01:29:17.071728 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Dec 13 01:29:17.071735 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 01:29:17.071743 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 01:29:17.071750 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 01:29:17.071757 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 01:29:17.071764 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 01:29:17.071772 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 01:29:17.071781 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 01:29:17.071789 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 01:29:17.071796 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 01:29:17.071803 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 01:29:17.071811 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 01:29:17.071818 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 13 01:29:17.071825 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 13 01:29:17.071833 kernel: Booting paravirtualized kernel on KVM Dec 13 01:29:17.071840 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 01:29:17.071850 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 13 01:29:17.071857 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Dec 13 01:29:17.071865 kernel: pcpu-alloc: 
s197032 r8192 d32344 u1048576 alloc=1*2097152 Dec 13 01:29:17.071872 kernel: pcpu-alloc: [0] 0 1 Dec 13 01:29:17.071879 kernel: kvm-guest: PV spinlocks disabled, no host support Dec 13 01:29:17.071888 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:29:17.071896 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:29:17.071903 kernel: random: crng init done Dec 13 01:29:17.071913 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:29:17.071920 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 01:29:17.071928 kernel: Fallback order for Node 0: 0 Dec 13 01:29:17.071935 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708 Dec 13 01:29:17.071942 kernel: Policy zone: DMA32 Dec 13 01:29:17.071949 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:29:17.071957 kernel: Memory: 1922056K/2047464K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved) Dec 13 01:29:17.071965 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 01:29:17.071972 kernel: ftrace: allocating 37902 entries in 149 pages Dec 13 01:29:17.071982 kernel: ftrace: allocated 149 pages with 4 groups Dec 13 01:29:17.071989 kernel: Dynamic Preempt: voluntary Dec 13 01:29:17.071996 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:29:17.072004 kernel: rcu: RCU event tracing is enabled. Dec 13 01:29:17.072012 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 01:29:17.072019 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:29:17.072027 kernel: Rude variant of Tasks RCU enabled. Dec 13 01:29:17.072034 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:29:17.072042 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:29:17.072049 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 01:29:17.072059 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 01:29:17.072066 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:29:17.072073 kernel: Console: colour VGA+ 80x25 Dec 13 01:29:17.072080 kernel: printk: console [tty0] enabled Dec 13 01:29:17.072087 kernel: printk: console [ttyS0] enabled Dec 13 01:29:17.072095 kernel: ACPI: Core revision 20230628 Dec 13 01:29:17.072102 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 13 01:29:17.072110 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 01:29:17.072117 kernel: x2apic enabled Dec 13 01:29:17.072126 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 01:29:17.072134 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 01:29:17.072142 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Dec 13 01:29:17.072149 kernel: Calibrating delay loop (skipped) preset value.. 
4990.62 BogoMIPS (lpj=2495310) Dec 13 01:29:17.072156 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 01:29:17.072164 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 13 01:29:17.072171 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 13 01:29:17.072178 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 01:29:17.073237 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 01:29:17.073247 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 01:29:17.073265 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 01:29:17.073275 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Dec 13 01:29:17.073282 kernel: RETBleed: Mitigation: untrained return thunk Dec 13 01:29:17.073290 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 01:29:17.073298 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 01:29:17.073305 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Dec 13 01:29:17.073314 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Dec 13 01:29:17.073321 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Dec 13 01:29:17.073329 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 01:29:17.073339 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 01:29:17.073346 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 01:29:17.073354 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 01:29:17.073361 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Dec 13 01:29:17.073369 kernel: Freeing SMP alternatives memory: 32K Dec 13 01:29:17.073378 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:29:17.073386 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:29:17.073393 kernel: landlock: Up and running. Dec 13 01:29:17.073401 kernel: SELinux: Initializing. Dec 13 01:29:17.073418 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 01:29:17.073426 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 01:29:17.073433 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0) Dec 13 01:29:17.073441 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:29:17.073449 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:29:17.073459 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:29:17.073467 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 13 01:29:17.073474 kernel: ... version: 0 Dec 13 01:29:17.073481 kernel: ... bit width: 48 Dec 13 01:29:17.073489 kernel: ... generic registers: 6 Dec 13 01:29:17.073496 kernel: ... value mask: 0000ffffffffffff Dec 13 01:29:17.073504 kernel: ... max period: 00007fffffffffff Dec 13 01:29:17.073511 kernel: ... fixed-purpose events: 0 Dec 13 01:29:17.073519 kernel: ... 
event mask: 000000000000003f Dec 13 01:29:17.073528 kernel: signal: max sigframe size: 1776 Dec 13 01:29:17.073536 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:29:17.073544 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:29:17.073551 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:29:17.073559 kernel: smpboot: x86: Booting SMP configuration: Dec 13 01:29:17.073566 kernel: .... node #0, CPUs: #1 Dec 13 01:29:17.073573 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:29:17.073581 kernel: smpboot: Max logical packages: 1 Dec 13 01:29:17.073588 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS) Dec 13 01:29:17.073598 kernel: devtmpfs: initialized Dec 13 01:29:17.073606 kernel: x86/mm: Memory block size: 128MB Dec 13 01:29:17.073613 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:29:17.073621 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 01:29:17.073628 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:29:17.073636 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:29:17.073643 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:29:17.073651 kernel: audit: type=2000 audit(1734053356.020:1): state=initialized audit_enabled=0 res=1 Dec 13 01:29:17.073658 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:29:17.073668 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 01:29:17.073675 kernel: cpuidle: using governor menu Dec 13 01:29:17.073683 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:29:17.073690 kernel: dca service started, version 1.12.1 Dec 13 01:29:17.073698 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Dec 13 01:29:17.073705 kernel: PCI: Using configuration type 1 for base access Dec 13 01:29:17.073713 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 01:29:17.073720 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:29:17.073728 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:29:17.073737 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:29:17.073745 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:29:17.073752 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:29:17.073759 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:29:17.073767 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:29:17.073775 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:29:17.073782 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:29:17.073790 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 01:29:17.073797 kernel: ACPI: Interpreter enabled Dec 13 01:29:17.073807 kernel: ACPI: PM: (supports S0 S5) Dec 13 01:29:17.073814 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 01:29:17.073821 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 01:29:17.073829 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 01:29:17.073837 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 01:29:17.073844 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 01:29:17.074030 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:29:17.074158 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 13 01:29:17.076553 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 13 01:29:17.076568 kernel: PCI host bridge to bus 0000:00 Dec 13 01:29:17.076698 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 01:29:17.076810 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 01:29:17.076921 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 01:29:17.077031 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window] Dec 13 01:29:17.077140 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 01:29:17.078315 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Dec 13 01:29:17.078446 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 01:29:17.078587 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Dec 13 01:29:17.078719 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 Dec 13 01:29:17.078846 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref] Dec 13 01:29:17.078967 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref] Dec 13 01:29:17.079093 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff] Dec 13 01:29:17.079783 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref] Dec 13 01:29:17.079913 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 01:29:17.080044 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Dec 13 01:29:17.080168 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff] Dec 13 01:29:17.081349 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Dec 13 01:29:17.081490 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff] Dec 13 01:29:17.081631 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Dec 13 01:29:17.081750 kernel: pci 0000:00:02.2: reg 0x10: [mem 
0xfea13000-0xfea13fff] Dec 13 01:29:17.081883 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Dec 13 01:29:17.082004 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff] Dec 13 01:29:17.082133 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Dec 13 01:29:17.082328 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff] Dec 13 01:29:17.082527 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Dec 13 01:29:17.082655 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff] Dec 13 01:29:17.082799 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Dec 13 01:29:17.082923 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff] Dec 13 01:29:17.083050 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Dec 13 01:29:17.083171 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff] Dec 13 01:29:17.084350 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Dec 13 01:29:17.084487 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff] Dec 13 01:29:17.084613 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Dec 13 01:29:17.084733 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 01:29:17.084864 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Dec 13 01:29:17.085083 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f] Dec 13 01:29:17.085285 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff] Dec 13 01:29:17.085424 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Dec 13 01:29:17.085543 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Dec 13 01:29:17.085676 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Dec 13 01:29:17.085800 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff] Dec 13 01:29:17.085922 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Dec 13 01:29:17.086046 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref] Dec 13 01:29:17.086171 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Dec 13 01:29:17.088343 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Dec 13 01:29:17.088480 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Dec 13 01:29:17.088613 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Dec 13 01:29:17.088738 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit] Dec 13 01:29:17.088858 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Dec 13 01:29:17.088981 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Dec 13 01:29:17.089101 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 13 01:29:17.089332 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Dec 13 01:29:17.089471 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff] Dec 13 01:29:17.089595 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref] Dec 13 01:29:17.089714 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Dec 13 01:29:17.089832 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Dec 13 01:29:17.089949 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 13 01:29:17.090091 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Dec 13 01:29:17.090232 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Dec 13 01:29:17.090354 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Dec 13 01:29:17.090483 kernel: pci 0000:00:02.3: 
bridge window [mem 0xfe200000-0xfe3fffff] Dec 13 01:29:17.090615 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 13 01:29:17.090766 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Dec 13 01:29:17.090901 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref] Dec 13 01:29:17.091030 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Dec 13 01:29:17.091162 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Dec 13 01:29:17.091298 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 13 01:29:17.091449 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Dec 13 01:29:17.091579 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff] Dec 13 01:29:17.091705 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref] Dec 13 01:29:17.091826 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Dec 13 01:29:17.091950 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Dec 13 01:29:17.092069 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 13 01:29:17.092079 kernel: acpiphp: Slot [0] registered Dec 13 01:29:17.093268 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Dec 13 01:29:17.093403 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff] Dec 13 01:29:17.093545 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref] Dec 13 01:29:17.093668 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref] Dec 13 01:29:17.093789 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Dec 13 01:29:17.093914 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Dec 13 01:29:17.094031 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 13 01:29:17.094041 kernel: acpiphp: Slot [0-2] registered Dec 13 01:29:17.094161 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Dec 13 01:29:17.094305 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Dec 13 01:29:17.094436 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 13 01:29:17.094447 kernel: acpiphp: Slot [0-3] registered Dec 13 01:29:17.094568 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Dec 13 01:29:17.094692 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 13 01:29:17.094813 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 13 01:29:17.094823 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 01:29:17.094830 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 01:29:17.094838 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 01:29:17.094846 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 01:29:17.094854 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 01:29:17.094861 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 01:29:17.094869 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 01:29:17.094880 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 01:29:17.094888 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 01:29:17.094895 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 01:29:17.094903 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 13 01:29:17.094911 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 01:29:17.094919 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 
20 Dec 13 01:29:17.094926 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 01:29:17.094934 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 01:29:17.094941 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 01:29:17.094951 kernel: iommu: Default domain type: Translated Dec 13 01:29:17.094959 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 01:29:17.094966 kernel: PCI: Using ACPI for IRQ routing Dec 13 01:29:17.094974 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 01:29:17.094982 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 01:29:17.094990 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff] Dec 13 01:29:17.095112 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 13 01:29:17.096289 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 13 01:29:17.096427 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 01:29:17.096438 kernel: vgaarb: loaded Dec 13 01:29:17.096446 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 13 01:29:17.096454 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 13 01:29:17.096462 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 01:29:17.096469 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:29:17.096478 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:29:17.096486 kernel: pnp: PnP ACPI init Dec 13 01:29:17.096615 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 13 01:29:17.096630 kernel: pnp: PnP ACPI: found 5 devices Dec 13 01:29:17.096638 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 01:29:17.096647 kernel: NET: Registered PF_INET protocol family Dec 13 01:29:17.096655 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:29:17.096663 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 13 01:29:17.096671 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:29:17.096679 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 01:29:17.096687 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 13 01:29:17.096697 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 13 01:29:17.096706 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 01:29:17.096716 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 01:29:17.096726 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:29:17.096737 kernel: NET: Registered PF_XDP protocol family Dec 13 01:29:17.098272 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Dec 13 01:29:17.098401 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Dec 13 01:29:17.098535 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Dec 13 01:29:17.098662 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff] Dec 13 01:29:17.098791 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff] Dec 13 01:29:17.098909 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff] Dec 13 01:29:17.099029 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Dec 13 01:29:17.099151 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Dec 13 01:29:17.100304 kernel: pci 
0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Dec 13 01:29:17.100442 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Dec 13 01:29:17.100561 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Dec 13 01:29:17.100736 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 13 01:29:17.100859 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Dec 13 01:29:17.100977 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Dec 13 01:29:17.101095 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 13 01:29:17.103241 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Dec 13 01:29:17.103370 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Dec 13 01:29:17.103496 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 13 01:29:17.103624 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Dec 13 01:29:17.103762 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Dec 13 01:29:17.103886 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 13 01:29:17.104007 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Dec 13 01:29:17.104126 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Dec 13 01:29:17.104270 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 13 01:29:17.104392 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Dec 13 01:29:17.104521 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Dec 13 01:29:17.104639 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Dec 13 01:29:17.104758 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 13 01:29:17.104881 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Dec 13 01:29:17.104999 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Dec 13 01:29:17.105117 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Dec 13 01:29:17.106274 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 13 01:29:17.107125 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Dec 13 01:29:17.107348 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Dec 13 01:29:17.107480 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 13 01:29:17.107606 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 13 01:29:17.107722 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 01:29:17.107832 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 01:29:17.107944 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 01:29:17.108051 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window] Dec 13 01:29:17.108158 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 13 01:29:17.109384 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Dec 13 01:29:17.109526 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff] Dec 13 01:29:17.109642 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref] Dec 13 01:29:17.109786 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff] Dec 13 01:29:17.109909 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Dec 13 01:29:17.110033 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff] Dec 13 01:29:17.110156 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 13 
01:29:17.110308 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff] Dec 13 01:29:17.110436 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 13 01:29:17.110561 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff] Dec 13 01:29:17.110683 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 13 01:29:17.110813 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff] Dec 13 01:29:17.110933 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 13 01:29:17.111057 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Dec 13 01:29:17.111172 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff] Dec 13 01:29:17.111351 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 13 01:29:17.111485 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Dec 13 01:29:17.111605 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff] Dec 13 01:29:17.111718 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 13 01:29:17.111839 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Dec 13 01:29:17.111953 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff] Dec 13 01:29:17.112066 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 13 01:29:17.112078 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 01:29:17.112087 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:29:17.112098 kernel: Initialise system trusted keyrings Dec 13 01:29:17.112107 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 01:29:17.112115 kernel: Key type asymmetric registered Dec 13 01:29:17.112124 kernel: Asymmetric key parser 'x509' registered Dec 13 01:29:17.112132 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 01:29:17.112140 kernel: io scheduler mq-deadline registered Dec 13 01:29:17.112148 kernel: io scheduler kyber registered Dec 13 01:29:17.112156 kernel: io scheduler bfq registered Dec 13 01:29:17.112306 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 13 01:29:17.112443 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 13 01:29:17.112565 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 13 01:29:17.112685 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 13 01:29:17.112808 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 13 01:29:17.112927 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 13 01:29:17.113047 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 13 01:29:17.113167 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 13 01:29:17.113327 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 13 01:29:17.113462 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 13 01:29:17.113585 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 13 01:29:17.113705 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 13 01:29:17.113825 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 13 01:29:17.113944 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 13 01:29:17.114070 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 13 01:29:17.114189 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 13 01:29:17.114226 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 01:29:17.114347 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Dec 13 01:29:17.114484 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Dec 13 
01:29:17.114496 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:29:17.114504 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Dec 13 01:29:17.114512 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:29:17.114521 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:29:17.114529 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 01:29:17.114537 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:29:17.114545 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:29:17.114556 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:29:17.114682 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 01:29:17.114796 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 01:29:17.114909 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T01:29:16 UTC (1734053356) Dec 13 01:29:17.115021 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 13 01:29:17.115031 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Dec 13 01:29:17.115039 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:29:17.115047 kernel: Segment Routing with IPv6 Dec 13 01:29:17.115059 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:29:17.115067 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:29:17.115075 kernel: Key type dns_resolver registered Dec 13 01:29:17.115083 kernel: IPI shorthand broadcast: enabled Dec 13 01:29:17.115091 kernel: sched_clock: Marking stable (1331014100, 143051515)->(1531898669, -57833054) Dec 13 01:29:17.115099 kernel: registered taskstats version 1 Dec 13 01:29:17.115107 kernel: Loading compiled-in X.509 certificates Dec 13 01:29:17.115115 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 01:29:17.115123 kernel: Key type .fscrypt registered Dec 13 01:29:17.115133 kernel: Key type fscrypt-provisioning registered Dec 13 01:29:17.115142 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 01:29:17.115150 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:29:17.115158 kernel: ima: No architecture policies found Dec 13 01:29:17.115165 kernel: clk: Disabling unused clocks Dec 13 01:29:17.115174 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 01:29:17.115182 kernel: Write protecting the kernel read-only data: 36864k Dec 13 01:29:17.115190 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 01:29:17.115270 kernel: Run /init as init process Dec 13 01:29:17.115282 kernel: with arguments: Dec 13 01:29:17.115290 kernel: /init Dec 13 01:29:17.115298 kernel: with environment: Dec 13 01:29:17.115306 kernel: HOME=/ Dec 13 01:29:17.115313 kernel: TERM=linux Dec 13 01:29:17.115322 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:29:17.115332 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:29:17.115343 systemd[1]: Detected virtualization kvm. Dec 13 01:29:17.115354 systemd[1]: Detected architecture x86-64. Dec 13 01:29:17.115363 systemd[1]: Running in initrd. Dec 13 01:29:17.115371 systemd[1]: No hostname configured, using default hostname. 
Dec 13 01:29:17.115379 systemd[1]: Hostname set to . Dec 13 01:29:17.115388 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:29:17.115396 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:29:17.115405 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:29:17.115422 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:29:17.115434 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:29:17.115443 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:29:17.115452 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:29:17.115461 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:29:17.115471 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:29:17.115479 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:29:17.115488 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:29:17.115499 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:29:17.115508 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:29:17.115516 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:29:17.115524 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:29:17.115533 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:29:17.115541 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:29:17.115550 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:29:17.115558 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:29:17.115569 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:29:17.115578 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:29:17.115587 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:29:17.115595 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:29:17.115604 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:29:17.115612 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:29:17.115621 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:29:17.115629 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:29:17.115637 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:29:17.115648 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:29:17.115657 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:29:17.115665 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:29:17.115673 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:29:17.115682 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:29:17.115690 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:29:17.115729 systemd-journald[188]: Collecting audit messages is disabled. 
Dec 13 01:29:17.115751 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:29:17.115763 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:29:17.115771 kernel: Bridge firewalling registered Dec 13 01:29:17.115780 systemd-journald[188]: Journal started Dec 13 01:29:17.115798 systemd-journald[188]: Runtime Journal (/run/log/journal/db7ff7e50c054f8d82c2f16d93144ece) is 4.8M, max 38.4M, 33.6M free. Dec 13 01:29:17.081245 systemd-modules-load[189]: Inserted module 'overlay' Dec 13 01:29:17.109259 systemd-modules-load[189]: Inserted module 'br_netfilter' Dec 13 01:29:17.155224 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:29:17.155674 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:29:17.156429 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:17.167430 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:29:17.169384 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:29:17.174404 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:29:17.175214 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:29:17.187505 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:29:17.196003 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:29:17.203581 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:29:17.205684 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:29:17.216487 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:29:17.221323 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:29:17.222034 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:29:17.228400 dracut-cmdline[221]: dracut-dracut-053 Dec 13 01:29:17.232381 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:29:17.264441 systemd-resolved[222]: Positive Trust Anchors: Dec 13 01:29:17.265158 systemd-resolved[222]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:29:17.265191 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:29:17.271510 systemd-resolved[222]: Defaulting to hostname 'linux'. Dec 13 01:29:17.272689 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:29:17.275723 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:29:17.320262 kernel: SCSI subsystem initialized Dec 13 01:29:17.330228 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:29:17.355239 kernel: iscsi: registered transport (tcp) Dec 13 01:29:17.389347 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:29:17.389492 kernel: QLogic iSCSI HBA Driver Dec 13 01:29:17.477758 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:29:17.483337 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:29:17.541220 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:29:17.541304 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:29:17.541329 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:29:17.607288 kernel: raid6: avx2x4 gen() 13886 MB/s Dec 13 01:29:17.625288 kernel: raid6: avx2x2 gen() 20258 MB/s Dec 13 01:29:17.642432 kernel: raid6: avx2x1 gen() 19680 MB/s Dec 13 01:29:17.642504 kernel: raid6: using algorithm avx2x2 gen() 20258 MB/s Dec 13 01:29:17.660534 kernel: raid6: .... xor() 19868 MB/s, rmw enabled Dec 13 01:29:17.660586 kernel: raid6: using avx2x2 recovery algorithm Dec 13 01:29:17.682265 kernel: xor: automatically using best checksumming function avx Dec 13 01:29:17.846264 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:29:17.866300 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:29:17.873471 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:29:17.889853 systemd-udevd[406]: Using default interface naming scheme 'v255'. Dec 13 01:29:17.894625 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:29:17.902488 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:29:17.933942 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation Dec 13 01:29:17.981155 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:29:17.993465 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:29:18.087805 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:29:18.102650 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:29:18.141595 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:29:18.146003 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Dec 13 01:29:18.148123 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:29:18.149133 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:29:18.157505 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:29:18.173465 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:29:18.192059 kernel: scsi host0: Virtio SCSI HBA Dec 13 01:29:18.200027 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Dec 13 01:29:18.233850 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:29:18.259097 kernel: ACPI: bus type USB registered Dec 13 01:29:18.272155 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:29:18.273291 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:29:18.275941 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:29:18.277246 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:29:18.278790 kernel: libata version 3.00 loaded. Dec 13 01:29:18.277441 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:18.282227 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:29:18.288272 kernel: usbcore: registered new interface driver usbfs Dec 13 01:29:18.288588 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:29:18.291269 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 01:29:18.353973 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 01:29:18.353992 kernel: usbcore: registered new interface driver hub Dec 13 01:29:18.354009 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 01:29:18.354166 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 01:29:18.354473 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 01:29:18.354490 kernel: AES CTR mode by8 optimization enabled Dec 13 01:29:18.354501 kernel: usbcore: registered new device driver usb Dec 13 01:29:18.354511 kernel: scsi host1: ahci Dec 13 01:29:18.354674 kernel: sd 0:0:0:0: Power-on or device reset occurred Dec 13 01:29:18.354843 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Dec 13 01:29:18.355012 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 01:29:18.355170 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Dec 13 01:29:18.355357 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 01:29:18.355527 kernel: scsi host2: ahci Dec 13 01:29:18.355671 kernel: scsi host3: ahci Dec 13 01:29:18.355814 kernel: scsi host4: ahci Dec 13 01:29:18.355954 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:29:18.355969 kernel: GPT:17805311 != 80003071 Dec 13 01:29:18.355979 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:29:18.355989 kernel: GPT:17805311 != 80003071 Dec 13 01:29:18.355999 kernel: GPT: Use GNU Parted to correct GPT errors. 
Dec 13 01:29:18.356009 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:29:18.356018 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 01:29:18.356177 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 13 01:29:18.360674 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Dec 13 01:29:18.360829 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 13 01:29:18.360970 kernel: scsi host5: ahci Dec 13 01:29:18.361117 kernel: scsi host6: ahci Dec 13 01:29:18.362353 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 46 Dec 13 01:29:18.362367 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 46 Dec 13 01:29:18.362378 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 46 Dec 13 01:29:18.362388 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 46 Dec 13 01:29:18.362398 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 46 Dec 13 01:29:18.362424 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 46 Dec 13 01:29:18.362435 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 13 01:29:18.362585 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Dec 13 01:29:18.362726 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Dec 13 01:29:18.362865 kernel: hub 1-0:1.0: USB hub found Dec 13 01:29:18.363025 kernel: hub 1-0:1.0: 4 ports detected Dec 13 01:29:18.364269 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Dec 13 01:29:18.364468 kernel: hub 2-0:1.0: USB hub found Dec 13 01:29:18.364635 kernel: hub 2-0:1.0: 4 ports detected Dec 13 01:29:18.419126 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Dec 13 01:29:18.435911 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (448) Dec 13 01:29:18.441354 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:18.442599 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (464) Dec 13 01:29:18.448563 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Dec 13 01:29:18.455142 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 13 01:29:18.460710 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Dec 13 01:29:18.467726 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Dec 13 01:29:18.474339 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:29:18.477321 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:29:18.484993 disk-uuid[551]: Primary Header is updated. Dec 13 01:29:18.484993 disk-uuid[551]: Secondary Entries is updated. Dec 13 01:29:18.484993 disk-uuid[551]: Secondary Header is updated. Dec 13 01:29:18.495864 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 01:29:18.499368 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:29:18.510308 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:29:18.602402 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 01:29:18.665138 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 01:29:18.665212 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 01:29:18.665224 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 01:29:18.665243 kernel: ata1.00: applying bridge limits Dec 13 01:29:18.665253 kernel: ata1.00: configured for UDMA/100 Dec 13 01:29:18.670229 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 01:29:18.670314 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 01:29:18.670326 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 01:29:18.671830 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 13 01:29:18.672250 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 01:29:18.708298 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 01:29:18.722395 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:29:18.722448 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Dec 13 01:29:18.754271 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 01:29:18.766475 kernel: usbcore: registered new interface driver usbhid Dec 13 01:29:18.766530 kernel: usbhid: USB HID core driver Dec 13 01:29:18.777261 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Dec 13 01:29:18.777340 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Dec 13 01:29:19.515670 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:29:19.516884 disk-uuid[555]: The operation has completed successfully. Dec 13 01:29:19.609512 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:29:19.609696 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:29:19.639341 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:29:19.645296 sh[593]: Success Dec 13 01:29:19.662244 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 01:29:19.730218 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:29:19.743291 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:29:19.745362 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 01:29:19.776221 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 01:29:19.776321 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:29:19.776347 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:29:19.779960 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:29:19.780015 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:29:19.795314 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 01:29:19.798011 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:29:19.800069 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:29:19.804388 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
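The verity-setup and device-mapper messages above assemble /dev/mapper/usr as a dm-verity device: each block read from the /usr partition is verified against a tree of SHA-256 digests whose root hash was supplied on the kernel command line. The sketch below only illustrates the tree construction; real dm-verity salts every digest and uses a specific on-disk hash-block layout, so it is not byte-compatible with veritysetup:

    import hashlib

    BLOCK = 4096  # dm-verity's default data and hash block size

    def toy_verity_root(data: bytes) -> str:
        """Unsalted Merkle-tree root over fixed-size blocks (illustration only)."""
        def hash_blocks(buf: bytes) -> list:
            return [hashlib.sha256(buf[i:i + BLOCK].ljust(BLOCK, b"\0")).digest()
                    for i in range(0, len(buf), BLOCK)] or [hashlib.sha256(b"").digest()]
        level = hash_blocks(data)
        while len(level) > 1:
            level = hash_blocks(b"".join(level))  # pack the digests and hash the next level
        return level[0].hex()

    print(toy_verity_root(b"immutable /usr contents" * 1000))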
Dec 13 01:29:19.808351 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:29:19.833279 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:29:19.833361 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:29:19.836621 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:29:19.843836 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:29:19.843918 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:29:19.860907 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:29:19.860560 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:29:19.870356 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:29:19.879569 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:29:19.969900 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:29:19.977464 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:29:19.982549 ignition[690]: Ignition 2.19.0 Dec 13 01:29:19.982565 ignition[690]: Stage: fetch-offline Dec 13 01:29:19.982623 ignition[690]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:19.982635 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 01:29:19.982726 ignition[690]: parsed url from cmdline: "" Dec 13 01:29:19.985649 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:29:19.982729 ignition[690]: no config URL provided Dec 13 01:29:19.982735 ignition[690]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:29:19.982743 ignition[690]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:29:19.982749 ignition[690]: failed to fetch config: resource requires networking Dec 13 01:29:19.982911 ignition[690]: Ignition finished successfully Dec 13 01:29:20.003894 systemd-networkd[779]: lo: Link UP Dec 13 01:29:20.003905 systemd-networkd[779]: lo: Gained carrier Dec 13 01:29:20.006684 systemd-networkd[779]: Enumeration completed Dec 13 01:29:20.006794 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:29:20.007226 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:20.007230 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:29:20.008274 systemd[1]: Reached target network.target - Network. Dec 13 01:29:20.008521 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:20.008525 systemd-networkd[779]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:29:20.009179 systemd-networkd[779]: eth0: Link UP Dec 13 01:29:20.009183 systemd-networkd[779]: eth0: Gained carrier Dec 13 01:29:20.009190 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 01:29:20.014167 systemd-networkd[779]: eth1: Link UP Dec 13 01:29:20.014170 systemd-networkd[779]: eth1: Gained carrier Dec 13 01:29:20.014180 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:20.018354 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 01:29:20.034321 ignition[783]: Ignition 2.19.0 Dec 13 01:29:20.034336 ignition[783]: Stage: fetch Dec 13 01:29:20.034607 ignition[783]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:20.034623 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 01:29:20.034739 ignition[783]: parsed url from cmdline: "" Dec 13 01:29:20.034744 ignition[783]: no config URL provided Dec 13 01:29:20.034751 ignition[783]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:29:20.034764 ignition[783]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:29:20.034792 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Dec 13 01:29:20.035007 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Dec 13 01:29:20.060325 systemd-networkd[779]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:29:20.072310 systemd-networkd[779]: eth0: DHCPv4 address 157.90.151.24/32, gateway 172.31.1.1 acquired from 172.31.1.1 Dec 13 01:29:20.235860 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Dec 13 01:29:20.241575 ignition[783]: GET result: OK Dec 13 01:29:20.241695 ignition[783]: parsing config with SHA512: 1e5e0f8712df94f3b029f30e4b472a4be4b63b3fc7d7fb98bfdf80cb461f71c30be86cfeca18bbedaab78144dc94b79caa2e901c267885e5c1b002fa71c9fd55 Dec 13 01:29:20.252257 unknown[783]: fetched base config from "system" Dec 13 01:29:20.252281 unknown[783]: fetched base config from "system" Dec 13 01:29:20.253018 ignition[783]: fetch: fetch complete Dec 13 01:29:20.252295 unknown[783]: fetched user config from "hetzner" Dec 13 01:29:20.253029 ignition[783]: fetch: fetch passed Dec 13 01:29:20.253108 ignition[783]: Ignition finished successfully Dec 13 01:29:20.259571 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 01:29:20.268648 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:29:20.308688 ignition[791]: Ignition 2.19.0 Dec 13 01:29:20.308709 ignition[791]: Stage: kargs Dec 13 01:29:20.309027 ignition[791]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:20.309050 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 01:29:20.311155 ignition[791]: kargs: kargs passed Dec 13 01:29:20.311272 ignition[791]: Ignition finished successfully Dec 13 01:29:20.315373 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:29:20.335658 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:29:20.360813 ignition[799]: Ignition 2.19.0 Dec 13 01:29:20.360835 ignition[799]: Stage: disks Dec 13 01:29:20.361157 ignition[799]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:20.361179 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 01:29:20.363274 ignition[799]: disks: disks passed Dec 13 01:29:20.363360 ignition[799]: Ignition finished successfully Dec 13 01:29:20.367243 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
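The fetch stage above fails on attempt #1 because it runs before DHCP has configured eth0/eth1, then succeeds on attempt #2 once the link-local metadata service is reachable. A minimal sketch of that retry loop using only the Python standard library; the URL is the one in the log, while the retry count and delay are illustrative (Ignition's actual backoff policy differs):

    import time
    import urllib.request
    from urllib.error import URLError

    USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"

    def fetch_userdata(retries: int = 10, delay: float = 5.0) -> bytes:
        """Keep retrying until the metadata service is reachable (i.e. DHCP has finished)."""
        for attempt in range(1, retries + 1):
            try:
                with urllib.request.urlopen(USERDATA_URL, timeout=10) as resp:
                    return resp.read()
            except (URLError, OSError) as err:
                print(f"GET {USERDATA_URL}: attempt #{attempt} failed: {err}")
                time.sleep(delay)
        raise RuntimeError("metadata service never became reachable")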
Dec 13 01:29:20.370181 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:29:20.371491 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:29:20.373737 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:29:20.375880 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:29:20.377806 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:29:20.387491 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:29:20.412876 systemd-fsck[808]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Dec 13 01:29:20.418584 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:29:20.426345 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:29:20.531227 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 01:29:20.531828 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:29:20.533176 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:29:20.548409 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:29:20.551323 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:29:20.553500 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 13 01:29:20.555017 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:29:20.556022 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:29:20.566235 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (816) Dec 13 01:29:20.570717 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:29:20.570772 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:29:20.570784 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:29:20.576260 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:29:20.582410 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:29:20.582505 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:29:20.585455 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:29:20.592039 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:29:20.649609 coreos-metadata[818]: Dec 13 01:29:20.649 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Dec 13 01:29:20.651037 coreos-metadata[818]: Dec 13 01:29:20.650 INFO Fetch successful Dec 13 01:29:20.654213 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:29:20.655114 coreos-metadata[818]: Dec 13 01:29:20.653 INFO wrote hostname ci-4081-2-1-b-0bf1a54360 to /sysroot/etc/hostname Dec 13 01:29:20.656848 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
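The hostname agent above fetches the instance name from the metadata endpoint shown in the log and writes it into the still-mounted /sysroot so the real root boots with it. A short sketch of the same steps, with the URL and paths taken from the log and error handling omitted:

    import urllib.request

    HOSTNAME_URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"

    def write_hostname(sysroot: str = "/sysroot") -> str:
        """Fetch the instance hostname and persist it for the real root filesystem."""
        with urllib.request.urlopen(HOSTNAME_URL, timeout=10) as resp:
            hostname = resp.read().decode().strip()
        with open(f"{sysroot}/etc/hostname", "w") as f:
            f.write(hostname + "\n")
        return hostname  # "ci-4081-2-1-b-0bf1a54360" in this boot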
Dec 13 01:29:20.664031 initrd-setup-root[852]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:29:20.670263 initrd-setup-root[859]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:29:20.677875 initrd-setup-root[866]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:29:20.836699 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:29:20.843321 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:29:20.846420 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:29:20.866306 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:29:20.867175 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:29:20.901015 ignition[934]: INFO : Ignition 2.19.0 Dec 13 01:29:20.903165 ignition[934]: INFO : Stage: mount Dec 13 01:29:20.903165 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:20.903165 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 01:29:20.902312 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:29:20.907631 ignition[934]: INFO : mount: mount passed Dec 13 01:29:20.907631 ignition[934]: INFO : Ignition finished successfully Dec 13 01:29:20.907573 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:29:20.915364 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:29:20.929448 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:29:20.949502 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (945) Dec 13 01:29:20.949562 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:29:20.954962 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:29:20.955034 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:29:20.964408 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:29:20.964517 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:29:20.970751 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:29:21.003372 ignition[962]: INFO : Ignition 2.19.0 Dec 13 01:29:21.004509 ignition[962]: INFO : Stage: files Dec 13 01:29:21.005129 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:21.005129 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 01:29:21.007289 ignition[962]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:29:21.008320 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:29:21.008320 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:29:21.011909 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:29:21.013030 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:29:21.013030 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:29:21.012630 unknown[962]: wrote ssh authorized keys file for user: core Dec 13 01:29:21.016272 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:29:21.016272 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:29:21.111321 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 01:29:21.261634 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:29:21.261634 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:29:21.266666 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:29:21.266666 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:29:21.266666 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:29:21.266666 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:29:21.266666 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:29:21.266666 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:29:21.266666 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:29:21.266666 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:29:21.266666 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:29:21.266666 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:29:21.266666 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:29:21.266666 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:29:21.266666 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 01:29:21.484603 systemd-networkd[779]: eth1: Gained IPv6LL Dec 13 01:29:21.804531 systemd-networkd[779]: eth0: Gained IPv6LL Dec 13 01:29:21.824534 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 01:29:22.111040 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:29:22.111040 ignition[962]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 01:29:22.114199 ignition[962]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:29:22.115123 ignition[962]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:29:22.115123 ignition[962]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 01:29:22.115123 ignition[962]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 13 01:29:22.115123 ignition[962]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 13 01:29:22.118331 ignition[962]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 13 01:29:22.118331 ignition[962]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 13 01:29:22.118331 ignition[962]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:29:22.118331 ignition[962]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:29:22.118331 ignition[962]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:29:22.118331 ignition[962]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:29:22.118331 ignition[962]: INFO : files: files passed Dec 13 01:29:22.118331 ignition[962]: INFO : Ignition finished successfully Dec 13 01:29:22.121110 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:29:22.128335 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:29:22.131342 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:29:22.150157 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:29:22.151006 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Dec 13 01:29:22.158503 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:29:22.158503 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:29:22.161033 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:29:22.162957 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:29:22.163804 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:29:22.169398 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:29:22.196606 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:29:22.196867 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:29:22.199538 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:29:22.202272 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:29:22.203647 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:29:22.208369 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:29:22.238007 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:29:22.250544 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:29:22.269074 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:29:22.270372 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:29:22.272509 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:29:22.274600 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:29:22.274790 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:29:22.277255 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:29:22.278589 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:29:22.280733 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:29:22.282677 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:29:22.284521 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:29:22.286705 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:29:22.288834 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:29:22.291213 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:29:22.293383 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:29:22.295546 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:29:22.297533 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:29:22.297735 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:29:22.300030 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:29:22.301309 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:29:22.303265 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:29:22.303977 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
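The Ignition files stage above wrote the helm tarball, the YAML manifests, the kubernetes sysext link and image, and the prepare-helm.service unit; those operations are driven by the user config fetched earlier. The real config is never printed in the log, so the following is only a rough sketch of the kind of Ignition v3 document that would produce such operations, expressed as a Python dict:

    import json

    # Illustrative reconstruction from the logged operations, not the actual user config.
    ignition_config = {
        "ignition": {"version": "3.3.0"},
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
                 "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"},
            ],
        },
        "systemd": {
            "units": [
                {"name": "prepare-helm.service", "enabled": True,
                 "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n[Service]\nType=oneshot\n..."},
            ],
        },
    }

    print(json.dumps(ignition_config, indent=2))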
Dec 13 01:29:22.305673 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:29:22.305925 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:29:22.308653 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:29:22.308915 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:29:22.311102 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:29:22.311459 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:29:22.313325 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 01:29:22.313673 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 01:29:22.324996 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:29:22.327109 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:29:22.327404 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:29:22.333503 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:29:22.335946 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:29:22.337901 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:29:22.339232 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:29:22.339494 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:29:22.349606 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:29:22.350491 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:29:22.358871 ignition[1015]: INFO : Ignition 2.19.0 Dec 13 01:29:22.358871 ignition[1015]: INFO : Stage: umount Dec 13 01:29:22.358871 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:22.358871 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 01:29:22.358871 ignition[1015]: INFO : umount: umount passed Dec 13 01:29:22.358871 ignition[1015]: INFO : Ignition finished successfully Dec 13 01:29:22.359475 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:29:22.359629 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:29:22.363033 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:29:22.363098 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:29:22.363877 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:29:22.363934 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:29:22.365012 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:29:22.365071 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 01:29:22.365734 systemd[1]: Stopped target network.target - Network. Dec 13 01:29:22.366278 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:29:22.366340 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:29:22.367422 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:29:22.367909 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:29:22.372398 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:29:22.375943 systemd[1]: Stopped target slices.target - Slice Units. 
Dec 13 01:29:22.376451 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:29:22.377409 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:29:22.377482 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:29:22.378004 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:29:22.378048 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:29:22.378599 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:29:22.378668 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:29:22.381534 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:29:22.381596 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:29:22.383377 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:29:22.385355 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:29:22.395664 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:29:22.401963 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:29:22.402078 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:29:22.405361 systemd-networkd[779]: eth0: DHCPv6 lease lost Dec 13 01:29:22.409106 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:29:22.409251 systemd-networkd[779]: eth1: DHCPv6 lease lost Dec 13 01:29:22.410746 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:29:22.417459 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:29:22.417611 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:29:22.423255 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:29:22.423321 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:29:22.431280 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:29:22.432316 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:29:22.432959 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:29:22.434150 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:29:22.434207 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:29:22.435263 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:29:22.435309 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:29:22.436575 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:29:22.437797 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:29:22.438782 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:29:22.444977 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:29:22.445067 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:29:22.450655 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:29:22.450848 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:29:22.460260 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:29:22.460583 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:29:22.462039 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Dec 13 01:29:22.462112 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:29:22.463168 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:29:22.463255 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:29:22.464734 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:29:22.464833 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:29:22.466730 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:29:22.466815 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:29:22.468218 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:29:22.468301 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:29:22.481831 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:29:22.482672 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:29:22.482751 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:29:22.483602 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:29:22.483669 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:22.494098 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:29:22.494291 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:29:22.496291 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:29:22.501372 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:29:22.511477 systemd[1]: Switching root. Dec 13 01:29:22.547551 systemd-journald[188]: Journal stopped Dec 13 01:29:23.828009 systemd-journald[188]: Received SIGTERM from PID 1 (systemd). Dec 13 01:29:23.828082 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:29:23.828096 kernel: SELinux: policy capability open_perms=1 Dec 13 01:29:23.828107 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:29:23.828126 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:29:23.828141 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:29:23.828152 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:29:23.828163 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:29:23.828174 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:29:23.829743 kernel: audit: type=1403 audit(1734053362.759:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:29:23.829766 systemd[1]: Successfully loaded SELinux policy in 66.298ms. Dec 13 01:29:23.829785 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.875ms. Dec 13 01:29:23.829822 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:29:23.829835 systemd[1]: Detected virtualization kvm. Dec 13 01:29:23.829847 systemd[1]: Detected architecture x86-64. Dec 13 01:29:23.829858 systemd[1]: Detected first boot. Dec 13 01:29:23.829870 systemd[1]: Hostname set to . Dec 13 01:29:23.829885 systemd[1]: Initializing machine ID from VM UUID. 
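"Initializing machine ID from VM UUID" refers to systemd seeding /etc/machine-id from the hypervisor-provided DMI product UUID on first boot. A sketch of that derivation, assuming the usual sysfs location; the exact conditions under which systemd trusts the firmware UUID are more involved than shown here:

    def machine_id_from_vm_uuid(path: str = "/sys/class/dmi/id/product_uuid") -> str:
        """Turn the VM's DMI product UUID into a machine-id-style token
        (32 lowercase hex characters, no dashes)."""
        with open(path) as f:
            uuid = f.read().strip()
        return uuid.replace("-", "").lower()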
Dec 13 01:29:23.829897 zram_generator::config[1057]: No configuration found. Dec 13 01:29:23.829910 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:29:23.829922 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:29:23.829938 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:29:23.829950 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:29:23.829962 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:29:23.829974 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:29:23.829988 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:29:23.830000 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:29:23.830016 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:29:23.830028 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:29:23.830039 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:29:23.830051 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:29:23.830063 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:29:23.830095 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:29:23.830108 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:29:23.830123 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:29:23.830134 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:29:23.830147 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:29:23.830159 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 01:29:23.830171 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:29:23.830182 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:29:23.830215 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:29:23.830230 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:29:23.830242 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:29:23.830254 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:29:23.830266 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:29:23.830277 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:29:23.830289 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:29:23.830301 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:29:23.830313 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:29:23.830327 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:29:23.830339 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:29:23.830350 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:29:23.830362 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Dec 13 01:29:23.830373 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:29:23.830385 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:29:23.830397 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:29:23.830409 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:23.830420 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:29:23.830446 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:29:23.830460 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:29:23.830472 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:29:23.830484 systemd[1]: Reached target machines.target - Containers. Dec 13 01:29:23.830497 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:29:23.830514 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:29:23.830527 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:29:23.830540 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:29:23.830553 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:29:23.830565 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:29:23.830576 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:29:23.830588 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:29:23.830600 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:29:23.830612 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:29:23.830626 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:29:23.830638 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:29:23.830650 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:29:23.830662 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:29:23.830673 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:29:23.830685 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:29:23.830697 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:29:23.830709 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:29:23.830721 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:29:23.830735 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:29:23.830746 systemd[1]: Stopped verity-setup.service. Dec 13 01:29:23.830759 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:23.830771 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:29:23.830783 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Dec 13 01:29:23.830798 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:29:23.830810 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:29:23.830824 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:29:23.830835 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:29:23.830847 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:29:23.830883 systemd-journald[1126]: Collecting audit messages is disabled. Dec 13 01:29:23.830905 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:29:23.830920 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:29:23.830931 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:29:23.830943 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:29:23.830955 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:29:23.830967 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:29:23.830983 systemd-journald[1126]: Journal started Dec 13 01:29:23.831004 systemd-journald[1126]: Runtime Journal (/run/log/journal/db7ff7e50c054f8d82c2f16d93144ece) is 4.8M, max 38.4M, 33.6M free. Dec 13 01:29:23.460458 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:29:23.486077 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 13 01:29:23.487321 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:29:23.836234 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:29:23.836364 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:29:23.837813 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:29:23.838890 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:29:23.854212 kernel: loop: module loaded Dec 13 01:29:23.857281 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:29:23.857483 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:29:23.861262 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:29:23.869212 kernel: fuse: init (API version 7.39) Dec 13 01:29:23.870265 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:29:23.870983 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:29:23.871028 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:29:23.873250 kernel: ACPI: bus type drm_connector registered Dec 13 01:29:23.877380 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:29:23.881405 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:29:23.890740 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:29:23.891374 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:29:23.897301 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:29:23.900374 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Dec 13 01:29:23.901330 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:29:23.903345 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:29:23.903917 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:29:23.906364 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:29:23.909330 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:29:23.913090 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:29:23.914714 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:29:23.914947 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:29:23.918540 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:29:23.918695 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:29:23.919427 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:29:23.920142 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:29:23.948480 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:29:23.958351 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:29:23.962826 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:29:23.969706 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:29:23.973094 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:29:23.980538 kernel: loop0: detected capacity change from 0 to 8 Dec 13 01:29:23.980720 systemd-journald[1126]: Time spent on flushing to /var/log/journal/db7ff7e50c054f8d82c2f16d93144ece is 40.118ms for 1136 entries. Dec 13 01:29:23.980720 systemd-journald[1126]: System Journal (/var/log/journal/db7ff7e50c054f8d82c2f16d93144ece) is 8.0M, max 584.8M, 576.8M free. Dec 13 01:29:24.043795 systemd-journald[1126]: Received client request to flush runtime journal. Dec 13 01:29:24.043833 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:29:24.043850 kernel: loop1: detected capacity change from 0 to 142488 Dec 13 01:29:23.982375 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:29:24.045274 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:29:24.057271 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:29:24.059658 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:29:24.063409 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:29:24.066110 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:29:24.069703 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:29:24.081366 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:29:24.085357 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:29:24.107513 udevadm[1196]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 01:29:24.120226 kernel: loop2: detected capacity change from 0 to 140768 Dec 13 01:29:24.122229 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Dec 13 01:29:24.122249 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Dec 13 01:29:24.128555 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:29:24.172227 kernel: loop3: detected capacity change from 0 to 210664 Dec 13 01:29:24.218242 kernel: loop4: detected capacity change from 0 to 8 Dec 13 01:29:24.224399 kernel: loop5: detected capacity change from 0 to 142488 Dec 13 01:29:24.247221 kernel: loop6: detected capacity change from 0 to 140768 Dec 13 01:29:24.275246 kernel: loop7: detected capacity change from 0 to 210664 Dec 13 01:29:24.303901 (sd-merge)[1202]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Dec 13 01:29:24.305156 (sd-merge)[1202]: Merged extensions into '/usr'. Dec 13 01:29:24.311612 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:29:24.311705 systemd[1]: Reloading... Dec 13 01:29:24.418238 zram_generator::config[1227]: No configuration found. Dec 13 01:29:24.543214 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:29:24.557431 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:29:24.611275 systemd[1]: Reloading finished in 298 ms. Dec 13 01:29:24.640297 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:29:24.643743 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:29:24.651423 systemd[1]: Starting ensure-sysext.service... Dec 13 01:29:24.655159 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:29:24.664044 systemd[1]: Reloading requested from client PID 1271 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:29:24.664058 systemd[1]: Reloading... Dec 13 01:29:24.703606 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:29:24.705815 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:29:24.706966 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:29:24.708937 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Dec 13 01:29:24.709366 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Dec 13 01:29:24.716800 systemd-tmpfiles[1272]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:29:24.716968 systemd-tmpfiles[1272]: Skipping /boot Dec 13 01:29:24.737394 systemd-tmpfiles[1272]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:29:24.738357 systemd-tmpfiles[1272]: Skipping /boot Dec 13 01:29:24.766076 zram_generator::config[1301]: No configuration found. Dec 13 01:29:24.896777 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
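The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-hetzner extension images onto /usr. Before merging, each image's extension-release file must be compatible with the host's os-release. A rough sketch of that check against an unpacked extension tree; the real logic also compares SYSEXT_LEVEL/VERSION_ID and the architecture:

    def parse_release(path: str) -> dict:
        """Parse a shell-style KEY=value file such as os-release or extension-release."""
        fields = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, value = line.split("=", 1)
                    fields[key] = value.strip('"')
        return fields

    def extension_matches_host(ext_root: str, name: str) -> bool:
        host = parse_release("/etc/os-release")
        ext = parse_release(f"{ext_root}/usr/lib/extension-release.d/extension-release.{name}")
        # ID=_any marks an extension as distribution-independent.
        return ext.get("ID") in ("_any", host.get("ID"))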
Dec 13 01:29:24.950559 systemd[1]: Reloading finished in 286 ms. Dec 13 01:29:24.971038 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:29:24.972033 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:29:24.990318 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:24.995420 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:29:25.006688 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:29:25.008130 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:29:25.013316 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:29:25.015410 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:29:25.025998 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:29:25.028362 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:29:25.031711 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:29:25.036876 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:29:25.040985 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:29:25.051558 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:29:25.052646 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:25.057090 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:25.057280 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:29:25.057432 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:29:25.057525 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:25.058878 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:29:25.061388 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:29:25.067142 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:29:25.067555 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:29:25.072171 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:29:25.073237 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:29:25.086509 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:25.086772 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:29:25.093585 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:29:25.104304 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Dec 13 01:29:25.110692 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:29:25.121556 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:29:25.123395 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:29:25.129717 augenrules[1374]: No rules Dec 13 01:29:25.130892 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:29:25.131577 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:25.134100 systemd[1]: Finished ensure-sysext.service. Dec 13 01:29:25.135003 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:29:25.136734 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:29:25.137873 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:29:25.138037 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:29:25.139838 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:29:25.140004 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:29:25.145724 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:29:25.147760 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:29:25.147945 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:29:25.149565 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:29:25.149747 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:29:25.157160 systemd-udevd[1357]: Using default interface naming scheme 'v255'. Dec 13 01:29:25.166460 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:29:25.166555 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:29:25.178398 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 01:29:25.182385 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:29:25.184428 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:29:25.188410 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:29:25.201603 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:29:25.209407 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:29:25.209994 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:29:25.212274 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:29:25.309089 systemd-networkd[1393]: lo: Link UP Dec 13 01:29:25.309245 systemd-networkd[1393]: lo: Gained carrier Dec 13 01:29:25.310032 systemd-networkd[1393]: Enumeration completed Dec 13 01:29:25.310146 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Dec 13 01:29:25.322475 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:29:25.354374 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 01:29:25.366227 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1409) Dec 13 01:29:25.372224 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1409) Dec 13 01:29:25.376603 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 01:29:25.377465 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:29:25.394621 systemd-resolved[1355]: Positive Trust Anchors: Dec 13 01:29:25.394634 systemd-resolved[1355]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:29:25.394665 systemd-resolved[1355]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:29:25.400680 systemd-resolved[1355]: Using system hostname 'ci-4081-2-1-b-0bf1a54360'. Dec 13 01:29:25.402835 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:29:25.403481 systemd[1]: Reached target network.target - Network. Dec 13 01:29:25.404011 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:29:25.436857 systemd-networkd[1393]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:25.436867 systemd-networkd[1393]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:29:25.439224 systemd-networkd[1393]: eth1: Link UP Dec 13 01:29:25.439296 systemd-networkd[1393]: eth1: Gained carrier Dec 13 01:29:25.439377 systemd-networkd[1393]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:25.455405 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:25.455855 systemd-networkd[1393]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:29:25.456282 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 01:29:25.457968 systemd-networkd[1393]: eth0: Link UP Dec 13 01:29:25.458029 systemd-networkd[1393]: eth0: Gained carrier Dec 13 01:29:25.458083 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:25.466232 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:29:25.468218 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:29:25.493818 systemd-networkd[1393]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:29:25.496421 systemd-timesyncd[1388]: Network configuration changed, trying to establish connection. 
Dec 13 01:29:25.497399 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Dec 13 01:29:25.497510 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:25.497615 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:29:25.502410 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:29:25.505350 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:29:25.509353 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:29:25.509925 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:29:25.509962 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:29:25.509975 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:29:25.513492 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:29:25.514146 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:29:25.514951 systemd-networkd[1393]: eth0: DHCPv4 address 157.90.151.24/32, gateway 172.31.1.1 acquired from 172.31.1.1 Dec 13 01:29:25.516579 systemd-timesyncd[1388]: Network configuration changed, trying to establish connection. Dec 13 01:29:25.535684 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1410) Dec 13 01:29:25.534767 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:29:25.534940 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:29:25.535613 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:29:25.536872 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:29:25.537345 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:29:25.539932 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:29:25.557242 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 01:29:25.565320 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 01:29:25.568107 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 01:29:25.568305 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 01:29:25.611224 kernel: EDAC MC: Ver: 3.0.0 Dec 13 01:29:25.619603 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:29:25.625750 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 13 01:29:25.633411 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Dec 13 01:29:25.639176 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Dec 13 01:29:25.639257 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Dec 13 01:29:25.644931 kernel: Console: switching to colour dummy device 80x25 Dec 13 01:29:25.648893 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 13 01:29:25.648954 kernel: [drm] features: -context_init Dec 13 01:29:25.648971 kernel: [drm] number of scanouts: 1 Dec 13 01:29:25.648989 kernel: [drm] number of cap sets: 0 Dec 13 01:29:25.651240 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:29:25.651591 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:25.655218 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Dec 13 01:29:25.659466 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:29:25.663277 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Dec 13 01:29:25.663326 kernel: Console: switching to colour frame buffer device 160x50 Dec 13 01:29:25.662787 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:29:25.681241 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 13 01:29:25.685078 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:29:25.685401 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:25.693714 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:29:25.770931 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:29:25.771913 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:25.779473 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:29:25.792786 lvm[1462]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:29:25.827174 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:29:25.828525 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:29:25.828666 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:29:25.828903 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:29:25.829071 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:29:25.829490 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:29:25.829757 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:29:25.829874 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:29:25.830018 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:29:25.830075 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:29:25.830180 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:29:25.834828 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:29:25.838516 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:29:25.847658 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Dec 13 01:29:25.851696 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:29:25.854743 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:29:25.856722 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:29:25.856891 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:29:25.857092 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:29:25.857155 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:29:25.865392 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:29:25.874471 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 01:29:25.877756 lvm[1466]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:29:25.883454 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:29:25.896314 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:29:25.906428 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:29:25.907153 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:29:25.916413 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:29:25.933271 jq[1470]: false Dec 13 01:29:25.931945 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:29:25.939541 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Dec 13 01:29:25.951390 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:29:25.957329 extend-filesystems[1472]: Found loop4 Dec 13 01:29:25.957329 extend-filesystems[1472]: Found loop5 Dec 13 01:29:25.957329 extend-filesystems[1472]: Found loop6 Dec 13 01:29:25.957329 extend-filesystems[1472]: Found loop7 Dec 13 01:29:25.957329 extend-filesystems[1472]: Found sda Dec 13 01:29:25.957329 extend-filesystems[1472]: Found sda1 Dec 13 01:29:25.957329 extend-filesystems[1472]: Found sda2 Dec 13 01:29:25.957329 extend-filesystems[1472]: Found sda3 Dec 13 01:29:25.957329 extend-filesystems[1472]: Found usr Dec 13 01:29:25.957329 extend-filesystems[1472]: Found sda4 Dec 13 01:29:25.957329 extend-filesystems[1472]: Found sda6 Dec 13 01:29:25.957329 extend-filesystems[1472]: Found sda7 Dec 13 01:29:25.957329 extend-filesystems[1472]: Found sda9 Dec 13 01:29:25.957329 extend-filesystems[1472]: Checking size of /dev/sda9 Dec 13 01:29:26.011921 extend-filesystems[1472]: Resized partition /dev/sda9 Dec 13 01:29:26.014377 coreos-metadata[1468]: Dec 13 01:29:25.988 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Dec 13 01:29:26.014377 coreos-metadata[1468]: Dec 13 01:29:25.992 INFO Fetch successful Dec 13 01:29:26.014377 coreos-metadata[1468]: Dec 13 01:29:26.000 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Dec 13 01:29:26.014377 coreos-metadata[1468]: Dec 13 01:29:26.009 INFO Fetch successful Dec 13 01:29:25.960509 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Dec 13 01:29:26.014766 extend-filesystems[1496]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:29:26.043227 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Dec 13 01:29:25.982403 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:29:26.028736 dbus-daemon[1469]: [system] SELinux support is enabled Dec 13 01:29:25.988590 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:29:25.989116 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:29:25.995504 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:29:26.011336 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:29:26.015251 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:29:26.036742 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:29:26.055625 jq[1494]: true Dec 13 01:29:26.057427 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:29:26.057654 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:29:26.057996 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:29:26.058290 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:29:26.075642 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:29:26.075854 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:29:26.091986 update_engine[1490]: I20241213 01:29:26.091689 1490 main.cc:92] Flatcar Update Engine starting Dec 13 01:29:26.107573 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:29:26.107613 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:29:26.110083 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:29:26.110105 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:29:26.121066 systemd-logind[1483]: New seat seat0. Dec 13 01:29:26.125681 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:29:26.126623 update_engine[1490]: I20241213 01:29:26.124377 1490 update_check_scheduler.cc:74] Next update check in 5m40s Dec 13 01:29:26.137242 jq[1503]: true Dec 13 01:29:26.137975 systemd-logind[1483]: Watching system buttons on /dev/input/event2 (Power Button) Dec 13 01:29:26.137995 systemd-logind[1483]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:29:26.143864 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:29:26.144664 systemd[1]: Started systemd-logind.service - User Login Management. 
Dec 13 01:29:26.153633 (ntainerd)[1508]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:29:26.159927 tar[1502]: linux-amd64/helm Dec 13 01:29:26.176085 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1404) Dec 13 01:29:26.190422 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Dec 13 01:29:26.230595 extend-filesystems[1496]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 01:29:26.230595 extend-filesystems[1496]: old_desc_blocks = 1, new_desc_blocks = 5 Dec 13 01:29:26.230595 extend-filesystems[1496]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Dec 13 01:29:26.234104 extend-filesystems[1472]: Resized filesystem in /dev/sda9 Dec 13 01:29:26.234104 extend-filesystems[1472]: Found sr0 Dec 13 01:29:26.236573 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:29:26.236796 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:29:26.270934 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 01:29:26.279887 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:29:26.330561 bash[1541]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:29:26.332624 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:29:26.346506 systemd[1]: Starting sshkeys.service... Dec 13 01:29:26.386159 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 01:29:26.399478 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 01:29:26.434288 coreos-metadata[1551]: Dec 13 01:29:26.434 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Dec 13 01:29:26.436215 coreos-metadata[1551]: Dec 13 01:29:26.435 INFO Fetch successful Dec 13 01:29:26.438369 unknown[1551]: wrote ssh authorized keys file for user: core Dec 13 01:29:26.461379 locksmithd[1517]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:29:26.475989 update-ssh-keys[1556]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:29:26.476452 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 01:29:26.482227 systemd[1]: Finished sshkeys.service. Dec 13 01:29:26.520416 containerd[1508]: time="2024-12-13T01:29:26.518591148Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:29:26.547152 sshd_keygen[1497]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:29:26.566290 containerd[1508]: time="2024-12-13T01:29:26.566235800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:26.568215 containerd[1508]: time="2024-12-13T01:29:26.568180037Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:26.568289 containerd[1508]: time="2024-12-13T01:29:26.568276098Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Dec 13 01:29:26.568341 containerd[1508]: time="2024-12-13T01:29:26.568330309Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:29:26.568579 containerd[1508]: time="2024-12-13T01:29:26.568562936Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:29:26.568645 containerd[1508]: time="2024-12-13T01:29:26.568632295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:26.568759 containerd[1508]: time="2024-12-13T01:29:26.568742873Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:26.568805 containerd[1508]: time="2024-12-13T01:29:26.568793698Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:26.569036 containerd[1508]: time="2024-12-13T01:29:26.569018390Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:26.569084 containerd[1508]: time="2024-12-13T01:29:26.569073763Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:26.569140 containerd[1508]: time="2024-12-13T01:29:26.569127083Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:26.569181 containerd[1508]: time="2024-12-13T01:29:26.569170725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:26.569323 containerd[1508]: time="2024-12-13T01:29:26.569309015Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:26.569671 containerd[1508]: time="2024-12-13T01:29:26.569654833Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:29:26.569832 containerd[1508]: time="2024-12-13T01:29:26.569816186Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:29:26.569878 containerd[1508]: time="2024-12-13T01:29:26.569867202Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:29:26.570006 containerd[1508]: time="2024-12-13T01:29:26.569992236Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:29:26.570098 containerd[1508]: time="2024-12-13T01:29:26.570086233Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:29:26.577019 containerd[1508]: time="2024-12-13T01:29:26.576646978Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:29:26.577019 containerd[1508]: time="2024-12-13T01:29:26.576699626Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Dec 13 01:29:26.577019 containerd[1508]: time="2024-12-13T01:29:26.576717861Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:29:26.577019 containerd[1508]: time="2024-12-13T01:29:26.576732007Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:29:26.577019 containerd[1508]: time="2024-12-13T01:29:26.576778364Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:29:26.577019 containerd[1508]: time="2024-12-13T01:29:26.576904401Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:29:26.578821 containerd[1508]: time="2024-12-13T01:29:26.577346530Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:29:26.578821 containerd[1508]: time="2024-12-13T01:29:26.578332479Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:29:26.578821 containerd[1508]: time="2024-12-13T01:29:26.578351154Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:29:26.578821 containerd[1508]: time="2024-12-13T01:29:26.578362796Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:29:26.578821 containerd[1508]: time="2024-12-13T01:29:26.578381932Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:29:26.578821 containerd[1508]: time="2024-12-13T01:29:26.578394776Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:29:26.578821 containerd[1508]: time="2024-12-13T01:29:26.578406047Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:29:26.578821 containerd[1508]: time="2024-12-13T01:29:26.578418510Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:29:26.578821 containerd[1508]: time="2024-12-13T01:29:26.578431655Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:29:26.578821 containerd[1508]: time="2024-12-13T01:29:26.578454969Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:29:26.578821 containerd[1508]: time="2024-12-13T01:29:26.578467773Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:29:26.578821 containerd[1508]: time="2024-12-13T01:29:26.578478783Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:29:26.578821 containerd[1508]: time="2024-12-13T01:29:26.578496928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:29:26.578821 containerd[1508]: time="2024-12-13T01:29:26.578511645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:29:26.579074 containerd[1508]: time="2024-12-13T01:29:26.578523778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Dec 13 01:29:26.579074 containerd[1508]: time="2024-12-13T01:29:26.578536351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:29:26.579074 containerd[1508]: time="2024-12-13T01:29:26.578548173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:29:26.579074 containerd[1508]: time="2024-12-13T01:29:26.578561298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:29:26.583034 containerd[1508]: time="2024-12-13T01:29:26.578572049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:29:26.583276 containerd[1508]: time="2024-12-13T01:29:26.583126391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:29:26.584143 containerd[1508]: time="2024-12-13T01:29:26.584120866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:29:26.584186 containerd[1508]: time="2024-12-13T01:29:26.584149159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:29:26.584186 containerd[1508]: time="2024-12-13T01:29:26.584161873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:29:26.584186 containerd[1508]: time="2024-12-13T01:29:26.584173825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:29:26.584259 containerd[1508]: time="2024-12-13T01:29:26.584186259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:29:26.584584 containerd[1508]: time="2024-12-13T01:29:26.584564348Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:29:26.584612 containerd[1508]: time="2024-12-13T01:29:26.584596047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:29:26.584638 containerd[1508]: time="2024-12-13T01:29:26.584610865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:29:26.584638 containerd[1508]: time="2024-12-13T01:29:26.584621024Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:29:26.584677 containerd[1508]: time="2024-12-13T01:29:26.584663173Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:29:26.584696 containerd[1508]: time="2024-12-13T01:29:26.584679053Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:29:26.584696 containerd[1508]: time="2024-12-13T01:29:26.584689652Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:29:26.584736 containerd[1508]: time="2024-12-13T01:29:26.584700924Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:29:26.584736 containerd[1508]: time="2024-12-13T01:29:26.584709981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Dec 13 01:29:26.584736 containerd[1508]: time="2024-12-13T01:29:26.584721462Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:29:26.584736 containerd[1508]: time="2024-12-13T01:29:26.584730800Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:29:26.584809 containerd[1508]: time="2024-12-13T01:29:26.584739897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:29:26.585030 containerd[1508]: time="2024-12-13T01:29:26.584980738Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:29:26.585151 containerd[1508]: time="2024-12-13T01:29:26.585034149Z" level=info msg="Connect containerd service" Dec 13 01:29:26.585151 containerd[1508]: time="2024-12-13T01:29:26.585061661Z" level=info msg="using legacy CRI server" Dec 13 01:29:26.585151 containerd[1508]: time="2024-12-13T01:29:26.585067521Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:29:26.585151 containerd[1508]: 
time="2024-12-13T01:29:26.585149225Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:29:26.585833 containerd[1508]: time="2024-12-13T01:29:26.585767524Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:29:26.587498 containerd[1508]: time="2024-12-13T01:29:26.585982417Z" level=info msg="Start subscribing containerd event" Dec 13 01:29:26.587498 containerd[1508]: time="2024-12-13T01:29:26.586026650Z" level=info msg="Start recovering state" Dec 13 01:29:26.587498 containerd[1508]: time="2024-12-13T01:29:26.586065834Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:29:26.587498 containerd[1508]: time="2024-12-13T01:29:26.586082645Z" level=info msg="Start event monitor" Dec 13 01:29:26.587498 containerd[1508]: time="2024-12-13T01:29:26.586101581Z" level=info msg="Start snapshots syncer" Dec 13 01:29:26.587498 containerd[1508]: time="2024-12-13T01:29:26.586110077Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:29:26.587498 containerd[1508]: time="2024-12-13T01:29:26.586117120Z" level=info msg="Start streaming server" Dec 13 01:29:26.587498 containerd[1508]: time="2024-12-13T01:29:26.586111950Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:29:26.586345 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:29:26.592217 containerd[1508]: time="2024-12-13T01:29:26.589736449Z" level=info msg="containerd successfully booted in 0.075237s" Dec 13 01:29:26.601613 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:29:26.613772 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:29:26.632650 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:29:26.632900 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:29:26.642166 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:29:26.653957 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:29:26.668671 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:29:26.677714 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:29:26.680970 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:29:26.733389 systemd-networkd[1393]: eth0: Gained IPv6LL Dec 13 01:29:26.734707 systemd-timesyncd[1388]: Network configuration changed, trying to establish connection. Dec 13 01:29:26.742235 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:29:26.745753 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:29:26.753483 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:26.761483 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:29:26.798023 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:29:26.842155 tar[1502]: linux-amd64/LICENSE Dec 13 01:29:26.843697 tar[1502]: linux-amd64/README.md Dec 13 01:29:26.854104 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Dec 13 01:29:27.246138 systemd-networkd[1393]: eth1: Gained IPv6LL Dec 13 01:29:27.247341 systemd-timesyncd[1388]: Network configuration changed, trying to establish connection. Dec 13 01:29:27.839443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:27.842980 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:29:27.847402 systemd[1]: Startup finished in 1.540s (kernel) + 5.932s (initrd) + 5.153s (userspace) = 12.627s. Dec 13 01:29:27.851340 (kubelet)[1600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:29:28.901593 kubelet[1600]: E1213 01:29:28.901418 1600 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:29:28.910922 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:29:28.911407 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:29:28.912127 systemd[1]: kubelet.service: Consumed 1.548s CPU time. Dec 13 01:29:39.040243 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:29:39.047859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:39.194478 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:39.196030 (kubelet)[1620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:29:39.241085 kubelet[1620]: E1213 01:29:39.241008 1620 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:29:39.254946 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:29:39.255275 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:29:49.290187 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:29:49.296563 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:49.460459 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:49.462815 (kubelet)[1636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:29:49.506535 kubelet[1636]: E1213 01:29:49.506456 1636 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:29:49.511169 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:29:49.511443 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:29:58.074866 systemd-timesyncd[1388]: Contacted time server 45.132.246.84:123 (2.flatcar.pool.ntp.org). 
Dec 13 01:29:58.074966 systemd-timesyncd[1388]: Initial clock synchronization to Fri 2024-12-13 01:29:58.074589 UTC. Dec 13 01:29:58.075200 systemd-resolved[1355]: Clock change detected. Flushing caches. Dec 13 01:30:00.318117 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 01:30:00.330388 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:30:00.553404 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:30:00.554538 (kubelet)[1652]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:30:00.617556 kubelet[1652]: E1213 01:30:00.617378 1652 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:30:00.622971 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:30:00.623384 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:30:10.818102 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 01:30:10.829607 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:30:11.033154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:30:11.037381 (kubelet)[1668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:30:11.079948 kubelet[1668]: E1213 01:30:11.079816 1668 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:30:11.084447 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:30:11.084711 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:30:12.013829 update_engine[1490]: I20241213 01:30:12.013682 1490 update_attempter.cc:509] Updating boot flags... Dec 13 01:30:12.092115 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1685) Dec 13 01:30:12.163207 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1686) Dec 13 01:30:21.317896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 01:30:21.326453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:30:21.567615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:30:21.572467 (kubelet)[1702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:30:21.618228 kubelet[1702]: E1213 01:30:21.618048 1702 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:30:21.622108 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:30:21.622523 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:30:31.818125 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Dec 13 01:30:31.826783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:30:32.011310 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:30:32.011377 (kubelet)[1718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:30:32.049503 kubelet[1718]: E1213 01:30:32.049421 1718 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:30:32.053349 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:30:32.053818 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:30:42.068162 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Dec 13 01:30:42.078441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:30:42.281478 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:30:42.285889 (kubelet)[1735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:30:42.363247 kubelet[1735]: E1213 01:30:42.363036 1735 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:30:42.370668 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:30:42.370956 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:30:52.568135 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Dec 13 01:30:52.575353 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:30:52.802375 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:30:52.806475 (kubelet)[1751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:30:52.846168 kubelet[1751]: E1213 01:30:52.845972 1751 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:30:52.854150 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:30:52.854395 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:31:03.067948 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Dec 13 01:31:03.075355 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:03.284296 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:03.285631 (kubelet)[1768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:31:03.328514 kubelet[1768]: E1213 01:31:03.328341 1768 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:31:03.336231 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:31:03.336471 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:31:13.568115 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Dec 13 01:31:13.578447 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:13.804366 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:13.806855 (kubelet)[1784]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:31:13.878171 kubelet[1784]: E1213 01:31:13.877945 1784 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:31:13.882812 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:31:13.883285 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:31:24.068004 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Dec 13 01:31:24.074364 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:24.269645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:31:24.273700 (kubelet)[1800]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:31:24.311276 kubelet[1800]: E1213 01:31:24.311198 1800 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:31:24.319172 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:31:24.319369 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:31:26.190718 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:31:26.196571 systemd[1]: Started sshd@0-157.90.151.24:22-147.75.109.163:45642.service - OpenSSH per-connection server daemon (147.75.109.163:45642). Dec 13 01:31:27.224677 sshd[1809]: Accepted publickey for core from 147.75.109.163 port 45642 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:31:27.229299 sshd[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:27.245608 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:31:27.254462 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:31:27.260769 systemd-logind[1483]: New session 1 of user core. Dec 13 01:31:27.287152 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:31:27.296551 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:31:27.321543 (systemd)[1813]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:31:27.495404 systemd[1813]: Queued start job for default target default.target. Dec 13 01:31:27.503096 systemd[1813]: Created slice app.slice - User Application Slice. Dec 13 01:31:27.503160 systemd[1813]: Reached target paths.target - Paths. Dec 13 01:31:27.503175 systemd[1813]: Reached target timers.target - Timers. Dec 13 01:31:27.505393 systemd[1813]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:31:27.527007 systemd[1813]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:31:27.527267 systemd[1813]: Reached target sockets.target - Sockets. Dec 13 01:31:27.527291 systemd[1813]: Reached target basic.target - Basic System. Dec 13 01:31:27.527352 systemd[1813]: Reached target default.target - Main User Target. Dec 13 01:31:27.527402 systemd[1813]: Startup finished in 191ms. Dec 13 01:31:27.527895 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:31:27.536422 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:31:28.243502 systemd[1]: Started sshd@1-157.90.151.24:22-147.75.109.163:37304.service - OpenSSH per-connection server daemon (147.75.109.163:37304). Dec 13 01:31:29.257966 sshd[1824]: Accepted publickey for core from 147.75.109.163 port 37304 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:31:29.259921 sshd[1824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:29.266036 systemd-logind[1483]: New session 2 of user core. Dec 13 01:31:29.275366 systemd[1]: Started session-2.scope - Session 2 of User core. 
Dec 13 01:31:29.948583 sshd[1824]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:29.954825 systemd[1]: sshd@1-157.90.151.24:22-147.75.109.163:37304.service: Deactivated successfully. Dec 13 01:31:29.959146 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:31:29.963570 systemd-logind[1483]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:31:29.966234 systemd-logind[1483]: Removed session 2. Dec 13 01:31:30.127938 systemd[1]: Started sshd@2-157.90.151.24:22-147.75.109.163:37314.service - OpenSSH per-connection server daemon (147.75.109.163:37314). Dec 13 01:31:31.136114 sshd[1831]: Accepted publickey for core from 147.75.109.163 port 37314 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:31:31.139243 sshd[1831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:31.147873 systemd-logind[1483]: New session 3 of user core. Dec 13 01:31:31.158412 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:31:31.823021 sshd[1831]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:31.830820 systemd[1]: sshd@2-157.90.151.24:22-147.75.109.163:37314.service: Deactivated successfully. Dec 13 01:31:31.835018 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:31:31.836607 systemd-logind[1483]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:31:31.838504 systemd-logind[1483]: Removed session 3. Dec 13 01:31:32.005960 systemd[1]: Started sshd@3-157.90.151.24:22-147.75.109.163:37328.service - OpenSSH per-connection server daemon (147.75.109.163:37328). Dec 13 01:31:33.021100 sshd[1838]: Accepted publickey for core from 147.75.109.163 port 37328 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:31:33.024460 sshd[1838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:33.033188 systemd-logind[1483]: New session 4 of user core. Dec 13 01:31:33.045390 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:31:33.717911 sshd[1838]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:33.725662 systemd-logind[1483]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:31:33.726561 systemd[1]: sshd@3-157.90.151.24:22-147.75.109.163:37328.service: Deactivated successfully. Dec 13 01:31:33.730501 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:31:33.732280 systemd-logind[1483]: Removed session 4. Dec 13 01:31:33.893665 systemd[1]: Started sshd@4-157.90.151.24:22-147.75.109.163:37330.service - OpenSSH per-connection server daemon (147.75.109.163:37330). Dec 13 01:31:34.567815 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Dec 13 01:31:34.577512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:34.750310 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:31:34.750470 (kubelet)[1855]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:31:34.799575 kubelet[1855]: E1213 01:31:34.799405 1855 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:31:34.806749 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:31:34.806946 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:31:34.906203 sshd[1845]: Accepted publickey for core from 147.75.109.163 port 37330 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:31:34.908874 sshd[1845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:34.918370 systemd-logind[1483]: New session 5 of user core. Dec 13 01:31:34.930372 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:31:35.450167 sudo[1864]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:31:35.450855 sudo[1864]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:31:35.850516 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:31:35.851956 (dockerd)[1879]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:31:36.349409 dockerd[1879]: time="2024-12-13T01:31:36.348983278Z" level=info msg="Starting up" Dec 13 01:31:36.470447 dockerd[1879]: time="2024-12-13T01:31:36.470390072Z" level=info msg="Loading containers: start." Dec 13 01:31:36.615081 kernel: Initializing XFRM netlink socket Dec 13 01:31:36.734173 systemd-networkd[1393]: docker0: Link UP Dec 13 01:31:36.749726 dockerd[1879]: time="2024-12-13T01:31:36.749667258Z" level=info msg="Loading containers: done." Dec 13 01:31:36.769905 dockerd[1879]: time="2024-12-13T01:31:36.769795732Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:31:36.770221 dockerd[1879]: time="2024-12-13T01:31:36.769954996Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:31:36.770269 dockerd[1879]: time="2024-12-13T01:31:36.770242258Z" level=info msg="Daemon has completed initialization" Dec 13 01:31:36.829434 dockerd[1879]: time="2024-12-13T01:31:36.829280738Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:31:36.830255 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:31:38.243355 containerd[1508]: time="2024-12-13T01:31:38.243253704Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 01:31:38.973324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount379103111.mount: Deactivated successfully. 
Dec 13 01:31:41.614238 containerd[1508]: time="2024-12-13T01:31:41.614169821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:41.615205 containerd[1508]: time="2024-12-13T01:31:41.615155679Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675734" Dec 13 01:31:41.616483 containerd[1508]: time="2024-12-13T01:31:41.616439670Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:41.619159 containerd[1508]: time="2024-12-13T01:31:41.619116101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:41.620253 containerd[1508]: time="2024-12-13T01:31:41.619967631Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 3.376658985s" Dec 13 01:31:41.620253 containerd[1508]: time="2024-12-13T01:31:41.620028053Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Dec 13 01:31:41.644919 containerd[1508]: time="2024-12-13T01:31:41.644858430Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 01:31:44.091294 containerd[1508]: time="2024-12-13T01:31:44.091239664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:44.094912 containerd[1508]: time="2024-12-13T01:31:44.094880361Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606429" Dec 13 01:31:44.095811 containerd[1508]: time="2024-12-13T01:31:44.095774780Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:44.098371 containerd[1508]: time="2024-12-13T01:31:44.098345431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:44.099449 containerd[1508]: time="2024-12-13T01:31:44.099316393Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 2.454418139s" Dec 13 01:31:44.099449 containerd[1508]: time="2024-12-13T01:31:44.099341720Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Dec 13 01:31:44.125657 
containerd[1508]: time="2024-12-13T01:31:44.125423571Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 01:31:44.818042 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Dec 13 01:31:44.829685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:45.041876 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:45.053381 (kubelet)[2098]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:31:45.106587 kubelet[2098]: E1213 01:31:45.106416 2098 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:31:45.111436 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:31:45.111736 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:31:45.846186 containerd[1508]: time="2024-12-13T01:31:45.846134805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:45.847223 containerd[1508]: time="2024-12-13T01:31:45.847173883Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783055" Dec 13 01:31:45.848080 containerd[1508]: time="2024-12-13T01:31:45.848025674Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:45.850948 containerd[1508]: time="2024-12-13T01:31:45.850622675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:45.851511 containerd[1508]: time="2024-12-13T01:31:45.851485717Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.726023504s" Dec 13 01:31:45.851552 containerd[1508]: time="2024-12-13T01:31:45.851512888Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Dec 13 01:31:45.872403 containerd[1508]: time="2024-12-13T01:31:45.872364574Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 01:31:46.999081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3472605365.mount: Deactivated successfully. 
Dec 13 01:31:47.582329 containerd[1508]: time="2024-12-13T01:31:47.581679049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:47.582846 containerd[1508]: time="2024-12-13T01:31:47.582792578Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057496" Dec 13 01:31:47.583384 containerd[1508]: time="2024-12-13T01:31:47.583317602Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:47.585263 containerd[1508]: time="2024-12-13T01:31:47.585228883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:47.586156 containerd[1508]: time="2024-12-13T01:31:47.585690420Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 1.713286162s" Dec 13 01:31:47.586156 containerd[1508]: time="2024-12-13T01:31:47.585722159Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 01:31:47.610392 containerd[1508]: time="2024-12-13T01:31:47.610343689Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:31:48.215266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount468953755.mount: Deactivated successfully. 
Dec 13 01:31:49.062785 containerd[1508]: time="2024-12-13T01:31:49.062721550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:49.063774 containerd[1508]: time="2024-12-13T01:31:49.063732879Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185841" Dec 13 01:31:49.064480 containerd[1508]: time="2024-12-13T01:31:49.064436166Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:49.066924 containerd[1508]: time="2024-12-13T01:31:49.066861293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:49.068041 containerd[1508]: time="2024-12-13T01:31:49.067888251Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.457507113s" Dec 13 01:31:49.068041 containerd[1508]: time="2024-12-13T01:31:49.067926502Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:31:49.090513 containerd[1508]: time="2024-12-13T01:31:49.090470043Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:31:49.641619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1828354248.mount: Deactivated successfully. 
Dec 13 01:31:49.650696 containerd[1508]: time="2024-12-13T01:31:49.650590417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:49.652245 containerd[1508]: time="2024-12-13T01:31:49.652166376Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322310" Dec 13 01:31:49.655384 containerd[1508]: time="2024-12-13T01:31:49.653457084Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:49.656472 containerd[1508]: time="2024-12-13T01:31:49.656437553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:49.658691 containerd[1508]: time="2024-12-13T01:31:49.658630448Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 568.119559ms" Dec 13 01:31:49.658691 containerd[1508]: time="2024-12-13T01:31:49.658691240Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 01:31:49.688650 containerd[1508]: time="2024-12-13T01:31:49.688612201Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 01:31:50.324459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4167091060.mount: Deactivated successfully. Dec 13 01:31:52.827317 containerd[1508]: time="2024-12-13T01:31:52.827259058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:52.828597 containerd[1508]: time="2024-12-13T01:31:52.828422952Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238651" Dec 13 01:31:52.829365 containerd[1508]: time="2024-12-13T01:31:52.829321874Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:52.832943 containerd[1508]: time="2024-12-13T01:31:52.831806403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:52.832943 containerd[1508]: time="2024-12-13T01:31:52.832808727Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.143995272s" Dec 13 01:31:52.832943 containerd[1508]: time="2024-12-13T01:31:52.832832723Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Dec 13 01:31:55.317869 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. 
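Note: the images pulled over the preceding minute (kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy at v1.30.8, coredns v1.11.1, pause 3.9 and etcd 3.5.12-0) are the default control-plane image set for a kubeadm v1.30 cluster, so this reads like an image pre-pull step of the bootstrap. A hypothetical kubeadm ClusterConfiguration fragment consistent with those tags and with the API endpoint the kubelet dials later in the log; the configuration actually passed to kubeadm is not recorded here:

    # Hypothetical fragment; assumed, not taken from this log.
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: v1.30.8
    imageRepository: registry.k8s.io
    controlPlaneEndpoint: "157.90.151.24:6443"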
Dec 13 01:31:55.328376 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:55.568181 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:55.572539 (kubelet)[2296]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:31:55.614986 kubelet[2296]: E1213 01:31:55.614788 2296 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:31:55.619314 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:31:55.620146 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:31:55.798632 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:55.809493 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:55.843283 systemd[1]: Reloading requested from client PID 2310 ('systemctl') (unit session-5.scope)... Dec 13 01:31:55.843317 systemd[1]: Reloading... Dec 13 01:31:55.988681 zram_generator::config[2350]: No configuration found. Dec 13 01:31:56.103458 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:31:56.188930 systemd[1]: Reloading finished in 344 ms. Dec 13 01:31:56.275111 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:31:56.275213 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:31:56.275576 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:56.284122 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:31:56.476201 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:31:56.487396 (kubelet)[2403]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:31:56.525681 kubelet[2403]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:31:56.525681 kubelet[2403]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:31:56.525681 kubelet[2403]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
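Note: the three deprecation warnings above refer to kubelet command-line flags that now map onto fields of the same KubeletConfiguration document sketched earlier (the --pod-infra-container-image warning has no config-file counterpart; as the message says, the sandbox image is taken from the CRI runtime instead). A hypothetical fragment for the settings visible in this log; the containerd socket path is an assumption, while the flexvolume directory matches the path the kubelet recreates a few lines further down:

    # Flag-to-field mapping, hypothetical values.
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock       # replaces --container-runtime-endpoint
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/  # replaces --volume-plugin-dir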
Dec 13 01:31:56.526972 kubelet[2403]: I1213 01:31:56.526909 2403 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:31:56.850692 kubelet[2403]: I1213 01:31:56.850635 2403 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:31:56.850692 kubelet[2403]: I1213 01:31:56.850661 2403 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:31:56.850951 kubelet[2403]: I1213 01:31:56.850879 2403 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:31:56.872839 kubelet[2403]: I1213 01:31:56.872753 2403 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:31:56.876339 kubelet[2403]: E1213 01:31:56.876301 2403 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://157.90.151.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 157.90.151.24:6443: connect: connection refused Dec 13 01:31:56.904290 kubelet[2403]: I1213 01:31:56.904238 2403 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:31:56.907581 kubelet[2403]: I1213 01:31:56.907488 2403 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:31:56.910976 kubelet[2403]: I1213 01:31:56.907585 2403 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-b-0bf1a54360","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:31:56.911097 kubelet[2403]: I1213 01:31:56.911028 2403 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:31:56.911167 kubelet[2403]: I1213 01:31:56.911130 2403 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:31:56.915024 kubelet[2403]: I1213 01:31:56.914960 2403 state_mem.go:36] "Initialized new in-memory 
state store" Dec 13 01:31:56.917145 kubelet[2403]: I1213 01:31:56.917086 2403 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:31:56.917145 kubelet[2403]: I1213 01:31:56.917138 2403 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:31:56.917256 kubelet[2403]: I1213 01:31:56.917207 2403 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:31:56.917256 kubelet[2403]: I1213 01:31:56.917235 2403 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:31:56.924472 kubelet[2403]: W1213 01:31:56.923832 2403 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://157.90.151.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 157.90.151.24:6443: connect: connection refused Dec 13 01:31:56.924472 kubelet[2403]: E1213 01:31:56.923970 2403 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://157.90.151.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 157.90.151.24:6443: connect: connection refused Dec 13 01:31:56.924472 kubelet[2403]: I1213 01:31:56.924194 2403 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:31:56.928133 kubelet[2403]: I1213 01:31:56.927704 2403 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:31:56.928133 kubelet[2403]: W1213 01:31:56.927869 2403 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:31:56.930102 kubelet[2403]: I1213 01:31:56.929896 2403 server.go:1264] "Started kubelet" Dec 13 01:31:56.930987 kubelet[2403]: W1213 01:31:56.930900 2403 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://157.90.151.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-b-0bf1a54360&limit=500&resourceVersion=0": dial tcp 157.90.151.24:6443: connect: connection refused Dec 13 01:31:56.930987 kubelet[2403]: E1213 01:31:56.930949 2403 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://157.90.151.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-b-0bf1a54360&limit=500&resourceVersion=0": dial tcp 157.90.151.24:6443: connect: connection refused Dec 13 01:31:56.931237 kubelet[2403]: I1213 01:31:56.931207 2403 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:31:56.933018 kubelet[2403]: I1213 01:31:56.932111 2403 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:31:56.935271 kubelet[2403]: I1213 01:31:56.934641 2403 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:31:56.937545 kubelet[2403]: I1213 01:31:56.936597 2403 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:31:56.937545 kubelet[2403]: I1213 01:31:56.936830 2403 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:31:56.937545 kubelet[2403]: E1213 01:31:56.936960 2403 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://157.90.151.24:6443/api/v1/namespaces/default/events\": dial tcp 157.90.151.24:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-b-0bf1a54360.1810987c7a1ecd46 default 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-b-0bf1a54360,UID:ci-4081-2-1-b-0bf1a54360,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-b-0bf1a54360,},FirstTimestamp:2024-12-13 01:31:56.929846598 +0000 UTC m=+0.438250058,LastTimestamp:2024-12-13 01:31:56.929846598 +0000 UTC m=+0.438250058,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-b-0bf1a54360,}" Dec 13 01:31:56.947449 kubelet[2403]: E1213 01:31:56.947411 2403 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-b-0bf1a54360\" not found" Dec 13 01:31:56.947696 kubelet[2403]: I1213 01:31:56.947686 2403 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:31:56.949153 kubelet[2403]: I1213 01:31:56.949137 2403 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:31:56.949293 kubelet[2403]: I1213 01:31:56.949280 2403 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:31:56.949956 kubelet[2403]: E1213 01:31:56.949913 2403 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:31:56.950446 kubelet[2403]: W1213 01:31:56.950389 2403 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://157.90.151.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.90.151.24:6443: connect: connection refused Dec 13 01:31:56.950511 kubelet[2403]: E1213 01:31:56.950483 2403 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://157.90.151.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.90.151.24:6443: connect: connection refused Dec 13 01:31:56.950594 kubelet[2403]: E1213 01:31:56.950557 2403 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.90.151.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-b-0bf1a54360?timeout=10s\": dial tcp 157.90.151.24:6443: connect: connection refused" interval="200ms" Dec 13 01:31:56.952073 kubelet[2403]: I1213 01:31:56.952038 2403 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:31:56.952229 kubelet[2403]: I1213 01:31:56.952152 2403 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:31:56.952369 kubelet[2403]: I1213 01:31:56.952317 2403 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:31:56.981614 kubelet[2403]: I1213 01:31:56.981357 2403 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:31:56.981614 kubelet[2403]: I1213 01:31:56.981386 2403 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:31:56.981614 kubelet[2403]: I1213 01:31:56.981426 2403 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:31:56.984318 kubelet[2403]: I1213 01:31:56.983864 2403 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:31:56.985649 kubelet[2403]: I1213 01:31:56.985611 2403 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:31:56.985649 kubelet[2403]: I1213 01:31:56.985653 2403 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:31:56.985753 kubelet[2403]: I1213 01:31:56.985678 2403 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:31:56.985753 kubelet[2403]: E1213 01:31:56.985725 2403 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:31:56.987876 kubelet[2403]: I1213 01:31:56.987454 2403 policy_none.go:49] "None policy: Start" Dec 13 01:31:56.988558 kubelet[2403]: W1213 01:31:56.988184 2403 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://157.90.151.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.90.151.24:6443: connect: connection refused Dec 13 01:31:56.988558 kubelet[2403]: E1213 01:31:56.988225 2403 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://157.90.151.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.90.151.24:6443: connect: connection refused Dec 13 01:31:56.989664 kubelet[2403]: I1213 01:31:56.988906 2403 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:31:56.989664 kubelet[2403]: I1213 01:31:56.988934 2403 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:31:57.001624 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:31:57.010327 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:31:57.013578 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 13 01:31:57.025005 kubelet[2403]: I1213 01:31:57.024880 2403 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:31:57.025360 kubelet[2403]: I1213 01:31:57.025187 2403 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:31:57.025803 kubelet[2403]: I1213 01:31:57.025413 2403 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:31:57.027347 kubelet[2403]: E1213 01:31:57.027325 2403 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-1-b-0bf1a54360\" not found" Dec 13 01:31:57.050581 kubelet[2403]: I1213 01:31:57.050550 2403 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-b-0bf1a54360" Dec 13 01:31:57.050888 kubelet[2403]: E1213 01:31:57.050848 2403 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://157.90.151.24:6443/api/v1/nodes\": dial tcp 157.90.151.24:6443: connect: connection refused" node="ci-4081-2-1-b-0bf1a54360" Dec 13 01:31:57.086399 kubelet[2403]: I1213 01:31:57.086323 2403 topology_manager.go:215] "Topology Admit Handler" podUID="1c87784ba6b275d3ce680bf4728328e0" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-b-0bf1a54360" Dec 13 01:31:57.088735 kubelet[2403]: I1213 01:31:57.088699 2403 topology_manager.go:215] "Topology Admit Handler" podUID="b225f2befb260ac63b1770fc08be2cbb" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-b-0bf1a54360" Dec 13 01:31:57.090346 kubelet[2403]: I1213 01:31:57.090307 2403 topology_manager.go:215] "Topology Admit Handler" podUID="656cca59f23415744c7526bdc77c3258" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-b-0bf1a54360" Dec 13 01:31:57.100193 systemd[1]: Created slice kubepods-burstable-pod1c87784ba6b275d3ce680bf4728328e0.slice - libcontainer container kubepods-burstable-pod1c87784ba6b275d3ce680bf4728328e0.slice. Dec 13 01:31:57.117372 systemd[1]: Created slice kubepods-burstable-podb225f2befb260ac63b1770fc08be2cbb.slice - libcontainer container kubepods-burstable-podb225f2befb260ac63b1770fc08be2cbb.slice. Dec 13 01:31:57.135358 systemd[1]: Created slice kubepods-burstable-pod656cca59f23415744c7526bdc77c3258.slice - libcontainer container kubepods-burstable-pod656cca59f23415744c7526bdc77c3258.slice. 
Dec 13 01:31:57.151808 kubelet[2403]: E1213 01:31:57.151730 2403 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.90.151.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-b-0bf1a54360?timeout=10s\": dial tcp 157.90.151.24:6443: connect: connection refused" interval="400ms" Dec 13 01:31:57.250608 kubelet[2403]: I1213 01:31:57.250532 2403 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1c87784ba6b275d3ce680bf4728328e0-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-b-0bf1a54360\" (UID: \"1c87784ba6b275d3ce680bf4728328e0\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-0bf1a54360" Dec 13 01:31:57.250608 kubelet[2403]: I1213 01:31:57.250595 2403 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1c87784ba6b275d3ce680bf4728328e0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-b-0bf1a54360\" (UID: \"1c87784ba6b275d3ce680bf4728328e0\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-0bf1a54360" Dec 13 01:31:57.251035 kubelet[2403]: I1213 01:31:57.250639 2403 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b225f2befb260ac63b1770fc08be2cbb-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-b-0bf1a54360\" (UID: \"b225f2befb260ac63b1770fc08be2cbb\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-0bf1a54360" Dec 13 01:31:57.251035 kubelet[2403]: I1213 01:31:57.250671 2403 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b225f2befb260ac63b1770fc08be2cbb-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-b-0bf1a54360\" (UID: \"b225f2befb260ac63b1770fc08be2cbb\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-0bf1a54360" Dec 13 01:31:57.251035 kubelet[2403]: I1213 01:31:57.250703 2403 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/656cca59f23415744c7526bdc77c3258-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-b-0bf1a54360\" (UID: \"656cca59f23415744c7526bdc77c3258\") " pod="kube-system/kube-scheduler-ci-4081-2-1-b-0bf1a54360" Dec 13 01:31:57.251035 kubelet[2403]: I1213 01:31:57.250751 2403 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1c87784ba6b275d3ce680bf4728328e0-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-b-0bf1a54360\" (UID: \"1c87784ba6b275d3ce680bf4728328e0\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-0bf1a54360" Dec 13 01:31:57.251035 kubelet[2403]: I1213 01:31:57.250829 2403 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b225f2befb260ac63b1770fc08be2cbb-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-b-0bf1a54360\" (UID: \"b225f2befb260ac63b1770fc08be2cbb\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-0bf1a54360" Dec 13 01:31:57.251253 kubelet[2403]: I1213 01:31:57.250876 2403 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/b225f2befb260ac63b1770fc08be2cbb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-b-0bf1a54360\" (UID: \"b225f2befb260ac63b1770fc08be2cbb\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-0bf1a54360" Dec 13 01:31:57.251423 kubelet[2403]: I1213 01:31:57.251348 2403 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b225f2befb260ac63b1770fc08be2cbb-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-b-0bf1a54360\" (UID: \"b225f2befb260ac63b1770fc08be2cbb\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-0bf1a54360" Dec 13 01:31:57.253329 kubelet[2403]: I1213 01:31:57.253235 2403 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-b-0bf1a54360" Dec 13 01:31:57.253696 kubelet[2403]: E1213 01:31:57.253646 2403 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://157.90.151.24:6443/api/v1/nodes\": dial tcp 157.90.151.24:6443: connect: connection refused" node="ci-4081-2-1-b-0bf1a54360" Dec 13 01:31:57.416338 containerd[1508]: time="2024-12-13T01:31:57.416114194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-b-0bf1a54360,Uid:1c87784ba6b275d3ce680bf4728328e0,Namespace:kube-system,Attempt:0,}" Dec 13 01:31:57.442855 containerd[1508]: time="2024-12-13T01:31:57.442399553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-b-0bf1a54360,Uid:b225f2befb260ac63b1770fc08be2cbb,Namespace:kube-system,Attempt:0,}" Dec 13 01:31:57.442855 containerd[1508]: time="2024-12-13T01:31:57.442407168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-b-0bf1a54360,Uid:656cca59f23415744c7526bdc77c3258,Namespace:kube-system,Attempt:0,}" Dec 13 01:31:57.553109 kubelet[2403]: E1213 01:31:57.552983 2403 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.90.151.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-b-0bf1a54360?timeout=10s\": dial tcp 157.90.151.24:6443: connect: connection refused" interval="800ms" Dec 13 01:31:57.657481 kubelet[2403]: I1213 01:31:57.657408 2403 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-b-0bf1a54360" Dec 13 01:31:57.657957 kubelet[2403]: E1213 01:31:57.657896 2403 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://157.90.151.24:6443/api/v1/nodes\": dial tcp 157.90.151.24:6443: connect: connection refused" node="ci-4081-2-1-b-0bf1a54360" Dec 13 01:31:57.920177 kubelet[2403]: W1213 01:31:57.920022 2403 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://157.90.151.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 157.90.151.24:6443: connect: connection refused Dec 13 01:31:57.920177 kubelet[2403]: E1213 01:31:57.920183 2403 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://157.90.151.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 157.90.151.24:6443: connect: connection refused Dec 13 01:31:57.971771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount768151135.mount: Deactivated successfully. 
Dec 13 01:31:57.986456 containerd[1508]: time="2024-12-13T01:31:57.986350282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:31:57.988232 containerd[1508]: time="2024-12-13T01:31:57.988125386Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:31:57.990149 containerd[1508]: time="2024-12-13T01:31:57.990092999Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:31:57.992243 containerd[1508]: time="2024-12-13T01:31:57.992159486Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:31:57.992853 containerd[1508]: time="2024-12-13T01:31:57.992722303Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:31:57.994389 containerd[1508]: time="2024-12-13T01:31:57.994292497Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076" Dec 13 01:31:57.997097 containerd[1508]: time="2024-12-13T01:31:57.995906631Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:31:58.003229 containerd[1508]: time="2024-12-13T01:31:58.002944693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:31:58.005075 containerd[1508]: time="2024-12-13T01:31:58.004998837Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 562.477638ms" Dec 13 01:31:58.011738 containerd[1508]: time="2024-12-13T01:31:58.011682101Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 595.422225ms" Dec 13 01:31:58.012259 containerd[1508]: time="2024-12-13T01:31:58.012178204Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 569.50911ms" Dec 13 01:31:58.087013 kubelet[2403]: W1213 01:31:58.086525 2403 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://157.90.151.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-b-0bf1a54360&limit=500&resourceVersion=0": dial tcp 157.90.151.24:6443: connect: connection refused 
Dec 13 01:31:58.087013 kubelet[2403]: E1213 01:31:58.086620 2403 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://157.90.151.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-b-0bf1a54360&limit=500&resourceVersion=0": dial tcp 157.90.151.24:6443: connect: connection refused Dec 13 01:31:58.128309 kubelet[2403]: W1213 01:31:58.128129 2403 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://157.90.151.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.90.151.24:6443: connect: connection refused Dec 13 01:31:58.128309 kubelet[2403]: E1213 01:31:58.128273 2403 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://157.90.151.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 157.90.151.24:6443: connect: connection refused Dec 13 01:31:58.216562 containerd[1508]: time="2024-12-13T01:31:58.216328443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:31:58.216562 containerd[1508]: time="2024-12-13T01:31:58.216506014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:31:58.217864 containerd[1508]: time="2024-12-13T01:31:58.217812847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:58.221302 containerd[1508]: time="2024-12-13T01:31:58.221211615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:58.224477 containerd[1508]: time="2024-12-13T01:31:58.223908667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:31:58.224898 containerd[1508]: time="2024-12-13T01:31:58.224851462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:31:58.225817 containerd[1508]: time="2024-12-13T01:31:58.225757590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:58.225946 containerd[1508]: time="2024-12-13T01:31:58.225909101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:58.230680 containerd[1508]: time="2024-12-13T01:31:58.230365558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:31:58.230680 containerd[1508]: time="2024-12-13T01:31:58.230434777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:31:58.230680 containerd[1508]: time="2024-12-13T01:31:58.230458311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:58.230680 containerd[1508]: time="2024-12-13T01:31:58.230558999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:58.257229 systemd[1]: Started cri-containerd-f412f8e5c0382068c84c453833bf91c7a5dd2aa59b21250077f7065f8e5b4285.scope - libcontainer container f412f8e5c0382068c84c453833bf91c7a5dd2aa59b21250077f7065f8e5b4285. Dec 13 01:31:58.262801 systemd[1]: Started cri-containerd-f0090217eb775267f75ccaf18c83e809d80d1701985b1bea0ba353b286be0dbb.scope - libcontainer container f0090217eb775267f75ccaf18c83e809d80d1701985b1bea0ba353b286be0dbb. Dec 13 01:31:58.266596 systemd[1]: Started cri-containerd-0eabf31996028ea1b72e2e393fa0b0a09ca43a4f917f1e110e5bbc699e252ab4.scope - libcontainer container 0eabf31996028ea1b72e2e393fa0b0a09ca43a4f917f1e110e5bbc699e252ab4. Dec 13 01:31:58.291077 kubelet[2403]: W1213 01:31:58.291003 2403 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://157.90.151.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.90.151.24:6443: connect: connection refused Dec 13 01:31:58.291646 kubelet[2403]: E1213 01:31:58.291624 2403 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://157.90.151.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 157.90.151.24:6443: connect: connection refused Dec 13 01:31:58.327466 containerd[1508]: time="2024-12-13T01:31:58.327421883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-b-0bf1a54360,Uid:b225f2befb260ac63b1770fc08be2cbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0eabf31996028ea1b72e2e393fa0b0a09ca43a4f917f1e110e5bbc699e252ab4\"" Dec 13 01:31:58.339383 containerd[1508]: time="2024-12-13T01:31:58.338808243Z" level=info msg="CreateContainer within sandbox \"0eabf31996028ea1b72e2e393fa0b0a09ca43a4f917f1e110e5bbc699e252ab4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:31:58.341633 containerd[1508]: time="2024-12-13T01:31:58.341458117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-b-0bf1a54360,Uid:1c87784ba6b275d3ce680bf4728328e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0090217eb775267f75ccaf18c83e809d80d1701985b1bea0ba353b286be0dbb\"" Dec 13 01:31:58.345037 containerd[1508]: time="2024-12-13T01:31:58.345008498Z" level=info msg="CreateContainer within sandbox \"f0090217eb775267f75ccaf18c83e809d80d1701985b1bea0ba353b286be0dbb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:31:58.353897 kubelet[2403]: E1213 01:31:58.353691 2403 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.90.151.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-b-0bf1a54360?timeout=10s\": dial tcp 157.90.151.24:6443: connect: connection refused" interval="1.6s" Dec 13 01:31:58.360156 containerd[1508]: time="2024-12-13T01:31:58.360124982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-b-0bf1a54360,Uid:656cca59f23415744c7526bdc77c3258,Namespace:kube-system,Attempt:0,} returns sandbox id \"f412f8e5c0382068c84c453833bf91c7a5dd2aa59b21250077f7065f8e5b4285\"" Dec 13 01:31:58.365745 containerd[1508]: time="2024-12-13T01:31:58.365704099Z" level=info msg="CreateContainer within sandbox \"f412f8e5c0382068c84c453833bf91c7a5dd2aa59b21250077f7065f8e5b4285\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 
01:31:58.379239 containerd[1508]: time="2024-12-13T01:31:58.379186492Z" level=info msg="CreateContainer within sandbox \"f0090217eb775267f75ccaf18c83e809d80d1701985b1bea0ba353b286be0dbb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a171a0c4ff3be8728dbcebaa7d0ed771da49a518dabf5963c067cd885b9cd884\"" Dec 13 01:31:58.380282 containerd[1508]: time="2024-12-13T01:31:58.380238651Z" level=info msg="StartContainer for \"a171a0c4ff3be8728dbcebaa7d0ed771da49a518dabf5963c067cd885b9cd884\"" Dec 13 01:31:58.388657 containerd[1508]: time="2024-12-13T01:31:58.388006995Z" level=info msg="CreateContainer within sandbox \"0eabf31996028ea1b72e2e393fa0b0a09ca43a4f917f1e110e5bbc699e252ab4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c1f8d03b2ffad7e597ec8c9ad346e98fc3b05f0b645389c140f8dce43aeaf52f\"" Dec 13 01:31:58.389407 containerd[1508]: time="2024-12-13T01:31:58.389328075Z" level=info msg="StartContainer for \"c1f8d03b2ffad7e597ec8c9ad346e98fc3b05f0b645389c140f8dce43aeaf52f\"" Dec 13 01:31:58.392307 containerd[1508]: time="2024-12-13T01:31:58.392235889Z" level=info msg="CreateContainer within sandbox \"f412f8e5c0382068c84c453833bf91c7a5dd2aa59b21250077f7065f8e5b4285\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5ff9adc84c383664d2334c55bbf7c3bc1c40bfc9d9357f1f9fbfa6fb0c8d2261\"" Dec 13 01:31:58.393239 containerd[1508]: time="2024-12-13T01:31:58.393169957Z" level=info msg="StartContainer for \"5ff9adc84c383664d2334c55bbf7c3bc1c40bfc9d9357f1f9fbfa6fb0c8d2261\"" Dec 13 01:31:58.424189 systemd[1]: Started cri-containerd-a171a0c4ff3be8728dbcebaa7d0ed771da49a518dabf5963c067cd885b9cd884.scope - libcontainer container a171a0c4ff3be8728dbcebaa7d0ed771da49a518dabf5963c067cd885b9cd884. Dec 13 01:31:58.439190 systemd[1]: Started cri-containerd-c1f8d03b2ffad7e597ec8c9ad346e98fc3b05f0b645389c140f8dce43aeaf52f.scope - libcontainer container c1f8d03b2ffad7e597ec8c9ad346e98fc3b05f0b645389c140f8dce43aeaf52f. Dec 13 01:31:58.443002 systemd[1]: Started cri-containerd-5ff9adc84c383664d2334c55bbf7c3bc1c40bfc9d9357f1f9fbfa6fb0c8d2261.scope - libcontainer container 5ff9adc84c383664d2334c55bbf7c3bc1c40bfc9d9357f1f9fbfa6fb0c8d2261. 
Dec 13 01:31:58.461773 kubelet[2403]: I1213 01:31:58.461339 2403 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-b-0bf1a54360" Dec 13 01:31:58.463248 kubelet[2403]: E1213 01:31:58.462302 2403 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://157.90.151.24:6443/api/v1/nodes\": dial tcp 157.90.151.24:6443: connect: connection refused" node="ci-4081-2-1-b-0bf1a54360" Dec 13 01:31:58.506878 containerd[1508]: time="2024-12-13T01:31:58.506620786Z" level=info msg="StartContainer for \"a171a0c4ff3be8728dbcebaa7d0ed771da49a518dabf5963c067cd885b9cd884\" returns successfully" Dec 13 01:31:58.519884 containerd[1508]: time="2024-12-13T01:31:58.519839087Z" level=info msg="StartContainer for \"5ff9adc84c383664d2334c55bbf7c3bc1c40bfc9d9357f1f9fbfa6fb0c8d2261\" returns successfully" Dec 13 01:31:58.526538 containerd[1508]: time="2024-12-13T01:31:58.526467699Z" level=info msg="StartContainer for \"c1f8d03b2ffad7e597ec8c9ad346e98fc3b05f0b645389c140f8dce43aeaf52f\" returns successfully" Dec 13 01:32:00.069444 kubelet[2403]: I1213 01:32:00.069012 2403 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-b-0bf1a54360" Dec 13 01:32:00.306350 kubelet[2403]: E1213 01:32:00.306294 2403 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-2-1-b-0bf1a54360\" not found" node="ci-4081-2-1-b-0bf1a54360" Dec 13 01:32:00.396434 kubelet[2403]: I1213 01:32:00.396027 2403 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-b-0bf1a54360" Dec 13 01:32:00.926265 kubelet[2403]: I1213 01:32:00.926192 2403 apiserver.go:52] "Watching apiserver" Dec 13 01:32:00.949653 kubelet[2403]: I1213 01:32:00.949581 2403 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:32:02.439140 systemd[1]: Reloading requested from client PID 2681 ('systemctl') (unit session-5.scope)... Dec 13 01:32:02.439169 systemd[1]: Reloading... Dec 13 01:32:02.639122 zram_generator::config[2724]: No configuration found. Dec 13 01:32:02.752272 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:32:02.856385 systemd[1]: Reloading finished in 416 ms. Dec 13 01:32:02.908840 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
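The registration failures above are expected at this point in the boot: the kubelet is trying to POST its Node object to https://157.90.151.24:6443 while the kube-apiserver static pod is still coming up, so every attempt ends in "connection refused" until the apiserver container starts serving, after which registration succeeds at 01:32:00. A minimal Go sketch of the readiness poll this amounts to is shown below; the /healthz endpoint is standard, but the address, timeout, and skipped certificate verification are illustrative assumptions rather than anything taken from this host's kubelet configuration.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // Poll the apiserver's /healthz endpoint until it answers, which is
    // effectively what the kubelet's node-registration retries are waiting for.
    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // The real kubelet trusts the cluster CA; verification is skipped
                // here only to keep the sketch self-contained.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for {
            resp, err := client.Get("https://157.90.151.24:6443/healthz")
            if err == nil {
                resp.Body.Close()
                fmt.Println("apiserver is serving:", resp.Status)
                return
            }
            fmt.Println("apiserver not ready yet:", err)
            time.Sleep(2 * time.Second)
        }
    }

The systemd reload and the docker.socket warning later in the same span are unrelated housekeeping: systemd simply rewrites the legacy /var/run/docker.sock listener to /run/docker.sock, as the message itself says.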
Dec 13 01:32:02.910006 kubelet[2403]: E1213 01:32:02.908980 2403 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4081-2-1-b-0bf1a54360.1810987c7a1ecd46 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-b-0bf1a54360,UID:ci-4081-2-1-b-0bf1a54360,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-b-0bf1a54360,},FirstTimestamp:2024-12-13 01:31:56.929846598 +0000 UTC m=+0.438250058,LastTimestamp:2024-12-13 01:31:56.929846598 +0000 UTC m=+0.438250058,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-b-0bf1a54360,}" Dec 13 01:32:02.910006 kubelet[2403]: I1213 01:32:02.909325 2403 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:32:02.925669 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:32:02.926012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:32:02.931244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:32:03.139979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:32:03.149456 (kubelet)[2771]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:32:03.230704 kubelet[2771]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:32:03.230704 kubelet[2771]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:32:03.230704 kubelet[2771]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:32:03.231794 kubelet[2771]: I1213 01:32:03.231724 2771 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:32:03.237190 kubelet[2771]: I1213 01:32:03.237164 2771 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:32:03.238091 kubelet[2771]: I1213 01:32:03.237254 2771 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:32:03.238091 kubelet[2771]: I1213 01:32:03.237472 2771 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:32:03.238711 kubelet[2771]: I1213 01:32:03.238676 2771 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:32:03.239975 kubelet[2771]: I1213 01:32:03.239938 2771 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:32:03.246264 kubelet[2771]: I1213 01:32:03.246233 2771 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:32:03.246540 kubelet[2771]: I1213 01:32:03.246504 2771 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:32:03.246678 kubelet[2771]: I1213 01:32:03.246533 2771 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-b-0bf1a54360","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:32:03.246764 kubelet[2771]: I1213 01:32:03.246689 2771 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:32:03.246764 kubelet[2771]: I1213 01:32:03.246699 2771 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:32:03.246764 kubelet[2771]: I1213 01:32:03.246761 2771 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:32:03.246925 kubelet[2771]: I1213 01:32:03.246901 2771 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:32:03.247541 kubelet[2771]: I1213 01:32:03.247266 2771 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:32:03.247541 kubelet[2771]: I1213 01:32:03.247297 2771 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:32:03.247541 kubelet[2771]: I1213 01:32:03.247318 2771 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:32:03.250232 kubelet[2771]: I1213 01:32:03.250013 2771 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:32:03.250336 kubelet[2771]: I1213 01:32:03.250325 2771 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:32:03.251018 kubelet[2771]: I1213 01:32:03.251006 2771 server.go:1264] "Started kubelet" Dec 13 01:32:03.255943 kubelet[2771]: I1213 01:32:03.255912 2771 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:32:03.262423 kubelet[2771]: I1213 01:32:03.261620 2771 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:32:03.264264 kubelet[2771]: I1213 01:32:03.263752 2771 server.go:455] "Adding 
debug handlers to kubelet server" Dec 13 01:32:03.264971 kubelet[2771]: I1213 01:32:03.264953 2771 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:32:03.266419 kubelet[2771]: I1213 01:32:03.265522 2771 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:32:03.266419 kubelet[2771]: I1213 01:32:03.264953 2771 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:32:03.266419 kubelet[2771]: I1213 01:32:03.265717 2771 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:32:03.266419 kubelet[2771]: I1213 01:32:03.265861 2771 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:32:03.279572 kubelet[2771]: I1213 01:32:03.279520 2771 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:32:03.280242 kubelet[2771]: I1213 01:32:03.280223 2771 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:32:03.281223 kubelet[2771]: I1213 01:32:03.281183 2771 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:32:03.284396 kubelet[2771]: I1213 01:32:03.280624 2771 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:32:03.284589 kubelet[2771]: I1213 01:32:03.284532 2771 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:32:03.284656 kubelet[2771]: I1213 01:32:03.284647 2771 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:32:03.284775 kubelet[2771]: E1213 01:32:03.284728 2771 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:32:03.291083 kubelet[2771]: E1213 01:32:03.291001 2771 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:32:03.294003 kubelet[2771]: I1213 01:32:03.293970 2771 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:32:03.346565 kubelet[2771]: I1213 01:32:03.346525 2771 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:32:03.346700 kubelet[2771]: I1213 01:32:03.346578 2771 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:32:03.346700 kubelet[2771]: I1213 01:32:03.346605 2771 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:32:03.346831 kubelet[2771]: I1213 01:32:03.346761 2771 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:32:03.346831 kubelet[2771]: I1213 01:32:03.346820 2771 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:32:03.346881 kubelet[2771]: I1213 01:32:03.346838 2771 policy_none.go:49] "None policy: Start" Dec 13 01:32:03.347435 kubelet[2771]: I1213 01:32:03.347407 2771 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:32:03.347553 kubelet[2771]: I1213 01:32:03.347530 2771 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:32:03.347705 kubelet[2771]: I1213 01:32:03.347681 2771 state_mem.go:75] "Updated machine memory state" Dec 13 01:32:03.352645 kubelet[2771]: I1213 01:32:03.352616 2771 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:32:03.353349 kubelet[2771]: I1213 01:32:03.353009 2771 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:32:03.353349 kubelet[2771]: I1213 01:32:03.353145 2771 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:32:03.368469 kubelet[2771]: I1213 01:32:03.368421 2771 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-b-0bf1a54360" Dec 13 01:32:03.377551 kubelet[2771]: I1213 01:32:03.377343 2771 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-2-1-b-0bf1a54360" Dec 13 01:32:03.377551 kubelet[2771]: I1213 01:32:03.377430 2771 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-b-0bf1a54360" Dec 13 01:32:03.387261 kubelet[2771]: I1213 01:32:03.386956 2771 topology_manager.go:215] "Topology Admit Handler" podUID="1c87784ba6b275d3ce680bf4728328e0" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-b-0bf1a54360" Dec 13 01:32:03.387261 kubelet[2771]: I1213 01:32:03.387038 2771 topology_manager.go:215] "Topology Admit Handler" podUID="b225f2befb260ac63b1770fc08be2cbb" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-b-0bf1a54360" Dec 13 01:32:03.388009 kubelet[2771]: I1213 01:32:03.387970 2771 topology_manager.go:215] "Topology Admit Handler" podUID="656cca59f23415744c7526bdc77c3258" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-b-0bf1a54360" Dec 13 01:32:03.468973 kubelet[2771]: I1213 01:32:03.467043 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b225f2befb260ac63b1770fc08be2cbb-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-b-0bf1a54360\" (UID: \"b225f2befb260ac63b1770fc08be2cbb\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-0bf1a54360" Dec 13 01:32:03.468973 kubelet[2771]: I1213 01:32:03.467107 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b225f2befb260ac63b1770fc08be2cbb-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-b-0bf1a54360\" (UID: \"b225f2befb260ac63b1770fc08be2cbb\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-0bf1a54360" Dec 13 01:32:03.468973 kubelet[2771]: I1213 01:32:03.467141 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b225f2befb260ac63b1770fc08be2cbb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-b-0bf1a54360\" (UID: \"b225f2befb260ac63b1770fc08be2cbb\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-0bf1a54360" Dec 13 01:32:03.468973 kubelet[2771]: I1213 01:32:03.467159 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/656cca59f23415744c7526bdc77c3258-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-b-0bf1a54360\" (UID: \"656cca59f23415744c7526bdc77c3258\") " pod="kube-system/kube-scheduler-ci-4081-2-1-b-0bf1a54360" Dec 13 01:32:03.468973 kubelet[2771]: I1213 01:32:03.467175 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1c87784ba6b275d3ce680bf4728328e0-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-b-0bf1a54360\" (UID: \"1c87784ba6b275d3ce680bf4728328e0\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-0bf1a54360" Dec 13 01:32:03.469238 kubelet[2771]: I1213 01:32:03.467197 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1c87784ba6b275d3ce680bf4728328e0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-b-0bf1a54360\" (UID: \"1c87784ba6b275d3ce680bf4728328e0\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-0bf1a54360" Dec 13 01:32:03.469238 kubelet[2771]: I1213 01:32:03.467216 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b225f2befb260ac63b1770fc08be2cbb-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-b-0bf1a54360\" (UID: \"b225f2befb260ac63b1770fc08be2cbb\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-0bf1a54360" Dec 13 01:32:03.469238 kubelet[2771]: I1213 01:32:03.467234 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b225f2befb260ac63b1770fc08be2cbb-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-b-0bf1a54360\" (UID: \"b225f2befb260ac63b1770fc08be2cbb\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-b-0bf1a54360" Dec 13 01:32:03.469238 kubelet[2771]: I1213 01:32:03.467250 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1c87784ba6b275d3ce680bf4728328e0-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-b-0bf1a54360\" (UID: \"1c87784ba6b275d3ce680bf4728328e0\") " pod="kube-system/kube-apiserver-ci-4081-2-1-b-0bf1a54360" Dec 13 01:32:04.249974 kubelet[2771]: I1213 01:32:04.249923 2771 apiserver.go:52] "Watching apiserver" Dec 13 01:32:04.267075 kubelet[2771]: I1213 01:32:04.266284 2771 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 
01:32:04.323642 kubelet[2771]: E1213 01:32:04.323595 2771 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-2-1-b-0bf1a54360\" already exists" pod="kube-system/kube-apiserver-ci-4081-2-1-b-0bf1a54360" Dec 13 01:32:04.350638 kubelet[2771]: I1213 01:32:04.350562 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-2-1-b-0bf1a54360" podStartSLOduration=1.350542924 podStartE2EDuration="1.350542924s" podCreationTimestamp="2024-12-13 01:32:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:04.341779841 +0000 UTC m=+1.186026614" watchObservedRunningTime="2024-12-13 01:32:04.350542924 +0000 UTC m=+1.194789696" Dec 13 01:32:04.358956 kubelet[2771]: I1213 01:32:04.358909 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-2-1-b-0bf1a54360" podStartSLOduration=1.358892335 podStartE2EDuration="1.358892335s" podCreationTimestamp="2024-12-13 01:32:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:04.351365517 +0000 UTC m=+1.195612288" watchObservedRunningTime="2024-12-13 01:32:04.358892335 +0000 UTC m=+1.203139097" Dec 13 01:32:04.368075 kubelet[2771]: I1213 01:32:04.366617 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-2-1-b-0bf1a54360" podStartSLOduration=1.366599108 podStartE2EDuration="1.366599108s" podCreationTimestamp="2024-12-13 01:32:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:04.359533881 +0000 UTC m=+1.203780652" watchObservedRunningTime="2024-12-13 01:32:04.366599108 +0000 UTC m=+1.210845880" Dec 13 01:32:04.595278 sudo[1864]: pam_unix(sudo:session): session closed for user root Dec 13 01:32:04.756106 sshd[1845]: pam_unix(sshd:session): session closed for user core Dec 13 01:32:04.760339 systemd[1]: sshd@4-157.90.151.24:22-147.75.109.163:37330.service: Deactivated successfully. Dec 13 01:32:04.764773 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:32:04.765296 systemd[1]: session-5.scope: Consumed 4.692s CPU time, 192.5M memory peak, 0B memory swap peak. Dec 13 01:32:04.768173 systemd-logind[1483]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:32:04.770208 systemd-logind[1483]: Removed session 5. Dec 13 01:32:16.169889 kubelet[2771]: I1213 01:32:16.169786 2771 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:32:16.171373 containerd[1508]: time="2024-12-13T01:32:16.170505082Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 01:32:16.172508 kubelet[2771]: I1213 01:32:16.172096 2771 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:32:17.058511 kubelet[2771]: I1213 01:32:17.057620 2771 topology_manager.go:215] "Topology Admit Handler" podUID="5249bbc2-4023-4742-9757-fb2a2f9aa946" podNamespace="kube-system" podName="kube-proxy-clgbk" Dec 13 01:32:17.068610 kubelet[2771]: I1213 01:32:17.068513 2771 topology_manager.go:215] "Topology Admit Handler" podUID="5c89ca0b-14f4-4f42-adf8-f36e37ebdafa" podNamespace="kube-flannel" podName="kube-flannel-ds-bk8ff" Dec 13 01:32:17.077708 systemd[1]: Created slice kubepods-besteffort-pod5249bbc2_4023_4742_9757_fb2a2f9aa946.slice - libcontainer container kubepods-besteffort-pod5249bbc2_4023_4742_9757_fb2a2f9aa946.slice. Dec 13 01:32:17.094710 systemd[1]: Created slice kubepods-burstable-pod5c89ca0b_14f4_4f42_adf8_f36e37ebdafa.slice - libcontainer container kubepods-burstable-pod5c89ca0b_14f4_4f42_adf8_f36e37ebdafa.slice. Dec 13 01:32:17.155816 kubelet[2771]: I1213 01:32:17.155561 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5249bbc2-4023-4742-9757-fb2a2f9aa946-lib-modules\") pod \"kube-proxy-clgbk\" (UID: \"5249bbc2-4023-4742-9757-fb2a2f9aa946\") " pod="kube-system/kube-proxy-clgbk" Dec 13 01:32:17.155816 kubelet[2771]: I1213 01:32:17.155610 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5c89ca0b-14f4-4f42-adf8-f36e37ebdafa-run\") pod \"kube-flannel-ds-bk8ff\" (UID: \"5c89ca0b-14f4-4f42-adf8-f36e37ebdafa\") " pod="kube-flannel/kube-flannel-ds-bk8ff" Dec 13 01:32:17.155816 kubelet[2771]: I1213 01:32:17.155636 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c89ca0b-14f4-4f42-adf8-f36e37ebdafa-xtables-lock\") pod \"kube-flannel-ds-bk8ff\" (UID: \"5c89ca0b-14f4-4f42-adf8-f36e37ebdafa\") " pod="kube-flannel/kube-flannel-ds-bk8ff" Dec 13 01:32:17.155816 kubelet[2771]: I1213 01:32:17.155674 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5249bbc2-4023-4742-9757-fb2a2f9aa946-kube-proxy\") pod \"kube-proxy-clgbk\" (UID: \"5249bbc2-4023-4742-9757-fb2a2f9aa946\") " pod="kube-system/kube-proxy-clgbk" Dec 13 01:32:17.155816 kubelet[2771]: I1213 01:32:17.155694 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5249bbc2-4023-4742-9757-fb2a2f9aa946-xtables-lock\") pod \"kube-proxy-clgbk\" (UID: \"5249bbc2-4023-4742-9757-fb2a2f9aa946\") " pod="kube-system/kube-proxy-clgbk" Dec 13 01:32:17.156208 kubelet[2771]: I1213 01:32:17.155737 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sbzz\" (UniqueName: \"kubernetes.io/projected/5249bbc2-4023-4742-9757-fb2a2f9aa946-kube-api-access-6sbzz\") pod \"kube-proxy-clgbk\" (UID: \"5249bbc2-4023-4742-9757-fb2a2f9aa946\") " pod="kube-system/kube-proxy-clgbk" Dec 13 01:32:17.156208 kubelet[2771]: I1213 01:32:17.155787 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: 
\"kubernetes.io/host-path/5c89ca0b-14f4-4f42-adf8-f36e37ebdafa-cni-plugin\") pod \"kube-flannel-ds-bk8ff\" (UID: \"5c89ca0b-14f4-4f42-adf8-f36e37ebdafa\") " pod="kube-flannel/kube-flannel-ds-bk8ff" Dec 13 01:32:17.156208 kubelet[2771]: I1213 01:32:17.155819 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/5c89ca0b-14f4-4f42-adf8-f36e37ebdafa-cni\") pod \"kube-flannel-ds-bk8ff\" (UID: \"5c89ca0b-14f4-4f42-adf8-f36e37ebdafa\") " pod="kube-flannel/kube-flannel-ds-bk8ff" Dec 13 01:32:17.156208 kubelet[2771]: I1213 01:32:17.155862 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/5c89ca0b-14f4-4f42-adf8-f36e37ebdafa-flannel-cfg\") pod \"kube-flannel-ds-bk8ff\" (UID: \"5c89ca0b-14f4-4f42-adf8-f36e37ebdafa\") " pod="kube-flannel/kube-flannel-ds-bk8ff" Dec 13 01:32:17.156208 kubelet[2771]: I1213 01:32:17.155886 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvgvk\" (UniqueName: \"kubernetes.io/projected/5c89ca0b-14f4-4f42-adf8-f36e37ebdafa-kube-api-access-lvgvk\") pod \"kube-flannel-ds-bk8ff\" (UID: \"5c89ca0b-14f4-4f42-adf8-f36e37ebdafa\") " pod="kube-flannel/kube-flannel-ds-bk8ff" Dec 13 01:32:17.390009 containerd[1508]: time="2024-12-13T01:32:17.389885682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-clgbk,Uid:5249bbc2-4023-4742-9757-fb2a2f9aa946,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:17.401901 containerd[1508]: time="2024-12-13T01:32:17.401840379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-bk8ff,Uid:5c89ca0b-14f4-4f42-adf8-f36e37ebdafa,Namespace:kube-flannel,Attempt:0,}" Dec 13 01:32:17.456704 containerd[1508]: time="2024-12-13T01:32:17.454469377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:17.456704 containerd[1508]: time="2024-12-13T01:32:17.454563443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:17.456704 containerd[1508]: time="2024-12-13T01:32:17.454596946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:17.456704 containerd[1508]: time="2024-12-13T01:32:17.454773604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:17.487239 systemd[1]: Started cri-containerd-1783c888ec9c350586df4e7eb539b26d86badfb0b9b967c4fe3e4b3168c4f3bf.scope - libcontainer container 1783c888ec9c350586df4e7eb539b26d86badfb0b9b967c4fe3e4b3168c4f3bf. Dec 13 01:32:17.497216 containerd[1508]: time="2024-12-13T01:32:17.496040020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:17.497216 containerd[1508]: time="2024-12-13T01:32:17.496131822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:17.497216 containerd[1508]: time="2024-12-13T01:32:17.496171506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:17.497216 containerd[1508]: time="2024-12-13T01:32:17.496275691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:17.519233 systemd[1]: Started cri-containerd-3426cdf27080687913072bc0d4afbf5acf0832e32274d1f7d2a440a9374b34fc.scope - libcontainer container 3426cdf27080687913072bc0d4afbf5acf0832e32274d1f7d2a440a9374b34fc. Dec 13 01:32:17.524908 containerd[1508]: time="2024-12-13T01:32:17.524871193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-clgbk,Uid:5249bbc2-4023-4742-9757-fb2a2f9aa946,Namespace:kube-system,Attempt:0,} returns sandbox id \"1783c888ec9c350586df4e7eb539b26d86badfb0b9b967c4fe3e4b3168c4f3bf\"" Dec 13 01:32:17.528950 containerd[1508]: time="2024-12-13T01:32:17.528905464Z" level=info msg="CreateContainer within sandbox \"1783c888ec9c350586df4e7eb539b26d86badfb0b9b967c4fe3e4b3168c4f3bf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:32:17.557216 containerd[1508]: time="2024-12-13T01:32:17.557183623Z" level=info msg="CreateContainer within sandbox \"1783c888ec9c350586df4e7eb539b26d86badfb0b9b967c4fe3e4b3168c4f3bf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8f4222cb3b52a0da0ccac27015f347567303d7cbd8ee8ea1b73330b628aa2e67\"" Dec 13 01:32:17.559311 containerd[1508]: time="2024-12-13T01:32:17.559266672Z" level=info msg="StartContainer for \"8f4222cb3b52a0da0ccac27015f347567303d7cbd8ee8ea1b73330b628aa2e67\"" Dec 13 01:32:17.583655 containerd[1508]: time="2024-12-13T01:32:17.583602662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-bk8ff,Uid:5c89ca0b-14f4-4f42-adf8-f36e37ebdafa,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"3426cdf27080687913072bc0d4afbf5acf0832e32274d1f7d2a440a9374b34fc\"" Dec 13 01:32:17.591520 containerd[1508]: time="2024-12-13T01:32:17.591300613Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Dec 13 01:32:17.599226 systemd[1]: Started cri-containerd-8f4222cb3b52a0da0ccac27015f347567303d7cbd8ee8ea1b73330b628aa2e67.scope - libcontainer container 8f4222cb3b52a0da0ccac27015f347567303d7cbd8ee8ea1b73330b628aa2e67. Dec 13 01:32:17.630759 containerd[1508]: time="2024-12-13T01:32:17.630588967Z" level=info msg="StartContainer for \"8f4222cb3b52a0da0ccac27015f347567303d7cbd8ee8ea1b73330b628aa2e67\" returns successfully" Dec 13 01:32:18.363390 kubelet[2771]: I1213 01:32:18.363267 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-clgbk" podStartSLOduration=1.363239216 podStartE2EDuration="1.363239216s" podCreationTimestamp="2024-12-13 01:32:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:18.362970534 +0000 UTC m=+15.207217347" watchObservedRunningTime="2024-12-13 01:32:18.363239216 +0000 UTC m=+15.207486029" Dec 13 01:32:20.135821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4161888525.mount: Deactivated successfully. 
Dec 13 01:32:20.173565 containerd[1508]: time="2024-12-13T01:32:20.173506455Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:20.174751 containerd[1508]: time="2024-12-13T01:32:20.174697178Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852936" Dec 13 01:32:20.175683 containerd[1508]: time="2024-12-13T01:32:20.175612367Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:20.177824 containerd[1508]: time="2024-12-13T01:32:20.177770758Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:20.178999 containerd[1508]: time="2024-12-13T01:32:20.178480764Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.587143211s" Dec 13 01:32:20.178999 containerd[1508]: time="2024-12-13T01:32:20.178506551Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Dec 13 01:32:20.181410 containerd[1508]: time="2024-12-13T01:32:20.181373254Z" level=info msg="CreateContainer within sandbox \"3426cdf27080687913072bc0d4afbf5acf0832e32274d1f7d2a440a9374b34fc\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Dec 13 01:32:20.195920 containerd[1508]: time="2024-12-13T01:32:20.195869484Z" level=info msg="CreateContainer within sandbox \"3426cdf27080687913072bc0d4afbf5acf0832e32274d1f7d2a440a9374b34fc\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"5f4389d4e72e7c8cfb2fe53d22609565d658210da35476ef734c66ddc66101d4\"" Dec 13 01:32:20.197722 containerd[1508]: time="2024-12-13T01:32:20.196709372Z" level=info msg="StartContainer for \"5f4389d4e72e7c8cfb2fe53d22609565d658210da35476ef734c66ddc66101d4\"" Dec 13 01:32:20.232242 systemd[1]: Started cri-containerd-5f4389d4e72e7c8cfb2fe53d22609565d658210da35476ef734c66ddc66101d4.scope - libcontainer container 5f4389d4e72e7c8cfb2fe53d22609565d658210da35476ef734c66ddc66101d4. Dec 13 01:32:20.257751 containerd[1508]: time="2024-12-13T01:32:20.257700483Z" level=info msg="StartContainer for \"5f4389d4e72e7c8cfb2fe53d22609565d658210da35476ef734c66ddc66101d4\" returns successfully" Dec 13 01:32:20.260972 systemd[1]: cri-containerd-5f4389d4e72e7c8cfb2fe53d22609565d658210da35476ef734c66ddc66101d4.scope: Deactivated successfully. 
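The 5f4389d4… container that starts and exits almost immediately here is kube-flannel's install-cni-plugin init step: it only copies the flannel CNI binary into the host's plugin directory and then terminates, which is why its scope is deactivated and its shim disconnects moments later. In the stock kube-flannel DaemonSet this step is declared roughly as in the sketch below; the exact image tag and paths are assumptions based on the flannel-cni-plugin:v1.1.2 image pulled above and the cni-plugin host-path volume mounted earlier, not something read from this host's manifests.

    initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.1.2
        command: ["cp"]
        args: ["-f", "/flannel", "/opt/cni/bin/flannel"]
        volumeMounts:
          - name: cni-plugin
            mountPath: /opt/cni/bin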
Dec 13 01:32:20.301146 containerd[1508]: time="2024-12-13T01:32:20.301032349Z" level=info msg="shim disconnected" id=5f4389d4e72e7c8cfb2fe53d22609565d658210da35476ef734c66ddc66101d4 namespace=k8s.io Dec 13 01:32:20.301612 containerd[1508]: time="2024-12-13T01:32:20.301517355Z" level=warning msg="cleaning up after shim disconnected" id=5f4389d4e72e7c8cfb2fe53d22609565d658210da35476ef734c66ddc66101d4 namespace=k8s.io Dec 13 01:32:20.301612 containerd[1508]: time="2024-12-13T01:32:20.301553492Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:32:20.356279 containerd[1508]: time="2024-12-13T01:32:20.355968303Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Dec 13 01:32:21.009093 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f4389d4e72e7c8cfb2fe53d22609565d658210da35476ef734c66ddc66101d4-rootfs.mount: Deactivated successfully. Dec 13 01:32:23.003920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount269175914.mount: Deactivated successfully. Dec 13 01:32:23.662531 containerd[1508]: time="2024-12-13T01:32:23.662466109Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:23.663580 containerd[1508]: time="2024-12-13T01:32:23.663541788Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Dec 13 01:32:23.664596 containerd[1508]: time="2024-12-13T01:32:23.664555151Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:23.667507 containerd[1508]: time="2024-12-13T01:32:23.667462181Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:23.668883 containerd[1508]: time="2024-12-13T01:32:23.668269338Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 3.312258565s" Dec 13 01:32:23.668883 containerd[1508]: time="2024-12-13T01:32:23.668298603Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Dec 13 01:32:23.671112 containerd[1508]: time="2024-12-13T01:32:23.671072664Z" level=info msg="CreateContainer within sandbox \"3426cdf27080687913072bc0d4afbf5acf0832e32274d1f7d2a440a9374b34fc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:32:23.697648 containerd[1508]: time="2024-12-13T01:32:23.697572411Z" level=info msg="CreateContainer within sandbox \"3426cdf27080687913072bc0d4afbf5acf0832e32274d1f7d2a440a9374b34fc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3a4289d55ffcb6495b6a8107136ab440ddea285849ae852b33caa53e0fd0dacd\"" Dec 13 01:32:23.698461 containerd[1508]: time="2024-12-13T01:32:23.698265145Z" level=info msg="StartContainer for \"3a4289d55ffcb6495b6a8107136ab440ddea285849ae852b33caa53e0fd0dacd\"" Dec 13 01:32:23.739171 systemd[1]: Started cri-containerd-3a4289d55ffcb6495b6a8107136ab440ddea285849ae852b33caa53e0fd0dacd.scope - 
libcontainer container 3a4289d55ffcb6495b6a8107136ab440ddea285849ae852b33caa53e0fd0dacd. Dec 13 01:32:23.779903 systemd[1]: cri-containerd-3a4289d55ffcb6495b6a8107136ab440ddea285849ae852b33caa53e0fd0dacd.scope: Deactivated successfully. Dec 13 01:32:23.782710 containerd[1508]: time="2024-12-13T01:32:23.782206634Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c89ca0b_14f4_4f42_adf8_f36e37ebdafa.slice/cri-containerd-3a4289d55ffcb6495b6a8107136ab440ddea285849ae852b33caa53e0fd0dacd.scope/memory.events\": no such file or directory" Dec 13 01:32:23.785471 containerd[1508]: time="2024-12-13T01:32:23.785344125Z" level=info msg="StartContainer for \"3a4289d55ffcb6495b6a8107136ab440ddea285849ae852b33caa53e0fd0dacd\" returns successfully" Dec 13 01:32:23.813316 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a4289d55ffcb6495b6a8107136ab440ddea285849ae852b33caa53e0fd0dacd-rootfs.mount: Deactivated successfully. Dec 13 01:32:23.850853 kubelet[2771]: I1213 01:32:23.850817 2771 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:32:23.868469 containerd[1508]: time="2024-12-13T01:32:23.868385072Z" level=info msg="shim disconnected" id=3a4289d55ffcb6495b6a8107136ab440ddea285849ae852b33caa53e0fd0dacd namespace=k8s.io Dec 13 01:32:23.868469 containerd[1508]: time="2024-12-13T01:32:23.868463408Z" level=warning msg="cleaning up after shim disconnected" id=3a4289d55ffcb6495b6a8107136ab440ddea285849ae852b33caa53e0fd0dacd namespace=k8s.io Dec 13 01:32:23.869140 containerd[1508]: time="2024-12-13T01:32:23.868477645Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:32:23.908070 kubelet[2771]: I1213 01:32:23.908001 2771 topology_manager.go:215] "Topology Admit Handler" podUID="825e3885-d238-43a4-a967-bc14b904216e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-c852l" Dec 13 01:32:23.912304 kubelet[2771]: I1213 01:32:23.912099 2771 topology_manager.go:215] "Topology Admit Handler" podUID="681f446a-48bb-478a-8a53-843263137665" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4s7cf" Dec 13 01:32:23.916377 systemd[1]: Created slice kubepods-burstable-pod825e3885_d238_43a4_a967_bc14b904216e.slice - libcontainer container kubepods-burstable-pod825e3885_d238_43a4_a967_bc14b904216e.slice. Dec 13 01:32:23.925039 systemd[1]: Created slice kubepods-burstable-pod681f446a_48bb_478a_8a53_843263137665.slice - libcontainer container kubepods-burstable-pod681f446a_48bb_478a_8a53_843263137665.slice. 
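The install-cni container (3a4289d55…) that just ran is the second flannel init step: it drops the CNI network configuration into /etc/cni/net.d, which is what the earlier "No cni config template is specified, wait for other system components to drop the config" message was waiting for. The flannel subnet lease file /run/flannel/subnet.env, however, is only written by the main kube-flannel container, so the coredns sandbox attempts that follow still fail in loadFlannelSubnetEnv. A conflist consistent with the delegate netconf dumped further down (name cbr0, cniVersion 0.3.1, hairpinMode and isDefaultGateway enabled) typically looks like the sketch below; the 10-flannel.conflist file name and the portmap entry are assumptions taken from the stock flannel manifest rather than from this log.

    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": { "hairpinMode": true, "isDefaultGateway": true }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }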
Dec 13 01:32:23.999342 kubelet[2771]: I1213 01:32:23.999242 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vc6c\" (UniqueName: \"kubernetes.io/projected/825e3885-d238-43a4-a967-bc14b904216e-kube-api-access-2vc6c\") pod \"coredns-7db6d8ff4d-c852l\" (UID: \"825e3885-d238-43a4-a967-bc14b904216e\") " pod="kube-system/coredns-7db6d8ff4d-c852l" Dec 13 01:32:23.999342 kubelet[2771]: I1213 01:32:23.999322 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/681f446a-48bb-478a-8a53-843263137665-config-volume\") pod \"coredns-7db6d8ff4d-4s7cf\" (UID: \"681f446a-48bb-478a-8a53-843263137665\") " pod="kube-system/coredns-7db6d8ff4d-4s7cf" Dec 13 01:32:23.999646 kubelet[2771]: I1213 01:32:23.999361 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/825e3885-d238-43a4-a967-bc14b904216e-config-volume\") pod \"coredns-7db6d8ff4d-c852l\" (UID: \"825e3885-d238-43a4-a967-bc14b904216e\") " pod="kube-system/coredns-7db6d8ff4d-c852l" Dec 13 01:32:23.999646 kubelet[2771]: I1213 01:32:23.999391 2771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrqkz\" (UniqueName: \"kubernetes.io/projected/681f446a-48bb-478a-8a53-843263137665-kube-api-access-vrqkz\") pod \"coredns-7db6d8ff4d-4s7cf\" (UID: \"681f446a-48bb-478a-8a53-843263137665\") " pod="kube-system/coredns-7db6d8ff4d-4s7cf" Dec 13 01:32:24.222850 containerd[1508]: time="2024-12-13T01:32:24.222652779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-c852l,Uid:825e3885-d238-43a4-a967-bc14b904216e,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:24.229905 containerd[1508]: time="2024-12-13T01:32:24.229527721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4s7cf,Uid:681f446a-48bb-478a-8a53-843263137665,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:24.303468 containerd[1508]: time="2024-12-13T01:32:24.302619976Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-c852l,Uid:825e3885-d238-43a4-a967-bc14b904216e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4f50634114836d030d749f4aeb059824684da64b7baf067f0fa5e8464436e851\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:32:24.303641 kubelet[2771]: E1213 01:32:24.302934 2771 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f50634114836d030d749f4aeb059824684da64b7baf067f0fa5e8464436e851\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:32:24.303796 kubelet[2771]: E1213 01:32:24.303743 2771 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f50634114836d030d749f4aeb059824684da64b7baf067f0fa5e8464436e851\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-c852l" Dec 13 01:32:24.303970 kubelet[2771]: E1213 01:32:24.303783 2771 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f50634114836d030d749f4aeb059824684da64b7baf067f0fa5e8464436e851\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-c852l" Dec 13 01:32:24.303970 kubelet[2771]: E1213 01:32:24.303877 2771 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-c852l_kube-system(825e3885-d238-43a4-a967-bc14b904216e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-c852l_kube-system(825e3885-d238-43a4-a967-bc14b904216e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f50634114836d030d749f4aeb059824684da64b7baf067f0fa5e8464436e851\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-c852l" podUID="825e3885-d238-43a4-a967-bc14b904216e" Dec 13 01:32:24.304836 containerd[1508]: time="2024-12-13T01:32:24.304758451Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4s7cf,Uid:681f446a-48bb-478a-8a53-843263137665,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8e97206c677fa3058df5c06177182f4a9fbb3db35560bb1e29f39cd9625c71e5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:32:24.305512 kubelet[2771]: E1213 01:32:24.305305 2771 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e97206c677fa3058df5c06177182f4a9fbb3db35560bb1e29f39cd9625c71e5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:32:24.305512 kubelet[2771]: E1213 01:32:24.305371 2771 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e97206c677fa3058df5c06177182f4a9fbb3db35560bb1e29f39cd9625c71e5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-4s7cf" Dec 13 01:32:24.305512 kubelet[2771]: E1213 01:32:24.305398 2771 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e97206c677fa3058df5c06177182f4a9fbb3db35560bb1e29f39cd9625c71e5\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-4s7cf" Dec 13 01:32:24.305512 kubelet[2771]: E1213 01:32:24.305447 2771 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-4s7cf_kube-system(681f446a-48bb-478a-8a53-843263137665)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-4s7cf_kube-system(681f446a-48bb-478a-8a53-843263137665)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e97206c677fa3058df5c06177182f4a9fbb3db35560bb1e29f39cd9625c71e5\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-4s7cf" 
podUID="681f446a-48bb-478a-8a53-843263137665" Dec 13 01:32:24.372762 containerd[1508]: time="2024-12-13T01:32:24.372684877Z" level=info msg="CreateContainer within sandbox \"3426cdf27080687913072bc0d4afbf5acf0832e32274d1f7d2a440a9374b34fc\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Dec 13 01:32:24.395938 containerd[1508]: time="2024-12-13T01:32:24.395869647Z" level=info msg="CreateContainer within sandbox \"3426cdf27080687913072bc0d4afbf5acf0832e32274d1f7d2a440a9374b34fc\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"4cbf7e3a8883af7221a2ed3d8f7e3a1b869d73e7db509ef0e949c8df3e453dea\"" Dec 13 01:32:24.397387 containerd[1508]: time="2024-12-13T01:32:24.397143496Z" level=info msg="StartContainer for \"4cbf7e3a8883af7221a2ed3d8f7e3a1b869d73e7db509ef0e949c8df3e453dea\"" Dec 13 01:32:24.446253 systemd[1]: Started cri-containerd-4cbf7e3a8883af7221a2ed3d8f7e3a1b869d73e7db509ef0e949c8df3e453dea.scope - libcontainer container 4cbf7e3a8883af7221a2ed3d8f7e3a1b869d73e7db509ef0e949c8df3e453dea. Dec 13 01:32:24.491181 containerd[1508]: time="2024-12-13T01:32:24.490907777Z" level=info msg="StartContainer for \"4cbf7e3a8883af7221a2ed3d8f7e3a1b869d73e7db509ef0e949c8df3e453dea\" returns successfully" Dec 13 01:32:24.815945 systemd[1]: run-netns-cni\x2d21d3d87e\x2d8543\x2d2d14\x2d33a0\x2dcf4bccb00613.mount: Deactivated successfully. Dec 13 01:32:24.816176 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f50634114836d030d749f4aeb059824684da64b7baf067f0fa5e8464436e851-shm.mount: Deactivated successfully. Dec 13 01:32:25.574968 systemd-networkd[1393]: flannel.1: Link UP Dec 13 01:32:25.575102 systemd-networkd[1393]: flannel.1: Gained carrier Dec 13 01:32:26.646476 systemd-networkd[1393]: flannel.1: Gained IPv6LL Dec 13 01:32:36.286909 containerd[1508]: time="2024-12-13T01:32:36.286599591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4s7cf,Uid:681f446a-48bb-478a-8a53-843263137665,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:36.286909 containerd[1508]: time="2024-12-13T01:32:36.286612755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-c852l,Uid:825e3885-d238-43a4-a967-bc14b904216e,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:36.350540 systemd-networkd[1393]: cni0: Link UP Dec 13 01:32:36.350549 systemd-networkd[1393]: cni0: Gained carrier Dec 13 01:32:36.361197 systemd-networkd[1393]: cni0: Lost carrier Dec 13 01:32:36.365950 systemd-networkd[1393]: veth82462f95: Link UP Dec 13 01:32:36.366650 systemd-networkd[1393]: vethd6961213: Link UP Dec 13 01:32:36.369594 kernel: cni0: port 1(veth82462f95) entered blocking state Dec 13 01:32:36.369706 kernel: cni0: port 1(veth82462f95) entered disabled state Dec 13 01:32:36.373380 kernel: veth82462f95: entered allmulticast mode Dec 13 01:32:36.377366 kernel: veth82462f95: entered promiscuous mode Dec 13 01:32:36.379363 kernel: cni0: port 1(veth82462f95) entered blocking state Dec 13 01:32:36.381185 kernel: cni0: port 1(veth82462f95) entered forwarding state Dec 13 01:32:36.381212 kernel: cni0: port 1(veth82462f95) entered disabled state Dec 13 01:32:36.385642 kernel: cni0: port 2(vethd6961213) entered blocking state Dec 13 01:32:36.385747 kernel: cni0: port 2(vethd6961213) entered disabled state Dec 13 01:32:36.390552 kernel: vethd6961213: entered allmulticast mode Dec 13 01:32:36.393090 kernel: vethd6961213: entered promiscuous mode Dec 13 01:32:36.406487 kernel: cni0: port 1(veth82462f95) entered blocking state Dec 13 01:32:36.406565 
kernel: cni0: port 1(veth82462f95) entered forwarding state Dec 13 01:32:36.407768 systemd-networkd[1393]: veth82462f95: Gained carrier Dec 13 01:32:36.408602 systemd-networkd[1393]: cni0: Gained carrier Dec 13 01:32:36.415548 kernel: cni0: port 2(vethd6961213) entered blocking state Dec 13 01:32:36.415657 kernel: cni0: port 2(vethd6961213) entered forwarding state Dec 13 01:32:36.415389 systemd-networkd[1393]: vethd6961213: Gained carrier Dec 13 01:32:36.415751 containerd[1508]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} Dec 13 01:32:36.415751 containerd[1508]: delegateAdd: netconf sent to delegate plugin: Dec 13 01:32:36.443046 containerd[1508]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T01:32:36.442749494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:36.443046 containerd[1508]: time="2024-12-13T01:32:36.442816359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:36.443046 containerd[1508]: time="2024-12-13T01:32:36.442829343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:36.443046 containerd[1508]: time="2024-12-13T01:32:36.442916306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:36.458344 containerd[1508]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} Dec 13 01:32:36.458344 containerd[1508]: delegateAdd: netconf sent to delegate plugin: Dec 13 01:32:36.476357 systemd[1]: Started cri-containerd-fd26b4266bb81f40e4e1e28358cb93c19bafc2d3f351dc44e593411fce6937d6.scope - libcontainer container fd26b4266bb81f40e4e1e28358cb93c19bafc2d3f351dc44e593411fce6937d6. Dec 13 01:32:36.485081 containerd[1508]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T01:32:36.484630992Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:36.485081 containerd[1508]: time="2024-12-13T01:32:36.484693890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:36.485081 containerd[1508]: time="2024-12-13T01:32:36.484706694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:36.485081 containerd[1508]: time="2024-12-13T01:32:36.484784749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:36.518330 systemd[1]: Started cri-containerd-3443119da71cc818c01460ff0e74ec50f1e87d5f476020ec03574babbd531587.scope - libcontainer container 3443119da71cc818c01460ff0e74ec50f1e87d5f476020ec03574babbd531587. Dec 13 01:32:36.562317 containerd[1508]: time="2024-12-13T01:32:36.562150235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-c852l,Uid:825e3885-d238-43a4-a967-bc14b904216e,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd26b4266bb81f40e4e1e28358cb93c19bafc2d3f351dc44e593411fce6937d6\"" Dec 13 01:32:36.573234 containerd[1508]: time="2024-12-13T01:32:36.573042978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4s7cf,Uid:681f446a-48bb-478a-8a53-843263137665,Namespace:kube-system,Attempt:0,} returns sandbox id \"3443119da71cc818c01460ff0e74ec50f1e87d5f476020ec03574babbd531587\"" Dec 13 01:32:36.597141 containerd[1508]: time="2024-12-13T01:32:36.597077470Z" level=info msg="CreateContainer within sandbox \"3443119da71cc818c01460ff0e74ec50f1e87d5f476020ec03574babbd531587\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:32:36.597269 containerd[1508]: time="2024-12-13T01:32:36.597238041Z" level=info msg="CreateContainer within sandbox \"fd26b4266bb81f40e4e1e28358cb93c19bafc2d3f351dc44e593411fce6937d6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:32:36.621686 containerd[1508]: time="2024-12-13T01:32:36.621621426Z" level=info msg="CreateContainer within sandbox \"fd26b4266bb81f40e4e1e28358cb93c19bafc2d3f351dc44e593411fce6937d6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"75a30aa3ef49897cbfe4834abd6e9448e4a7807ea48a640197639d2eb038703f\"" Dec 13 01:32:36.622592 containerd[1508]: time="2024-12-13T01:32:36.622244672Z" level=info msg="StartContainer for \"75a30aa3ef49897cbfe4834abd6e9448e4a7807ea48a640197639d2eb038703f\"" Dec 13 01:32:36.627282 containerd[1508]: time="2024-12-13T01:32:36.627191689Z" level=info msg="CreateContainer within sandbox \"3443119da71cc818c01460ff0e74ec50f1e87d5f476020ec03574babbd531587\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d4bf4477e59dfa5ac4a9145cf338caa3551461aa69c3318650148d7212093c34\"" Dec 13 01:32:36.629039 containerd[1508]: time="2024-12-13T01:32:36.629004338Z" level=info msg="StartContainer for \"d4bf4477e59dfa5ac4a9145cf338caa3551461aa69c3318650148d7212093c34\"" Dec 13 01:32:36.670226 systemd[1]: Started cri-containerd-75a30aa3ef49897cbfe4834abd6e9448e4a7807ea48a640197639d2eb038703f.scope - libcontainer container 75a30aa3ef49897cbfe4834abd6e9448e4a7807ea48a640197639d2eb038703f. Dec 13 01:32:36.679590 systemd[1]: Started cri-containerd-d4bf4477e59dfa5ac4a9145cf338caa3551461aa69c3318650148d7212093c34.scope - libcontainer container d4bf4477e59dfa5ac4a9145cf338caa3551461aa69c3318650148d7212093c34. 
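Between the failed sandbox attempts at 01:32:24 and the successful ones at 01:32:36 the missing piece arrives: once the kube-flannel container (4cbf7e3a…) is running, flanneld writes its subnet lease to /run/flannel/subnet.env and brings up the flannel.1 VXLAN interface, so the retried CNI ADD for each coredns pod succeeds and creates cni0 plus the veth pairs reported in the kernel messages. Given the node pod CIDR 192.168.0.0/24 and the MTU of 1450 visible in the delegate netconf above, the lease file would look roughly as follows; FLANNEL_NETWORK is inferred from the 192.168.0.0/17 route in the same dump and, like the IPMASQ value, is an assumption rather than a captured file.

    FLANNEL_NETWORK=192.168.0.0/17
    FLANNEL_SUBNET=192.168.0.1/24
    FLANNEL_MTU=1450
    FLANNEL_IPMASQ=true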
Dec 13 01:32:36.708493 containerd[1508]: time="2024-12-13T01:32:36.708459961Z" level=info msg="StartContainer for \"75a30aa3ef49897cbfe4834abd6e9448e4a7807ea48a640197639d2eb038703f\" returns successfully" Dec 13 01:32:36.726764 containerd[1508]: time="2024-12-13T01:32:36.726723945Z" level=info msg="StartContainer for \"d4bf4477e59dfa5ac4a9145cf338caa3551461aa69c3318650148d7212093c34\" returns successfully" Dec 13 01:32:37.458738 kubelet[2771]: I1213 01:32:37.455081 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-bk8ff" podStartSLOduration=14.373084607 podStartE2EDuration="20.455039371s" podCreationTimestamp="2024-12-13 01:32:17 +0000 UTC" firstStartedPulling="2024-12-13 01:32:17.587451187 +0000 UTC m=+14.431697959" lastFinishedPulling="2024-12-13 01:32:23.669405951 +0000 UTC m=+20.513652723" observedRunningTime="2024-12-13 01:32:25.388323058 +0000 UTC m=+22.232569910" watchObservedRunningTime="2024-12-13 01:32:37.455039371 +0000 UTC m=+34.299286173" Dec 13 01:32:37.460002 kubelet[2771]: I1213 01:32:37.459721 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-4s7cf" podStartSLOduration=20.459696187 podStartE2EDuration="20.459696187s" podCreationTimestamp="2024-12-13 01:32:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:37.453797729 +0000 UTC m=+34.298044521" watchObservedRunningTime="2024-12-13 01:32:37.459696187 +0000 UTC m=+34.303942959" Dec 13 01:32:37.480899 kubelet[2771]: I1213 01:32:37.480833 2771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-c852l" podStartSLOduration=20.480812746 podStartE2EDuration="20.480812746s" podCreationTimestamp="2024-12-13 01:32:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:37.480544596 +0000 UTC m=+34.324791378" watchObservedRunningTime="2024-12-13 01:32:37.480812746 +0000 UTC m=+34.325059519" Dec 13 01:32:37.590253 systemd-networkd[1393]: cni0: Gained IPv6LL Dec 13 01:32:37.782391 systemd-networkd[1393]: vethd6961213: Gained IPv6LL Dec 13 01:32:38.230436 systemd-networkd[1393]: veth82462f95: Gained IPv6LL Dec 13 01:33:37.339963 kubelet[2771]: I1213 01:33:37.339876 2771 log.go:245] http: TLS handshake error from 185.247.137.238:60045: read tcp 157.90.151.24:10250->185.247.137.238:60045: read: connection reset by peer Dec 13 01:34:00.779368 systemd[1]: Started sshd@5-157.90.151.24:22-185.213.174.118:51052.service - OpenSSH per-connection server daemon (185.213.174.118:51052). Dec 13 01:34:00.940620 sshd[4002]: Invalid user git from 185.213.174.118 port 51052 Dec 13 01:34:00.958633 sshd[4002]: Connection closed by invalid user git 185.213.174.118 port 51052 [preauth] Dec 13 01:34:00.964526 systemd[1]: sshd@5-157.90.151.24:22-185.213.174.118:51052.service: Deactivated successfully. Dec 13 01:34:10.393462 kubelet[2771]: I1213 01:34:10.393352 2771 log.go:245] http: TLS handshake error from 185.247.137.251:45699: read tcp 157.90.151.24:10250->185.247.137.251:45699: read: connection reset by peer Dec 13 01:34:12.994506 systemd[1]: Started sshd@6-157.90.151.24:22-185.213.174.118:35310.service - OpenSSH per-connection server daemon (185.213.174.118:35310). 
Dec 13 01:34:13.197570 sshd[4072]: Invalid user emcali from 185.213.174.118 port 35310 Dec 13 01:34:13.216928 sshd[4072]: Connection closed by invalid user emcali 185.213.174.118 port 35310 [preauth] Dec 13 01:34:13.223010 systemd[1]: sshd@6-157.90.151.24:22-185.213.174.118:35310.service: Deactivated successfully. Dec 13 01:34:25.268274 systemd[1]: Started sshd@7-157.90.151.24:22-185.213.174.118:43604.service - OpenSSH per-connection server daemon (185.213.174.118:43604). Dec 13 01:34:25.599409 sshd[4122]: Connection closed by authenticating user root 185.213.174.118 port 43604 [preauth] Dec 13 01:34:25.604617 systemd[1]: sshd@7-157.90.151.24:22-185.213.174.118:43604.service: Deactivated successfully. Dec 13 01:34:38.892706 systemd[1]: Started sshd@8-157.90.151.24:22-185.213.174.118:60742.service - OpenSSH per-connection server daemon (185.213.174.118:60742). Dec 13 01:34:39.078193 sshd[4193]: Connection closed by authenticating user root 185.213.174.118 port 60742 [preauth] Dec 13 01:34:39.083686 systemd[1]: sshd@8-157.90.151.24:22-185.213.174.118:60742.service: Deactivated successfully. Dec 13 01:34:43.448333 kubelet[2771]: I1213 01:34:43.448238 2771 log.go:245] http: TLS handshake error from 87.236.176.236:58505: read tcp 157.90.151.24:10250->87.236.176.236:58505: read: connection reset by peer Dec 13 01:34:58.884483 systemd[1]: Started sshd@9-157.90.151.24:22-185.213.174.118:56018.service - OpenSSH per-connection server daemon (185.213.174.118:56018). Dec 13 01:34:59.093862 sshd[4284]: Invalid user max from 185.213.174.118 port 56018 Dec 13 01:34:59.124690 sshd[4284]: Connection closed by invalid user max 185.213.174.118 port 56018 [preauth] Dec 13 01:34:59.130411 systemd[1]: sshd@9-157.90.151.24:22-185.213.174.118:56018.service: Deactivated successfully. Dec 13 01:35:07.093999 update_engine[1490]: I20241213 01:35:07.093879 1490 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 13 01:35:07.093999 update_engine[1490]: I20241213 01:35:07.093969 1490 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 13 01:35:07.095133 update_engine[1490]: I20241213 01:35:07.094470 1490 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 13 01:35:07.095646 update_engine[1490]: I20241213 01:35:07.095568 1490 omaha_request_params.cc:62] Current group set to stable Dec 13 01:35:07.098541 update_engine[1490]: I20241213 01:35:07.097990 1490 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 13 01:35:07.098541 update_engine[1490]: I20241213 01:35:07.098032 1490 update_attempter.cc:643] Scheduling an action processor start. 
Dec 13 01:35:07.098541 update_engine[1490]: I20241213 01:35:07.098104 1490 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 01:35:07.098541 update_engine[1490]: I20241213 01:35:07.098176 1490 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 13 01:35:07.098541 update_engine[1490]: I20241213 01:35:07.098325 1490 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 01:35:07.098541 update_engine[1490]: I20241213 01:35:07.098343 1490 omaha_request_action.cc:272] Request: Dec 13 01:35:07.098541 update_engine[1490]: I20241213 01:35:07.098359 1490 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 01:35:07.099342 locksmithd[1517]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 13 01:35:07.104238 update_engine[1490]: I20241213 01:35:07.104182 1490 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 01:35:07.104844 update_engine[1490]: I20241213 01:35:07.104751 1490 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 01:35:07.107234 update_engine[1490]: E20241213 01:35:07.107165 1490 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 01:35:07.107337 update_engine[1490]: I20241213 01:35:07.107285 1490 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 13 01:35:16.473550 kubelet[2771]: I1213 01:35:16.473447 2771 log.go:245] http: TLS handshake error from 87.236.176.241:44049: tls: client requested unsupported application protocols ([http/0.9 http/1.0 spdy/1 spdy/2 spdy/3 h2c hq]) Dec 13 01:35:17.006305 update_engine[1490]: I20241213 01:35:17.006141 1490 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 01:35:17.007241 update_engine[1490]: I20241213 01:35:17.006573 1490 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 01:35:17.007241 update_engine[1490]: I20241213 01:35:17.006942 1490 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 01:35:17.007736 update_engine[1490]: E20241213 01:35:17.007678 1490 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 01:35:17.007795 update_engine[1490]: I20241213 01:35:17.007766 1490 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 13 01:35:26.452550 systemd[1]: Started sshd@10-157.90.151.24:22-185.213.174.118:52994.service - OpenSSH per-connection server daemon (185.213.174.118:52994). Dec 13 01:35:26.654413 sshd[4405]: Connection closed by authenticating user root 185.213.174.118 port 52994 [preauth] Dec 13 01:35:26.658266 systemd[1]: sshd@10-157.90.151.24:22-185.213.174.118:52994.service: Deactivated successfully. Dec 13 01:35:27.007956 update_engine[1490]: I20241213 01:35:27.007776 1490 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 01:35:27.008692 update_engine[1490]: I20241213 01:35:27.008364 1490 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 01:35:27.008883 update_engine[1490]: I20241213 01:35:27.008807 1490 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 13 01:35:27.009919 update_engine[1490]: E20241213 01:35:27.009852 1490 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 01:35:27.009999 update_engine[1490]: I20241213 01:35:27.009965 1490 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 13 01:35:37.004402 update_engine[1490]: I20241213 01:35:37.004262 1490 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 01:35:37.005185 update_engine[1490]: I20241213 01:35:37.004796 1490 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 01:35:37.005377 update_engine[1490]: I20241213 01:35:37.005267 1490 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 01:35:37.006184 update_engine[1490]: E20241213 01:35:37.006099 1490 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 01:35:37.006341 update_engine[1490]: I20241213 01:35:37.006197 1490 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 01:35:37.006341 update_engine[1490]: I20241213 01:35:37.006215 1490 omaha_request_action.cc:617] Omaha request response: Dec 13 01:35:37.006486 update_engine[1490]: E20241213 01:35:37.006347 1490 omaha_request_action.cc:636] Omaha request network transfer failed. Dec 13 01:35:37.006486 update_engine[1490]: I20241213 01:35:37.006378 1490 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Dec 13 01:35:37.006486 update_engine[1490]: I20241213 01:35:37.006393 1490 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 01:35:37.006486 update_engine[1490]: I20241213 01:35:37.006407 1490 update_attempter.cc:306] Processing Done. Dec 13 01:35:37.006486 update_engine[1490]: E20241213 01:35:37.006430 1490 update_attempter.cc:619] Update failed. Dec 13 01:35:37.006486 update_engine[1490]: I20241213 01:35:37.006445 1490 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 13 01:35:37.006486 update_engine[1490]: I20241213 01:35:37.006460 1490 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 13 01:35:37.006486 update_engine[1490]: I20241213 01:35:37.006479 1490 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Dec 13 01:35:37.007037 update_engine[1490]: I20241213 01:35:37.006592 1490 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 01:35:37.007037 update_engine[1490]: I20241213 01:35:37.006660 1490 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 01:35:37.007037 update_engine[1490]: I20241213 01:35:37.006676 1490 omaha_request_action.cc:272] Request: Dec 13 01:35:37.007037 update_engine[1490]: I20241213 01:35:37.006691 1490 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 01:35:37.007037 update_engine[1490]: I20241213 01:35:37.006973 1490 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 01:35:37.007834 update_engine[1490]: I20241213 01:35:37.007263 1490 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 13 01:35:37.007902 locksmithd[1517]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 13 01:35:37.008715 update_engine[1490]: E20241213 01:35:37.008351 1490 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 01:35:37.008715 update_engine[1490]: I20241213 01:35:37.008433 1490 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 01:35:37.008715 update_engine[1490]: I20241213 01:35:37.008452 1490 omaha_request_action.cc:617] Omaha request response: Dec 13 01:35:37.008715 update_engine[1490]: I20241213 01:35:37.008471 1490 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 01:35:37.008715 update_engine[1490]: I20241213 01:35:37.008486 1490 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 01:35:37.008715 update_engine[1490]: I20241213 01:35:37.008502 1490 update_attempter.cc:306] Processing Done. Dec 13 01:35:37.008715 update_engine[1490]: I20241213 01:35:37.008519 1490 update_attempter.cc:310] Error event sent. Dec 13 01:35:37.008715 update_engine[1490]: I20241213 01:35:37.008543 1490 update_check_scheduler.cc:74] Next update check in 45m21s Dec 13 01:35:37.009375 locksmithd[1517]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 13 01:35:49.527311 kubelet[2771]: I1213 01:35:49.527217 2771 log.go:245] http: TLS handshake error from 87.236.176.232:53563: tls: client requested unsupported application protocols ([hq h2c spdy/3 spdy/2 spdy/1 http/1.0 http/0.9]) Dec 13 01:36:03.305596 systemd[1]: Started sshd@11-157.90.151.24:22-185.213.174.118:59972.service - OpenSSH per-connection server daemon (185.213.174.118:59972). Dec 13 01:36:03.482019 sshd[4576]: Invalid user kam from 185.213.174.118 port 59972 Dec 13 01:36:03.501232 sshd[4576]: Connection closed by invalid user kam 185.213.174.118 port 59972 [preauth] Dec 13 01:36:03.507173 systemd[1]: sshd@11-157.90.151.24:22-185.213.174.118:59972.service: Deactivated successfully. Dec 13 01:36:22.579265 kubelet[2771]: I1213 01:36:22.579201 2771 log.go:245] http: TLS handshake error from 185.247.137.239:56741: tls: client offered only unsupported versions: [302 301] Dec 13 01:36:45.458552 systemd[1]: Started sshd@12-157.90.151.24:22-147.75.109.163:36830.service - OpenSSH per-connection server daemon (147.75.109.163:36830). Dec 13 01:36:46.460742 sshd[4752]: Accepted publickey for core from 147.75.109.163 port 36830 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:36:46.464255 sshd[4752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:46.475289 systemd-logind[1483]: New session 6 of user core. Dec 13 01:36:46.481428 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:36:47.259265 sshd[4752]: pam_unix(sshd:session): session closed for user core Dec 13 01:36:47.268789 systemd[1]: sshd@12-157.90.151.24:22-147.75.109.163:36830.service: Deactivated successfully. Dec 13 01:36:47.273373 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:36:47.275622 systemd-logind[1483]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:36:47.277934 systemd-logind[1483]: Removed session 6. 
Dec 13 01:36:52.443630 systemd[1]: Started sshd@13-157.90.151.24:22-147.75.109.163:35282.service - OpenSSH per-connection server daemon (147.75.109.163:35282). Dec 13 01:36:53.446629 sshd[4795]: Accepted publickey for core from 147.75.109.163 port 35282 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:36:53.450645 sshd[4795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:36:53.462569 systemd-logind[1483]: New session 7 of user core. Dec 13 01:36:53.467391 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:36:54.250494 sshd[4795]: pam_unix(sshd:session): session closed for user core Dec 13 01:36:54.255931 systemd[1]: sshd@13-157.90.151.24:22-147.75.109.163:35282.service: Deactivated successfully. Dec 13 01:36:54.260299 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:36:54.265126 systemd-logind[1483]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:36:54.267197 systemd-logind[1483]: Removed session 7. Dec 13 01:36:55.659015 kubelet[2771]: I1213 01:36:55.658854 2771 log.go:245] http: TLS handshake error from 185.247.137.220:55211: read tcp 157.90.151.24:10250->185.247.137.220:55211: read: connection reset by peer Dec 13 01:36:59.432564 systemd[1]: Started sshd@14-157.90.151.24:22-147.75.109.163:43398.service - OpenSSH per-connection server daemon (147.75.109.163:43398). Dec 13 01:37:00.444555 sshd[4845]: Accepted publickey for core from 147.75.109.163 port 43398 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:00.448457 sshd[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:00.458221 systemd-logind[1483]: New session 8 of user core. Dec 13 01:37:00.465416 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:37:01.247221 sshd[4845]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:01.254976 systemd[1]: sshd@14-157.90.151.24:22-147.75.109.163:43398.service: Deactivated successfully. Dec 13 01:37:01.260286 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:37:01.261995 systemd-logind[1483]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:37:01.264537 systemd-logind[1483]: Removed session 8. Dec 13 01:37:01.428383 systemd[1]: Started sshd@15-157.90.151.24:22-147.75.109.163:43410.service - OpenSSH per-connection server daemon (147.75.109.163:43410). Dec 13 01:37:02.430938 sshd[4859]: Accepted publickey for core from 147.75.109.163 port 43410 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:02.434579 sshd[4859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:02.445154 systemd-logind[1483]: New session 9 of user core. Dec 13 01:37:02.452397 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:37:03.295795 sshd[4859]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:03.301863 systemd[1]: sshd@15-157.90.151.24:22-147.75.109.163:43410.service: Deactivated successfully. Dec 13 01:37:03.301998 systemd-logind[1483]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:37:03.304889 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:37:03.307992 systemd-logind[1483]: Removed session 9. Dec 13 01:37:03.470544 systemd[1]: Started sshd@16-157.90.151.24:22-147.75.109.163:43418.service - OpenSSH per-connection server daemon (147.75.109.163:43418). 
Dec 13 01:37:04.473603 sshd[4894]: Accepted publickey for core from 147.75.109.163 port 43418 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:04.476594 sshd[4894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:04.483143 systemd-logind[1483]: New session 10 of user core. Dec 13 01:37:04.493280 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:37:05.268755 sshd[4894]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:05.277293 systemd[1]: sshd@16-157.90.151.24:22-147.75.109.163:43418.service: Deactivated successfully. Dec 13 01:37:05.281753 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:37:05.283800 systemd-logind[1483]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:37:05.287795 systemd-logind[1483]: Removed session 10. Dec 13 01:37:10.445618 systemd[1]: Started sshd@17-157.90.151.24:22-147.75.109.163:56630.service - OpenSSH per-connection server daemon (147.75.109.163:56630). Dec 13 01:37:11.454690 sshd[4928]: Accepted publickey for core from 147.75.109.163 port 56630 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:11.458131 sshd[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:11.467784 systemd-logind[1483]: New session 11 of user core. Dec 13 01:37:11.475296 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:37:12.244110 sshd[4928]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:12.250610 systemd[1]: sshd@17-157.90.151.24:22-147.75.109.163:56630.service: Deactivated successfully. Dec 13 01:37:12.255405 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:37:12.256737 systemd-logind[1483]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:37:12.258653 systemd-logind[1483]: Removed session 11. Dec 13 01:37:12.425642 systemd[1]: Started sshd@18-157.90.151.24:22-147.75.109.163:56640.service - OpenSSH per-connection server daemon (147.75.109.163:56640). Dec 13 01:37:13.430391 sshd[4947]: Accepted publickey for core from 147.75.109.163 port 56640 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:13.433472 sshd[4947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:13.441317 systemd-logind[1483]: New session 12 of user core. Dec 13 01:37:13.449397 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 01:37:14.503634 sshd[4947]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:14.508708 systemd[1]: sshd@18-157.90.151.24:22-147.75.109.163:56640.service: Deactivated successfully. Dec 13 01:37:14.513376 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:37:14.516776 systemd-logind[1483]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:37:14.519809 systemd-logind[1483]: Removed session 12. Dec 13 01:37:14.677657 systemd[1]: Started sshd@19-157.90.151.24:22-147.75.109.163:56656.service - OpenSSH per-connection server daemon (147.75.109.163:56656). Dec 13 01:37:15.699803 sshd[4972]: Accepted publickey for core from 147.75.109.163 port 56656 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:15.703496 sshd[4972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:15.709818 systemd-logind[1483]: New session 13 of user core. Dec 13 01:37:15.716195 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 13 01:37:18.173837 sshd[4972]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:18.183534 systemd[1]: sshd@19-157.90.151.24:22-147.75.109.163:56656.service: Deactivated successfully. Dec 13 01:37:18.191095 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:37:18.193133 systemd-logind[1483]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:37:18.195537 systemd-logind[1483]: Removed session 13. Dec 13 01:37:18.356800 systemd[1]: Started sshd@20-157.90.151.24:22-147.75.109.163:33960.service - OpenSSH per-connection server daemon (147.75.109.163:33960). Dec 13 01:37:19.366546 sshd[5001]: Accepted publickey for core from 147.75.109.163 port 33960 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:19.369745 sshd[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:19.376105 systemd-logind[1483]: New session 14 of user core. Dec 13 01:37:19.384380 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:37:20.342967 sshd[5001]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:20.346369 systemd[1]: sshd@20-157.90.151.24:22-147.75.109.163:33960.service: Deactivated successfully. Dec 13 01:37:20.348236 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:37:20.351176 systemd-logind[1483]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:37:20.352612 systemd-logind[1483]: Removed session 14. Dec 13 01:37:20.521228 systemd[1]: Started sshd@21-157.90.151.24:22-147.75.109.163:33966.service - OpenSSH per-connection server daemon (147.75.109.163:33966). Dec 13 01:37:21.512366 sshd[5027]: Accepted publickey for core from 147.75.109.163 port 33966 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:21.515905 sshd[5027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:21.526252 systemd-logind[1483]: New session 15 of user core. Dec 13 01:37:21.531295 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:37:22.303325 sshd[5027]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:22.308726 systemd[1]: sshd@21-157.90.151.24:22-147.75.109.163:33966.service: Deactivated successfully. Dec 13 01:37:22.313614 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:37:22.316940 systemd-logind[1483]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:37:22.320037 systemd-logind[1483]: Removed session 15. Dec 13 01:37:27.484150 systemd[1]: Started sshd@22-157.90.151.24:22-147.75.109.163:52616.service - OpenSSH per-connection server daemon (147.75.109.163:52616). Dec 13 01:37:28.475826 sshd[5070]: Accepted publickey for core from 147.75.109.163 port 52616 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:28.478426 sshd[5070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:28.487779 systemd-logind[1483]: New session 16 of user core. Dec 13 01:37:28.497475 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:37:28.717300 kubelet[2771]: I1213 01:37:28.717206 2771 log.go:245] http: TLS handshake error from 87.236.176.234:58519: read tcp 157.90.151.24:10250->87.236.176.234:58519: read: connection reset by peer Dec 13 01:37:29.269197 sshd[5070]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:29.277520 systemd-logind[1483]: Session 16 logged out. Waiting for processes to exit. 
Dec 13 01:37:29.280021 systemd[1]: sshd@22-157.90.151.24:22-147.75.109.163:52616.service: Deactivated successfully. Dec 13 01:37:29.286533 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:37:29.289501 systemd-logind[1483]: Removed session 16. Dec 13 01:37:34.438126 systemd[1]: Started sshd@23-157.90.151.24:22-147.75.109.163:52630.service - OpenSSH per-connection server daemon (147.75.109.163:52630). Dec 13 01:37:35.428563 sshd[5119]: Accepted publickey for core from 147.75.109.163 port 52630 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw Dec 13 01:37:35.432006 sshd[5119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:37:35.440122 systemd-logind[1483]: New session 17 of user core. Dec 13 01:37:35.447392 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:37:36.203307 sshd[5119]: pam_unix(sshd:session): session closed for user core Dec 13 01:37:36.208449 systemd[1]: sshd@23-157.90.151.24:22-147.75.109.163:52630.service: Deactivated successfully. Dec 13 01:37:36.213193 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:37:36.216672 systemd-logind[1483]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:37:36.219334 systemd-logind[1483]: Removed session 17. Dec 13 01:37:51.441466 systemd[1]: cri-containerd-c1f8d03b2ffad7e597ec8c9ad346e98fc3b05f0b645389c140f8dce43aeaf52f.scope: Deactivated successfully. Dec 13 01:37:51.442083 systemd[1]: cri-containerd-c1f8d03b2ffad7e597ec8c9ad346e98fc3b05f0b645389c140f8dce43aeaf52f.scope: Consumed 7.355s CPU time, 23.0M memory peak, 0B memory swap peak. Dec 13 01:37:51.532082 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1f8d03b2ffad7e597ec8c9ad346e98fc3b05f0b645389c140f8dce43aeaf52f-rootfs.mount: Deactivated successfully. Dec 13 01:37:51.538259 containerd[1508]: time="2024-12-13T01:37:51.538141072Z" level=info msg="shim disconnected" id=c1f8d03b2ffad7e597ec8c9ad346e98fc3b05f0b645389c140f8dce43aeaf52f namespace=k8s.io Dec 13 01:37:51.539121 containerd[1508]: time="2024-12-13T01:37:51.538819512Z" level=warning msg="cleaning up after shim disconnected" id=c1f8d03b2ffad7e597ec8c9ad346e98fc3b05f0b645389c140f8dce43aeaf52f namespace=k8s.io Dec 13 01:37:51.539121 containerd[1508]: time="2024-12-13T01:37:51.538868915Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:37:51.642031 kubelet[2771]: E1213 01:37:51.641961 2771 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:42294->10.0.0.2:2379: read: connection timed out" Dec 13 01:37:51.651945 systemd[1]: cri-containerd-5ff9adc84c383664d2334c55bbf7c3bc1c40bfc9d9357f1f9fbfa6fb0c8d2261.scope: Deactivated successfully. Dec 13 01:37:51.652488 systemd[1]: cri-containerd-5ff9adc84c383664d2334c55bbf7c3bc1c40bfc9d9357f1f9fbfa6fb0c8d2261.scope: Consumed 2.174s CPU time, 16.5M memory peak, 0B memory swap peak. Dec 13 01:37:51.722357 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ff9adc84c383664d2334c55bbf7c3bc1c40bfc9d9357f1f9fbfa6fb0c8d2261-rootfs.mount: Deactivated successfully. 
Dec 13 01:37:51.726910 containerd[1508]: time="2024-12-13T01:37:51.726394493Z" level=info msg="shim disconnected" id=5ff9adc84c383664d2334c55bbf7c3bc1c40bfc9d9357f1f9fbfa6fb0c8d2261 namespace=k8s.io Dec 13 01:37:51.726910 containerd[1508]: time="2024-12-13T01:37:51.726488890Z" level=warning msg="cleaning up after shim disconnected" id=5ff9adc84c383664d2334c55bbf7c3bc1c40bfc9d9357f1f9fbfa6fb0c8d2261 namespace=k8s.io Dec 13 01:37:51.726910 containerd[1508]: time="2024-12-13T01:37:51.726505962Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:37:52.186966 kubelet[2771]: I1213 01:37:52.186364 2771 scope.go:117] "RemoveContainer" containerID="5ff9adc84c383664d2334c55bbf7c3bc1c40bfc9d9357f1f9fbfa6fb0c8d2261" Dec 13 01:37:52.191624 kubelet[2771]: I1213 01:37:52.190800 2771 scope.go:117] "RemoveContainer" containerID="c1f8d03b2ffad7e597ec8c9ad346e98fc3b05f0b645389c140f8dce43aeaf52f" Dec 13 01:37:52.191842 containerd[1508]: time="2024-12-13T01:37:52.191229848Z" level=info msg="CreateContainer within sandbox \"f412f8e5c0382068c84c453833bf91c7a5dd2aa59b21250077f7065f8e5b4285\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Dec 13 01:37:52.196362 containerd[1508]: time="2024-12-13T01:37:52.196137467Z" level=info msg="CreateContainer within sandbox \"0eabf31996028ea1b72e2e393fa0b0a09ca43a4f917f1e110e5bbc699e252ab4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 13 01:37:52.276634 containerd[1508]: time="2024-12-13T01:37:52.276548163Z" level=info msg="CreateContainer within sandbox \"f412f8e5c0382068c84c453833bf91c7a5dd2aa59b21250077f7065f8e5b4285\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"24da563400c4e1f57d60589342a0c72bd8afa23c30cc703f236a5a1d67595475\"" Dec 13 01:37:52.277131 containerd[1508]: time="2024-12-13T01:37:52.277064840Z" level=info msg="StartContainer for \"24da563400c4e1f57d60589342a0c72bd8afa23c30cc703f236a5a1d67595475\"" Dec 13 01:37:52.282037 containerd[1508]: time="2024-12-13T01:37:52.281776374Z" level=info msg="CreateContainer within sandbox \"0eabf31996028ea1b72e2e393fa0b0a09ca43a4f917f1e110e5bbc699e252ab4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"43edb23326234d5fcc28b2b76ee3260ab6c0ee649c76873212ca7d030293c788\"" Dec 13 01:37:52.282423 containerd[1508]: time="2024-12-13T01:37:52.282397206Z" level=info msg="StartContainer for \"43edb23326234d5fcc28b2b76ee3260ab6c0ee649c76873212ca7d030293c788\"" Dec 13 01:37:52.311367 systemd[1]: Started cri-containerd-24da563400c4e1f57d60589342a0c72bd8afa23c30cc703f236a5a1d67595475.scope - libcontainer container 24da563400c4e1f57d60589342a0c72bd8afa23c30cc703f236a5a1d67595475. Dec 13 01:37:52.321394 systemd[1]: Started cri-containerd-43edb23326234d5fcc28b2b76ee3260ab6c0ee649c76873212ca7d030293c788.scope - libcontainer container 43edb23326234d5fcc28b2b76ee3260ab6c0ee649c76873212ca7d030293c788. Dec 13 01:37:52.369480 containerd[1508]: time="2024-12-13T01:37:52.369273228Z" level=info msg="StartContainer for \"24da563400c4e1f57d60589342a0c72bd8afa23c30cc703f236a5a1d67595475\" returns successfully" Dec 13 01:37:52.377915 containerd[1508]: time="2024-12-13T01:37:52.377870769Z" level=info msg="StartContainer for \"43edb23326234d5fcc28b2b76ee3260ab6c0ee649c76873212ca7d030293c788\" returns successfully" Dec 13 01:37:52.537425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2956883380.mount: Deactivated successfully. 
Dec 13 01:37:56.439814 kubelet[2771]: E1213 01:37:56.439578 2771 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:42116->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-2-1-b-0bf1a54360.181098cdbf7f980a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-2-1-b-0bf1a54360,UID:1c87784ba6b275d3ce680bf4728328e0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-b-0bf1a54360,},FirstTimestamp:2024-12-13 01:37:45.986168842 +0000 UTC m=+342.830415655,LastTimestamp:2024-12-13 01:37:45.986168842 +0000 UTC m=+342.830415655,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-b-0bf1a54360,}"