Feb 13 20:06:37.987806 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 20:06:37.987833 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025
Feb 13 20:06:37.987844 kernel: KASLR enabled
Feb 13 20:06:37.987849 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Feb 13 20:06:37.987855 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Feb 13 20:06:37.987861 kernel: random: crng init done
Feb 13 20:06:37.987868 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:06:37.987874 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Feb 13 20:06:37.987883 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Feb 13 20:06:37.987892 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:06:37.987900 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:06:37.987906 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:06:37.987913 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:06:37.987919 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:06:37.987928 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:06:37.987937 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:06:37.987944 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:06:37.987951 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:06:37.987958 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 13 20:06:37.987964 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Feb 13 20:06:37.987973 kernel: NUMA: Failed to initialise from firmware
Feb 13 20:06:37.987980 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Feb 13 20:06:37.987987 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Feb 13 20:06:37.987994 kernel: Zone ranges:
Feb 13 20:06:37.988001 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 20:06:37.988008 kernel: DMA32 empty
Feb 13 20:06:37.988026 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Feb 13 20:06:37.988034 kernel: Movable zone start for each node
Feb 13 20:06:37.988042 kernel: Early memory node ranges
Feb 13 20:06:37.988048 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Feb 13 20:06:37.988054 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Feb 13 20:06:37.988062 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Feb 13 20:06:37.988069 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Feb 13 20:06:37.988076 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Feb 13 20:06:37.988084 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Feb 13 20:06:37.988091 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Feb 13 20:06:37.988097 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Feb 13 20:06:37.988108 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Feb 13 20:06:37.988116 kernel: psci: probing for conduit method from ACPI.
Feb 13 20:06:37.988123 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 20:06:37.988134 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 20:06:37.988141 kernel: psci: Trusted OS migration not required
Feb 13 20:06:37.988148 kernel: psci: SMC Calling Convention v1.1
Feb 13 20:06:37.988159 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 20:06:37.988167 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 20:06:37.988175 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 20:06:37.988182 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 20:06:37.990313 kernel: Detected PIPT I-cache on CPU0
Feb 13 20:06:37.990357 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 20:06:37.990378 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 20:06:37.990386 kernel: CPU features: detected: Spectre-v4
Feb 13 20:06:37.990393 kernel: CPU features: detected: Spectre-BHB
Feb 13 20:06:37.990401 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 20:06:37.990415 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 20:06:37.990423 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 20:06:37.990430 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 20:06:37.990439 kernel: alternatives: applying boot alternatives
Feb 13 20:06:37.990449 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 20:06:37.990458 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:06:37.990466 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 20:06:37.990473 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 20:06:37.990481 kernel: Fallback order for Node 0: 0
Feb 13 20:06:37.990489 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Feb 13 20:06:37.990497 kernel: Policy zone: Normal
Feb 13 20:06:37.990507 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:06:37.990515 kernel: software IO TLB: area num 2.
Feb 13 20:06:37.990523 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Feb 13 20:06:37.990531 kernel: Memory: 3882936K/4096000K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 213064K reserved, 0K cma-reserved)
Feb 13 20:06:37.990540 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 20:06:37.990548 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:06:37.990557 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:06:37.990565 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 20:06:37.990573 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:06:37.990581 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:06:37.990588 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:06:37.990598 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 20:06:37.990606 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 20:06:37.990614 kernel: GICv3: 256 SPIs implemented
Feb 13 20:06:37.990623 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 20:06:37.990631 kernel: Root IRQ handler: gic_handle_irq
Feb 13 20:06:37.990639 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 20:06:37.990647 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 20:06:37.990654 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 20:06:37.990663 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 20:06:37.990671 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 20:06:37.990678 kernel: GICv3: using LPI property table @0x00000001000e0000
Feb 13 20:06:37.990686 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Feb 13 20:06:37.990695 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:06:37.990703 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:06:37.990712 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 20:06:37.990720 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 20:06:37.990728 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 20:06:37.990736 kernel: Console: colour dummy device 80x25
Feb 13 20:06:37.990745 kernel: ACPI: Core revision 20230628
Feb 13 20:06:37.990753 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 20:06:37.990762 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:06:37.990769 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:06:37.990779 kernel: landlock: Up and running.
Feb 13 20:06:37.990786 kernel: SELinux: Initializing.
Feb 13 20:06:37.990793 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:06:37.990800 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:06:37.990807 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:06:37.990815 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:06:37.990824 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:06:37.990831 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:06:37.990838 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 20:06:37.990850 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 20:06:37.990858 kernel: Remapping and enabling EFI services.
Feb 13 20:06:37.990865 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:06:37.990873 kernel: Detected PIPT I-cache on CPU1
Feb 13 20:06:37.990881 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 20:06:37.990890 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Feb 13 20:06:37.990898 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:06:37.990905 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 20:06:37.990914 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 20:06:37.990922 kernel: SMP: Total of 2 processors activated.
Feb 13 20:06:37.990932 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 20:06:37.990940 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 20:06:37.990955 kernel: CPU features: detected: Common not Private translations
Feb 13 20:06:37.990965 kernel: CPU features: detected: CRC32 instructions
Feb 13 20:06:37.990973 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 20:06:37.990982 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 20:06:37.990990 kernel: CPU features: detected: LSE atomic instructions
Feb 13 20:06:37.990999 kernel: CPU features: detected: Privileged Access Never
Feb 13 20:06:37.991008 kernel: CPU features: detected: RAS Extension Support
Feb 13 20:06:37.991020 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 20:06:37.991028 kernel: CPU: All CPU(s) started at EL1
Feb 13 20:06:37.991048 kernel: alternatives: applying system-wide alternatives
Feb 13 20:06:37.991057 kernel: devtmpfs: initialized
Feb 13 20:06:37.991065 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:06:37.991074 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 20:06:37.991082 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:06:37.991094 kernel: SMBIOS 3.0.0 present.
Feb 13 20:06:37.991103 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Feb 13 20:06:37.991111 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:06:37.991120 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 20:06:37.991129 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 20:06:37.991137 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 20:06:37.991144 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:06:37.991151 kernel: audit: type=2000 audit(0.014:1): state=initialized audit_enabled=0 res=1
Feb 13 20:06:37.991159 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:06:37.991168 kernel: cpuidle: using governor menu
Feb 13 20:06:37.991177 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 20:06:37.991186 kernel: ASID allocator initialised with 32768 entries
Feb 13 20:06:37.991208 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:06:37.991218 kernel: Serial: AMBA PL011 UART driver
Feb 13 20:06:37.991226 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 20:06:37.991233 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 20:06:37.991243 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 20:06:37.991252 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 20:06:37.991263 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 20:06:37.991272 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 20:06:37.991280 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 20:06:37.991289 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:06:37.991298 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:06:37.991306 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 20:06:37.991314 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 20:06:37.991323 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:06:37.991332 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:06:37.991342 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:06:37.991351 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:06:37.991360 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 20:06:37.991418 kernel: ACPI: Interpreter enabled
Feb 13 20:06:37.991427 kernel: ACPI: Using GIC for interrupt routing
Feb 13 20:06:37.991437 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 20:06:37.991445 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 20:06:37.991454 kernel: printk: console [ttyAMA0] enabled
Feb 13 20:06:37.991462 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 20:06:37.991681 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 20:06:37.991773 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 20:06:37.991855 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 20:06:37.991935 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 20:06:37.992015 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 20:06:37.992027 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 20:06:37.992036 kernel: PCI host bridge to bus 0000:00
Feb 13 20:06:37.992129 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 20:06:37.994286 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 20:06:37.994434 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 20:06:37.994519 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 20:06:37.994625 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 20:06:37.994724 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Feb 13 20:06:37.994821 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Feb 13 20:06:37.994905 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Feb 13 20:06:37.994997 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Feb 13 20:06:37.995083 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Feb 13 20:06:37.995184 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Feb 13 20:06:37.995299 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Feb 13 20:06:37.995418 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Feb 13 20:06:37.995512 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Feb 13 20:06:37.995604 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Feb 13 20:06:37.995684 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Feb 13 20:06:37.995774 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Feb 13 20:06:37.995848 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Feb 13 20:06:37.996032 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Feb 13 20:06:37.996131 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Feb 13 20:06:37.998314 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Feb 13 20:06:37.998490 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Feb 13 20:06:37.998592 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Feb 13 20:06:37.998676 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Feb 13 20:06:37.998762 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Feb 13 20:06:37.998853 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Feb 13 20:06:37.998951 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Feb 13 20:06:37.999032 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Feb 13 20:06:37.999127 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Feb 13 20:06:37.999277 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Feb 13 20:06:37.999390 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 20:06:37.999482 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Feb 13 20:06:37.999578 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Feb 13 20:06:37.999664 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Feb 13 20:06:37.999759 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Feb 13 20:06:37.999839 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Feb 13 20:06:37.999944 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Feb 13 20:06:38.000035 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Feb 13 20:06:38.000124 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Feb 13 20:06:38.001427 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Feb 13 20:06:38.001581 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Feb 13 20:06:38.001667 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Feb 13 20:06:38.001762 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Feb 13 20:06:38.001847 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Feb 13 20:06:38.001935 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Feb 13 20:06:38.002027 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Feb 13 20:06:38.002110 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Feb 13 20:06:38.002208 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Feb 13 20:06:38.002291 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Feb 13 20:06:38.002388 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Feb 13 20:06:38.002471 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Feb 13 20:06:38.002540 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Feb 13 20:06:38.002611 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Feb 13 20:06:38.002680 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Feb 13 20:06:38.002752 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Feb 13 20:06:38.002838 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Feb 13 20:06:38.002918 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Feb 13 20:06:38.002998 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Feb 13 20:06:38.003086 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Feb 13 20:06:38.003168 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Feb 13 20:06:38.006341 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Feb 13 20:06:38.006519 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Feb 13 20:06:38.006607 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Feb 13 20:06:38.006688 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Feb 13 20:06:38.006772 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Feb 13 20:06:38.006865 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Feb 13 20:06:38.006945 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Feb 13 20:06:38.007170 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Feb 13 20:06:38.007282 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Feb 13 20:06:38.007352 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Feb 13 20:06:38.007461 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Feb 13 20:06:38.007546 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Feb 13 20:06:38.007626 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Feb 13 20:06:38.007713 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Feb 13 20:06:38.007796 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Feb 13 20:06:38.007878 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Feb 13 20:06:38.007965 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Feb 13 20:06:38.008038 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 20:06:38.008124 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Feb 13 20:06:38.009241 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 20:06:38.009394 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Feb 13 20:06:38.009478 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 20:06:38.009557 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Feb 13 20:06:38.009631 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 20:06:38.009703 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Feb 13 20:06:38.009774 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 20:06:38.009846 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Feb 13 20:06:38.009919 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 20:06:38.009990 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Feb 13 20:06:38.010056 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 20:06:38.010131 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Feb 13 20:06:38.010303 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 20:06:38.010436 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Feb 13 20:06:38.010521 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 20:06:38.010599 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Feb 13 20:06:38.010674 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Feb 13 20:06:38.010744 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Feb 13 20:06:38.010810 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Feb 13 20:06:38.010879 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Feb 13 20:06:38.010946 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Feb 13 20:06:38.011012 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Feb 13 20:06:38.011082 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Feb 13 20:06:38.011163 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Feb 13 20:06:38.011254 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Feb 13 20:06:38.011339 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Feb 13 20:06:38.011517 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Feb 13 20:06:38.011606 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Feb 13 20:06:38.011819 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Feb 13 20:06:38.011971 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Feb 13 20:06:38.012131 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Feb 13 20:06:38.012631 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Feb 13 20:06:38.012729 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Feb 13 20:06:38.012812 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Feb 13 20:06:38.012879 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Feb 13 20:06:38.013046 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Feb 13 20:06:38.013149 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Feb 13 20:06:38.013300 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 20:06:38.013411 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Feb 13 20:06:38.013486 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Feb 13 20:06:38.013554 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Feb 13 20:06:38.013624 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Feb 13 20:06:38.013689 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 20:06:38.013782 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Feb 13 20:06:38.013870 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Feb 13 20:06:38.013943 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Feb 13 20:06:38.014019 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Feb 13 20:06:38.014257 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 20:06:38.014348 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Feb 13 20:06:38.014437 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Feb 13 20:06:38.014517 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Feb 13 20:06:38.014592 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Feb 13 20:06:38.014662 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Feb 13 20:06:38.014738 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 20:06:38.014824 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Feb 13 20:06:38.014898 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Feb 13 20:06:38.014966 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Feb 13 20:06:38.015033 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Feb 13 20:06:38.015101 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 20:06:38.015174 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Feb 13 20:06:38.015324 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Feb 13 20:06:38.015426 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Feb 13 20:06:38.015500 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Feb 13 20:06:38.015563 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Feb 13 20:06:38.015628 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 20:06:38.015704 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Feb 13 20:06:38.015777 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Feb 13 20:06:38.015845 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Feb 13 20:06:38.015910 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Feb 13 20:06:38.015973 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Feb 13 20:06:38.016037 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 20:06:38.016109 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Feb 13 20:06:38.016177 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Feb 13 20:06:38.016340 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Feb 13 20:06:38.016469 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Feb 13 20:06:38.016540 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Feb 13 20:06:38.016604 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Feb 13 20:06:38.016667 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 20:06:38.016736 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Feb 13 20:06:38.016801 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Feb 13 20:06:38.016864 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Feb 13 20:06:38.016929 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 20:06:38.017005 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Feb 13 20:06:38.017070 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Feb 13 20:06:38.017137 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Feb 13 20:06:38.017219 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 20:06:38.017297 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 20:06:38.017360 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 20:06:38.017475 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 20:06:38.017566 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Feb 13 20:06:38.017635 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Feb 13 20:06:38.017695 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 20:06:38.017765 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Feb 13 20:06:38.017830 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Feb 13 20:06:38.017908 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 20:06:38.017994 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Feb 13 20:06:38.018060 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Feb 13 20:06:38.018130 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 20:06:38.018421 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Feb 13 20:06:38.018530 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Feb 13 20:06:38.018592 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 20:06:38.018661 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Feb 13 20:06:38.018727 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Feb 13 20:06:38.018788 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 20:06:38.018857 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Feb 13 20:06:38.018920 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Feb 13 20:06:38.018987 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 20:06:38.019069 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Feb 13 20:06:38.019137 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Feb 13 20:06:38.019219 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 20:06:38.019296 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Feb 13 20:06:38.019375 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Feb 13 20:06:38.019447 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 20:06:38.019526 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Feb 13 20:06:38.019595 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Feb 13 20:06:38.019723 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 20:06:38.019738 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 20:06:38.019746 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 20:06:38.019755 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 20:06:38.019763 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 20:06:38.019771 kernel: iommu: Default domain type: Translated
Feb 13 20:06:38.019783 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 20:06:38.019791 kernel: efivars: Registered efivars operations
Feb 13 20:06:38.019799 kernel: vgaarb: loaded
Feb 13 20:06:38.019807 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 20:06:38.019815 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:06:38.019823 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:06:38.019831 kernel: pnp: PnP ACPI init
Feb 13 20:06:38.019935 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 20:06:38.019976 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 20:06:38.019989 kernel: NET: Registered PF_INET protocol family
Feb 13 20:06:38.019997 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 20:06:38.020005 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 20:06:38.020013 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:06:38.020022 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 20:06:38.020030 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 20:06:38.020038 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 20:06:38.020046 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:06:38.020054 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:06:38.020064 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:06:38.020178 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Feb 13 20:06:38.020191 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:06:38.020226 kernel: kvm [1]: HYP mode not available
Feb 13 20:06:38.020234 kernel: Initialise system trusted keyrings
Feb 13 20:06:38.020247 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 20:06:38.020254 kernel: Key type asymmetric registered
Feb 13 20:06:38.020262 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:06:38.020270 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 20:06:38.020279 kernel: io scheduler mq-deadline registered
Feb 13 20:06:38.020287 kernel: io scheduler kyber registered
Feb 13 20:06:38.020295 kernel: io scheduler bfq registered
Feb 13 20:06:38.020303 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 20:06:38.020448 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Feb 13 20:06:38.020541 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Feb 13 20:06:38.020623 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 20:06:38.020718 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Feb 13 20:06:38.020804 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Feb 13 20:06:38.020882 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ Feb 13 20:06:38.020963 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Feb 13 20:06:38.021034 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Feb 13 20:06:38.021101 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 20:06:38.021177 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Feb 13 20:06:38.023434 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Feb 13 20:06:38.023547 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 20:06:38.023622 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Feb 13 20:06:38.023691 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Feb 13 20:06:38.023758 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 20:06:38.023848 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Feb 13 20:06:38.023924 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Feb 13 20:06:38.023996 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 20:06:38.024068 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Feb 13 20:06:38.024136 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Feb 13 20:06:38.024285 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 20:06:38.024411 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Feb 13 20:06:38.024492 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Feb 13 20:06:38.024581 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 
20:06:38.024595 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Feb 13 20:06:38.024680 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Feb 13 20:06:38.024764 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Feb 13 20:06:38.024856 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 20:06:38.024870 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 20:06:38.024879 kernel: ACPI: button: Power Button [PWRB] Feb 13 20:06:38.024890 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 20:06:38.024983 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Feb 13 20:06:38.025071 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Feb 13 20:06:38.025082 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 20:06:38.025090 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 13 20:06:38.025173 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Feb 13 20:06:38.025189 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Feb 13 20:06:38.026299 kernel: thunder_xcv, ver 1.0 Feb 13 20:06:38.026309 kernel: thunder_bgx, ver 1.0 Feb 13 20:06:38.026317 kernel: nicpf, ver 1.0 Feb 13 20:06:38.026325 kernel: nicvf, ver 1.0 Feb 13 20:06:38.026538 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 20:06:38.026625 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T20:06:37 UTC (1739477197) Feb 13 20:06:38.026638 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 20:06:38.026656 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 13 20:06:38.026665 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 20:06:38.026674 kernel: watchdog: Hard watchdog permanently disabled Feb 13 20:06:38.026682 kernel: NET: Registered PF_INET6 protocol family Feb 13 20:06:38.026690 kernel: Segment 
Routing with IPv6 Feb 13 20:06:38.026698 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 20:06:38.026706 kernel: NET: Registered PF_PACKET protocol family Feb 13 20:06:38.026717 kernel: Key type dns_resolver registered Feb 13 20:06:38.026726 kernel: registered taskstats version 1 Feb 13 20:06:38.026738 kernel: Loading compiled-in X.509 certificates Feb 13 20:06:38.026747 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec' Feb 13 20:06:38.026756 kernel: Key type .fscrypt registered Feb 13 20:06:38.026765 kernel: Key type fscrypt-provisioning registered Feb 13 20:06:38.026774 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 20:06:38.026783 kernel: ima: Allocated hash algorithm: sha1 Feb 13 20:06:38.026794 kernel: ima: No architecture policies found Feb 13 20:06:38.026802 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 20:06:38.026810 kernel: clk: Disabling unused clocks Feb 13 20:06:38.026819 kernel: Freeing unused kernel memory: 39360K Feb 13 20:06:38.026829 kernel: Run /init as init process Feb 13 20:06:38.026838 kernel: with arguments: Feb 13 20:06:38.026848 kernel: /init Feb 13 20:06:38.026856 kernel: with environment: Feb 13 20:06:38.026865 kernel: HOME=/ Feb 13 20:06:38.026874 kernel: TERM=linux Feb 13 20:06:38.026882 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 20:06:38.026893 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:06:38.026906 systemd[1]: Detected virtualization kvm. Feb 13 20:06:38.026916 systemd[1]: Detected architecture arm64. Feb 13 20:06:38.026926 systemd[1]: Running in initrd. 
Feb 13 20:06:38.026934 systemd[1]: No hostname configured, using default hostname. Feb 13 20:06:38.026944 systemd[1]: Hostname set to . Feb 13 20:06:38.026955 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:06:38.026964 systemd[1]: Queued start job for default target initrd.target. Feb 13 20:06:38.026976 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:06:38.026985 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:06:38.026996 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 20:06:38.027005 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:06:38.027015 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 20:06:38.027025 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 20:06:38.027036 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 20:06:38.027048 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 20:06:38.027058 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:06:38.027067 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:06:38.027077 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:06:38.027087 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:06:38.027096 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:06:38.027106 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:06:38.027116 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Feb 13 20:06:38.027127 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:06:38.027137 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 20:06:38.027147 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 20:06:38.027157 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:06:38.027166 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:06:38.027176 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:06:38.027185 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:06:38.027316 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 20:06:38.027327 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:06:38.027341 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 20:06:38.027351 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 20:06:38.027374 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:06:38.027387 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:06:38.027396 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:06:38.027406 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 20:06:38.027416 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:06:38.027424 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 20:06:38.027468 systemd-journald[235]: Collecting audit messages is disabled. Feb 13 20:06:38.027494 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:06:38.027503 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Feb 13 20:06:38.027514 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 20:06:38.027524 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:06:38.027533 kernel: Bridge firewalling registered Feb 13 20:06:38.027543 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:06:38.027553 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:06:38.027566 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:06:38.027576 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:06:38.027588 systemd-journald[235]: Journal started Feb 13 20:06:38.027610 systemd-journald[235]: Runtime Journal (/run/log/journal/e75ecf10604448e6b76bdafb933f7364) is 8.0M, max 76.6M, 68.6M free. Feb 13 20:06:37.975264 systemd-modules-load[236]: Inserted module 'overlay' Feb 13 20:06:38.008395 systemd-modules-load[236]: Inserted module 'br_netfilter' Feb 13 20:06:38.033231 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:06:38.054574 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:06:38.055509 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:06:38.060991 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:06:38.069982 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:06:38.079544 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 20:06:38.082644 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:06:38.087489 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Feb 13 20:06:38.105953 dracut-cmdline[271]: dracut-dracut-053 Feb 13 20:06:38.113226 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7 Feb 13 20:06:38.141780 systemd-resolved[273]: Positive Trust Anchors: Feb 13 20:06:38.141804 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:06:38.141837 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:06:38.149741 systemd-resolved[273]: Defaulting to hostname 'linux'. Feb 13 20:06:38.151087 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:06:38.152724 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:06:38.262283 kernel: SCSI subsystem initialized Feb 13 20:06:38.268230 kernel: Loading iSCSI transport class v2.0-870. Feb 13 20:06:38.279279 kernel: iscsi: registered transport (tcp) Feb 13 20:06:38.295262 kernel: iscsi: registered transport (qla4xxx) Feb 13 20:06:38.295337 kernel: QLogic iSCSI HBA Driver Feb 13 20:06:38.375856 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Feb 13 20:06:38.382479 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 20:06:38.406497 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 20:06:38.406587 kernel: device-mapper: uevent: version 1.0.3 Feb 13 20:06:38.406600 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 20:06:38.466289 kernel: raid6: neonx8 gen() 15524 MB/s Feb 13 20:06:38.483254 kernel: raid6: neonx4 gen() 15468 MB/s Feb 13 20:06:38.500257 kernel: raid6: neonx2 gen() 13098 MB/s Feb 13 20:06:38.517258 kernel: raid6: neonx1 gen() 10426 MB/s Feb 13 20:06:38.536386 kernel: raid6: int64x8 gen() 6748 MB/s Feb 13 20:06:38.553411 kernel: raid6: int64x4 gen() 7123 MB/s Feb 13 20:06:38.570896 kernel: raid6: int64x2 gen() 5607 MB/s Feb 13 20:06:38.586276 kernel: raid6: int64x1 gen() 4959 MB/s Feb 13 20:06:38.586434 kernel: raid6: using algorithm neonx8 gen() 15524 MB/s Feb 13 20:06:38.603257 kernel: raid6: .... xor() 11648 MB/s, rmw enabled Feb 13 20:06:38.603351 kernel: raid6: using neon recovery algorithm Feb 13 20:06:38.609245 kernel: xor: measuring software checksum speed Feb 13 20:06:38.609325 kernel: 8regs : 16326 MB/sec Feb 13 20:06:38.609339 kernel: 32regs : 19627 MB/sec Feb 13 20:06:38.609352 kernel: arm64_neon : 22441 MB/sec Feb 13 20:06:38.610293 kernel: xor: using function: arm64_neon (22441 MB/sec) Feb 13 20:06:38.669243 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 20:06:38.693955 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:06:38.701521 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:06:38.730468 systemd-udevd[455]: Using default interface naming scheme 'v255'. Feb 13 20:06:38.733955 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Feb 13 20:06:38.747869 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 20:06:38.767004 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Feb 13 20:06:38.826117 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:06:38.835548 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:06:38.904430 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:06:38.911666 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 20:06:38.943255 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 20:06:38.944623 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:06:38.945747 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:06:38.947090 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:06:38.956742 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 20:06:38.979026 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:06:39.016416 kernel: scsi host0: Virtio SCSI HBA Feb 13 20:06:39.028011 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 20:06:39.028058 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Feb 13 20:06:39.054654 kernel: ACPI: bus type USB registered Feb 13 20:06:39.054736 kernel: usbcore: registered new interface driver usbfs Feb 13 20:06:39.054764 kernel: usbcore: registered new interface driver hub Feb 13 20:06:39.055341 kernel: usbcore: registered new device driver usb Feb 13 20:06:39.072696 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:06:39.072817 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 20:06:39.076272 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:06:39.079205 kernel: sr 0:0:0:0: Power-on or device reset occurred Feb 13 20:06:39.082291 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Feb 13 20:06:39.082454 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 20:06:39.082467 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Feb 13 20:06:39.076856 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:06:39.077042 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:06:39.081702 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:06:39.094447 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:06:39.100620 kernel: sd 0:0:0:1: Power-on or device reset occurred Feb 13 20:06:39.114182 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Feb 13 20:06:39.114384 kernel: sd 0:0:0:1: [sda] Write Protect is off Feb 13 20:06:39.114496 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Feb 13 20:06:39.114599 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 20:06:39.114702 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 20:06:39.114714 kernel: GPT:17805311 != 80003071 Feb 13 20:06:39.114736 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 20:06:39.114748 kernel: GPT:17805311 != 80003071 Feb 13 20:06:39.114759 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 20:06:39.114770 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:06:39.114782 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Feb 13 20:06:39.122256 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Feb 13 20:06:39.131288 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Feb 13 20:06:39.135486 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Feb 13 20:06:39.135615 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Feb 13 20:06:39.135699 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Feb 13 20:06:39.135797 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Feb 13 20:06:39.135878 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Feb 13 20:06:39.135978 kernel: hub 1-0:1.0: USB hub found Feb 13 20:06:39.136082 kernel: hub 1-0:1.0: 4 ports detected Feb 13 20:06:39.136171 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Feb 13 20:06:39.136296 kernel: hub 2-0:1.0: USB hub found Feb 13 20:06:39.136479 kernel: hub 2-0:1.0: 4 ports detected Feb 13 20:06:39.133474 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:06:39.180294 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:06:39.196614 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (520) Feb 13 20:06:39.201227 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (528) Feb 13 20:06:39.204259 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Feb 13 20:06:39.215339 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Feb 13 20:06:39.216950 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Feb 13 20:06:39.223022 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Feb 13 20:06:39.231888 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Feb 13 20:06:39.241556 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 20:06:39.252264 disk-uuid[573]: Primary Header is updated. Feb 13 20:06:39.252264 disk-uuid[573]: Secondary Entries is updated. Feb 13 20:06:39.252264 disk-uuid[573]: Secondary Header is updated. Feb 13 20:06:39.376576 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Feb 13 20:06:39.630563 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Feb 13 20:06:39.800118 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Feb 13 20:06:39.800178 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Feb 13 20:06:39.805248 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Feb 13 20:06:39.858487 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Feb 13 20:06:39.858721 kernel: usbcore: registered new interface driver usbhid Feb 13 20:06:39.860185 kernel: usbhid: USB HID core driver Feb 13 20:06:40.272681 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:06:40.272747 disk-uuid[575]: The operation has completed successfully. Feb 13 20:06:40.348003 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 20:06:40.348120 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 20:06:40.363496 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 20:06:40.379620 sh[583]: Success Feb 13 20:06:40.398070 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 20:06:40.477843 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Feb 13 20:06:40.481724 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 20:06:40.487861 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 20:06:40.510819 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 Feb 13 20:06:40.510893 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 20:06:40.510922 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 20:06:40.511506 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 20:06:40.512288 kernel: BTRFS info (device dm-0): using free space tree Feb 13 20:06:40.519234 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 20:06:40.521086 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 20:06:40.522328 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 20:06:40.527454 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 20:06:40.531504 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 20:06:40.544787 kernel: BTRFS info (device sda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 20:06:40.544848 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 20:06:40.544860 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:06:40.553222 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:06:40.553302 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:06:40.567837 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 20:06:40.568939 kernel: BTRFS info (device sda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 20:06:40.576730 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Feb 13 20:06:40.584519 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 20:06:40.692941 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:06:40.702880 ignition[670]: Ignition 2.19.0 Feb 13 20:06:40.702897 ignition[670]: Stage: fetch-offline Feb 13 20:06:40.704790 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:06:40.702948 ignition[670]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:06:40.705893 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:06:40.702958 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 20:06:40.703254 ignition[670]: parsed url from cmdline: "" Feb 13 20:06:40.703258 ignition[670]: no config URL provided Feb 13 20:06:40.703263 ignition[670]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:06:40.703271 ignition[670]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:06:40.703278 ignition[670]: failed to fetch config: resource requires networking Feb 13 20:06:40.703743 ignition[670]: Ignition finished successfully Feb 13 20:06:40.737314 systemd-networkd[770]: lo: Link UP Feb 13 20:06:40.737327 systemd-networkd[770]: lo: Gained carrier Feb 13 20:06:40.739546 systemd-networkd[770]: Enumeration completed Feb 13 20:06:40.740178 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:06:40.740779 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:06:40.740782 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:06:40.742632 systemd-networkd[770]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 20:06:40.742636 systemd-networkd[770]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:06:40.743290 systemd-networkd[770]: eth0: Link UP Feb 13 20:06:40.743294 systemd-networkd[770]: eth0: Gained carrier Feb 13 20:06:40.743303 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:06:40.744522 systemd[1]: Reached target network.target - Network. Feb 13 20:06:40.750777 systemd-networkd[770]: eth1: Link UP Feb 13 20:06:40.750786 systemd-networkd[770]: eth1: Gained carrier Feb 13 20:06:40.750797 systemd-networkd[770]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:06:40.751191 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 20:06:40.768103 ignition[773]: Ignition 2.19.0 Feb 13 20:06:40.768113 ignition[773]: Stage: fetch Feb 13 20:06:40.768378 ignition[773]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:06:40.768390 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 20:06:40.768495 ignition[773]: parsed url from cmdline: "" Feb 13 20:06:40.768499 ignition[773]: no config URL provided Feb 13 20:06:40.768503 ignition[773]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:06:40.768512 ignition[773]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:06:40.768533 ignition[773]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Feb 13 20:06:40.769767 ignition[773]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Feb 13 20:06:40.788858 systemd-networkd[770]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 20:06:40.807296 systemd-networkd[770]: eth0: DHCPv4 address 168.119.253.211/32, gateway 172.31.1.1 acquired from 172.31.1.1 Feb 13 20:06:40.969974 
ignition[773]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Feb 13 20:06:40.979590 ignition[773]: GET result: OK Feb 13 20:06:40.980809 ignition[773]: parsing config with SHA512: 922a88de4a57eca18113141c866e63bdf25de5f833297f68ab180f6a257a0bd7140a4b98233bdd21356c0abb5a498963a00e7d301776afcfdb42f527ae66a66c Feb 13 20:06:40.986455 unknown[773]: fetched base config from "system" Feb 13 20:06:40.986858 ignition[773]: fetch: fetch complete Feb 13 20:06:40.986466 unknown[773]: fetched base config from "system" Feb 13 20:06:40.986863 ignition[773]: fetch: fetch passed Feb 13 20:06:40.986471 unknown[773]: fetched user config from "hetzner" Feb 13 20:06:40.986911 ignition[773]: Ignition finished successfully Feb 13 20:06:40.990368 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 20:06:41.003855 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 20:06:41.021556 ignition[780]: Ignition 2.19.0 Feb 13 20:06:41.021569 ignition[780]: Stage: kargs Feb 13 20:06:41.021783 ignition[780]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:06:41.021795 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 20:06:41.022901 ignition[780]: kargs: kargs passed Feb 13 20:06:41.022963 ignition[780]: Ignition finished successfully Feb 13 20:06:41.025576 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 20:06:41.030430 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 20:06:41.047150 ignition[786]: Ignition 2.19.0 Feb 13 20:06:41.047161 ignition[786]: Stage: disks Feb 13 20:06:41.047711 ignition[786]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:06:41.047724 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 20:06:41.048695 ignition[786]: disks: disks passed Feb 13 20:06:41.048758 ignition[786]: Ignition finished successfully Feb 13 20:06:41.050605 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Feb 13 20:06:41.051830 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 20:06:41.054311 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:06:41.054939 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:06:41.055522 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:06:41.056977 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:06:41.065472 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 20:06:41.084740 systemd-fsck[794]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Feb 13 20:06:41.088290 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 20:06:41.094502 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 20:06:41.145425 kernel: EXT4-fs (sda9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none. Feb 13 20:06:41.143230 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 20:06:41.144242 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 20:06:41.149491 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:06:41.154527 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 20:06:41.158501 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 20:06:41.162480 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 20:06:41.165895 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Feb 13 20:06:41.178076 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (802) Feb 13 20:06:41.181536 kernel: BTRFS info (device sda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 20:06:41.181612 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 20:06:41.181624 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:06:41.190123 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 20:06:41.198496 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 20:06:41.207121 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:06:41.207167 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:06:41.208432 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 20:06:41.272313 coreos-metadata[804]: Feb 13 20:06:41.272 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Feb 13 20:06:41.275791 coreos-metadata[804]: Feb 13 20:06:41.275 INFO Fetch successful Feb 13 20:06:41.277372 coreos-metadata[804]: Feb 13 20:06:41.277 INFO wrote hostname ci-4081-3-1-8-7bfd910be1 to /sysroot/etc/hostname Feb 13 20:06:41.280204 initrd-setup-root[829]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 20:06:41.280244 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:06:41.289477 initrd-setup-root[837]: cut: /sysroot/etc/group: No such file or directory Feb 13 20:06:41.295447 initrd-setup-root[844]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 20:06:41.301574 initrd-setup-root[851]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 20:06:41.424896 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 20:06:41.432394 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 20:06:41.438941 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Feb 13 20:06:41.451250 kernel: BTRFS info (device sda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 20:06:41.487229 ignition[918]: INFO : Ignition 2.19.0 Feb 13 20:06:41.487229 ignition[918]: INFO : Stage: mount Feb 13 20:06:41.487229 ignition[918]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:06:41.487229 ignition[918]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 20:06:41.486207 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 20:06:41.491366 ignition[918]: INFO : mount: mount passed Feb 13 20:06:41.491366 ignition[918]: INFO : Ignition finished successfully Feb 13 20:06:41.490238 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 20:06:41.506687 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 20:06:41.510576 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 20:06:41.524470 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:06:41.539275 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (930) Feb 13 20:06:41.541587 kernel: BTRFS info (device sda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 20:06:41.541688 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 20:06:41.541740 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:06:41.545299 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:06:41.545380 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:06:41.548729 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 20:06:41.582847 ignition[947]: INFO : Ignition 2.19.0 Feb 13 20:06:41.583731 ignition[947]: INFO : Stage: files Feb 13 20:06:41.584369 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:06:41.586024 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 20:06:41.586024 ignition[947]: DEBUG : files: compiled without relabeling support, skipping Feb 13 20:06:41.587880 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 20:06:41.587880 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 20:06:41.592047 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 20:06:41.592047 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 20:06:41.592047 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 20:06:41.591588 unknown[947]: wrote ssh authorized keys file for user: core Feb 13 20:06:41.594857 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Feb 13 20:06:41.594857 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Feb 13 20:06:41.679269 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 20:06:41.909617 systemd-networkd[770]: eth1: Gained IPv6LL Feb 13 20:06:42.219218 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Feb 13 20:06:42.219218 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 20:06:42.221574 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
[finished] writing file "/sysroot/home/core/install.sh" Feb 13 20:06:42.221574 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:06:42.223635 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:06:42.223635 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:06:42.223635 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:06:42.223635 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:06:42.223635 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:06:42.230220 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:06:42.230220 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:06:42.230220 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 20:06:42.230220 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 20:06:42.230220 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 20:06:42.230220 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: 
op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 Feb 13 20:06:42.549797 systemd-networkd[770]: eth0: Gained IPv6LL Feb 13 20:06:42.810166 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 20:06:43.148180 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 20:06:43.148180 ignition[947]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 20:06:43.152027 ignition[947]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:06:43.152027 ignition[947]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:06:43.152027 ignition[947]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 20:06:43.152027 ignition[947]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Feb 13 20:06:43.152027 ignition[947]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Feb 13 20:06:43.152027 ignition[947]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Feb 13 20:06:43.152027 ignition[947]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Feb 13 20:06:43.152027 ignition[947]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Feb 13 20:06:43.152027 ignition[947]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 20:06:43.152027 ignition[947]: INFO : files: createResultFile: 
createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:06:43.152027 ignition[947]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:06:43.152027 ignition[947]: INFO : files: files passed Feb 13 20:06:43.152027 ignition[947]: INFO : Ignition finished successfully Feb 13 20:06:43.151952 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 20:06:43.159585 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 20:06:43.171472 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 20:06:43.177562 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 20:06:43.177744 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 20:06:43.184380 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:06:43.184380 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:06:43.187513 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:06:43.191348 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:06:43.193560 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 20:06:43.199553 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 20:06:43.241505 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 20:06:43.243176 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 20:06:43.246079 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 20:06:43.247626 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Feb 13 20:06:43.249497 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 20:06:43.256739 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 20:06:43.281723 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:06:43.290548 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 20:06:43.303672 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:06:43.305110 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:06:43.305920 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 20:06:43.307009 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 20:06:43.307215 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:06:43.308668 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 20:06:43.309722 systemd[1]: Stopped target basic.target - Basic System. Feb 13 20:06:43.310870 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 20:06:43.312014 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:06:43.313010 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 20:06:43.314055 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:06:43.314989 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:06:43.315982 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:06:43.317116 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 20:06:43.318161 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:06:43.319123 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Feb 13 20:06:43.319351 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:06:43.320492 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:06:43.321493 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:06:43.322408 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:06:43.322830 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:06:43.323677 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:06:43.323863 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:06:43.325228 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:06:43.325480 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:06:43.326438 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:06:43.326590 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:06:43.327331 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 20:06:43.327491 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:06:43.338134 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:06:43.339122 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:06:43.339416 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:06:43.347782 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:06:43.348571 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:06:43.350674 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:06:43.352952 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Feb 13 20:06:43.353585 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:06:43.360619 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:06:43.364292 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:06:43.372236 ignition[999]: INFO : Ignition 2.19.0 Feb 13 20:06:43.372236 ignition[999]: INFO : Stage: umount Feb 13 20:06:43.372236 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:06:43.372236 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 20:06:43.380184 ignition[999]: INFO : umount: umount passed Feb 13 20:06:43.380184 ignition[999]: INFO : Ignition finished successfully Feb 13 20:06:43.376007 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:06:43.380602 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:06:43.380755 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:06:43.381954 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:06:43.382070 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:06:43.384148 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:06:43.385489 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:06:43.386719 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:06:43.386780 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:06:43.393753 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 20:06:43.393824 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 20:06:43.396240 systemd[1]: Stopped target network.target - Network. Feb 13 20:06:43.398773 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:06:43.398949 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Feb 13 20:06:43.400582 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:06:43.401482 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:06:43.403299 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:06:43.404218 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:06:43.408330 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:06:43.410190 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:06:43.410285 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:06:43.414288 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:06:43.414440 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:06:43.416346 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:06:43.416478 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:06:43.418711 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:06:43.418774 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:06:43.419553 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:06:43.419599 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:06:43.420697 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:06:43.421460 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:06:43.427975 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:06:43.428136 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:06:43.429270 systemd-networkd[770]: eth1: DHCPv6 lease lost Feb 13 20:06:43.429705 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:06:43.429768 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Feb 13 20:06:43.434296 systemd-networkd[770]: eth0: DHCPv6 lease lost Feb 13 20:06:43.436222 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:06:43.436401 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:06:43.438184 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:06:43.438254 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:06:43.447485 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:06:43.448357 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:06:43.448463 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:06:43.449482 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:06:43.449535 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:06:43.450518 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:06:43.450567 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:06:43.451282 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:06:43.465841 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:06:43.465971 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:06:43.475754 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:06:43.476029 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:06:43.478597 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:06:43.478741 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:06:43.480073 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:06:43.480116 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Feb 13 20:06:43.481181 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:06:43.481252 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:06:43.482800 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:06:43.482850 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:06:43.484523 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:06:43.484596 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:06:43.500513 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:06:43.503107 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:06:43.503800 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:06:43.505229 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:06:43.505290 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:06:43.508450 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:06:43.508882 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:06:43.510696 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:06:43.517517 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:06:43.527349 systemd[1]: Switching root. Feb 13 20:06:43.555396 systemd-journald[235]: Journal stopped Feb 13 20:06:44.585109 systemd-journald[235]: Received SIGTERM from PID 1 (systemd). 
Feb 13 20:06:44.585233 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:06:44.585251 kernel: SELinux: policy capability open_perms=1 Feb 13 20:06:44.585261 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:06:44.585271 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:06:44.585281 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:06:44.585309 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:06:44.585320 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:06:44.585334 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:06:44.585346 kernel: audit: type=1403 audit(1739477203.719:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:06:44.585357 systemd[1]: Successfully loaded SELinux policy in 38.202ms. Feb 13 20:06:44.585381 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.841ms. Feb 13 20:06:44.585397 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:06:44.585408 systemd[1]: Detected virtualization kvm. Feb 13 20:06:44.585419 systemd[1]: Detected architecture arm64. Feb 13 20:06:44.585430 systemd[1]: Detected first boot. Feb 13 20:06:44.585440 systemd[1]: Hostname set to . Feb 13 20:06:44.585451 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:06:44.585461 zram_generator::config[1042]: No configuration found. Feb 13 20:06:44.585473 systemd[1]: Populated /etc with preset unit settings. Feb 13 20:06:44.585483 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 20:06:44.585494 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Feb 13 20:06:44.585504 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 20:06:44.585516 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 20:06:44.585527 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:06:44.585537 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 20:06:44.585550 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:06:44.585562 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:06:44.585573 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:06:44.585584 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:06:44.585595 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 20:06:44.585637 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:06:44.585652 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:06:44.585663 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 20:06:44.585676 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 20:06:44.585687 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:06:44.585698 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:06:44.585708 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 20:06:44.585718 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:06:44.585729 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Feb 13 20:06:44.585740 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 20:06:44.585751 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 20:06:44.585762 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:06:44.585772 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:06:44.585787 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:06:44.585798 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:06:44.585808 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:06:44.585819 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:06:44.585829 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 20:06:44.585839 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:06:44.585853 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:06:44.585864 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:06:44.585874 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 20:06:44.585886 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 20:06:44.585896 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 20:06:44.585907 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 20:06:44.585917 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 20:06:44.585928 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 20:06:44.585938 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Feb 13 20:06:44.585950 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 20:06:44.585960 systemd[1]: Reached target machines.target - Containers. Feb 13 20:06:44.585971 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 20:06:44.585981 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:06:44.586014 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:06:44.586030 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 20:06:44.586043 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:06:44.586056 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:06:44.586066 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:06:44.586077 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 20:06:44.586087 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:06:44.586098 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 20:06:44.586110 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 20:06:44.586122 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 20:06:44.586133 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 20:06:44.586144 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 20:06:44.586155 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:06:44.586165 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Feb 13 20:06:44.586176 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 20:06:44.590551 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 20:06:44.590615 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:06:44.590627 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 20:06:44.590652 systemd[1]: Stopped verity-setup.service. Feb 13 20:06:44.590663 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 20:06:44.590674 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 20:06:44.590684 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 20:06:44.590695 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 20:06:44.590707 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 20:06:44.590718 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 20:06:44.590729 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:06:44.590742 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 20:06:44.590752 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 20:06:44.590763 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:06:44.590773 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:06:44.590787 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:06:44.590799 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:06:44.590812 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:06:44.590825 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Feb 13 20:06:44.590884 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 20:06:44.590941 systemd-journald[1109]: Collecting audit messages is disabled. Feb 13 20:06:44.590973 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 20:06:44.590987 systemd-journald[1109]: Journal started Feb 13 20:06:44.591011 systemd-journald[1109]: Runtime Journal (/run/log/journal/e75ecf10604448e6b76bdafb933f7364) is 8.0M, max 76.6M, 68.6M free. Feb 13 20:06:44.594184 kernel: loop: module loaded Feb 13 20:06:44.594324 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 20:06:44.594358 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 20:06:44.280264 systemd[1]: Queued start job for default target multi-user.target. Feb 13 20:06:44.306772 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Feb 13 20:06:44.307185 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 20:06:44.600363 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:06:44.600427 kernel: fuse: init (API version 7.39) Feb 13 20:06:44.600442 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 20:06:44.609561 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 20:06:44.628224 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 20:06:44.628334 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:06:44.629220 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Feb 13 20:06:44.633236 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:06:44.641693 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 20:06:44.652231 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:06:44.663233 kernel: ACPI: bus type drm_connector registered Feb 13 20:06:44.666981 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:06:44.675239 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:06:44.677902 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 20:06:44.679698 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:06:44.680260 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:06:44.682710 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 20:06:44.683325 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 20:06:44.685071 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:06:44.685973 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:06:44.687891 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:06:44.698595 kernel: loop0: detected capacity change from 0 to 114432 Feb 13 20:06:44.690732 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:06:44.727223 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:06:44.744552 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 20:06:44.752793 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Feb 13 20:06:44.757357 kernel: loop1: detected capacity change from 0 to 201592 Feb 13 20:06:44.755940 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:06:44.759000 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 20:06:44.763552 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:06:44.765053 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:06:44.771181 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 20:06:44.781544 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:06:44.785634 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:06:44.805597 systemd-journald[1109]: Time spent on flushing to /var/log/journal/e75ecf10604448e6b76bdafb933f7364 is 78.097ms for 1131 entries. Feb 13 20:06:44.805597 systemd-journald[1109]: System Journal (/var/log/journal/e75ecf10604448e6b76bdafb933f7364) is 8.0M, max 584.8M, 576.8M free. Feb 13 20:06:44.906545 systemd-journald[1109]: Received client request to flush runtime journal. Feb 13 20:06:44.907159 kernel: loop2: detected capacity change from 0 to 8 Feb 13 20:06:44.907220 kernel: loop3: detected capacity change from 0 to 114328 Feb 13 20:06:44.830691 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:06:44.840433 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:06:44.887782 udevadm[1171]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 20:06:44.908360 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:06:44.917656 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Feb 13 20:06:44.920001 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:06:44.921770 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:06:44.923488 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:06:44.944236 kernel: loop4: detected capacity change from 0 to 114432 Feb 13 20:06:44.985369 kernel: loop5: detected capacity change from 0 to 201592 Feb 13 20:06:44.987464 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Feb 13 20:06:44.988077 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Feb 13 20:06:45.001343 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:06:45.019251 kernel: loop6: detected capacity change from 0 to 8 Feb 13 20:06:45.021639 kernel: loop7: detected capacity change from 0 to 114328 Feb 13 20:06:45.044922 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Feb 13 20:06:45.046851 (sd-merge)[1180]: Merged extensions into '/usr'. Feb 13 20:06:45.054165 systemd[1]: Reloading requested from client PID 1137 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:06:45.054208 systemd[1]: Reloading... Feb 13 20:06:45.199292 zram_generator::config[1207]: No configuration found. Feb 13 20:06:45.388724 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:06:45.394047 ldconfig[1129]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:06:45.442131 systemd[1]: Reloading finished in 387 ms. Feb 13 20:06:45.465761 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Feb 13 20:06:45.467480 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:06:45.479813 systemd[1]: Starting ensure-sysext.service... Feb 13 20:06:45.483997 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:06:45.495479 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:06:45.495641 systemd[1]: Reloading... Feb 13 20:06:45.554550 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 20:06:45.554829 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:06:45.556829 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:06:45.557073 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Feb 13 20:06:45.557117 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Feb 13 20:06:45.566554 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:06:45.566570 systemd-tmpfiles[1245]: Skipping /boot Feb 13 20:06:45.576690 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:06:45.576709 systemd-tmpfiles[1245]: Skipping /boot Feb 13 20:06:45.607410 zram_generator::config[1271]: No configuration found. Feb 13 20:06:45.703214 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:06:45.751120 systemd[1]: Reloading finished in 255 ms. Feb 13 20:06:45.781508 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 20:06:45.788171 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Feb 13 20:06:45.815784 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:06:45.821516 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 20:06:45.834626 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:06:45.840511 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:06:45.844547 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:06:45.850052 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:06:45.858347 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:06:45.867392 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:06:45.872900 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:06:45.877120 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:06:45.878467 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:06:45.882535 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:06:45.882725 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:06:45.887417 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:06:45.887622 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:06:45.891500 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:06:45.903839 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Feb 13 20:06:45.905530 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:06:45.907652 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:06:45.917828 systemd[1]: Finished ensure-sysext.service. Feb 13 20:06:45.929063 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:06:45.930970 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:06:45.934926 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:06:45.935923 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:06:45.940482 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:06:45.941185 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:06:45.958154 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:06:45.958423 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:06:45.962780 systemd-udevd[1321]: Using default interface naming scheme 'v255'. Feb 13 20:06:45.968015 augenrules[1342]: No rules Feb 13 20:06:45.970537 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 20:06:45.974126 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:06:45.979249 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:06:45.981920 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:06:45.988454 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:06:46.011579 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Feb 13 20:06:46.017758 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:06:46.031845 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:06:46.052178 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:06:46.058999 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:06:46.066646 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:06:46.179707 systemd-resolved[1320]: Positive Trust Anchors: Feb 13 20:06:46.179743 systemd-resolved[1320]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:06:46.179776 systemd-resolved[1320]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:06:46.186934 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 20:06:46.187953 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:06:46.190637 systemd-networkd[1357]: lo: Link UP Feb 13 20:06:46.190644 systemd-networkd[1357]: lo: Gained carrier Feb 13 20:06:46.191414 systemd-networkd[1357]: Enumeration completed Feb 13 20:06:46.191827 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Feb 13 20:06:46.191991 systemd-timesyncd[1340]: No network connectivity, watching for changes. Feb 13 20:06:46.192822 systemd-resolved[1320]: Using system hostname 'ci-4081-3-1-8-7bfd910be1'. Feb 13 20:06:46.207606 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:06:46.208249 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:06:46.209499 systemd[1]: Reached target network.target - Network. Feb 13 20:06:46.211437 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:06:46.243121 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 20:06:46.265179 systemd-networkd[1357]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:06:46.265213 systemd-networkd[1357]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:06:46.266150 systemd-networkd[1357]: eth1: Link UP Feb 13 20:06:46.266163 systemd-networkd[1357]: eth1: Gained carrier Feb 13 20:06:46.266184 systemd-networkd[1357]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:06:46.298351 systemd-networkd[1357]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 20:06:46.299626 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Feb 13 20:06:46.315226 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 20:06:46.354041 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Feb 13 20:06:46.354215 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Feb 13 20:06:46.360767 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:06:46.364520 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:06:46.370083 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:06:46.371503 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:06:46.371546 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:06:46.377679 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:06:46.379451 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:06:46.385853 systemd-networkd[1357]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:06:46.385865 systemd-networkd[1357]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:06:46.387343 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Feb 13 20:06:46.387579 systemd-networkd[1357]: eth0: Link UP Feb 13 20:06:46.387583 systemd-networkd[1357]: eth0: Gained carrier Feb 13 20:06:46.387607 systemd-networkd[1357]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:06:46.390699 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Feb 13 20:06:46.402357 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:06:46.404116 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:06:46.406551 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Feb 13 20:06:46.406919 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:06:46.409220 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:06:46.410305 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:06:46.439363 systemd-networkd[1357]: eth0: DHCPv4 address 168.119.253.211/32, gateway 172.31.1.1 acquired from 172.31.1.1 Feb 13 20:06:46.439713 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Feb 13 20:06:46.440387 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Feb 13 20:06:46.459240 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Feb 13 20:06:46.459371 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1370) Feb 13 20:06:46.460812 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Feb 13 20:06:46.460885 kernel: [drm] features: -context_init Feb 13 20:06:46.471291 kernel: [drm] number of scanouts: 1 Feb 13 20:06:46.471473 kernel: [drm] number of cap sets: 0 Feb 13 20:06:46.483279 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Feb 13 20:06:46.499297 kernel: Console: switching to colour frame buffer device 160x50 Feb 13 20:06:46.500392 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:06:46.519754 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Feb 13 20:06:46.528517 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Feb 13 20:06:46.540693 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:06:46.543447 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Feb 13 20:06:46.543654 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:06:46.561034 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:06:46.564255 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:06:46.635687 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:06:46.644278 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:06:46.652523 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:06:46.674937 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:06:46.704358 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:06:46.705707 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:06:46.706690 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:06:46.707918 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:06:46.709030 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:06:46.710347 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:06:46.711367 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:06:46.712001 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:06:46.712679 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:06:46.712712 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:06:46.713164 systemd[1]: Reached target timers.target - Timer Units. 
Feb 13 20:06:46.715068 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:06:46.717441 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:06:46.724994 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:06:46.728473 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:06:46.730135 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:06:46.731220 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:06:46.732029 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:06:46.732667 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:06:46.732746 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:06:46.741664 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:06:46.746547 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 20:06:46.749226 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:06:46.759413 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:06:46.763667 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:06:46.769038 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:06:46.771406 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:06:46.782180 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:06:46.785789 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Feb 13 20:06:46.793994 jq[1434]: false Feb 13 20:06:46.794311 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Feb 13 20:06:46.797642 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:06:46.800922 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:06:46.806099 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:06:46.808554 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 20:06:46.809147 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:06:46.810967 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:06:46.819660 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:06:46.822215 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:06:46.835932 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:06:46.836278 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Feb 13 20:06:46.868314 extend-filesystems[1435]: Found loop4 Feb 13 20:06:46.868314 extend-filesystems[1435]: Found loop5 Feb 13 20:06:46.868314 extend-filesystems[1435]: Found loop6 Feb 13 20:06:46.868314 extend-filesystems[1435]: Found loop7 Feb 13 20:06:46.868314 extend-filesystems[1435]: Found sda Feb 13 20:06:46.868314 extend-filesystems[1435]: Found sda1 Feb 13 20:06:46.868314 extend-filesystems[1435]: Found sda2 Feb 13 20:06:46.868314 extend-filesystems[1435]: Found sda3 Feb 13 20:06:46.868314 extend-filesystems[1435]: Found usr Feb 13 20:06:46.868314 extend-filesystems[1435]: Found sda4 Feb 13 20:06:46.868314 extend-filesystems[1435]: Found sda6 Feb 13 20:06:46.868314 extend-filesystems[1435]: Found sda7 Feb 13 20:06:46.868314 extend-filesystems[1435]: Found sda9 Feb 13 20:06:46.868314 extend-filesystems[1435]: Checking size of /dev/sda9 Feb 13 20:06:46.871726 dbus-daemon[1433]: [system] SELinux support is enabled Feb 13 20:06:46.884810 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Feb 13 20:06:46.909945 jq[1447]: true Feb 13 20:06:46.918684 coreos-metadata[1432]: Feb 13 20:06:46.882 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Feb 13 20:06:46.918684 coreos-metadata[1432]: Feb 13 20:06:46.889 INFO Fetch successful Feb 13 20:06:46.918684 coreos-metadata[1432]: Feb 13 20:06:46.892 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Feb 13 20:06:46.918684 coreos-metadata[1432]: Feb 13 20:06:46.906 INFO Fetch successful Feb 13 20:06:46.885528 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:06:46.927443 update_engine[1446]: I20250213 20:06:46.919056 1446 main.cc:92] Flatcar Update Engine starting Feb 13 20:06:46.927443 update_engine[1446]: I20250213 20:06:46.926959 1446 update_check_scheduler.cc:74] Next update check in 4m35s Feb 13 20:06:46.895320 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:06:46.895551 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 20:06:46.901526 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:06:46.901581 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:06:46.933424 extend-filesystems[1435]: Resized partition /dev/sda9 Feb 13 20:06:46.905511 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Feb 13 20:06:46.940945 extend-filesystems[1474]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:06:46.949568 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Feb 13 20:06:46.905540 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:06:46.924825 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:06:46.939808 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:06:46.940037 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:06:46.947539 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:06:46.966214 jq[1465]: true Feb 13 20:06:46.966457 tar[1450]: linux-arm64/LICENSE Feb 13 20:06:46.966457 tar[1450]: linux-arm64/helm Feb 13 20:06:47.071500 systemd-logind[1444]: New seat seat0. Feb 13 20:06:47.085613 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 20:06:47.085641 systemd-logind[1444]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Feb 13 20:06:47.130844 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1359) Feb 13 20:06:47.141144 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:06:47.179972 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 20:06:47.184424 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:06:47.199031 bash[1506]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:06:47.205365 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:06:47.221709 systemd[1]: Starting sshkeys.service... Feb 13 20:06:47.277405 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
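The resize2fs and EXT4-fs messages above record an online grow of the root filesystem from 1617920 to 9393147 blocks. With the 4 KiB block size the kernel reports ("9393147 (4k) blocks"), those counts work out as follows; this is a quick arithmetic sketch using only numbers taken from the log:

```shell
# Convert the block counts from the resize log into bytes, assuming
# the 4 KiB block size reported by the kernel line above.
old_bytes=$((1617920 * 4096))
new_bytes=$((9393147 * 4096))
echo "old=${old_bytes} new=${new_bytes}"
# old=6627000320 (~6.2 GiB), new=38474330112 (~35.8 GiB)
```

That is, the image's default ~6 GiB root partition was expanded online to fill the ~36 GiB disk, which is what Flatcar's extend-filesystems.service automates on first boot.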
Feb 13 20:06:47.293803 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 20:06:47.300349 containerd[1462]: time="2025-02-13T20:06:47.299002080Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:06:47.319219 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Feb 13 20:06:47.348953 coreos-metadata[1514]: Feb 13 20:06:47.345 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Feb 13 20:06:47.348953 coreos-metadata[1514]: Feb 13 20:06:47.348 INFO Fetch successful Feb 13 20:06:47.353935 extend-filesystems[1474]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 20:06:47.353935 extend-filesystems[1474]: old_desc_blocks = 1, new_desc_blocks = 5 Feb 13 20:06:47.353935 extend-filesystems[1474]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Feb 13 20:06:47.360986 extend-filesystems[1435]: Resized filesystem in /dev/sda9 Feb 13 20:06:47.360986 extend-filesystems[1435]: Found sr0 Feb 13 20:06:47.355188 unknown[1514]: wrote ssh authorized keys file for user: core Feb 13 20:06:47.370171 containerd[1462]: time="2025-02-13T20:06:47.365486960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:06:47.361928 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:06:47.362103 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:06:47.377212 containerd[1462]: time="2025-02-13T20:06:47.371841720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:06:47.377212 containerd[1462]: time="2025-02-13T20:06:47.371887360Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:06:47.377212 containerd[1462]: time="2025-02-13T20:06:47.371906080Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:06:47.377212 containerd[1462]: time="2025-02-13T20:06:47.372085440Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:06:47.377212 containerd[1462]: time="2025-02-13T20:06:47.372105400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:06:47.377212 containerd[1462]: time="2025-02-13T20:06:47.372169840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:06:47.377212 containerd[1462]: time="2025-02-13T20:06:47.372182560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:06:47.377212 containerd[1462]: time="2025-02-13T20:06:47.372443800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:06:47.377212 containerd[1462]: time="2025-02-13T20:06:47.372463360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 13 20:06:47.377212 containerd[1462]: time="2025-02-13T20:06:47.372488280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:06:47.377212 containerd[1462]: time="2025-02-13T20:06:47.372498480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:06:47.377592 containerd[1462]: time="2025-02-13T20:06:47.372582040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:06:47.377592 containerd[1462]: time="2025-02-13T20:06:47.372778960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:06:47.377592 containerd[1462]: time="2025-02-13T20:06:47.372871120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:06:47.377592 containerd[1462]: time="2025-02-13T20:06:47.372884120Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:06:47.377592 containerd[1462]: time="2025-02-13T20:06:47.372952800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:06:47.377592 containerd[1462]: time="2025-02-13T20:06:47.372993080Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:06:47.389824 containerd[1462]: time="2025-02-13T20:06:47.389778080Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:06:47.390364 containerd[1462]: time="2025-02-13T20:06:47.390339920Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Feb 13 20:06:47.390435 containerd[1462]: time="2025-02-13T20:06:47.390423080Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:06:47.391565 containerd[1462]: time="2025-02-13T20:06:47.391254160Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:06:47.391565 containerd[1462]: time="2025-02-13T20:06:47.391291880Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:06:47.391565 containerd[1462]: time="2025-02-13T20:06:47.391537680Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:06:47.391912 containerd[1462]: time="2025-02-13T20:06:47.391888440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:06:47.392065 containerd[1462]: time="2025-02-13T20:06:47.392042320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:06:47.392100 containerd[1462]: time="2025-02-13T20:06:47.392069480Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:06:47.392100 containerd[1462]: time="2025-02-13T20:06:47.392085760Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:06:47.392135 containerd[1462]: time="2025-02-13T20:06:47.392100360Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:06:47.392135 containerd[1462]: time="2025-02-13T20:06:47.392113960Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Feb 13 20:06:47.392135 containerd[1462]: time="2025-02-13T20:06:47.392129920Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:06:47.392235 containerd[1462]: time="2025-02-13T20:06:47.392145440Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:06:47.392235 containerd[1462]: time="2025-02-13T20:06:47.392160760Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:06:47.392235 containerd[1462]: time="2025-02-13T20:06:47.392174200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:06:47.392235 containerd[1462]: time="2025-02-13T20:06:47.392186680Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:06:47.392376 containerd[1462]: time="2025-02-13T20:06:47.392248920Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:06:47.392376 containerd[1462]: time="2025-02-13T20:06:47.392299680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:06:47.392376 containerd[1462]: time="2025-02-13T20:06:47.392316760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:06:47.392376 containerd[1462]: time="2025-02-13T20:06:47.392330160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:06:47.392376 containerd[1462]: time="2025-02-13T20:06:47.392345000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Feb 13 20:06:47.392376 containerd[1462]: time="2025-02-13T20:06:47.392360680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:06:47.392376 containerd[1462]: time="2025-02-13T20:06:47.392375000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:06:47.392501 containerd[1462]: time="2025-02-13T20:06:47.392387160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:06:47.392501 containerd[1462]: time="2025-02-13T20:06:47.392401560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:06:47.392501 containerd[1462]: time="2025-02-13T20:06:47.392414960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:06:47.392501 containerd[1462]: time="2025-02-13T20:06:47.392436040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:06:47.392501 containerd[1462]: time="2025-02-13T20:06:47.392448680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:06:47.392501 containerd[1462]: time="2025-02-13T20:06:47.392460720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:06:47.392501 containerd[1462]: time="2025-02-13T20:06:47.392473720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:06:47.392501 containerd[1462]: time="2025-02-13T20:06:47.392494160Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:06:47.392637 containerd[1462]: time="2025-02-13T20:06:47.392519480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Feb 13 20:06:47.392637 containerd[1462]: time="2025-02-13T20:06:47.392532760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:06:47.392637 containerd[1462]: time="2025-02-13T20:06:47.392545320Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:06:47.392689 containerd[1462]: time="2025-02-13T20:06:47.392666760Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:06:47.392708 containerd[1462]: time="2025-02-13T20:06:47.392686560Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:06:47.392708 containerd[1462]: time="2025-02-13T20:06:47.392699800Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:06:47.392832 containerd[1462]: time="2025-02-13T20:06:47.392774440Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:06:47.392832 containerd[1462]: time="2025-02-13T20:06:47.392799440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:06:47.392832 containerd[1462]: time="2025-02-13T20:06:47.392815120Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:06:47.392832 containerd[1462]: time="2025-02-13T20:06:47.392826200Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:06:47.392910 containerd[1462]: time="2025-02-13T20:06:47.392837280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 20:06:47.393209 containerd[1462]: time="2025-02-13T20:06:47.393140520Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:06:47.394578 containerd[1462]: time="2025-02-13T20:06:47.394534480Z" level=info msg="Connect containerd service" Feb 13 20:06:47.394642 containerd[1462]: time="2025-02-13T20:06:47.394625840Z" level=info msg="using legacy CRI server" Feb 13 20:06:47.394642 containerd[1462]: time="2025-02-13T20:06:47.394634920Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:06:47.395746 containerd[1462]: time="2025-02-13T20:06:47.395699440Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:06:47.401220 containerd[1462]: time="2025-02-13T20:06:47.400951440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:06:47.403053 containerd[1462]: time="2025-02-13T20:06:47.401509600Z" level=info msg="Start subscribing containerd event" Feb 13 20:06:47.403053 containerd[1462]: time="2025-02-13T20:06:47.401575720Z" level=info msg="Start recovering state" Feb 13 20:06:47.403053 containerd[1462]: time="2025-02-13T20:06:47.401656080Z" level=info msg="Start event monitor" Feb 13 20:06:47.403053 containerd[1462]: time="2025-02-13T20:06:47.401669000Z" level=info msg="Start 
snapshots syncer" Feb 13 20:06:47.403053 containerd[1462]: time="2025-02-13T20:06:47.401680680Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:06:47.403053 containerd[1462]: time="2025-02-13T20:06:47.401690040Z" level=info msg="Start streaming server" Feb 13 20:06:47.403053 containerd[1462]: time="2025-02-13T20:06:47.402786520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:06:47.403053 containerd[1462]: time="2025-02-13T20:06:47.402973920Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:06:47.407442 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:06:47.409345 containerd[1462]: time="2025-02-13T20:06:47.409305400Z" level=info msg="containerd successfully booted in 0.113438s" Feb 13 20:06:47.409619 locksmithd[1477]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:06:47.410023 update-ssh-keys[1524]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:06:47.411577 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 20:06:47.422527 systemd[1]: Finished sshkeys.service. Feb 13 20:06:47.477571 systemd-networkd[1357]: eth1: Gained IPv6LL Feb 13 20:06:47.478131 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Feb 13 20:06:47.483715 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:06:47.485774 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:06:47.495834 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:06:47.508756 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:06:47.562047 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
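In the containerd startup above, the one error line ("no network config found in /etc/cni/net.d: cni plugin not initialized") is expected at this stage: no CNI plugin has been installed yet, so the conf directory is empty, and containerd retries later. For reference, a minimal CNI network configuration list of the kind that directory would eventually hold looks roughly like the following; the name and subnet here are hypothetical examples, not values from this log:

```json
{
  "cniVersion": "0.4.0",
  "name": "examplenet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.85.0.0/16"
      }
    }
  ]
}
```

On a kubeadm-style node this file is typically dropped in by the cluster's network add-on rather than written by hand.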
Feb 13 20:06:47.763989 sshd_keygen[1475]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:06:47.794498 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:06:47.814716 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:06:47.836394 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:06:47.838372 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:06:47.846582 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:06:47.861462 systemd-networkd[1357]: eth0: Gained IPv6LL Feb 13 20:06:47.863077 systemd-timesyncd[1340]: Network configuration changed, trying to establish connection. Feb 13 20:06:47.890302 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:06:47.903169 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:06:47.912996 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 20:06:47.914104 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:06:47.921222 tar[1450]: linux-arm64/README.md Feb 13 20:06:47.953927 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 20:06:48.510600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:06:48.510719 (kubelet)[1564]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:06:48.513039 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 20:06:48.516259 systemd[1]: Startup finished in 919ms (kernel) + 6.002s (initrd) + 4.836s (userspace) = 11.758s. 
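The kubelet failures that follow form a crash loop driven by systemd's restart logic, not by repeated manual starts: note the "Scheduled restart job, restart counter is at N" lines and the roughly ten-second spacing between attempts (20:06:49, 20:06:59, 20:07:09, ...). A service unit fragment that produces this cadence would look roughly like the one below; this is an illustrative sketch, not Flatcar's actual kubelet.service:

```ini
[Service]
# Restart on any exit, waiting 10 s between attempts --
# matching the ~10 s gap between failures in the log above.
Restart=always
RestartSec=10
```

With no start-rate limit in effect, systemd keeps incrementing the restart counter indefinitely, which is exactly what the rest of this log shows.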
Feb 13 20:06:49.080493 kubelet[1564]: E0213 20:06:49.080379 1564 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:06:49.084323 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:06:49.084616 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:06:59.272624 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:06:59.283645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:06:59.411905 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:06:59.417935 (kubelet)[1584]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:06:59.482397 kubelet[1584]: E0213 20:06:59.482319 1584 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:06:59.486624 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:06:59.486762 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:07:09.522705 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 20:07:09.530519 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:07:09.692338 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
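Every kubelet attempt in this log dies the same way: /var/lib/kubelet/config.yaml does not exist. That file is written by `kubeadm init` or `kubeadm join`, so on a freshly provisioned node that has not yet joined a cluster this failure (and the resulting restart loop) is the expected state rather than a kubelet bug. A minimal pre-flight check along these lines distinguishes that case; the script is a sketch added for illustration, not part of the log:

```shell
# Check for the kubeadm-generated kubelet config before blaming kubelet
# itself. CFG defaults to the path shown in the error message above.
CFG="${CFG:-/var/lib/kubelet/config.yaml}"
if [ -f "$CFG" ]; then
    echo "kubelet config present: $CFG"
else
    echo "kubelet config missing: run 'kubeadm init' or 'kubeadm join' to create $CFG" >&2
fi
```

Once kubeadm writes the config, the next scheduled restart picks it up and the loop ends on its own.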
Feb 13 20:07:09.706678 (kubelet)[1598]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:07:09.762280 kubelet[1598]: E0213 20:07:09.762139 1598 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:07:09.766153 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:07:09.766410 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:07:18.132381 systemd-timesyncd[1340]: Contacted time server 213.239.234.28:123 (2.flatcar.pool.ntp.org). Feb 13 20:07:18.132458 systemd-timesyncd[1340]: Initial clock synchronization to Thu 2025-02-13 20:07:18.324123 UTC. Feb 13 20:07:19.772715 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 20:07:19.780836 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:07:19.945543 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 20:07:19.947064 (kubelet)[1614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:07:20.001993 kubelet[1614]: E0213 20:07:20.001941 1614 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:07:20.006720 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:07:20.006923 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:07:30.023376 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 13 20:07:30.034451 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:07:30.176534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:07:30.177796 (kubelet)[1629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:07:30.221367 kubelet[1629]: E0213 20:07:30.221136 1629 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:07:30.224818 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:07:30.225004 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:07:31.804290 update_engine[1446]: I20250213 20:07:31.803685 1446 update_attempter.cc:509] Updating boot flags... 
Feb 13 20:07:31.879822 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1645) Feb 13 20:07:31.931229 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1647) Feb 13 20:07:40.272881 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Feb 13 20:07:40.288672 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:07:40.434078 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:07:40.451791 (kubelet)[1662]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:07:40.513678 kubelet[1662]: E0213 20:07:40.513595 1662 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:07:40.516385 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:07:40.516563 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:07:50.523547 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Feb 13 20:07:50.531640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:07:50.666053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 20:07:50.668494 (kubelet)[1677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:07:50.711558 kubelet[1677]: E0213 20:07:50.711448 1677 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:07:50.715058 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:07:50.715328 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:08:00.772957 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Feb 13 20:08:00.786608 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:08:00.944625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:08:00.944687 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:08:00.993210 kubelet[1691]: E0213 20:08:00.993144 1691 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:08:00.995798 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:08:00.995939 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:08:11.023087 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Feb 13 20:08:11.030580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 20:08:11.222664 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:08:11.225830 (kubelet)[1706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:08:11.287296 kubelet[1706]: E0213 20:08:11.287094 1706 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:08:11.290569 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:08:11.290827 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:08:21.524402 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Feb 13 20:08:21.533544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:08:21.671650 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:08:21.682721 (kubelet)[1721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:08:21.729170 kubelet[1721]: E0213 20:08:21.729078 1721 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:08:21.731968 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:08:21.732418 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:08:31.773466 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. 
Feb 13 20:08:31.783641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:08:31.969503 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:08:31.985462 (kubelet)[1736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:08:32.054523 kubelet[1736]: E0213 20:08:32.054386 1736 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:08:32.057009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:08:32.057149 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:08:40.865062 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 20:08:40.877930 systemd[1]: Started sshd@0-168.119.253.211:22-147.75.109.163:34400.service - OpenSSH per-connection server daemon (147.75.109.163:34400).
Feb 13 20:08:41.870343 sshd[1745]: Accepted publickey for core from 147.75.109.163 port 34400 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034
Feb 13 20:08:41.873874 sshd[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:08:41.889850 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 20:08:41.902902 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 20:08:41.906988 systemd-logind[1444]: New session 1 of user core.
Feb 13 20:08:41.925798 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 20:08:41.938105 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 20:08:41.952790 (systemd)[1749]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 20:08:42.093616 systemd[1749]: Queued start job for default target default.target.
Feb 13 20:08:42.103732 systemd[1749]: Created slice app.slice - User Application Slice.
Feb 13 20:08:42.103764 systemd[1749]: Reached target paths.target - Paths.
Feb 13 20:08:42.103780 systemd[1749]: Reached target timers.target - Timers.
Feb 13 20:08:42.108008 systemd[1749]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 20:08:42.125361 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Feb 13 20:08:42.128751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:08:42.140053 systemd[1749]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 20:08:42.140231 systemd[1749]: Reached target sockets.target - Sockets.
Feb 13 20:08:42.140256 systemd[1749]: Reached target basic.target - Basic System.
Feb 13 20:08:42.140304 systemd[1749]: Reached target default.target - Main User Target.
Feb 13 20:08:42.140339 systemd[1749]: Startup finished in 175ms.
Feb 13 20:08:42.141068 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 20:08:42.142993 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 20:08:42.293131 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:08:42.300359 (kubelet)[1766]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:08:42.353992 kubelet[1766]: E0213 20:08:42.353861 1766 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:08:42.357354 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:08:42.357734 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:08:42.856166 systemd[1]: Started sshd@1-168.119.253.211:22-147.75.109.163:34404.service - OpenSSH per-connection server daemon (147.75.109.163:34404).
Feb 13 20:08:43.839096 sshd[1775]: Accepted publickey for core from 147.75.109.163 port 34404 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034
Feb 13 20:08:43.844699 sshd[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:08:43.852879 systemd-logind[1444]: New session 2 of user core.
Feb 13 20:08:43.871120 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 20:08:44.530494 sshd[1775]: pam_unix(sshd:session): session closed for user core
Feb 13 20:08:44.536113 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 20:08:44.537971 systemd[1]: sshd@1-168.119.253.211:22-147.75.109.163:34404.service: Deactivated successfully.
Feb 13 20:08:44.542479 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit.
Feb 13 20:08:44.545449 systemd-logind[1444]: Removed session 2.
Feb 13 20:08:44.706758 systemd[1]: Started sshd@2-168.119.253.211:22-147.75.109.163:34418.service - OpenSSH per-connection server daemon (147.75.109.163:34418).
Feb 13 20:08:45.691729 sshd[1782]: Accepted publickey for core from 147.75.109.163 port 34418 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034
Feb 13 20:08:45.693953 sshd[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:08:45.703926 systemd-logind[1444]: New session 3 of user core.
Feb 13 20:08:45.718934 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 20:08:46.376002 sshd[1782]: pam_unix(sshd:session): session closed for user core
Feb 13 20:08:46.382322 systemd[1]: sshd@2-168.119.253.211:22-147.75.109.163:34418.service: Deactivated successfully.
Feb 13 20:08:46.386059 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 20:08:46.388342 systemd-logind[1444]: Session 3 logged out. Waiting for processes to exit.
Feb 13 20:08:46.392238 systemd-logind[1444]: Removed session 3.
Feb 13 20:08:46.557355 systemd[1]: Started sshd@3-168.119.253.211:22-147.75.109.163:34422.service - OpenSSH per-connection server daemon (147.75.109.163:34422).
Feb 13 20:08:47.545061 sshd[1789]: Accepted publickey for core from 147.75.109.163 port 34422 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034
Feb 13 20:08:47.547460 sshd[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:08:47.553776 systemd-logind[1444]: New session 4 of user core.
Feb 13 20:08:47.565537 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 20:08:48.238777 sshd[1789]: pam_unix(sshd:session): session closed for user core
Feb 13 20:08:48.243728 systemd[1]: sshd@3-168.119.253.211:22-147.75.109.163:34422.service: Deactivated successfully.
Feb 13 20:08:48.246893 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 20:08:48.251723 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit.
Feb 13 20:08:48.254627 systemd-logind[1444]: Removed session 4.
Feb 13 20:08:48.413731 systemd[1]: Started sshd@4-168.119.253.211:22-147.75.109.163:34436.service - OpenSSH per-connection server daemon (147.75.109.163:34436).
Feb 13 20:08:49.414368 sshd[1796]: Accepted publickey for core from 147.75.109.163 port 34436 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034
Feb 13 20:08:49.417849 sshd[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:08:49.429542 systemd-logind[1444]: New session 5 of user core.
Feb 13 20:08:49.431726 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 20:08:49.951051 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 20:08:49.951502 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:08:50.319869 (dockerd)[1814]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 20:08:50.321599 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 20:08:50.611300 dockerd[1814]: time="2025-02-13T20:08:50.609384057Z" level=info msg="Starting up"
Feb 13 20:08:50.700879 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2128522462-merged.mount: Deactivated successfully.
Feb 13 20:08:50.721485 systemd[1]: var-lib-docker-metacopy\x2dcheck1526659309-merged.mount: Deactivated successfully.
Feb 13 20:08:50.732858 dockerd[1814]: time="2025-02-13T20:08:50.732790116Z" level=info msg="Loading containers: start."
Feb 13 20:08:50.853231 kernel: Initializing XFRM netlink socket
Feb 13 20:08:50.965250 systemd-networkd[1357]: docker0: Link UP
Feb 13 20:08:50.991303 dockerd[1814]: time="2025-02-13T20:08:50.991225934Z" level=info msg="Loading containers: done."
Feb 13 20:08:51.013243 dockerd[1814]: time="2025-02-13T20:08:51.012532260Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 20:08:51.013243 dockerd[1814]: time="2025-02-13T20:08:51.012705029Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Feb 13 20:08:51.013243 dockerd[1814]: time="2025-02-13T20:08:51.012983885Z" level=info msg="Daemon has completed initialization"
Feb 13 20:08:51.059883 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 20:08:51.060163 dockerd[1814]: time="2025-02-13T20:08:51.059764361Z" level=info msg="API listen on /run/docker.sock"
Feb 13 20:08:51.696906 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3363363643-merged.mount: Deactivated successfully.
Feb 13 20:08:51.872753 containerd[1462]: time="2025-02-13T20:08:51.872363741Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\""
Feb 13 20:08:52.523172 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Feb 13 20:08:52.532752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:08:52.555851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2943614134.mount: Deactivated successfully.
Feb 13 20:08:52.722707 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:08:52.723859 (kubelet)[1968]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:08:52.801852 kubelet[1968]: E0213 20:08:52.800636 1968 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:08:52.805300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:08:52.805896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:08:55.019233 containerd[1462]: time="2025-02-13T20:08:55.017636629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:08:55.019735 containerd[1462]: time="2025-02-13T20:08:55.019178481Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=26218328"
Feb 13 20:08:55.019908 containerd[1462]: time="2025-02-13T20:08:55.019878570Z" level=info msg="ImageCreate event name:\"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:08:55.024333 containerd[1462]: time="2025-02-13T20:08:55.024271856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:08:55.025749 containerd[1462]: time="2025-02-13T20:08:55.025704992Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"26215036\" in 3.153260128s"
Feb 13 20:08:55.025915 containerd[1462]: time="2025-02-13T20:08:55.025898703Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\""
Feb 13 20:08:55.027002 containerd[1462]: time="2025-02-13T20:08:55.026958377Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\""
Feb 13 20:08:57.263176 containerd[1462]: time="2025-02-13T20:08:57.261720228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:08:57.263176 containerd[1462]: time="2025-02-13T20:08:57.263100134Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=22528165"
Feb 13 20:08:57.264141 containerd[1462]: time="2025-02-13T20:08:57.264088055Z" level=info msg="ImageCreate event name:\"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:08:57.269234 containerd[1462]: time="2025-02-13T20:08:57.269164376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:08:57.270632 containerd[1462]: time="2025-02-13T20:08:57.270556841Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"23968941\" in 2.243423033s"
Feb 13 20:08:57.270632 containerd[1462]: time="2025-02-13T20:08:57.270611439Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\""
Feb 13 20:08:57.271548 containerd[1462]: time="2025-02-13T20:08:57.271475325Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\""
Feb 13 20:08:59.018292 containerd[1462]: time="2025-02-13T20:08:59.018212891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:08:59.020678 containerd[1462]: time="2025-02-13T20:08:59.020613769Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=17480820"
Feb 13 20:08:59.021884 containerd[1462]: time="2025-02-13T20:08:59.021795048Z" level=info msg="ImageCreate event name:\"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:08:59.025891 containerd[1462]: time="2025-02-13T20:08:59.025806310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:08:59.027358 containerd[1462]: time="2025-02-13T20:08:59.027271059Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"18921614\" in 1.755735257s"
Feb 13 20:08:59.027653 containerd[1462]: time="2025-02-13T20:08:59.027527650Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\""
Feb 13 20:08:59.028616 containerd[1462]: time="2025-02-13T20:08:59.028569814Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\""
Feb 13 20:09:00.496029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2916506142.mount: Deactivated successfully.
Feb 13 20:09:00.843343 containerd[1462]: time="2025-02-13T20:09:00.842961121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:09:00.846044 containerd[1462]: time="2025-02-13T20:09:00.845996183Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=27363408"
Feb 13 20:09:00.847419 containerd[1462]: time="2025-02-13T20:09:00.847372979Z" level=info msg="ImageCreate event name:\"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:09:00.851568 containerd[1462]: time="2025-02-13T20:09:00.851421969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:09:00.852923 containerd[1462]: time="2025-02-13T20:09:00.852084867Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"27362401\" in 1.823463575s"
Feb 13 20:09:00.852923 containerd[1462]: time="2025-02-13T20:09:00.852129426Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\""
Feb 13 20:09:00.853130 containerd[1462]: time="2025-02-13T20:09:00.852929680Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Feb 13 20:09:01.553855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1712108652.mount: Deactivated successfully.
Feb 13 20:09:02.717239 containerd[1462]: time="2025-02-13T20:09:02.716837058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:09:02.719058 containerd[1462]: time="2025-02-13T20:09:02.718994118Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714"
Feb 13 20:09:02.720447 containerd[1462]: time="2025-02-13T20:09:02.720357200Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:09:02.725225 containerd[1462]: time="2025-02-13T20:09:02.724158414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:09:02.727226 containerd[1462]: time="2025-02-13T20:09:02.726400471Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.873435593s"
Feb 13 20:09:02.727226 containerd[1462]: time="2025-02-13T20:09:02.726462229Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Feb 13 20:09:02.727863 containerd[1462]: time="2025-02-13T20:09:02.727795712Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Feb 13 20:09:03.023592 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Feb 13 20:09:03.032711 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:09:03.185982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:09:03.200582 (kubelet)[2097]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:09:03.259926 kubelet[2097]: E0213 20:09:03.259844 2097 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:09:03.262913 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:09:03.263210 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:09:03.333742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4213463970.mount: Deactivated successfully.
Feb 13 20:09:03.369339 containerd[1462]: time="2025-02-13T20:09:03.369047661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:09:03.371989 containerd[1462]: time="2025-02-13T20:09:03.371415519Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Feb 13 20:09:03.374525 containerd[1462]: time="2025-02-13T20:09:03.374380883Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:09:03.379638 containerd[1462]: time="2025-02-13T20:09:03.379503790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:09:03.381625 containerd[1462]: time="2025-02-13T20:09:03.380936313Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 653.095603ms"
Feb 13 20:09:03.381625 containerd[1462]: time="2025-02-13T20:09:03.380984552Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Feb 13 20:09:03.382395 containerd[1462]: time="2025-02-13T20:09:03.381878529Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Feb 13 20:09:04.080756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount385627679.mount: Deactivated successfully.
Feb 13 20:09:07.004370 containerd[1462]: time="2025-02-13T20:09:07.004298311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:09:07.007380 containerd[1462]: time="2025-02-13T20:09:07.007335375Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812491"
Feb 13 20:09:07.009097 containerd[1462]: time="2025-02-13T20:09:07.009044024Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:09:07.013316 containerd[1462]: time="2025-02-13T20:09:07.013269587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:09:07.015942 containerd[1462]: time="2025-02-13T20:09:07.015891939Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.633980252s"
Feb 13 20:09:07.016125 containerd[1462]: time="2025-02-13T20:09:07.016107056Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Feb 13 20:09:12.516343 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:09:12.528609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:09:12.570039 systemd[1]: Reloading requested from client PID 2190 ('systemctl') (unit session-5.scope)...
Feb 13 20:09:12.570248 systemd[1]: Reloading...
Feb 13 20:09:12.726254 zram_generator::config[2232]: No configuration found.
Feb 13 20:09:12.829370 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:09:12.902579 systemd[1]: Reloading finished in 331 ms.
Feb 13 20:09:12.965245 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Feb 13 20:09:12.965345 systemd[1]: kubelet.service: Failed with result 'signal'.
Feb 13 20:09:12.965607 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:09:12.974683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:09:13.131462 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:09:13.142792 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 20:09:13.194814 kubelet[2278]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 20:09:13.194814 kubelet[2278]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Feb 13 20:09:13.194814 kubelet[2278]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 20:09:13.194814 kubelet[2278]: I0213 20:09:13.194788 2278 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 20:09:13.888345 kubelet[2278]: I0213 20:09:13.888103 2278 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Feb 13 20:09:13.888345 kubelet[2278]: I0213 20:09:13.888151 2278 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 20:09:13.890230 kubelet[2278]: I0213 20:09:13.889171 2278 server.go:954] "Client rotation is on, will bootstrap in background"
Feb 13 20:09:13.923420 kubelet[2278]: E0213 20:09:13.923364 2278 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://168.119.253.211:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 168.119.253.211:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:09:13.924977 kubelet[2278]: I0213 20:09:13.924883 2278 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 20:09:13.935602 kubelet[2278]: E0213 20:09:13.935557 2278 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 20:09:13.935602 kubelet[2278]: I0213 20:09:13.935597 2278 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 20:09:13.939043 kubelet[2278]: I0213 20:09:13.939002 2278 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 20:09:13.940175 kubelet[2278]: I0213 20:09:13.940069 2278 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 20:09:13.940378 kubelet[2278]: I0213 20:09:13.940147 2278 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-1-8-7bfd910be1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 20:09:13.940472 kubelet[2278]: I0213 20:09:13.940448 2278 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 20:09:13.940472 kubelet[2278]: I0213 20:09:13.940457 2278 container_manager_linux.go:304] "Creating device plugin manager"
Feb 13 20:09:13.940724 kubelet[2278]: I0213 20:09:13.940694 2278 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 20:09:13.944475 kubelet[2278]: I0213 20:09:13.944314 2278 kubelet.go:446] "Attempting to sync node with API server"
Feb 13 20:09:13.944475 kubelet[2278]: I0213 20:09:13.944351 2278 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 20:09:13.944475 kubelet[2278]: I0213 20:09:13.944378 2278 kubelet.go:352] "Adding apiserver pod source"
Feb 13 20:09:13.944475 kubelet[2278]: I0213 20:09:13.944389 2278 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 20:09:13.951234 kubelet[2278]: W0213 20:09:13.950346 2278 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://168.119.253.211:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-1-8-7bfd910be1&limit=500&resourceVersion=0": dial tcp 168.119.253.211:6443: connect: connection refused
Feb 13 20:09:13.951234 kubelet[2278]: E0213 20:09:13.950407 2278 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://168.119.253.211:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-1-8-7bfd910be1&limit=500&resourceVersion=0\": dial tcp 168.119.253.211:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:09:13.951234 kubelet[2278]: W0213 20:09:13.950814 2278 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://168.119.253.211:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 168.119.253.211:6443: connect: connection refused
Feb 13 20:09:13.951234 kubelet[2278]: E0213 20:09:13.950922 2278 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://168.119.253.211:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 168.119.253.211:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:09:13.952555 kubelet[2278]: I0213 20:09:13.951718 2278 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Feb 13 20:09:13.952555 kubelet[2278]: I0213 20:09:13.952370 2278 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 20:09:13.952555 kubelet[2278]: W0213 20:09:13.952499 2278 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 20:09:13.955254 kubelet[2278]: I0213 20:09:13.955203 2278 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Feb 13 20:09:13.955254 kubelet[2278]: I0213 20:09:13.955268 2278 server.go:1287] "Started kubelet"
Feb 13 20:09:13.955781 kubelet[2278]: I0213 20:09:13.955739 2278 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 20:09:13.956980 kubelet[2278]: I0213 20:09:13.956949 2278 server.go:490] "Adding debug handlers to kubelet server"
Feb 13 20:09:13.960468 kubelet[2278]: I0213 20:09:13.960371 2278 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 20:09:13.960834 kubelet[2278]: I0213 20:09:13.960801 2278 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 20:09:13.961592 kubelet[2278]: E0213 20:09:13.961109 2278 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://168.119.253.211:6443/api/v1/namespaces/default/events\": dial tcp 168.119.253.211:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-1-8-7bfd910be1.1823dd6ff156b2f5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[]
[] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-1-8-7bfd910be1,UID:ci-4081-3-1-8-7bfd910be1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-1-8-7bfd910be1,},FirstTimestamp:2025-02-13 20:09:13.955242741 +0000 UTC m=+0.807627364,LastTimestamp:2025-02-13 20:09:13.955242741 +0000 UTC m=+0.807627364,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-1-8-7bfd910be1,}" Feb 13 20:09:13.962250 kubelet[2278]: I0213 20:09:13.960866 2278 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:09:13.962937 kubelet[2278]: I0213 20:09:13.962902 2278 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:09:13.965901 kubelet[2278]: E0213 20:09:13.965560 2278 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-1-8-7bfd910be1\" not found" Feb 13 20:09:13.965901 kubelet[2278]: I0213 20:09:13.965622 2278 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 20:09:13.966050 kubelet[2278]: I0213 20:09:13.965915 2278 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:09:13.966050 kubelet[2278]: I0213 20:09:13.965987 2278 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:09:13.966643 kubelet[2278]: W0213 20:09:13.966432 2278 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://168.119.253.211:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 168.119.253.211:6443: connect: connection refused Feb 13 20:09:13.966643 kubelet[2278]: E0213 20:09:13.966494 2278 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://168.119.253.211:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 168.119.253.211:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:09:13.968289 kubelet[2278]: I0213 20:09:13.968247 2278 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:09:13.969688 kubelet[2278]: I0213 20:09:13.968408 2278 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:09:13.970352 kubelet[2278]: E0213 20:09:13.970308 2278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.253.211:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-1-8-7bfd910be1?timeout=10s\": dial tcp 168.119.253.211:6443: connect: connection refused" interval="200ms" Feb 13 20:09:13.970646 kubelet[2278]: E0213 20:09:13.970622 2278 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:09:13.971162 kubelet[2278]: I0213 20:09:13.971110 2278 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:09:13.997656 kubelet[2278]: I0213 20:09:13.997583 2278 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:09:14.000058 kubelet[2278]: I0213 20:09:13.999851 2278 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:09:14.000058 kubelet[2278]: I0213 20:09:13.999898 2278 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 20:09:14.000058 kubelet[2278]: I0213 20:09:13.999927 2278 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 13 20:09:14.000058 kubelet[2278]: I0213 20:09:13.999941 2278 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 20:09:14.000058 kubelet[2278]: E0213 20:09:14.000003 2278 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:09:14.003122 kubelet[2278]: W0213 20:09:14.002716 2278 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://168.119.253.211:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 168.119.253.211:6443: connect: connection refused Feb 13 20:09:14.003122 kubelet[2278]: E0213 20:09:14.002920 2278 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://168.119.253.211:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 168.119.253.211:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:09:14.008673 kubelet[2278]: I0213 20:09:14.008573 2278 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 20:09:14.008673 kubelet[2278]: I0213 20:09:14.008593 2278 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 20:09:14.008673 kubelet[2278]: I0213 20:09:14.008617 2278 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:09:14.010456 kubelet[2278]: I0213 20:09:14.010423 2278 policy_none.go:49] "None policy: Start" Feb 13 20:09:14.010456 kubelet[2278]: I0213 20:09:14.010460 2278 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 20:09:14.010605 kubelet[2278]: I0213 20:09:14.010477 2278 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:09:14.020486 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Feb 13 20:09:14.033102 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 20:09:14.044800 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 20:09:14.047761 kubelet[2278]: I0213 20:09:14.047219 2278 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:09:14.047761 kubelet[2278]: I0213 20:09:14.047466 2278 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:09:14.047761 kubelet[2278]: I0213 20:09:14.047482 2278 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:09:14.048034 kubelet[2278]: I0213 20:09:14.047839 2278 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:09:14.050032 kubelet[2278]: E0213 20:09:14.049992 2278 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 20:09:14.050254 kubelet[2278]: E0213 20:09:14.050052 2278 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-1-8-7bfd910be1\" not found" Feb 13 20:09:14.116783 systemd[1]: Created slice kubepods-burstable-pod87796d8a096e41f4258ca3ce609dbaf8.slice - libcontainer container kubepods-burstable-pod87796d8a096e41f4258ca3ce609dbaf8.slice. Feb 13 20:09:14.126339 kubelet[2278]: E0213 20:09:14.125976 2278 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-1-8-7bfd910be1\" not found" node="ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:14.130946 systemd[1]: Created slice kubepods-burstable-pod0b3d382e7eca8f6aaa07b40a57ad7b9e.slice - libcontainer container kubepods-burstable-pod0b3d382e7eca8f6aaa07b40a57ad7b9e.slice. 
Feb 13 20:09:14.133860 kubelet[2278]: E0213 20:09:14.133703 2278 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-1-8-7bfd910be1\" not found" node="ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:14.140457 systemd[1]: Created slice kubepods-burstable-podf4c8274de1181451647f9755fe345d67.slice - libcontainer container kubepods-burstable-podf4c8274de1181451647f9755fe345d67.slice. Feb 13 20:09:14.144761 kubelet[2278]: E0213 20:09:14.144702 2278 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-1-8-7bfd910be1\" not found" node="ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:14.150445 kubelet[2278]: I0213 20:09:14.150386 2278 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:14.151108 kubelet[2278]: E0213 20:09:14.151078 2278 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://168.119.253.211:6443/api/v1/nodes\": dial tcp 168.119.253.211:6443: connect: connection refused" node="ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:14.167753 kubelet[2278]: I0213 20:09:14.167315 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87796d8a096e41f4258ca3ce609dbaf8-ca-certs\") pod \"kube-controller-manager-ci-4081-3-1-8-7bfd910be1\" (UID: \"87796d8a096e41f4258ca3ce609dbaf8\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:14.167753 kubelet[2278]: I0213 20:09:14.167351 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87796d8a096e41f4258ca3ce609dbaf8-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-1-8-7bfd910be1\" (UID: \"87796d8a096e41f4258ca3ce609dbaf8\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-8-7bfd910be1" Feb 13 
20:09:14.167753 kubelet[2278]: I0213 20:09:14.167373 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b3d382e7eca8f6aaa07b40a57ad7b9e-kubeconfig\") pod \"kube-scheduler-ci-4081-3-1-8-7bfd910be1\" (UID: \"0b3d382e7eca8f6aaa07b40a57ad7b9e\") " pod="kube-system/kube-scheduler-ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:14.167753 kubelet[2278]: I0213 20:09:14.167391 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/87796d8a096e41f4258ca3ce609dbaf8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-1-8-7bfd910be1\" (UID: \"87796d8a096e41f4258ca3ce609dbaf8\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:14.167753 kubelet[2278]: I0213 20:09:14.167408 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/87796d8a096e41f4258ca3ce609dbaf8-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-1-8-7bfd910be1\" (UID: \"87796d8a096e41f4258ca3ce609dbaf8\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:14.168179 kubelet[2278]: I0213 20:09:14.167473 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87796d8a096e41f4258ca3ce609dbaf8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-1-8-7bfd910be1\" (UID: \"87796d8a096e41f4258ca3ce609dbaf8\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:14.168179 kubelet[2278]: I0213 20:09:14.167492 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f4c8274de1181451647f9755fe345d67-ca-certs\") pod 
\"kube-apiserver-ci-4081-3-1-8-7bfd910be1\" (UID: \"f4c8274de1181451647f9755fe345d67\") " pod="kube-system/kube-apiserver-ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:14.168179 kubelet[2278]: I0213 20:09:14.167533 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f4c8274de1181451647f9755fe345d67-k8s-certs\") pod \"kube-apiserver-ci-4081-3-1-8-7bfd910be1\" (UID: \"f4c8274de1181451647f9755fe345d67\") " pod="kube-system/kube-apiserver-ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:14.168179 kubelet[2278]: I0213 20:09:14.167554 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f4c8274de1181451647f9755fe345d67-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-1-8-7bfd910be1\" (UID: \"f4c8274de1181451647f9755fe345d67\") " pod="kube-system/kube-apiserver-ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:14.171552 kubelet[2278]: E0213 20:09:14.171424 2278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.253.211:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-1-8-7bfd910be1?timeout=10s\": dial tcp 168.119.253.211:6443: connect: connection refused" interval="400ms" Feb 13 20:09:14.355303 kubelet[2278]: I0213 20:09:14.354808 2278 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:14.355303 kubelet[2278]: E0213 20:09:14.355267 2278 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://168.119.253.211:6443/api/v1/nodes\": dial tcp 168.119.253.211:6443: connect: connection refused" node="ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:14.428686 containerd[1462]: time="2025-02-13T20:09:14.428537554Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-1-8-7bfd910be1,Uid:87796d8a096e41f4258ca3ce609dbaf8,Namespace:kube-system,Attempt:0,}" Feb 13 20:09:14.439173 containerd[1462]: time="2025-02-13T20:09:14.439113840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-1-8-7bfd910be1,Uid:0b3d382e7eca8f6aaa07b40a57ad7b9e,Namespace:kube-system,Attempt:0,}" Feb 13 20:09:14.446950 containerd[1462]: time="2025-02-13T20:09:14.446229990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-1-8-7bfd910be1,Uid:f4c8274de1181451647f9755fe345d67,Namespace:kube-system,Attempt:0,}" Feb 13 20:09:14.572972 kubelet[2278]: E0213 20:09:14.572864 2278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.253.211:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-1-8-7bfd910be1?timeout=10s\": dial tcp 168.119.253.211:6443: connect: connection refused" interval="800ms" Feb 13 20:09:14.758503 kubelet[2278]: I0213 20:09:14.758380 2278 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:14.759409 kubelet[2278]: E0213 20:09:14.758784 2278 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://168.119.253.211:6443/api/v1/nodes\": dial tcp 168.119.253.211:6443: connect: connection refused" node="ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:14.888234 kubelet[2278]: W0213 20:09:14.887944 2278 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://168.119.253.211:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 168.119.253.211:6443: connect: connection refused Feb 13 20:09:14.888234 kubelet[2278]: E0213 20:09:14.888078 2278 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://168.119.253.211:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 168.119.253.211:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:09:14.896775 kubelet[2278]: W0213 20:09:14.896683 2278 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://168.119.253.211:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 168.119.253.211:6443: connect: connection refused Feb 13 20:09:14.896775 kubelet[2278]: E0213 20:09:14.896768 2278 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://168.119.253.211:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 168.119.253.211:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:09:14.991328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3263175027.mount: Deactivated successfully. 
Feb 13 20:09:15.004764 containerd[1462]: time="2025-02-13T20:09:15.004692600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:09:15.006316 kubelet[2278]: W0213 20:09:15.006171 2278 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://168.119.253.211:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-1-8-7bfd910be1&limit=500&resourceVersion=0": dial tcp 168.119.253.211:6443: connect: connection refused Feb 13 20:09:15.006316 kubelet[2278]: E0213 20:09:15.006286 2278 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://168.119.253.211:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-1-8-7bfd910be1&limit=500&resourceVersion=0\": dial tcp 168.119.253.211:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:09:15.007268 containerd[1462]: time="2025-02-13T20:09:15.007050227Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:09:15.008359 containerd[1462]: time="2025-02-13T20:09:15.008292580Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:09:15.010987 containerd[1462]: time="2025-02-13T20:09:15.010420488Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:09:15.014287 containerd[1462]: time="2025-02-13T20:09:15.012228438Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:09:15.014287 containerd[1462]: time="2025-02-13T20:09:15.012659355Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:09:15.015108 containerd[1462]: time="2025-02-13T20:09:15.014962982Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Feb 13 20:09:15.019242 containerd[1462]: time="2025-02-13T20:09:15.017090810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:09:15.019242 containerd[1462]: time="2025-02-13T20:09:15.018825361Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 579.521122ms" Feb 13 20:09:15.023136 containerd[1462]: time="2025-02-13T20:09:15.023062977Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 594.426503ms" Feb 13 20:09:15.027144 containerd[1462]: time="2025-02-13T20:09:15.026936475Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 580.528806ms" Feb 13 
20:09:15.165777 containerd[1462]: time="2025-02-13T20:09:15.165654178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:09:15.165777 containerd[1462]: time="2025-02-13T20:09:15.165728697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:09:15.165777 containerd[1462]: time="2025-02-13T20:09:15.165738737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:15.166008 containerd[1462]: time="2025-02-13T20:09:15.165933896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:15.174611 containerd[1462]: time="2025-02-13T20:09:15.173961651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:09:15.174611 containerd[1462]: time="2025-02-13T20:09:15.174541968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:09:15.174611 containerd[1462]: time="2025-02-13T20:09:15.174567608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:15.175014 containerd[1462]: time="2025-02-13T20:09:15.174669847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:15.175226 containerd[1462]: time="2025-02-13T20:09:15.175136524Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:09:15.175334 containerd[1462]: time="2025-02-13T20:09:15.175206604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:09:15.176292 containerd[1462]: time="2025-02-13T20:09:15.175324283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:15.177040 containerd[1462]: time="2025-02-13T20:09:15.176994314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:15.196545 systemd[1]: Started cri-containerd-7ced58c4568493f856766d812c69096af37615fe3a43db9be779e13f820ac607.scope - libcontainer container 7ced58c4568493f856766d812c69096af37615fe3a43db9be779e13f820ac607. Feb 13 20:09:15.202548 systemd[1]: Started cri-containerd-0d2ca74d510653f25e71c9d8957f7142705375a408d231365a761ec1cf3101d0.scope - libcontainer container 0d2ca74d510653f25e71c9d8957f7142705375a408d231365a761ec1cf3101d0. Feb 13 20:09:15.220785 systemd[1]: Started cri-containerd-0de6fa3dd5a55f9aa3a34d452579897ea164ccff4ca365228d6f0ba8fcd2b68e.scope - libcontainer container 0de6fa3dd5a55f9aa3a34d452579897ea164ccff4ca365228d6f0ba8fcd2b68e. 
Feb 13 20:09:15.270054 containerd[1462]: time="2025-02-13T20:09:15.269171157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-1-8-7bfd910be1,Uid:87796d8a096e41f4258ca3ce609dbaf8,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d2ca74d510653f25e71c9d8957f7142705375a408d231365a761ec1cf3101d0\"" Feb 13 20:09:15.279028 containerd[1462]: time="2025-02-13T20:09:15.278875143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-1-8-7bfd910be1,Uid:0b3d382e7eca8f6aaa07b40a57ad7b9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ced58c4568493f856766d812c69096af37615fe3a43db9be779e13f820ac607\"" Feb 13 20:09:15.279450 containerd[1462]: time="2025-02-13T20:09:15.279402980Z" level=info msg="CreateContainer within sandbox \"0d2ca74d510653f25e71c9d8957f7142705375a408d231365a761ec1cf3101d0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:09:15.285944 containerd[1462]: time="2025-02-13T20:09:15.285808704Z" level=info msg="CreateContainer within sandbox \"7ced58c4568493f856766d812c69096af37615fe3a43db9be779e13f820ac607\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:09:15.293570 containerd[1462]: time="2025-02-13T20:09:15.293430301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-1-8-7bfd910be1,Uid:f4c8274de1181451647f9755fe345d67,Namespace:kube-system,Attempt:0,} returns sandbox id \"0de6fa3dd5a55f9aa3a34d452579897ea164ccff4ca365228d6f0ba8fcd2b68e\"" Feb 13 20:09:15.298729 containerd[1462]: time="2025-02-13T20:09:15.298681392Z" level=info msg="CreateContainer within sandbox \"0de6fa3dd5a55f9aa3a34d452579897ea164ccff4ca365228d6f0ba8fcd2b68e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:09:15.309294 containerd[1462]: time="2025-02-13T20:09:15.309039094Z" level=info msg="CreateContainer within sandbox 
\"0d2ca74d510653f25e71c9d8957f7142705375a408d231365a761ec1cf3101d0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f8455e74408cf66ceb5e0b3077b024a7207eeb91fcaa1010539a35898553c650\"" Feb 13 20:09:15.310347 containerd[1462]: time="2025-02-13T20:09:15.310231327Z" level=info msg="StartContainer for \"f8455e74408cf66ceb5e0b3077b024a7207eeb91fcaa1010539a35898553c650\"" Feb 13 20:09:15.320030 containerd[1462]: time="2025-02-13T20:09:15.319981512Z" level=info msg="CreateContainer within sandbox \"7ced58c4568493f856766d812c69096af37615fe3a43db9be779e13f820ac607\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a1920917c7e43b265afdd7c9b66bc026797cc9a0c77672cab2c659a3babaea8b\"" Feb 13 20:09:15.321426 containerd[1462]: time="2025-02-13T20:09:15.321334785Z" level=info msg="StartContainer for \"a1920917c7e43b265afdd7c9b66bc026797cc9a0c77672cab2c659a3babaea8b\"" Feb 13 20:09:15.329405 containerd[1462]: time="2025-02-13T20:09:15.329358500Z" level=info msg="CreateContainer within sandbox \"0de6fa3dd5a55f9aa3a34d452579897ea164ccff4ca365228d6f0ba8fcd2b68e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"563d6c939526da531a0d49d3891533edf2e625d26cd118eeb140b4575cffb41c\"" Feb 13 20:09:15.331221 containerd[1462]: time="2025-02-13T20:09:15.330244535Z" level=info msg="StartContainer for \"563d6c939526da531a0d49d3891533edf2e625d26cd118eeb140b4575cffb41c\"" Feb 13 20:09:15.353680 systemd[1]: Started cri-containerd-f8455e74408cf66ceb5e0b3077b024a7207eeb91fcaa1010539a35898553c650.scope - libcontainer container f8455e74408cf66ceb5e0b3077b024a7207eeb91fcaa1010539a35898553c650. Feb 13 20:09:15.369578 systemd[1]: Started cri-containerd-a1920917c7e43b265afdd7c9b66bc026797cc9a0c77672cab2c659a3babaea8b.scope - libcontainer container a1920917c7e43b265afdd7c9b66bc026797cc9a0c77672cab2c659a3babaea8b. 
Feb 13 20:09:15.374952 kubelet[2278]: E0213 20:09:15.374063 2278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.253.211:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-1-8-7bfd910be1?timeout=10s\": dial tcp 168.119.253.211:6443: connect: connection refused" interval="1.6s" Feb 13 20:09:15.383960 systemd[1]: Started cri-containerd-563d6c939526da531a0d49d3891533edf2e625d26cd118eeb140b4575cffb41c.scope - libcontainer container 563d6c939526da531a0d49d3891533edf2e625d26cd118eeb140b4575cffb41c. Feb 13 20:09:15.453576 containerd[1462]: time="2025-02-13T20:09:15.452013932Z" level=info msg="StartContainer for \"a1920917c7e43b265afdd7c9b66bc026797cc9a0c77672cab2c659a3babaea8b\" returns successfully" Feb 13 20:09:15.461564 containerd[1462]: time="2025-02-13T20:09:15.461302760Z" level=info msg="StartContainer for \"f8455e74408cf66ceb5e0b3077b024a7207eeb91fcaa1010539a35898553c650\" returns successfully" Feb 13 20:09:15.475187 containerd[1462]: time="2025-02-13T20:09:15.475027203Z" level=info msg="StartContainer for \"563d6c939526da531a0d49d3891533edf2e625d26cd118eeb140b4575cffb41c\" returns successfully" Feb 13 20:09:15.478282 kubelet[2278]: W0213 20:09:15.478129 2278 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://168.119.253.211:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 168.119.253.211:6443: connect: connection refused Feb 13 20:09:15.478282 kubelet[2278]: E0213 20:09:15.478236 2278 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://168.119.253.211:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 168.119.253.211:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:09:15.562886 kubelet[2278]: I0213 20:09:15.562444 2278 kubelet_node_status.go:76] 
"Attempting to register node" node="ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:15.562886 kubelet[2278]: E0213 20:09:15.562783 2278 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://168.119.253.211:6443/api/v1/nodes\": dial tcp 168.119.253.211:6443: connect: connection refused" node="ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:16.011951 kubelet[2278]: E0213 20:09:16.011416 2278 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-1-8-7bfd910be1\" not found" node="ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:16.011951 kubelet[2278]: E0213 20:09:16.011482 2278 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-1-8-7bfd910be1\" not found" node="ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:16.016903 kubelet[2278]: E0213 20:09:16.016871 2278 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-1-8-7bfd910be1\" not found" node="ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:17.019858 kubelet[2278]: E0213 20:09:17.019827 2278 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-1-8-7bfd910be1\" not found" node="ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:17.020650 kubelet[2278]: E0213 20:09:17.019967 2278 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-1-8-7bfd910be1\" not found" node="ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:17.167560 kubelet[2278]: I0213 20:09:17.167525 2278 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:17.191058 systemd[1]: Started sshd@5-168.119.253.211:22-193.32.162.135:49354.service - OpenSSH per-connection server daemon (193.32.162.135:49354). 
Feb 13 20:09:17.286687 sshd[2549]: Connection closed by 193.32.162.135 port 49354 Feb 13 20:09:17.288379 systemd[1]: sshd@5-168.119.253.211:22-193.32.162.135:49354.service: Deactivated successfully. Feb 13 20:09:18.165975 kubelet[2278]: E0213 20:09:18.165664 2278 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-1-8-7bfd910be1\" not found" node="ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:18.349990 kubelet[2278]: I0213 20:09:18.347132 2278 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:18.369665 kubelet[2278]: I0213 20:09:18.369618 2278 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:18.392034 kubelet[2278]: E0213 20:09:18.391726 2278 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-1-8-7bfd910be1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:18.392034 kubelet[2278]: I0213 20:09:18.391817 2278 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:18.395529 kubelet[2278]: E0213 20:09:18.395252 2278 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-1-8-7bfd910be1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:18.395529 kubelet[2278]: I0213 20:09:18.395454 2278 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:18.397990 kubelet[2278]: E0213 20:09:18.397954 2278 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-1-8-7bfd910be1\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:18.953151 kubelet[2278]: I0213 20:09:18.952817 2278 apiserver.go:52] "Watching apiserver" Feb 13 20:09:18.966568 kubelet[2278]: I0213 20:09:18.966343 2278 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:09:19.295588 kubelet[2278]: I0213 20:09:19.295312 2278 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:20.598347 systemd[1]: Reloading requested from client PID 2553 ('systemctl') (unit session-5.scope)... Feb 13 20:09:20.598364 systemd[1]: Reloading... Feb 13 20:09:20.738400 zram_generator::config[2596]: No configuration found. Feb 13 20:09:20.845766 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:09:20.932484 systemd[1]: Reloading finished in 333 ms. Feb 13 20:09:20.988049 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:09:21.006932 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:09:21.007244 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:09:21.007354 systemd[1]: kubelet.service: Consumed 1.321s CPU time, 127.2M memory peak, 0B memory swap peak. Feb 13 20:09:21.022989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:09:21.190928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:09:21.206584 (kubelet)[2638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:09:21.266176 kubelet[2638]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:09:21.268225 kubelet[2638]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 20:09:21.268225 kubelet[2638]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:09:21.268225 kubelet[2638]: I0213 20:09:21.266625 2638 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:09:21.281380 kubelet[2638]: I0213 20:09:21.281333 2638 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 20:09:21.281547 kubelet[2638]: I0213 20:09:21.281537 2638 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:09:21.282019 kubelet[2638]: I0213 20:09:21.281997 2638 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 20:09:21.288457 kubelet[2638]: I0213 20:09:21.288422 2638 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 20:09:21.292465 kubelet[2638]: I0213 20:09:21.292419 2638 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:09:21.296157 kubelet[2638]: E0213 20:09:21.296110 2638 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:09:21.296157 kubelet[2638]: I0213 20:09:21.296152 2638 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Feb 13 20:09:21.299741 kubelet[2638]: I0213 20:09:21.299691 2638 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 20:09:21.299960 kubelet[2638]: I0213 20:09:21.299931 2638 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:09:21.300132 kubelet[2638]: I0213 20:09:21.299961 2638 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-1-8-7bfd910be1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"Topo
logyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:09:21.300359 kubelet[2638]: I0213 20:09:21.300142 2638 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:09:21.300359 kubelet[2638]: I0213 20:09:21.300151 2638 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 20:09:21.300359 kubelet[2638]: I0213 20:09:21.300254 2638 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:09:21.300448 kubelet[2638]: I0213 20:09:21.300422 2638 kubelet.go:446] "Attempting to sync node with API server" Feb 13 20:09:21.300448 kubelet[2638]: I0213 20:09:21.300438 2638 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:09:21.300503 kubelet[2638]: I0213 20:09:21.300461 2638 kubelet.go:352] "Adding apiserver pod source" Feb 13 20:09:21.301759 kubelet[2638]: I0213 20:09:21.301074 2638 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:09:21.305212 kubelet[2638]: I0213 20:09:21.302883 2638 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:09:21.305212 kubelet[2638]: I0213 20:09:21.304077 2638 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:09:21.305369 kubelet[2638]: I0213 20:09:21.305347 2638 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 20:09:21.305402 kubelet[2638]: I0213 20:09:21.305392 2638 server.go:1287] "Started kubelet" Feb 13 20:09:21.309037 kubelet[2638]: I0213 20:09:21.308985 2638 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:09:21.310614 kubelet[2638]: I0213 20:09:21.310552 2638 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:09:21.314095 kubelet[2638]: I0213 20:09:21.314017 2638 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:09:21.315180 
kubelet[2638]: I0213 20:09:21.315145 2638 server.go:490] "Adding debug handlers to kubelet server" Feb 13 20:09:21.316008 kubelet[2638]: I0213 20:09:21.315968 2638 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:09:21.319676 kubelet[2638]: I0213 20:09:21.319635 2638 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:09:21.330504 kubelet[2638]: I0213 20:09:21.328748 2638 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 20:09:21.330504 kubelet[2638]: E0213 20:09:21.328893 2638 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-1-8-7bfd910be1\" not found" Feb 13 20:09:21.331684 kubelet[2638]: I0213 20:09:21.331390 2638 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:09:21.331684 kubelet[2638]: I0213 20:09:21.331546 2638 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:09:21.331792 kubelet[2638]: I0213 20:09:21.331702 2638 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:09:21.332058 kubelet[2638]: I0213 20:09:21.332032 2638 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:09:21.341353 kubelet[2638]: I0213 20:09:21.341305 2638 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:09:21.344150 kubelet[2638]: I0213 20:09:21.342376 2638 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 20:09:21.344150 kubelet[2638]: I0213 20:09:21.342405 2638 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 20:09:21.344150 kubelet[2638]: I0213 20:09:21.342425 2638 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 13 20:09:21.344150 kubelet[2638]: I0213 20:09:21.342432 2638 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 20:09:21.344150 kubelet[2638]: E0213 20:09:21.342482 2638 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:09:21.376172 kubelet[2638]: I0213 20:09:21.375973 2638 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:09:21.432119 kubelet[2638]: I0213 20:09:21.432089 2638 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 20:09:21.432119 kubelet[2638]: I0213 20:09:21.432118 2638 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 20:09:21.432285 kubelet[2638]: I0213 20:09:21.432143 2638 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:09:21.432358 kubelet[2638]: I0213 20:09:21.432342 2638 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:09:21.432423 kubelet[2638]: I0213 20:09:21.432358 2638 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:09:21.432423 kubelet[2638]: I0213 20:09:21.432379 2638 policy_none.go:49] "None policy: Start" Feb 13 20:09:21.432423 kubelet[2638]: I0213 20:09:21.432387 2638 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 20:09:21.432423 kubelet[2638]: I0213 20:09:21.432396 2638 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:09:21.432518 kubelet[2638]: I0213 20:09:21.432492 2638 state_mem.go:75] "Updated machine memory state" Feb 13 20:09:21.436620 kubelet[2638]: I0213 20:09:21.436593 2638 manager.go:519] "Failed to 
read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:09:21.438053 kubelet[2638]: I0213 20:09:21.437287 2638 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:09:21.438053 kubelet[2638]: I0213 20:09:21.437307 2638 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:09:21.438053 kubelet[2638]: I0213 20:09:21.437517 2638 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:09:21.439427 kubelet[2638]: E0213 20:09:21.439405 2638 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 20:09:21.443687 kubelet[2638]: I0213 20:09:21.443422 2638 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:21.444351 kubelet[2638]: I0213 20:09:21.444332 2638 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:21.449794 kubelet[2638]: I0213 20:09:21.449473 2638 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:21.459431 kubelet[2638]: E0213 20:09:21.459339 2638 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-1-8-7bfd910be1\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:21.549594 kubelet[2638]: I0213 20:09:21.549150 2638 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:21.560418 kubelet[2638]: I0213 20:09:21.560291 2638 kubelet_node_status.go:125] "Node was previously registered" node="ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:21.560418 kubelet[2638]: I0213 20:09:21.560418 2638 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081-3-1-8-7bfd910be1" 
Feb 13 20:09:21.632492 kubelet[2638]: I0213 20:09:21.632417 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/87796d8a096e41f4258ca3ce609dbaf8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-1-8-7bfd910be1\" (UID: \"87796d8a096e41f4258ca3ce609dbaf8\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:21.632492 kubelet[2638]: I0213 20:09:21.632484 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87796d8a096e41f4258ca3ce609dbaf8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-1-8-7bfd910be1\" (UID: \"87796d8a096e41f4258ca3ce609dbaf8\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:21.632492 kubelet[2638]: I0213 20:09:21.632507 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f4c8274de1181451647f9755fe345d67-ca-certs\") pod \"kube-apiserver-ci-4081-3-1-8-7bfd910be1\" (UID: \"f4c8274de1181451647f9755fe345d67\") " pod="kube-system/kube-apiserver-ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:21.632492 kubelet[2638]: I0213 20:09:21.632524 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f4c8274de1181451647f9755fe345d67-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-1-8-7bfd910be1\" (UID: \"f4c8274de1181451647f9755fe345d67\") " pod="kube-system/kube-apiserver-ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:21.632492 kubelet[2638]: I0213 20:09:21.632540 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/87796d8a096e41f4258ca3ce609dbaf8-ca-certs\") pod \"kube-controller-manager-ci-4081-3-1-8-7bfd910be1\" (UID: \"87796d8a096e41f4258ca3ce609dbaf8\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:21.633022 kubelet[2638]: I0213 20:09:21.632556 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b3d382e7eca8f6aaa07b40a57ad7b9e-kubeconfig\") pod \"kube-scheduler-ci-4081-3-1-8-7bfd910be1\" (UID: \"0b3d382e7eca8f6aaa07b40a57ad7b9e\") " pod="kube-system/kube-scheduler-ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:21.633022 kubelet[2638]: I0213 20:09:21.632572 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f4c8274de1181451647f9755fe345d67-k8s-certs\") pod \"kube-apiserver-ci-4081-3-1-8-7bfd910be1\" (UID: \"f4c8274de1181451647f9755fe345d67\") " pod="kube-system/kube-apiserver-ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:21.633022 kubelet[2638]: I0213 20:09:21.632587 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87796d8a096e41f4258ca3ce609dbaf8-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-1-8-7bfd910be1\" (UID: \"87796d8a096e41f4258ca3ce609dbaf8\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:21.633022 kubelet[2638]: I0213 20:09:21.632609 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/87796d8a096e41f4258ca3ce609dbaf8-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-1-8-7bfd910be1\" (UID: \"87796d8a096e41f4258ca3ce609dbaf8\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-8-7bfd910be1" Feb 13 20:09:22.320785 kubelet[2638]: I0213 20:09:22.320726 2638 
apiserver.go:52] "Watching apiserver" Feb 13 20:09:22.332545 kubelet[2638]: I0213 20:09:22.332471 2638 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:09:22.465233 kubelet[2638]: I0213 20:09:22.465131 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-1-8-7bfd910be1" podStartSLOduration=1.465104754 podStartE2EDuration="1.465104754s" podCreationTimestamp="2025-02-13 20:09:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:09:22.447857581 +0000 UTC m=+1.234821044" watchObservedRunningTime="2025-02-13 20:09:22.465104754 +0000 UTC m=+1.252068257" Feb 13 20:09:22.465493 kubelet[2638]: I0213 20:09:22.465351 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-1-8-7bfd910be1" podStartSLOduration=3.465341795 podStartE2EDuration="3.465341795s" podCreationTimestamp="2025-02-13 20:09:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:09:22.464946074 +0000 UTC m=+1.251909577" watchObservedRunningTime="2025-02-13 20:09:22.465341795 +0000 UTC m=+1.252305338" Feb 13 20:09:22.498037 kubelet[2638]: I0213 20:09:22.497911 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-1-8-7bfd910be1" podStartSLOduration=1.497845295 podStartE2EDuration="1.497845295s" podCreationTimestamp="2025-02-13 20:09:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:09:22.485665218 +0000 UTC m=+1.272628681" watchObservedRunningTime="2025-02-13 20:09:22.497845295 +0000 UTC m=+1.284808758" Feb 13 20:09:22.702788 sudo[1799]: pam_unix(sudo:session): session 
closed for user root Feb 13 20:09:22.863656 sshd[1796]: pam_unix(sshd:session): session closed for user core Feb 13 20:09:22.870019 systemd[1]: sshd@4-168.119.253.211:22-147.75.109.163:34436.service: Deactivated successfully. Feb 13 20:09:22.875444 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:09:22.875655 systemd[1]: session-5.scope: Consumed 6.816s CPU time, 151.1M memory peak, 0B memory swap peak. Feb 13 20:09:22.877020 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:09:22.879900 systemd-logind[1444]: Removed session 5. Feb 13 20:09:25.680468 kubelet[2638]: I0213 20:09:25.680269 2638 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:09:25.681960 containerd[1462]: time="2025-02-13T20:09:25.681734602Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 20:09:25.683414 kubelet[2638]: I0213 20:09:25.682962 2638 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:09:26.617669 systemd[1]: Created slice kubepods-besteffort-pod9beb0310_3792_41f0_ac48_4560ae5bc28a.slice - libcontainer container kubepods-besteffort-pod9beb0310_3792_41f0_ac48_4560ae5bc28a.slice. Feb 13 20:09:26.639764 systemd[1]: Created slice kubepods-burstable-pod77017bab_26f3_4502_8a1d_a60a8677c93b.slice - libcontainer container kubepods-burstable-pod77017bab_26f3_4502_8a1d_a60a8677c93b.slice. 
Feb 13 20:09:26.663169 kubelet[2638]: I0213 20:09:26.663118 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/77017bab-26f3-4502-8a1d-a60a8677c93b-run\") pod \"kube-flannel-ds-r9dgt\" (UID: \"77017bab-26f3-4502-8a1d-a60a8677c93b\") " pod="kube-flannel/kube-flannel-ds-r9dgt" Feb 13 20:09:26.663351 kubelet[2638]: I0213 20:09:26.663170 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/77017bab-26f3-4502-8a1d-a60a8677c93b-cni-plugin\") pod \"kube-flannel-ds-r9dgt\" (UID: \"77017bab-26f3-4502-8a1d-a60a8677c93b\") " pod="kube-flannel/kube-flannel-ds-r9dgt" Feb 13 20:09:26.663351 kubelet[2638]: I0213 20:09:26.663243 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9beb0310-3792-41f0-ac48-4560ae5bc28a-kube-proxy\") pod \"kube-proxy-s6sz8\" (UID: \"9beb0310-3792-41f0-ac48-4560ae5bc28a\") " pod="kube-system/kube-proxy-s6sz8" Feb 13 20:09:26.663351 kubelet[2638]: I0213 20:09:26.663263 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9beb0310-3792-41f0-ac48-4560ae5bc28a-lib-modules\") pod \"kube-proxy-s6sz8\" (UID: \"9beb0310-3792-41f0-ac48-4560ae5bc28a\") " pod="kube-system/kube-proxy-s6sz8" Feb 13 20:09:26.663351 kubelet[2638]: I0213 20:09:26.663282 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj25v\" (UniqueName: \"kubernetes.io/projected/9beb0310-3792-41f0-ac48-4560ae5bc28a-kube-api-access-rj25v\") pod \"kube-proxy-s6sz8\" (UID: \"9beb0310-3792-41f0-ac48-4560ae5bc28a\") " pod="kube-system/kube-proxy-s6sz8" Feb 13 20:09:26.663351 kubelet[2638]: I0213 20:09:26.663304 2638 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77017bab-26f3-4502-8a1d-a60a8677c93b-xtables-lock\") pod \"kube-flannel-ds-r9dgt\" (UID: \"77017bab-26f3-4502-8a1d-a60a8677c93b\") " pod="kube-flannel/kube-flannel-ds-r9dgt" Feb 13 20:09:26.663554 kubelet[2638]: I0213 20:09:26.663322 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/77017bab-26f3-4502-8a1d-a60a8677c93b-cni\") pod \"kube-flannel-ds-r9dgt\" (UID: \"77017bab-26f3-4502-8a1d-a60a8677c93b\") " pod="kube-flannel/kube-flannel-ds-r9dgt" Feb 13 20:09:26.663554 kubelet[2638]: I0213 20:09:26.663344 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9beb0310-3792-41f0-ac48-4560ae5bc28a-xtables-lock\") pod \"kube-proxy-s6sz8\" (UID: \"9beb0310-3792-41f0-ac48-4560ae5bc28a\") " pod="kube-system/kube-proxy-s6sz8" Feb 13 20:09:26.663554 kubelet[2638]: I0213 20:09:26.663361 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/77017bab-26f3-4502-8a1d-a60a8677c93b-flannel-cfg\") pod \"kube-flannel-ds-r9dgt\" (UID: \"77017bab-26f3-4502-8a1d-a60a8677c93b\") " pod="kube-flannel/kube-flannel-ds-r9dgt" Feb 13 20:09:26.663554 kubelet[2638]: I0213 20:09:26.663379 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2p2m\" (UniqueName: \"kubernetes.io/projected/77017bab-26f3-4502-8a1d-a60a8677c93b-kube-api-access-b2p2m\") pod \"kube-flannel-ds-r9dgt\" (UID: \"77017bab-26f3-4502-8a1d-a60a8677c93b\") " pod="kube-flannel/kube-flannel-ds-r9dgt" Feb 13 20:09:26.931743 containerd[1462]: time="2025-02-13T20:09:26.931515667Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-s6sz8,Uid:9beb0310-3792-41f0-ac48-4560ae5bc28a,Namespace:kube-system,Attempt:0,}" Feb 13 20:09:26.949046 containerd[1462]: time="2025-02-13T20:09:26.948774152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-r9dgt,Uid:77017bab-26f3-4502-8a1d-a60a8677c93b,Namespace:kube-flannel,Attempt:0,}" Feb 13 20:09:27.011787 containerd[1462]: time="2025-02-13T20:09:27.011109655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:09:27.015328 containerd[1462]: time="2025-02-13T20:09:27.013838477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:09:27.015328 containerd[1462]: time="2025-02-13T20:09:27.013902558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:09:27.015328 containerd[1462]: time="2025-02-13T20:09:27.013915558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:27.015328 containerd[1462]: time="2025-02-13T20:09:27.013999838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:27.015328 containerd[1462]: time="2025-02-13T20:09:27.013834437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:09:27.015328 containerd[1462]: time="2025-02-13T20:09:27.013877397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:27.015328 containerd[1462]: time="2025-02-13T20:09:27.014299321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:27.038433 systemd[1]: Started cri-containerd-345f5f036f317ac580ff703d08deed8d0bbe173eae13224a767ae10af4973364.scope - libcontainer container 345f5f036f317ac580ff703d08deed8d0bbe173eae13224a767ae10af4973364. Feb 13 20:09:27.047384 systemd[1]: Started cri-containerd-3a29efbb2d8b0a0a91af06117ec78a0c7180f4ea5342fd9637870ed4ecd52639.scope - libcontainer container 3a29efbb2d8b0a0a91af06117ec78a0c7180f4ea5342fd9637870ed4ecd52639. Feb 13 20:09:27.089189 containerd[1462]: time="2025-02-13T20:09:27.089113935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s6sz8,Uid:9beb0310-3792-41f0-ac48-4560ae5bc28a,Namespace:kube-system,Attempt:0,} returns sandbox id \"345f5f036f317ac580ff703d08deed8d0bbe173eae13224a767ae10af4973364\"" Feb 13 20:09:27.096488 containerd[1462]: time="2025-02-13T20:09:27.096449635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-r9dgt,Uid:77017bab-26f3-4502-8a1d-a60a8677c93b,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"3a29efbb2d8b0a0a91af06117ec78a0c7180f4ea5342fd9637870ed4ecd52639\"" Feb 13 20:09:27.103174 containerd[1462]: time="2025-02-13T20:09:27.102832208Z" level=info msg="CreateContainer within sandbox \"345f5f036f317ac580ff703d08deed8d0bbe173eae13224a767ae10af4973364\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:09:27.112486 containerd[1462]: time="2025-02-13T20:09:27.111509119Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:09:27.126550 containerd[1462]: time="2025-02-13T20:09:27.126507602Z" level=info msg="CreateContainer within sandbox \"345f5f036f317ac580ff703d08deed8d0bbe173eae13224a767ae10af4973364\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"08e6e3b72d91ac975645462445a4dc1bfb9a5a0a1da86018f3fee5d55a27a3a7\"" Feb 13 20:09:27.127820 containerd[1462]: time="2025-02-13T20:09:27.127671572Z" level=info msg="StartContainer 
for \"08e6e3b72d91ac975645462445a4dc1bfb9a5a0a1da86018f3fee5d55a27a3a7\"" Feb 13 20:09:27.162802 systemd[1]: Started cri-containerd-08e6e3b72d91ac975645462445a4dc1bfb9a5a0a1da86018f3fee5d55a27a3a7.scope - libcontainer container 08e6e3b72d91ac975645462445a4dc1bfb9a5a0a1da86018f3fee5d55a27a3a7. Feb 13 20:09:27.197254 containerd[1462]: time="2025-02-13T20:09:27.196559578Z" level=info msg="StartContainer for \"08e6e3b72d91ac975645462445a4dc1bfb9a5a0a1da86018f3fee5d55a27a3a7\" returns successfully" Feb 13 20:09:28.098648 kubelet[2638]: I0213 20:09:28.098167 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s6sz8" podStartSLOduration=2.098145832 podStartE2EDuration="2.098145832s" podCreationTimestamp="2025-02-13 20:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:09:27.441107346 +0000 UTC m=+6.228070809" watchObservedRunningTime="2025-02-13 20:09:28.098145832 +0000 UTC m=+6.885109295" Feb 13 20:09:29.904804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1913979797.mount: Deactivated successfully. 
Feb 13 20:09:29.938327 containerd[1462]: time="2025-02-13T20:09:29.938241020Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:29.940506 containerd[1462]: time="2025-02-13T20:09:29.940290760Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673531" Feb 13 20:09:29.942417 containerd[1462]: time="2025-02-13T20:09:29.941998458Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:29.945662 containerd[1462]: time="2025-02-13T20:09:29.945603054Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:29.946998 containerd[1462]: time="2025-02-13T20:09:29.946944147Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 2.835383908s" Feb 13 20:09:29.946998 containerd[1462]: time="2025-02-13T20:09:29.946994268Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Feb 13 20:09:29.955212 containerd[1462]: time="2025-02-13T20:09:29.955137750Z" level=info msg="CreateContainer within sandbox \"3a29efbb2d8b0a0a91af06117ec78a0c7180f4ea5342fd9637870ed4ecd52639\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 13 20:09:29.972939 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1499564904.mount: Deactivated successfully. Feb 13 20:09:29.978170 containerd[1462]: time="2025-02-13T20:09:29.978016179Z" level=info msg="CreateContainer within sandbox \"3a29efbb2d8b0a0a91af06117ec78a0c7180f4ea5342fd9637870ed4ecd52639\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"970d9485af9c27ddb7ce5029d2eee1f21e47b872d9b5025ad5fd5dbcd68a1af8\"" Feb 13 20:09:29.978916 containerd[1462]: time="2025-02-13T20:09:29.978880228Z" level=info msg="StartContainer for \"970d9485af9c27ddb7ce5029d2eee1f21e47b872d9b5025ad5fd5dbcd68a1af8\"" Feb 13 20:09:30.016486 systemd[1]: Started cri-containerd-970d9485af9c27ddb7ce5029d2eee1f21e47b872d9b5025ad5fd5dbcd68a1af8.scope - libcontainer container 970d9485af9c27ddb7ce5029d2eee1f21e47b872d9b5025ad5fd5dbcd68a1af8. Feb 13 20:09:30.051764 systemd[1]: cri-containerd-970d9485af9c27ddb7ce5029d2eee1f21e47b872d9b5025ad5fd5dbcd68a1af8.scope: Deactivated successfully. Feb 13 20:09:30.056050 containerd[1462]: time="2025-02-13T20:09:30.055591446Z" level=info msg="StartContainer for \"970d9485af9c27ddb7ce5029d2eee1f21e47b872d9b5025ad5fd5dbcd68a1af8\" returns successfully" Feb 13 20:09:30.140433 containerd[1462]: time="2025-02-13T20:09:30.140363052Z" level=info msg="shim disconnected" id=970d9485af9c27ddb7ce5029d2eee1f21e47b872d9b5025ad5fd5dbcd68a1af8 namespace=k8s.io Feb 13 20:09:30.140433 containerd[1462]: time="2025-02-13T20:09:30.140429413Z" level=warning msg="cleaning up after shim disconnected" id=970d9485af9c27ddb7ce5029d2eee1f21e47b872d9b5025ad5fd5dbcd68a1af8 namespace=k8s.io Feb 13 20:09:30.140433 containerd[1462]: time="2025-02-13T20:09:30.140440853Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:09:30.441543 containerd[1462]: time="2025-02-13T20:09:30.441479380Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 20:09:33.377893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount262410975.mount: 
Deactivated successfully. Feb 13 20:09:34.096360 containerd[1462]: time="2025-02-13T20:09:34.096286410Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:34.099849 containerd[1462]: time="2025-02-13T20:09:34.099546736Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Feb 13 20:09:34.103494 containerd[1462]: time="2025-02-13T20:09:34.103442151Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:34.108066 containerd[1462]: time="2025-02-13T20:09:34.107946215Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:09:34.109513 containerd[1462]: time="2025-02-13T20:09:34.109356915Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 3.667821695s" Feb 13 20:09:34.109513 containerd[1462]: time="2025-02-13T20:09:34.109402235Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Feb 13 20:09:34.113657 containerd[1462]: time="2025-02-13T20:09:34.113606175Z" level=info msg="CreateContainer within sandbox \"3a29efbb2d8b0a0a91af06117ec78a0c7180f4ea5342fd9637870ed4ecd52639\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 20:09:34.137287 containerd[1462]: time="2025-02-13T20:09:34.137019186Z" level=info 
msg="CreateContainer within sandbox \"3a29efbb2d8b0a0a91af06117ec78a0c7180f4ea5342fd9637870ed4ecd52639\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ad8179c2f1e28e52fe84817e618eb9111edb5ba51138b8ecf7bea1cc4e799571\"" Feb 13 20:09:34.138123 containerd[1462]: time="2025-02-13T20:09:34.137971480Z" level=info msg="StartContainer for \"ad8179c2f1e28e52fe84817e618eb9111edb5ba51138b8ecf7bea1cc4e799571\"" Feb 13 20:09:34.172510 systemd[1]: Started cri-containerd-ad8179c2f1e28e52fe84817e618eb9111edb5ba51138b8ecf7bea1cc4e799571.scope - libcontainer container ad8179c2f1e28e52fe84817e618eb9111edb5ba51138b8ecf7bea1cc4e799571. Feb 13 20:09:34.211229 systemd[1]: cri-containerd-ad8179c2f1e28e52fe84817e618eb9111edb5ba51138b8ecf7bea1cc4e799571.scope: Deactivated successfully. Feb 13 20:09:34.217590 containerd[1462]: time="2025-02-13T20:09:34.217533726Z" level=info msg="StartContainer for \"ad8179c2f1e28e52fe84817e618eb9111edb5ba51138b8ecf7bea1cc4e799571\" returns successfully" Feb 13 20:09:34.262868 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad8179c2f1e28e52fe84817e618eb9111edb5ba51138b8ecf7bea1cc4e799571-rootfs.mount: Deactivated successfully. 
Feb 13 20:09:34.306092 containerd[1462]: time="2025-02-13T20:09:34.305977577Z" level=info msg="shim disconnected" id=ad8179c2f1e28e52fe84817e618eb9111edb5ba51138b8ecf7bea1cc4e799571 namespace=k8s.io Feb 13 20:09:34.306092 containerd[1462]: time="2025-02-13T20:09:34.306069778Z" level=warning msg="cleaning up after shim disconnected" id=ad8179c2f1e28e52fe84817e618eb9111edb5ba51138b8ecf7bea1cc4e799571 namespace=k8s.io Feb 13 20:09:34.306092 containerd[1462]: time="2025-02-13T20:09:34.306088379Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:09:34.312234 kubelet[2638]: I0213 20:09:34.311474 2638 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 20:09:34.371868 systemd[1]: Created slice kubepods-burstable-pod9002da04_0d00_4938_98b4_c9a4f08b529d.slice - libcontainer container kubepods-burstable-pod9002da04_0d00_4938_98b4_c9a4f08b529d.slice. Feb 13 20:09:34.382456 systemd[1]: Created slice kubepods-burstable-podbbfb6aba_2a0b_4495_b832_d36e9a3bc363.slice - libcontainer container kubepods-burstable-podbbfb6aba_2a0b_4495_b832_d36e9a3bc363.slice. 
Feb 13 20:09:34.414414 kubelet[2638]: I0213 20:09:34.414325 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbfb6aba-2a0b-4495-b832-d36e9a3bc363-config-volume\") pod \"coredns-668d6bf9bc-dmhq2\" (UID: \"bbfb6aba-2a0b-4495-b832-d36e9a3bc363\") " pod="kube-system/coredns-668d6bf9bc-dmhq2" Feb 13 20:09:34.414636 kubelet[2638]: I0213 20:09:34.414418 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jqxn\" (UniqueName: \"kubernetes.io/projected/bbfb6aba-2a0b-4495-b832-d36e9a3bc363-kube-api-access-8jqxn\") pod \"coredns-668d6bf9bc-dmhq2\" (UID: \"bbfb6aba-2a0b-4495-b832-d36e9a3bc363\") " pod="kube-system/coredns-668d6bf9bc-dmhq2" Feb 13 20:09:34.414636 kubelet[2638]: I0213 20:09:34.414479 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9002da04-0d00-4938-98b4-c9a4f08b529d-config-volume\") pod \"coredns-668d6bf9bc-7q9pr\" (UID: \"9002da04-0d00-4938-98b4-c9a4f08b529d\") " pod="kube-system/coredns-668d6bf9bc-7q9pr" Feb 13 20:09:34.414891 kubelet[2638]: I0213 20:09:34.414760 2638 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shvr5\" (UniqueName: \"kubernetes.io/projected/9002da04-0d00-4938-98b4-c9a4f08b529d-kube-api-access-shvr5\") pod \"coredns-668d6bf9bc-7q9pr\" (UID: \"9002da04-0d00-4938-98b4-c9a4f08b529d\") " pod="kube-system/coredns-668d6bf9bc-7q9pr" Feb 13 20:09:34.465876 containerd[1462]: time="2025-02-13T20:09:34.465815799Z" level=info msg="CreateContainer within sandbox \"3a29efbb2d8b0a0a91af06117ec78a0c7180f4ea5342fd9637870ed4ecd52639\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 13 20:09:34.507351 containerd[1462]: time="2025-02-13T20:09:34.506532855Z" level=info msg="CreateContainer within 
sandbox \"3a29efbb2d8b0a0a91af06117ec78a0c7180f4ea5342fd9637870ed4ecd52639\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"0b28f1760bbda9d372e859876e0b0ea1568e77c22c69c25e6d634be1b6014a5d\"" Feb 13 20:09:34.508167 containerd[1462]: time="2025-02-13T20:09:34.507762192Z" level=info msg="StartContainer for \"0b28f1760bbda9d372e859876e0b0ea1568e77c22c69c25e6d634be1b6014a5d\"" Feb 13 20:09:34.562491 systemd[1]: Started cri-containerd-0b28f1760bbda9d372e859876e0b0ea1568e77c22c69c25e6d634be1b6014a5d.scope - libcontainer container 0b28f1760bbda9d372e859876e0b0ea1568e77c22c69c25e6d634be1b6014a5d. Feb 13 20:09:34.593649 containerd[1462]: time="2025-02-13T20:09:34.593541886Z" level=info msg="StartContainer for \"0b28f1760bbda9d372e859876e0b0ea1568e77c22c69c25e6d634be1b6014a5d\" returns successfully" Feb 13 20:09:34.679890 containerd[1462]: time="2025-02-13T20:09:34.679662185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7q9pr,Uid:9002da04-0d00-4938-98b4-c9a4f08b529d,Namespace:kube-system,Attempt:0,}" Feb 13 20:09:34.687007 containerd[1462]: time="2025-02-13T20:09:34.686955768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dmhq2,Uid:bbfb6aba-2a0b-4495-b832-d36e9a3bc363,Namespace:kube-system,Attempt:0,}" Feb 13 20:09:34.764778 containerd[1462]: time="2025-02-13T20:09:34.764593587Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7q9pr,Uid:9002da04-0d00-4938-98b4-c9a4f08b529d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"12d2e2ecce48d4d7a35f83226a8ee7e1096416d8a1830165c7773f07e0192f72\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 20:09:34.765154 kubelet[2638]: E0213 20:09:34.765078 2638 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"12d2e2ecce48d4d7a35f83226a8ee7e1096416d8a1830165c7773f07e0192f72\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 20:09:34.765476 kubelet[2638]: E0213 20:09:34.765433 2638 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12d2e2ecce48d4d7a35f83226a8ee7e1096416d8a1830165c7773f07e0192f72\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-7q9pr" Feb 13 20:09:34.765535 kubelet[2638]: E0213 20:09:34.765482 2638 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12d2e2ecce48d4d7a35f83226a8ee7e1096416d8a1830165c7773f07e0192f72\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-7q9pr" Feb 13 20:09:34.765763 kubelet[2638]: E0213 20:09:34.765611 2638 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7q9pr_kube-system(9002da04-0d00-4938-98b4-c9a4f08b529d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7q9pr_kube-system(9002da04-0d00-4938-98b4-c9a4f08b529d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12d2e2ecce48d4d7a35f83226a8ee7e1096416d8a1830165c7773f07e0192f72\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-7q9pr" podUID="9002da04-0d00-4938-98b4-c9a4f08b529d" Feb 13 20:09:34.766971 containerd[1462]: time="2025-02-13T20:09:34.766902499Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-dmhq2,Uid:bbfb6aba-2a0b-4495-b832-d36e9a3bc363,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"884b141f1a2e80f327484b6fb569895df4b12f0165542ac0ecd300248308df2b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 20:09:34.767796 kubelet[2638]: E0213 20:09:34.767208 2638 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"884b141f1a2e80f327484b6fb569895df4b12f0165542ac0ecd300248308df2b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 20:09:34.767796 kubelet[2638]: E0213 20:09:34.767270 2638 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"884b141f1a2e80f327484b6fb569895df4b12f0165542ac0ecd300248308df2b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-dmhq2" Feb 13 20:09:34.767796 kubelet[2638]: E0213 20:09:34.767294 2638 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"884b141f1a2e80f327484b6fb569895df4b12f0165542ac0ecd300248308df2b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-dmhq2" Feb 13 20:09:34.767796 kubelet[2638]: E0213 20:09:34.767343 2638 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dmhq2_kube-system(bbfb6aba-2a0b-4495-b832-d36e9a3bc363)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-dmhq2_kube-system(bbfb6aba-2a0b-4495-b832-d36e9a3bc363)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"884b141f1a2e80f327484b6fb569895df4b12f0165542ac0ecd300248308df2b\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-dmhq2" podUID="bbfb6aba-2a0b-4495-b832-d36e9a3bc363" Feb 13 20:09:35.687002 systemd-networkd[1357]: flannel.1: Link UP Feb 13 20:09:35.687008 systemd-networkd[1357]: flannel.1: Gained carrier Feb 13 20:09:37.589631 systemd-networkd[1357]: flannel.1: Gained IPv6LL Feb 13 20:09:47.348669 containerd[1462]: time="2025-02-13T20:09:47.347159650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7q9pr,Uid:9002da04-0d00-4938-98b4-c9a4f08b529d,Namespace:kube-system,Attempt:0,}" Feb 13 20:09:47.420635 systemd-networkd[1357]: cni0: Link UP Feb 13 20:09:47.420646 systemd-networkd[1357]: cni0: Gained carrier Feb 13 20:09:47.421082 systemd-networkd[1357]: cni0: Lost carrier Feb 13 20:09:47.427449 kernel: cni0: port 1(veth3580f1c8) entered blocking state Feb 13 20:09:47.427548 kernel: cni0: port 1(veth3580f1c8) entered disabled state Feb 13 20:09:47.428889 systemd-networkd[1357]: veth3580f1c8: Link UP Feb 13 20:09:47.429230 kernel: veth3580f1c8: entered allmulticast mode Feb 13 20:09:47.431585 kernel: veth3580f1c8: entered promiscuous mode Feb 13 20:09:47.433672 kernel: cni0: port 1(veth3580f1c8) entered blocking state Feb 13 20:09:47.433832 kernel: cni0: port 1(veth3580f1c8) entered forwarding state Feb 13 20:09:47.437229 kernel: cni0: port 1(veth3580f1c8) entered disabled state Feb 13 20:09:47.450335 kernel: cni0: port 1(veth3580f1c8) entered blocking state Feb 13 20:09:47.450412 kernel: cni0: port 1(veth3580f1c8) entered forwarding state Feb 13 20:09:47.450216 systemd-networkd[1357]: veth3580f1c8: Gained carrier Feb 13 20:09:47.451145 systemd-networkd[1357]: 
cni0: Gained carrier Feb 13 20:09:47.458103 containerd[1462]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000928e8), "name":"cbr0", "type":"bridge"} Feb 13 20:09:47.458103 containerd[1462]: delegateAdd: netconf sent to delegate plugin: Feb 13 20:09:47.484839 containerd[1462]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-13T20:09:47.484410420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:09:47.484839 containerd[1462]: time="2025-02-13T20:09:47.484503662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:09:47.484839 containerd[1462]: time="2025-02-13T20:09:47.484537702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:47.484839 containerd[1462]: time="2025-02-13T20:09:47.484757307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:47.509513 systemd[1]: run-containerd-runc-k8s.io-c1f4ab9328cfc29abfdbecc6c4c19901971a93353676836493ab27666923f9c8-runc.MbFwq1.mount: Deactivated successfully. 
Feb 13 20:09:47.522051 systemd[1]: Started cri-containerd-c1f4ab9328cfc29abfdbecc6c4c19901971a93353676836493ab27666923f9c8.scope - libcontainer container c1f4ab9328cfc29abfdbecc6c4c19901971a93353676836493ab27666923f9c8. Feb 13 20:09:47.561621 containerd[1462]: time="2025-02-13T20:09:47.561226526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7q9pr,Uid:9002da04-0d00-4938-98b4-c9a4f08b529d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1f4ab9328cfc29abfdbecc6c4c19901971a93353676836493ab27666923f9c8\"" Feb 13 20:09:47.565069 containerd[1462]: time="2025-02-13T20:09:47.565019251Z" level=info msg="CreateContainer within sandbox \"c1f4ab9328cfc29abfdbecc6c4c19901971a93353676836493ab27666923f9c8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:09:47.578333 containerd[1462]: time="2025-02-13T20:09:47.578264545Z" level=info msg="CreateContainer within sandbox \"c1f4ab9328cfc29abfdbecc6c4c19901971a93353676836493ab27666923f9c8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d651688d469a158c928ebc6898f6c7fa4c898d7a3e4b0275a0c99e8ff686a20a\"" Feb 13 20:09:47.579373 containerd[1462]: time="2025-02-13T20:09:47.579283968Z" level=info msg="StartContainer for \"d651688d469a158c928ebc6898f6c7fa4c898d7a3e4b0275a0c99e8ff686a20a\"" Feb 13 20:09:47.612602 systemd[1]: Started cri-containerd-d651688d469a158c928ebc6898f6c7fa4c898d7a3e4b0275a0c99e8ff686a20a.scope - libcontainer container d651688d469a158c928ebc6898f6c7fa4c898d7a3e4b0275a0c99e8ff686a20a. 
Feb 13 20:09:47.649039 containerd[1462]: time="2025-02-13T20:09:47.648993396Z" level=info msg="StartContainer for \"d651688d469a158c928ebc6898f6c7fa4c898d7a3e4b0275a0c99e8ff686a20a\" returns successfully" Feb 13 20:09:48.344956 containerd[1462]: time="2025-02-13T20:09:48.344334015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dmhq2,Uid:bbfb6aba-2a0b-4495-b832-d36e9a3bc363,Namespace:kube-system,Attempt:0,}" Feb 13 20:09:48.387871 systemd-networkd[1357]: veth0d130067: Link UP Feb 13 20:09:48.392238 kernel: cni0: port 2(veth0d130067) entered blocking state Feb 13 20:09:48.392363 kernel: cni0: port 2(veth0d130067) entered disabled state Feb 13 20:09:48.392382 kernel: veth0d130067: entered allmulticast mode Feb 13 20:09:48.396426 kernel: veth0d130067: entered promiscuous mode Feb 13 20:09:48.396547 kernel: cni0: port 2(veth0d130067) entered blocking state Feb 13 20:09:48.396564 kernel: cni0: port 2(veth0d130067) entered forwarding state Feb 13 20:09:48.408131 systemd-networkd[1357]: veth0d130067: Gained carrier Feb 13 20:09:48.412357 containerd[1462]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000948e8), "name":"cbr0", "type":"bridge"} Feb 13 20:09:48.412357 containerd[1462]: delegateAdd: netconf sent to delegate plugin: Feb 13 20:09:48.437864 containerd[1462]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-13T20:09:48.437667414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:09:48.437864 containerd[1462]: time="2025-02-13T20:09:48.437743616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:09:48.437864 containerd[1462]: time="2025-02-13T20:09:48.437763097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:48.438176 containerd[1462]: time="2025-02-13T20:09:48.438007382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:09:48.502472 systemd[1]: Started cri-containerd-e338b841afc8af650f5fb112af01f2727c43460b4890aa6ddd2f943d0abc33d8.scope - libcontainer container e338b841afc8af650f5fb112af01f2727c43460b4890aa6ddd2f943d0abc33d8. 
Feb 13 20:09:48.528709 kubelet[2638]: I0213 20:09:48.528467 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-r9dgt" podStartSLOduration=15.522324645 podStartE2EDuration="22.528449196s" podCreationTimestamp="2025-02-13 20:09:26 +0000 UTC" firstStartedPulling="2025-02-13 20:09:27.104413541 +0000 UTC m=+5.891377004" lastFinishedPulling="2025-02-13 20:09:34.110538092 +0000 UTC m=+12.897501555" observedRunningTime="2025-02-13 20:09:35.474650703 +0000 UTC m=+14.261614166" watchObservedRunningTime="2025-02-13 20:09:48.528449196 +0000 UTC m=+27.315412619" Feb 13 20:09:48.557232 kubelet[2638]: I0213 20:09:48.557148 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7q9pr" podStartSLOduration=22.557127328 podStartE2EDuration="22.557127328s" podCreationTimestamp="2025-02-13 20:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:09:48.533084742 +0000 UTC m=+27.320048245" watchObservedRunningTime="2025-02-13 20:09:48.557127328 +0000 UTC m=+27.344090791" Feb 13 20:09:48.584715 containerd[1462]: time="2025-02-13T20:09:48.584665793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dmhq2,Uid:bbfb6aba-2a0b-4495-b832-d36e9a3bc363,Namespace:kube-system,Attempt:0,} returns sandbox id \"e338b841afc8af650f5fb112af01f2727c43460b4890aa6ddd2f943d0abc33d8\"" Feb 13 20:09:48.592674 containerd[1462]: time="2025-02-13T20:09:48.592622454Z" level=info msg="CreateContainer within sandbox \"e338b841afc8af650f5fb112af01f2727c43460b4890aa6ddd2f943d0abc33d8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:09:48.615366 containerd[1462]: time="2025-02-13T20:09:48.614592233Z" level=info msg="CreateContainer within sandbox \"e338b841afc8af650f5fb112af01f2727c43460b4890aa6ddd2f943d0abc33d8\" for &ContainerMetadata{Name:coredns,Attempt:0,} 
returns container id \"0b12a3b6ff0c6efb796cca1da118bdfbb869a90eeadd5c512d3a7ff159072635\"" Feb 13 20:09:48.616640 containerd[1462]: time="2025-02-13T20:09:48.616593518Z" level=info msg="StartContainer for \"0b12a3b6ff0c6efb796cca1da118bdfbb869a90eeadd5c512d3a7ff159072635\"" Feb 13 20:09:48.645543 systemd[1]: Started cri-containerd-0b12a3b6ff0c6efb796cca1da118bdfbb869a90eeadd5c512d3a7ff159072635.scope - libcontainer container 0b12a3b6ff0c6efb796cca1da118bdfbb869a90eeadd5c512d3a7ff159072635. Feb 13 20:09:48.682817 containerd[1462]: time="2025-02-13T20:09:48.681749758Z" level=info msg="StartContainer for \"0b12a3b6ff0c6efb796cca1da118bdfbb869a90eeadd5c512d3a7ff159072635\" returns successfully" Feb 13 20:09:49.045477 systemd-networkd[1357]: cni0: Gained IPv6LL Feb 13 20:09:49.365490 systemd-networkd[1357]: veth3580f1c8: Gained IPv6LL Feb 13 20:09:49.493674 systemd-networkd[1357]: veth0d130067: Gained IPv6LL Feb 13 20:09:49.525321 kubelet[2638]: I0213 20:09:49.525241 2638 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dmhq2" podStartSLOduration=23.525221002 podStartE2EDuration="23.525221002s" podCreationTimestamp="2025-02-13 20:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:09:49.520272567 +0000 UTC m=+28.307236070" watchObservedRunningTime="2025-02-13 20:09:49.525221002 +0000 UTC m=+28.312184465" Feb 13 20:11:21.851514 update_engine[1446]: I20250213 20:11:21.850955 1446 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 20:11:21.851514 update_engine[1446]: I20250213 20:11:21.851034 1446 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 20:11:21.851514 update_engine[1446]: I20250213 20:11:21.851409 1446 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 20:11:21.852501 
update_engine[1446]: I20250213 20:11:21.852097 1446 omaha_request_params.cc:62] Current group set to lts Feb 13 20:11:21.852501 update_engine[1446]: I20250213 20:11:21.852286 1446 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 13 20:11:21.852501 update_engine[1446]: I20250213 20:11:21.852308 1446 update_attempter.cc:643] Scheduling an action processor start. Feb 13 20:11:21.852501 update_engine[1446]: I20250213 20:11:21.852335 1446 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 20:11:21.852501 update_engine[1446]: I20250213 20:11:21.852381 1446 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 20:11:21.852501 update_engine[1446]: I20250213 20:11:21.852468 1446 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 20:11:21.852501 update_engine[1446]: I20250213 20:11:21.852483 1446 omaha_request_action.cc:272] Request: Feb 13 20:11:21.852501 update_engine[1446]: Feb 13 20:11:21.852501 update_engine[1446]: Feb 13 20:11:21.852501 update_engine[1446]: Feb 13 20:11:21.852501 update_engine[1446]: Feb 13 20:11:21.852501 update_engine[1446]: Feb 13 20:11:21.852501 update_engine[1446]: Feb 13 20:11:21.852501 update_engine[1446]: Feb 13 20:11:21.852501 update_engine[1446]: Feb 13 20:11:21.852501 update_engine[1446]: I20250213 20:11:21.852494 1446 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:11:21.853555 locksmithd[1477]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 20:11:21.855780 update_engine[1446]: I20250213 20:11:21.855688 1446 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:11:21.856305 update_engine[1446]: I20250213 20:11:21.856256 1446 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 13 20:11:21.857599 update_engine[1446]: E20250213 20:11:21.857543 1446 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:11:21.857599 update_engine[1446]: I20250213 20:11:21.857631 1446 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 13 20:11:31.761011 update_engine[1446]: I20250213 20:11:31.760840 1446 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:11:31.761614 update_engine[1446]: I20250213 20:11:31.761101 1446 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:11:31.761614 update_engine[1446]: I20250213 20:11:31.761340 1446 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:11:31.762241 update_engine[1446]: E20250213 20:11:31.762133 1446 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:11:31.762241 update_engine[1446]: I20250213 20:11:31.762229 1446 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 13 20:11:41.769551 update_engine[1446]: I20250213 20:11:41.768246 1446 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:11:41.769551 update_engine[1446]: I20250213 20:11:41.768456 1446 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:11:41.769551 update_engine[1446]: I20250213 20:11:41.768679 1446 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 13 20:11:41.774320 update_engine[1446]: E20250213 20:11:41.773414 1446 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:11:41.774320 update_engine[1446]: I20250213 20:11:41.774264 1446 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 13 20:11:51.763955 update_engine[1446]: I20250213 20:11:51.763851 1446 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:11:51.766344 update_engine[1446]: I20250213 20:11:51.764228 1446 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:11:51.766344 update_engine[1446]: I20250213 20:11:51.764529 1446 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:11:51.766344 update_engine[1446]: E20250213 20:11:51.765259 1446 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:11:51.766344 update_engine[1446]: I20250213 20:11:51.765337 1446 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 20:11:51.766344 update_engine[1446]: I20250213 20:11:51.765352 1446 omaha_request_action.cc:617] Omaha request response: Feb 13 20:11:51.766344 update_engine[1446]: E20250213 20:11:51.765489 1446 omaha_request_action.cc:636] Omaha request network transfer failed. Feb 13 20:11:51.766344 update_engine[1446]: I20250213 20:11:51.765519 1446 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 13 20:11:51.766344 update_engine[1446]: I20250213 20:11:51.765530 1446 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 20:11:51.766344 update_engine[1446]: I20250213 20:11:51.765539 1446 update_attempter.cc:306] Processing Done. Feb 13 20:11:51.766344 update_engine[1446]: E20250213 20:11:51.765559 1446 update_attempter.cc:619] Update failed. 
Feb 13 20:11:51.766344 update_engine[1446]: I20250213 20:11:51.765569 1446 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Feb 13 20:11:51.766344 update_engine[1446]: I20250213 20:11:51.765578 1446 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Feb 13 20:11:51.766344 update_engine[1446]: I20250213 20:11:51.765588 1446 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Feb 13 20:11:51.766344 update_engine[1446]: I20250213 20:11:51.765689 1446 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 20:11:51.766344 update_engine[1446]: I20250213 20:11:51.765724 1446 omaha_request_action.cc:271] Posting an Omaha request to disabled
Feb 13 20:11:51.766344 update_engine[1446]: I20250213 20:11:51.765739 1446 omaha_request_action.cc:272] Request:
Feb 13 20:11:51.766344 update_engine[1446]:
Feb 13 20:11:51.766344 update_engine[1446]:
Feb 13 20:11:51.767057 locksmithd[1477]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Feb 13 20:11:51.767655 update_engine[1446]:
Feb 13 20:11:51.767655 update_engine[1446]:
Feb 13 20:11:51.767655 update_engine[1446]:
Feb 13 20:11:51.767655 update_engine[1446]:
Feb 13 20:11:51.767655 update_engine[1446]: I20250213 20:11:51.765750 1446 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 20:11:51.767655 update_engine[1446]: I20250213 20:11:51.765990 1446 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 20:11:51.767655 update_engine[1446]: I20250213 20:11:51.766232 1446 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 20:11:51.767655 update_engine[1446]: E20250213 20:11:51.766898 1446 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 20:11:51.767655 update_engine[1446]: I20250213 20:11:51.766942 1446 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 20:11:51.767655 update_engine[1446]: I20250213 20:11:51.766950 1446 omaha_request_action.cc:617] Omaha request response:
Feb 13 20:11:51.767655 update_engine[1446]: I20250213 20:11:51.766956 1446 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 20:11:51.767655 update_engine[1446]: I20250213 20:11:51.766961 1446 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 20:11:51.767655 update_engine[1446]: I20250213 20:11:51.766965 1446 update_attempter.cc:306] Processing Done.
Feb 13 20:11:51.767655 update_engine[1446]: I20250213 20:11:51.766970 1446 update_attempter.cc:310] Error event sent.
Feb 13 20:11:51.767655 update_engine[1446]: I20250213 20:11:51.766979 1446 update_check_scheduler.cc:74] Next update check in 41m27s
Feb 13 20:11:51.767997 locksmithd[1477]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Feb 13 20:14:03.986226 systemd[1]: Started sshd@6-168.119.253.211:22-147.75.109.163:51800.service - OpenSSH per-connection server daemon (147.75.109.163:51800).
Feb 13 20:14:04.978260 sshd[4651]: Accepted publickey for core from 147.75.109.163 port 51800 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034
Feb 13 20:14:04.979988 sshd[4651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:14:04.989312 systemd-logind[1444]: New session 6 of user core.
Feb 13 20:14:04.997971 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 20:14:05.756417 sshd[4651]: pam_unix(sshd:session): session closed for user core
Feb 13 20:14:05.762622 systemd[1]: sshd@6-168.119.253.211:22-147.75.109.163:51800.service: Deactivated successfully.
Feb 13 20:14:05.767470 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 20:14:05.769236 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit.
Feb 13 20:14:05.772736 systemd-logind[1444]: Removed session 6.
Feb 13 20:14:10.933895 systemd[1]: Started sshd@7-168.119.253.211:22-147.75.109.163:55906.service - OpenSSH per-connection server daemon (147.75.109.163:55906).
Feb 13 20:14:11.918642 sshd[4687]: Accepted publickey for core from 147.75.109.163 port 55906 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034
Feb 13 20:14:11.922134 sshd[4687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:14:11.931020 systemd-logind[1444]: New session 7 of user core.
Feb 13 20:14:11.939599 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 20:14:12.709985 sshd[4687]: pam_unix(sshd:session): session closed for user core
Feb 13 20:14:12.721610 systemd[1]: sshd@7-168.119.253.211:22-147.75.109.163:55906.service: Deactivated successfully.
Feb 13 20:14:12.727703 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 20:14:12.731299 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit.
Feb 13 20:14:12.732838 systemd-logind[1444]: Removed session 7.
Feb 13 20:14:17.899543 systemd[1]: Started sshd@8-168.119.253.211:22-147.75.109.163:55916.service - OpenSSH per-connection server daemon (147.75.109.163:55916).
Feb 13 20:14:18.877294 sshd[4744]: Accepted publickey for core from 147.75.109.163 port 55916 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034
Feb 13 20:14:18.879994 sshd[4744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:14:18.890161 systemd-logind[1444]: New session 8 of user core.
Feb 13 20:14:18.894485 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 20:14:19.647237 sshd[4744]: pam_unix(sshd:session): session closed for user core
Feb 13 20:14:19.656600 systemd[1]: sshd@8-168.119.253.211:22-147.75.109.163:55916.service: Deactivated successfully.
Feb 13 20:14:19.661968 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 20:14:19.668139 systemd-logind[1444]: Session 8 logged out. Waiting for processes to exit.
Feb 13 20:14:19.670851 systemd-logind[1444]: Removed session 8.
Feb 13 20:14:19.826711 systemd[1]: Started sshd@9-168.119.253.211:22-147.75.109.163:48774.service - OpenSSH per-connection server daemon (147.75.109.163:48774).
Feb 13 20:14:20.814189 sshd[4758]: Accepted publickey for core from 147.75.109.163 port 48774 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034
Feb 13 20:14:20.818311 sshd[4758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:14:20.827489 systemd-logind[1444]: New session 9 of user core.
Feb 13 20:14:20.840505 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 20:14:21.625398 sshd[4758]: pam_unix(sshd:session): session closed for user core
Feb 13 20:14:21.632322 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit.
Feb 13 20:14:21.632393 systemd[1]: sshd@9-168.119.253.211:22-147.75.109.163:48774.service: Deactivated successfully.
Feb 13 20:14:21.637819 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 20:14:21.640547 systemd-logind[1444]: Removed session 9.
Feb 13 20:14:21.797745 systemd[1]: Started sshd@10-168.119.253.211:22-147.75.109.163:48778.service - OpenSSH per-connection server daemon (147.75.109.163:48778).
Feb 13 20:14:22.788920 sshd[4777]: Accepted publickey for core from 147.75.109.163 port 48778 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034
Feb 13 20:14:22.792117 sshd[4777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:14:22.801537 systemd-logind[1444]: New session 10 of user core.
Feb 13 20:14:22.803457 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 20:14:23.545136 sshd[4777]: pam_unix(sshd:session): session closed for user core
Feb 13 20:14:23.549698 systemd[1]: sshd@10-168.119.253.211:22-147.75.109.163:48778.service: Deactivated successfully.
Feb 13 20:14:23.553807 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 20:14:23.555298 systemd-logind[1444]: Session 10 logged out. Waiting for processes to exit.
Feb 13 20:14:23.556767 systemd-logind[1444]: Removed session 10.
Feb 13 20:14:28.730556 systemd[1]: Started sshd@11-168.119.253.211:22-147.75.109.163:48792.service - OpenSSH per-connection server daemon (147.75.109.163:48792).
Feb 13 20:14:29.714752 sshd[4829]: Accepted publickey for core from 147.75.109.163 port 48792 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034
Feb 13 20:14:29.723128 sshd[4829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:14:29.747266 systemd-logind[1444]: New session 11 of user core.
Feb 13 20:14:29.756845 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 20:14:30.476822 sshd[4829]: pam_unix(sshd:session): session closed for user core
Feb 13 20:14:30.484632 systemd[1]: sshd@11-168.119.253.211:22-147.75.109.163:48792.service: Deactivated successfully.
Feb 13 20:14:30.485319 systemd-logind[1444]: Session 11 logged out. Waiting for processes to exit.
Feb 13 20:14:30.492572 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 20:14:30.497174 systemd-logind[1444]: Removed session 11.
Feb 13 20:14:30.653309 systemd[1]: Started sshd@12-168.119.253.211:22-147.75.109.163:49932.service - OpenSSH per-connection server daemon (147.75.109.163:49932).
Feb 13 20:14:31.630336 sshd[4842]: Accepted publickey for core from 147.75.109.163 port 49932 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034
Feb 13 20:14:31.634418 sshd[4842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:14:31.643953 systemd-logind[1444]: New session 12 of user core.
Feb 13 20:14:31.651636 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 20:14:32.430516 sshd[4842]: pam_unix(sshd:session): session closed for user core
Feb 13 20:14:32.439154 systemd-logind[1444]: Session 12 logged out. Waiting for processes to exit.
Feb 13 20:14:32.441283 systemd[1]: sshd@12-168.119.253.211:22-147.75.109.163:49932.service: Deactivated successfully.
Feb 13 20:14:32.445061 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 20:14:32.449697 systemd-logind[1444]: Removed session 12.
Feb 13 20:14:32.607694 systemd[1]: Started sshd@13-168.119.253.211:22-147.75.109.163:49934.service - OpenSSH per-connection server daemon (147.75.109.163:49934).
Feb 13 20:14:33.582432 sshd[4859]: Accepted publickey for core from 147.75.109.163 port 49934 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034
Feb 13 20:14:33.584732 sshd[4859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:14:33.593120 systemd-logind[1444]: New session 13 of user core.
Feb 13 20:14:33.602560 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 20:14:35.281960 sshd[4859]: pam_unix(sshd:session): session closed for user core
Feb 13 20:14:35.292877 systemd[1]: sshd@13-168.119.253.211:22-147.75.109.163:49934.service: Deactivated successfully.
Feb 13 20:14:35.299265 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 20:14:35.300401 systemd-logind[1444]: Session 13 logged out. Waiting for processes to exit.
Feb 13 20:14:35.303560 systemd-logind[1444]: Removed session 13.
Feb 13 20:14:35.462011 systemd[1]: Started sshd@14-168.119.253.211:22-147.75.109.163:49936.service - OpenSSH per-connection server daemon (147.75.109.163:49936).
Feb 13 20:14:36.447683 sshd[4892]: Accepted publickey for core from 147.75.109.163 port 49936 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034
Feb 13 20:14:36.450825 sshd[4892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:14:36.461296 systemd-logind[1444]: New session 14 of user core.
Feb 13 20:14:36.467492 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 20:14:37.348290 sshd[4892]: pam_unix(sshd:session): session closed for user core
Feb 13 20:14:37.351889 systemd[1]: sshd@14-168.119.253.211:22-147.75.109.163:49936.service: Deactivated successfully.
Feb 13 20:14:37.354335 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 20:14:37.356575 systemd-logind[1444]: Session 14 logged out. Waiting for processes to exit.
Feb 13 20:14:37.358646 systemd-logind[1444]: Removed session 14.
Feb 13 20:14:37.528783 systemd[1]: Started sshd@15-168.119.253.211:22-147.75.109.163:49952.service - OpenSSH per-connection server daemon (147.75.109.163:49952).
Feb 13 20:14:38.518712 sshd[4909]: Accepted publickey for core from 147.75.109.163 port 49952 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034
Feb 13 20:14:38.523933 sshd[4909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:14:38.536296 systemd-logind[1444]: New session 15 of user core.
Feb 13 20:14:38.541574 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 20:14:39.295592 sshd[4909]: pam_unix(sshd:session): session closed for user core
Feb 13 20:14:39.302185 systemd[1]: sshd@15-168.119.253.211:22-147.75.109.163:49952.service: Deactivated successfully.
Feb 13 20:14:39.306024 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 20:14:39.308758 systemd-logind[1444]: Session 15 logged out. Waiting for processes to exit.
Feb 13 20:14:39.310261 systemd-logind[1444]: Removed session 15.
Feb 13 20:14:44.472551 systemd[1]: Started sshd@16-168.119.253.211:22-147.75.109.163:34486.service - OpenSSH per-connection server daemon (147.75.109.163:34486).
Feb 13 20:14:45.454236 sshd[4960]: Accepted publickey for core from 147.75.109.163 port 34486 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034
Feb 13 20:14:45.455601 sshd[4960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:14:45.462572 systemd-logind[1444]: New session 16 of user core.
Feb 13 20:14:45.471781 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 20:14:46.207525 sshd[4960]: pam_unix(sshd:session): session closed for user core
Feb 13 20:14:46.213241 systemd[1]: sshd@16-168.119.253.211:22-147.75.109.163:34486.service: Deactivated successfully.
Feb 13 20:14:46.216057 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 20:14:46.217745 systemd-logind[1444]: Session 16 logged out. Waiting for processes to exit.
Feb 13 20:14:46.219148 systemd-logind[1444]: Removed session 16.
Feb 13 20:14:51.384733 systemd[1]: Started sshd@17-168.119.253.211:22-147.75.109.163:59486.service - OpenSSH per-connection server daemon (147.75.109.163:59486).
Feb 13 20:14:52.383401 sshd[4994]: Accepted publickey for core from 147.75.109.163 port 59486 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034
Feb 13 20:14:52.386238 sshd[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:14:52.395899 systemd-logind[1444]: New session 17 of user core.
Feb 13 20:14:52.398431 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 20:14:53.156070 sshd[4994]: pam_unix(sshd:session): session closed for user core
Feb 13 20:14:53.166018 systemd[1]: sshd@17-168.119.253.211:22-147.75.109.163:59486.service: Deactivated successfully.
Feb 13 20:14:53.173531 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 20:14:53.177000 systemd-logind[1444]: Session 17 logged out. Waiting for processes to exit.
Feb 13 20:14:53.183146 systemd-logind[1444]: Removed session 17.
Feb 13 20:14:57.333932 kernel: hrtimer: interrupt took 2677839 ns
Feb 13 20:15:08.225556 systemd[1]: cri-containerd-f8455e74408cf66ceb5e0b3077b024a7207eeb91fcaa1010539a35898553c650.scope: Deactivated successfully.
Feb 13 20:15:08.225860 systemd[1]: cri-containerd-f8455e74408cf66ceb5e0b3077b024a7207eeb91fcaa1010539a35898553c650.scope: Consumed 6.244s CPU time, 18.1M memory peak, 0B memory swap peak.
Feb 13 20:15:08.256125 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8455e74408cf66ceb5e0b3077b024a7207eeb91fcaa1010539a35898553c650-rootfs.mount: Deactivated successfully.
Feb 13 20:15:08.261941 containerd[1462]: time="2025-02-13T20:15:08.261862540Z" level=info msg="shim disconnected" id=f8455e74408cf66ceb5e0b3077b024a7207eeb91fcaa1010539a35898553c650 namespace=k8s.io
Feb 13 20:15:08.262960 containerd[1462]: time="2025-02-13T20:15:08.262463558Z" level=warning msg="cleaning up after shim disconnected" id=f8455e74408cf66ceb5e0b3077b024a7207eeb91fcaa1010539a35898553c650 namespace=k8s.io
Feb 13 20:15:08.262960 containerd[1462]: time="2025-02-13T20:15:08.262492519Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:15:08.345896 kubelet[2638]: E0213 20:15:08.345457 2638 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:38540->10.0.0.2:2379: read: connection timed out"
Feb 13 20:15:08.351502 systemd[1]: cri-containerd-a1920917c7e43b265afdd7c9b66bc026797cc9a0c77672cab2c659a3babaea8b.scope: Deactivated successfully.
Feb 13 20:15:08.352096 systemd[1]: cri-containerd-a1920917c7e43b265afdd7c9b66bc026797cc9a0c77672cab2c659a3babaea8b.scope: Consumed 4.899s CPU time, 16.1M memory peak, 0B memory swap peak.
Feb 13 20:15:08.373602 kubelet[2638]: I0213 20:15:08.372964 2638 scope.go:117] "RemoveContainer" containerID="f8455e74408cf66ceb5e0b3077b024a7207eeb91fcaa1010539a35898553c650"
Feb 13 20:15:08.378559 containerd[1462]: time="2025-02-13T20:15:08.378506994Z" level=info msg="CreateContainer within sandbox \"0d2ca74d510653f25e71c9d8957f7142705375a408d231365a761ec1cf3101d0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 13 20:15:08.387331 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1920917c7e43b265afdd7c9b66bc026797cc9a0c77672cab2c659a3babaea8b-rootfs.mount: Deactivated successfully.
Feb 13 20:15:08.396673 containerd[1462]: time="2025-02-13T20:15:08.396458292Z" level=info msg="shim disconnected" id=a1920917c7e43b265afdd7c9b66bc026797cc9a0c77672cab2c659a3babaea8b namespace=k8s.io
Feb 13 20:15:08.396673 containerd[1462]: time="2025-02-13T20:15:08.396524573Z" level=warning msg="cleaning up after shim disconnected" id=a1920917c7e43b265afdd7c9b66bc026797cc9a0c77672cab2c659a3babaea8b namespace=k8s.io
Feb 13 20:15:08.396673 containerd[1462]: time="2025-02-13T20:15:08.396535574Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:15:08.414421 containerd[1462]: time="2025-02-13T20:15:08.414026498Z" level=info msg="CreateContainer within sandbox \"0d2ca74d510653f25e71c9d8957f7142705375a408d231365a761ec1cf3101d0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"9c32d0a854ce1e1f07148b70a94cf50ecbf4a561cccea1a8e83f948dd1763fbd\""
Feb 13 20:15:08.416322 containerd[1462]: time="2025-02-13T20:15:08.415070049Z" level=info msg="StartContainer for \"9c32d0a854ce1e1f07148b70a94cf50ecbf4a561cccea1a8e83f948dd1763fbd\""
Feb 13 20:15:08.450494 systemd[1]: Started cri-containerd-9c32d0a854ce1e1f07148b70a94cf50ecbf4a561cccea1a8e83f948dd1763fbd.scope - libcontainer container 9c32d0a854ce1e1f07148b70a94cf50ecbf4a561cccea1a8e83f948dd1763fbd.
Feb 13 20:15:08.500578 containerd[1462]: time="2025-02-13T20:15:08.499725945Z" level=info msg="StartContainer for \"9c32d0a854ce1e1f07148b70a94cf50ecbf4a561cccea1a8e83f948dd1763fbd\" returns successfully"
Feb 13 20:15:09.386214 kubelet[2638]: I0213 20:15:09.384407 2638 scope.go:117] "RemoveContainer" containerID="a1920917c7e43b265afdd7c9b66bc026797cc9a0c77672cab2c659a3babaea8b"
Feb 13 20:15:09.393479 containerd[1462]: time="2025-02-13T20:15:09.393428643Z" level=info msg="CreateContainer within sandbox \"7ced58c4568493f856766d812c69096af37615fe3a43db9be779e13f820ac607\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 13 20:15:09.412421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3136687775.mount: Deactivated successfully.
Feb 13 20:15:09.427016 containerd[1462]: time="2025-02-13T20:15:09.426948327Z" level=info msg="CreateContainer within sandbox \"7ced58c4568493f856766d812c69096af37615fe3a43db9be779e13f820ac607\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"dc4d80c47e5997b5f16a292f088de2b7f6bba5d3c10e9c72e177bc6ab213a1e4\""
Feb 13 20:15:09.427708 containerd[1462]: time="2025-02-13T20:15:09.427671949Z" level=info msg="StartContainer for \"dc4d80c47e5997b5f16a292f088de2b7f6bba5d3c10e9c72e177bc6ab213a1e4\""
Feb 13 20:15:09.466943 systemd[1]: Started cri-containerd-dc4d80c47e5997b5f16a292f088de2b7f6bba5d3c10e9c72e177bc6ab213a1e4.scope - libcontainer container dc4d80c47e5997b5f16a292f088de2b7f6bba5d3c10e9c72e177bc6ab213a1e4.
Feb 13 20:15:09.536390 containerd[1462]: time="2025-02-13T20:15:09.536315565Z" level=info msg="StartContainer for \"dc4d80c47e5997b5f16a292f088de2b7f6bba5d3c10e9c72e177bc6ab213a1e4\" returns successfully"
Feb 13 20:15:12.888092 kubelet[2638]: E0213 20:15:12.887955 2638 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:38380->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-1-8-7bfd910be1.1823ddc114decbc7 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-1-8-7bfd910be1,UID:f4c8274de1181451647f9755fe345d67,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-1-8-7bfd910be1,},FirstTimestamp:2025-02-13 20:15:02.443715527 +0000 UTC m=+341.230679030,LastTimestamp:2025-02-13 20:15:02.443715527 +0000 UTC m=+341.230679030,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-1-8-7bfd910be1,}"