Mar 17 17:40:25.912812 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Mar 17 17:40:25.912843 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Mon Mar 17 16:11:40 -00 2025 Mar 17 17:40:25.912855 kernel: KASLR enabled Mar 17 17:40:25.912862 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Mar 17 17:40:25.912869 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218 Mar 17 17:40:25.912875 kernel: random: crng init done Mar 17 17:40:25.912882 kernel: secureboot: Secure boot disabled Mar 17 17:40:25.912888 kernel: ACPI: Early table checksum verification disabled Mar 17 17:40:25.912895 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Mar 17 17:40:25.912903 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Mar 17 17:40:25.912909 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:40:25.912915 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:40:25.912922 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:40:25.912929 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:40:25.912936 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:40:25.912944 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:40:25.912950 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:40:25.912957 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:40:25.912963 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:40:25.912969 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Mar 17 17:40:25.912976 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Mar 17 17:40:25.912982 kernel: NUMA: Failed to initialise from firmware Mar 17 17:40:25.912988 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Mar 17 17:40:25.912994 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff] Mar 17 17:40:25.913003 kernel: Zone ranges: Mar 17 17:40:25.913012 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Mar 17 17:40:25.913020 kernel: DMA32 empty Mar 17 17:40:25.913027 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Mar 17 17:40:25.913033 kernel: Movable zone start for each node Mar 17 17:40:25.913040 kernel: Early memory node ranges Mar 17 17:40:25.913047 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff] Mar 17 17:40:25.913053 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff] Mar 17 17:40:25.913059 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff] Mar 17 17:40:25.913066 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Mar 17 17:40:25.913072 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Mar 17 17:40:25.913079 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Mar 17 17:40:25.913085 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Mar 17 17:40:25.913093 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Mar 17 17:40:25.913100 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Mar 17 17:40:25.913107 kernel: Initmem setup node 0 
[mem 0x0000000040000000-0x0000000139ffffff] Mar 17 17:40:25.913117 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Mar 17 17:40:25.913124 kernel: psci: probing for conduit method from ACPI. Mar 17 17:40:25.913131 kernel: psci: PSCIv1.1 detected in firmware. Mar 17 17:40:25.913139 kernel: psci: Using standard PSCI v0.2 function IDs Mar 17 17:40:25.913146 kernel: psci: Trusted OS migration not required Mar 17 17:40:25.913152 kernel: psci: SMC Calling Convention v1.1 Mar 17 17:40:25.913159 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Mar 17 17:40:25.913165 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Mar 17 17:40:25.913173 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Mar 17 17:40:25.913183 kernel: pcpu-alloc: [0] 0 [0] 1 Mar 17 17:40:25.913190 kernel: Detected PIPT I-cache on CPU0 Mar 17 17:40:25.913198 kernel: CPU features: detected: GIC system register CPU interface Mar 17 17:40:25.913205 kernel: CPU features: detected: Hardware dirty bit management Mar 17 17:40:25.913214 kernel: CPU features: detected: Spectre-v4 Mar 17 17:40:25.913220 kernel: CPU features: detected: Spectre-BHB Mar 17 17:40:25.913227 kernel: CPU features: kernel page table isolation forced ON by KASLR Mar 17 17:40:25.913233 kernel: CPU features: detected: Kernel page table isolation (KPTI) Mar 17 17:40:25.913239 kernel: CPU features: detected: ARM erratum 1418040 Mar 17 17:40:25.913246 kernel: CPU features: detected: SSBS not fully self-synchronizing Mar 17 17:40:25.913253 kernel: alternatives: applying boot alternatives Mar 17 17:40:25.913260 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=f8298a09e890fc732131b7281e24befaf65b596eb5216e969c8eca4cab4a2b3a Mar 17 17:40:25.913267 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 17 17:40:25.913274 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 17 17:40:25.913281 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 17 17:40:25.913288 kernel: Fallback order for Node 0: 0 Mar 17 17:40:25.913295 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Mar 17 17:40:25.913301 kernel: Policy zone: Normal Mar 17 17:40:25.913308 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 17 17:40:25.913317 kernel: software IO TLB: area num 2. Mar 17 17:40:25.913325 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Mar 17 17:40:25.913333 kernel: Memory: 3883892K/4096000K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38336K init, 897K bss, 212108K reserved, 0K cma-reserved) Mar 17 17:40:25.913340 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 17 17:40:25.913347 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 17 17:40:25.913354 kernel: rcu: RCU event tracing is enabled. Mar 17 17:40:25.913361 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 17 17:40:25.913369 kernel: Trampoline variant of Tasks RCU enabled. Mar 17 17:40:25.913379 kernel: Tracing variant of Tasks RCU enabled. 
Mar 17 17:40:25.913386 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 17 17:40:25.913392 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 17 17:40:25.913399 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Mar 17 17:40:25.913406 kernel: GICv3: 256 SPIs implemented Mar 17 17:40:25.913413 kernel: GICv3: 0 Extended SPIs implemented Mar 17 17:40:25.913420 kernel: Root IRQ handler: gic_handle_irq Mar 17 17:40:25.913426 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Mar 17 17:40:25.913433 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Mar 17 17:40:25.913441 kernel: ITS [mem 0x08080000-0x0809ffff] Mar 17 17:40:25.913448 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Mar 17 17:40:25.913460 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Mar 17 17:40:25.913468 kernel: GICv3: using LPI property table @0x00000001000e0000 Mar 17 17:40:25.913476 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Mar 17 17:40:25.913483 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 17 17:40:25.913489 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 17 17:40:25.913496 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Mar 17 17:40:25.913514 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Mar 17 17:40:25.913522 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Mar 17 17:40:25.913529 kernel: Console: colour dummy device 80x25 Mar 17 17:40:25.913536 kernel: ACPI: Core revision 20230628 Mar 17 17:40:25.913543 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Mar 17 17:40:25.913553 kernel: pid_max: default: 32768 minimum: 301 Mar 17 17:40:25.913561 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 17 17:40:25.913568 kernel: landlock: Up and running. Mar 17 17:40:25.913575 kernel: SELinux: Initializing. Mar 17 17:40:25.913581 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 17:40:25.913590 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 17:40:25.913597 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 17 17:40:25.913604 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 17 17:40:25.913611 kernel: rcu: Hierarchical SRCU implementation. Mar 17 17:40:25.913622 kernel: rcu: Max phase no-delay instances is 400. Mar 17 17:40:25.913666 kernel: Platform MSI: ITS@0x8080000 domain created Mar 17 17:40:25.913674 kernel: PCI/MSI: ITS@0x8080000 domain created Mar 17 17:40:25.913710 kernel: Remapping and enabling EFI services. Mar 17 17:40:25.913719 kernel: smp: Bringing up secondary CPUs ... 
Mar 17 17:40:25.913726 kernel: Detected PIPT I-cache on CPU1 Mar 17 17:40:25.913733 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Mar 17 17:40:25.913740 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Mar 17 17:40:25.913747 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 17 17:40:25.913758 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Mar 17 17:40:25.913765 kernel: smp: Brought up 1 node, 2 CPUs Mar 17 17:40:25.913797 kernel: SMP: Total of 2 processors activated. Mar 17 17:40:25.913808 kernel: CPU features: detected: 32-bit EL0 Support Mar 17 17:40:25.913815 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Mar 17 17:40:25.913823 kernel: CPU features: detected: Common not Private translations Mar 17 17:40:25.913831 kernel: CPU features: detected: CRC32 instructions Mar 17 17:40:25.913839 kernel: CPU features: detected: Enhanced Virtualization Traps Mar 17 17:40:25.913846 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Mar 17 17:40:25.913855 kernel: CPU features: detected: LSE atomic instructions Mar 17 17:40:25.913862 kernel: CPU features: detected: Privileged Access Never Mar 17 17:40:25.913870 kernel: CPU features: detected: RAS Extension Support Mar 17 17:40:25.913877 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Mar 17 17:40:25.913884 kernel: CPU: All CPU(s) started at EL1 Mar 17 17:40:25.913891 kernel: alternatives: applying system-wide alternatives Mar 17 17:40:25.913899 kernel: devtmpfs: initialized Mar 17 17:40:25.913907 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 17 17:40:25.913916 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 17 17:40:25.913923 kernel: pinctrl core: initialized pinctrl subsystem Mar 17 17:40:25.913930 kernel: SMBIOS 3.0.0 present. Mar 17 17:40:25.913937 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Mar 17 17:40:25.913944 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 17 17:40:25.913954 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Mar 17 17:40:25.913963 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Mar 17 17:40:25.913971 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Mar 17 17:40:25.913980 kernel: audit: initializing netlink subsys (disabled) Mar 17 17:40:25.913990 kernel: audit: type=2000 audit(0.017:1): state=initialized audit_enabled=0 res=1 Mar 17 17:40:25.913998 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 17 17:40:25.914006 kernel: cpuidle: using governor menu Mar 17 17:40:25.914014 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Mar 17 17:40:25.914022 kernel: ASID allocator initialised with 32768 entries Mar 17 17:40:25.914029 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 17 17:40:25.914036 kernel: Serial: AMBA PL011 UART driver Mar 17 17:40:25.914044 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Mar 17 17:40:25.914051 kernel: Modules: 0 pages in range for non-PLT usage Mar 17 17:40:25.914060 kernel: Modules: 509280 pages in range for PLT usage Mar 17 17:40:25.914067 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 17 17:40:25.914074 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Mar 17 17:40:25.914081 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Mar 17 17:40:25.914087 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Mar 17 17:40:25.914094 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 17 17:40:25.914102 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Mar 17 17:40:25.914109 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Mar 17 17:40:25.914116 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Mar 17 17:40:25.914147 kernel: ACPI: Added _OSI(Module Device) Mar 17 17:40:25.914156 kernel: ACPI: Added _OSI(Processor Device) Mar 17 17:40:25.914163 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 17 17:40:25.914170 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 17 17:40:25.914177 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 17 17:40:25.914185 kernel: ACPI: Interpreter enabled Mar 17 17:40:25.914193 kernel: ACPI: Using GIC for interrupt routing Mar 17 17:40:25.914200 kernel: ACPI: MCFG table detected, 1 entries Mar 17 17:40:25.914207 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Mar 17 17:40:25.914217 kernel: printk: console [ttyAMA0] enabled Mar 17 17:40:25.914225 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 17 17:40:25.914404 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 17 17:40:25.914490 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Mar 17 17:40:25.914614 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Mar 17 17:40:25.915967 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Mar 17 17:40:25.916147 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Mar 17 17:40:25.916179 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Mar 17 17:40:25.916194 kernel: PCI host bridge to bus 0000:00 Mar 17 17:40:25.916333 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Mar 17 17:40:25.916461 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Mar 17 17:40:25.916661 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Mar 17 17:40:25.916792 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 17 17:40:25.917007 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Mar 17 17:40:25.917180 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Mar 17 17:40:25.917323 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Mar 17 17:40:25.917462 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Mar 17 17:40:25.917680 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Mar 17 17:40:25.917834 kernel: pci 
0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Mar 17 17:40:25.917994 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Mar 17 17:40:25.918146 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Mar 17 17:40:25.918297 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Mar 17 17:40:25.918436 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Mar 17 17:40:25.918657 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Mar 17 17:40:25.918784 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Mar 17 17:40:25.918893 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Mar 17 17:40:25.918968 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Mar 17 17:40:25.919057 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Mar 17 17:40:25.919126 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Mar 17 17:40:25.919208 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Mar 17 17:40:25.919280 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Mar 17 17:40:25.919361 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Mar 17 17:40:25.919430 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Mar 17 17:40:25.919526 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Mar 17 17:40:25.919602 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Mar 17 17:40:25.921824 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Mar 17 17:40:25.921919 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] Mar 17 17:40:25.922013 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Mar 17 17:40:25.922087 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Mar 17 17:40:25.922170 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Mar 17 17:40:25.922240 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Mar 17 17:40:25.922315 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Mar 17 17:40:25.922384 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Mar 17 17:40:25.922463 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Mar 17 17:40:25.922553 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Mar 17 17:40:25.924731 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Mar 17 17:40:25.924866 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Mar 17 17:40:25.924949 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Mar 17 17:40:25.925030 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Mar 17 17:40:25.925106 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff] Mar 17 17:40:25.925181 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Mar 17 17:40:25.925269 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Mar 17 17:40:25.925359 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Mar 17 17:40:25.925430 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Mar 17 17:40:25.925554 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Mar 17 17:40:25.925694 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Mar 17 17:40:25.925776 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Mar 17 17:40:25.925846 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Mar 17 17:40:25.925927 kernel: pci 0000:00:02.0: bridge 
window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Mar 17 17:40:25.925998 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Mar 17 17:40:25.926064 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Mar 17 17:40:25.926136 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Mar 17 17:40:25.926205 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Mar 17 17:40:25.926272 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Mar 17 17:40:25.926345 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Mar 17 17:40:25.926413 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Mar 17 17:40:25.926481 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Mar 17 17:40:25.926566 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Mar 17 17:40:25.926642 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Mar 17 17:40:25.926712 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Mar 17 17:40:25.926783 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Mar 17 17:40:25.926861 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Mar 17 17:40:25.926939 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Mar 17 17:40:25.927021 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Mar 17 17:40:25.927089 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Mar 17 17:40:25.927157 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Mar 17 17:40:25.927227 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Mar 17 17:40:25.927294 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Mar 17 17:40:25.927363 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Mar 17 17:40:25.927443 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Mar 17 17:40:25.927524 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Mar 17 17:40:25.927593 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Mar 17 17:40:25.928608 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Mar 17 17:40:25.928718 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Mar 17 17:40:25.928784 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Mar 17 17:40:25.928852 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 
0x10000000-0x101fffff] Mar 17 17:40:25.928924 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] Mar 17 17:40:25.929005 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Mar 17 17:40:25.929075 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Mar 17 17:40:25.929143 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Mar 17 17:40:25.929210 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Mar 17 17:40:25.929281 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Mar 17 17:40:25.929349 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Mar 17 17:40:25.929422 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Mar 17 17:40:25.929495 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Mar 17 17:40:25.929585 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Mar 17 17:40:25.929678 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Mar 17 17:40:25.929757 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Mar 17 17:40:25.929824 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Mar 17 17:40:25.929896 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Mar 17 17:40:25.929963 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Mar 17 17:40:25.930042 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Mar 17 17:40:25.930112 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Mar 17 17:40:25.930191 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Mar 17 17:40:25.930260 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Mar 17 17:40:25.930328 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Mar 17 17:40:25.930398 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Mar 17 17:40:25.930466 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Mar 17 17:40:25.930588 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Mar 17 17:40:25.932245 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Mar 17 17:40:25.932336 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Mar 17 17:40:25.932409 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Mar 17 17:40:25.932475 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Mar 17 17:40:25.932594 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Mar 17 17:40:25.932688 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Mar 17 17:40:25.932763 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Mar 17 17:40:25.932842 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Mar 17 17:40:25.932913 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Mar 17 17:40:25.932982 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Mar 17 17:40:25.933052 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Mar 17 17:40:25.933117 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Mar 17 17:40:25.933185 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Mar 17 17:40:25.933256 kernel: pci 0000:00:03.0: BAR 13: assigned [io 
0x9000-0x9fff] Mar 17 17:40:25.933327 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Mar 17 17:40:25.933401 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Mar 17 17:40:25.933474 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Mar 17 17:40:25.933560 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Mar 17 17:40:25.933642 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Mar 17 17:40:25.933715 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Mar 17 17:40:25.933780 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Mar 17 17:40:25.933843 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Mar 17 17:40:25.933921 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Mar 17 17:40:25.933998 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Mar 17 17:40:25.934065 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Mar 17 17:40:25.934134 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Mar 17 17:40:25.934201 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Mar 17 17:40:25.934279 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Mar 17 17:40:25.934350 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Mar 17 17:40:25.934419 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Mar 17 17:40:25.934488 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Mar 17 17:40:25.934566 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Mar 17 17:40:25.937023 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Mar 17 17:40:25.937156 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Mar 17 17:40:25.937232 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Mar 17 17:40:25.937304 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Mar 17 17:40:25.937381 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Mar 17 17:40:25.937449 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Mar 17 17:40:25.937553 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Mar 17 17:40:25.937679 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff] Mar 17 17:40:25.937760 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Mar 17 17:40:25.937831 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Mar 17 17:40:25.937898 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Mar 17 17:40:25.937967 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Mar 17 17:40:25.938052 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Mar 17 17:40:25.938124 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Mar 17 17:40:25.938194 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Mar 17 17:40:25.938259 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Mar 17 17:40:25.938327 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Mar 17 17:40:25.938394 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Mar 17 17:40:25.938472 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Mar 17 17:40:25.938595 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Mar 17 17:40:25.939564 kernel: pci 
0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Mar 17 17:40:25.939678 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Mar 17 17:40:25.939759 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Mar 17 17:40:25.939844 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Mar 17 17:40:25.939914 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Mar 17 17:40:25.939986 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Mar 17 17:40:25.940059 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Mar 17 17:40:25.940137 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Mar 17 17:40:25.940212 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Mar 17 17:40:25.940288 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Mar 17 17:40:25.940358 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Mar 17 17:40:25.940426 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Mar 17 17:40:25.940496 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Mar 17 17:40:25.940584 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Mar 17 17:40:25.940776 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Mar 17 17:40:25.940856 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Mar 17 17:40:25.940936 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Mar 17 17:40:25.941004 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Mar 17 17:40:25.941070 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Mar 17 17:40:25.941149 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Mar 17 17:40:25.941216 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Mar 17 17:40:25.941285 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Mar 17 17:40:25.941382 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Mar 17 17:40:25.941453 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Mar 17 17:40:25.941565 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Mar 17 17:40:25.942407 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Mar 17 17:40:25.942521 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Mar 17 17:40:25.942602 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Mar 17 17:40:25.942713 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Mar 17 17:40:25.942784 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Mar 17 17:40:25.942856 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Mar 17 17:40:25.942931 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Mar 17 17:40:25.943001 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Mar 17 17:40:25.943073 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Mar 17 17:40:25.943150 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Mar 17 17:40:25.943221 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Mar 17 17:40:25.943291 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Mar 17 17:40:25.943373 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Mar 17 17:40:25.943446 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Mar 17 17:40:25.943538 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Mar 17 17:40:25.943623 kernel: 
pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Mar 17 17:40:25.945653 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Mar 17 17:40:25.945726 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Mar 17 17:40:25.945736 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Mar 17 17:40:25.945744 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Mar 17 17:40:25.945753 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Mar 17 17:40:25.945767 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Mar 17 17:40:25.945775 kernel: iommu: Default domain type: Translated Mar 17 17:40:25.945782 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 17 17:40:25.945790 kernel: efivars: Registered efivars operations Mar 17 17:40:25.945798 kernel: vgaarb: loaded Mar 17 17:40:25.945805 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 17 17:40:25.945813 kernel: VFS: Disk quotas dquot_6.6.0 Mar 17 17:40:25.945820 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 17 17:40:25.945828 kernel: pnp: PnP ACPI init Mar 17 17:40:25.945909 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Mar 17 17:40:25.945922 kernel: pnp: PnP ACPI: found 1 devices Mar 17 17:40:25.945930 kernel: NET: Registered PF_INET protocol family Mar 17 17:40:25.945938 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 17 17:40:25.945945 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 17 17:40:25.945953 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 17 17:40:25.945961 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 17 17:40:25.945968 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 17 17:40:25.945977 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 17 17:40:25.945986 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 17:40:25.945994 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 17:40:25.946002 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 17 17:40:25.946077 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Mar 17 17:40:25.946088 kernel: PCI: CLS 0 bytes, default 64 Mar 17 17:40:25.946096 kernel: kvm [1]: HYP mode not available Mar 17 17:40:25.946104 kernel: Initialise system trusted keyrings Mar 17 17:40:25.946112 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 17 17:40:25.946120 kernel: Key type asymmetric registered Mar 17 17:40:25.946129 kernel: Asymmetric key parser 'x509' registered Mar 17 17:40:25.946136 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 17 17:40:25.946144 kernel: io scheduler mq-deadline registered Mar 17 17:40:25.946151 kernel: io scheduler kyber registered Mar 17 17:40:25.946159 kernel: io scheduler bfq registered Mar 17 17:40:25.946167 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Mar 17 17:40:25.946237 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Mar 17 17:40:25.946303 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Mar 17 17:40:25.946371 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:40:25.946439 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Mar 17 17:40:25.946546 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 51 Mar 17 17:40:25.946624 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:40:25.948821 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Mar 17 17:40:25.948899 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Mar 17 17:40:25.948974 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:40:25.949046 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Mar 17 17:40:25.949115 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Mar 17 17:40:25.949182 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:40:25.949256 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Mar 17 17:40:25.949324 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Mar 17 17:40:25.949393 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:40:25.949461 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Mar 17 17:40:25.949578 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Mar 17 17:40:25.949684 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:40:25.949760 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Mar 17 17:40:25.949830 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Mar 17 17:40:25.949905 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:40:25.949979 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Mar 17 17:40:25.950049 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Mar 17 17:40:25.950118 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:40:25.950130 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Mar 17 17:40:25.950202 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Mar 17 17:40:25.950275 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Mar 17 17:40:25.950345 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:40:25.950356 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Mar 17 17:40:25.950364 kernel: ACPI: button: Power Button [PWRB] Mar 17 17:40:25.950372 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 17 17:40:25.950447 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Mar 17 17:40:25.950536 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Mar 17 17:40:25.950547 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 17 17:40:25.950558 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Mar 17 17:40:25.951288 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Mar 17 17:40:25.951310 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Mar 17 17:40:25.951320 kernel: thunder_xcv, ver 1.0 Mar 17 17:40:25.951328 kernel: thunder_bgx, ver 1.0 Mar 17 17:40:25.951336 kernel: nicpf, ver 1.0 Mar 17 17:40:25.951344 kernel: nicvf, ver 
1.0 Mar 17 17:40:25.951448 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 17 17:40:25.951573 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T17:40:25 UTC (1742233225) Mar 17 17:40:25.951587 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 17 17:40:25.951596 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Mar 17 17:40:25.951604 kernel: watchdog: Delayed init of the lockup detector failed: -19 Mar 17 17:40:25.951612 kernel: watchdog: Hard watchdog permanently disabled Mar 17 17:40:25.951620 kernel: NET: Registered PF_INET6 protocol family Mar 17 17:40:25.951643 kernel: Segment Routing with IPv6 Mar 17 17:40:25.951651 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 17:40:25.951659 kernel: NET: Registered PF_PACKET protocol family Mar 17 17:40:25.951671 kernel: Key type dns_resolver registered Mar 17 17:40:25.951679 kernel: registered taskstats version 1 Mar 17 17:40:25.951687 kernel: Loading compiled-in X.509 certificates Mar 17 17:40:25.951695 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: f4ff2820cf7379ce82b759137d15b536f0a99b51' Mar 17 17:40:25.951703 kernel: Key type .fscrypt registered Mar 17 17:40:25.951710 kernel: Key type fscrypt-provisioning registered Mar 17 17:40:25.951718 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 17 17:40:25.951727 kernel: ima: Allocated hash algorithm: sha1 Mar 17 17:40:25.951736 kernel: ima: No architecture policies found Mar 17 17:40:25.951745 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 17 17:40:25.951754 kernel: clk: Disabling unused clocks Mar 17 17:40:25.951761 kernel: Freeing unused kernel memory: 38336K Mar 17 17:40:25.951769 kernel: Run /init as init process Mar 17 17:40:25.951777 kernel: with arguments: Mar 17 17:40:25.951785 kernel: /init Mar 17 17:40:25.951794 kernel: with environment: Mar 17 17:40:25.951802 kernel: HOME=/ Mar 17 17:40:25.951810 kernel: TERM=linux Mar 17 17:40:25.951820 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 17:40:25.951829 systemd[1]: Successfully made /usr/ read-only. Mar 17 17:40:25.951841 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 17 17:40:25.951850 systemd[1]: Detected virtualization kvm. Mar 17 17:40:25.951858 systemd[1]: Detected architecture arm64. Mar 17 17:40:25.951866 systemd[1]: Running in initrd. Mar 17 17:40:25.951874 systemd[1]: No hostname configured, using default hostname. Mar 17 17:40:25.951884 systemd[1]: Hostname set to . Mar 17 17:40:25.951892 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:40:25.951901 systemd[1]: Queued start job for default target initrd.target. Mar 17 17:40:25.951909 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:40:25.951918 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:40:25.951927 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 17 17:40:25.951936 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Mar 17 17:40:25.951944 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 17 17:40:25.951955 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 17 17:40:25.951965 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 17 17:40:25.951974 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 17 17:40:25.951982 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:40:25.951991 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:40:25.951999 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:40:25.952007 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:40:25.952017 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:40:25.952025 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:40:25.952033 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:40:25.952042 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:40:25.952050 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 17 17:40:25.952059 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 17 17:40:25.952067 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:40:25.952076 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:40:25.952085 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:40:25.952096 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:40:25.952104 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 17 17:40:25.952113 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:40:25.952123 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 17 17:40:25.952131 systemd[1]: Starting systemd-fsck-usr.service... Mar 17 17:40:25.952139 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:40:25.952147 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:40:25.952155 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:40:25.952165 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 17 17:40:25.952173 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:40:25.952182 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 17:40:25.952217 systemd-journald[237]: Collecting audit messages is disabled. Mar 17 17:40:25.952242 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:40:25.952251 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:40:25.952259 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 17 17:40:25.952268 kernel: Bridge firewalling registered Mar 17 17:40:25.952276 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:40:25.952287 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Mar 17 17:40:25.952296 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:40:25.952304 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:40:25.952315 systemd-journald[237]: Journal started Mar 17 17:40:25.952335 systemd-journald[237]: Runtime Journal (/run/log/journal/80468e8807e246619d43671cd571b1d0) is 8M, max 76.6M, 68.6M free. Mar 17 17:40:25.899123 systemd-modules-load[238]: Inserted module 'overlay' Mar 17 17:40:25.927335 systemd-modules-load[238]: Inserted module 'br_netfilter' Mar 17 17:40:25.959075 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:40:25.961156 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:40:25.961763 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:40:25.963588 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:40:25.965108 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:40:25.974952 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 17 17:40:25.978871 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:40:25.991897 dracut-cmdline[271]: dracut-dracut-053 Mar 17 17:40:25.995851 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:40:26.000709 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=f8298a09e890fc732131b7281e24befaf65b596eb5216e969c8eca4cab4a2b3a Mar 17 17:40:26.008828 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:40:26.036603 systemd-resolved[288]: Positive Trust Anchors: Mar 17 17:40:26.036620 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:40:26.036733 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:40:26.046691 systemd-resolved[288]: Defaulting to hostname 'linux'. Mar 17 17:40:26.048976 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:40:26.049673 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:40:26.083653 kernel: SCSI subsystem initialized Mar 17 17:40:26.087679 kernel: Loading iSCSI transport class v2.0-870. Mar 17 17:40:26.095848 kernel: iscsi: registered transport (tcp) Mar 17 17:40:26.108724 kernel: iscsi: registered transport (qla4xxx) Mar 17 17:40:26.108851 kernel: QLogic iSCSI HBA Driver Mar 17 17:40:26.161177 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Mar 17 17:40:26.168822 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 17 17:40:26.189990 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 17 17:40:26.190117 kernel: device-mapper: uevent: version 1.0.3 Mar 17 17:40:26.190147 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 17 17:40:26.243769 kernel: raid6: neonx8 gen() 14931 MB/s Mar 17 17:40:26.260669 kernel: raid6: neonx4 gen() 15291 MB/s Mar 17 17:40:26.277686 kernel: raid6: neonx2 gen() 12911 MB/s Mar 17 17:40:26.294673 kernel: raid6: neonx1 gen() 10236 MB/s Mar 17 17:40:26.311692 kernel: raid6: int64x8 gen() 6660 MB/s Mar 17 17:40:26.328726 kernel: raid6: int64x4 gen() 7166 MB/s Mar 17 17:40:26.345694 kernel: raid6: int64x2 gen() 5969 MB/s Mar 17 17:40:26.362778 kernel: raid6: int64x1 gen() 4895 MB/s Mar 17 17:40:26.363082 kernel: raid6: using algorithm neonx4 gen() 15291 MB/s Mar 17 17:40:26.379716 kernel: raid6: .... xor() 12146 MB/s, rmw enabled Mar 17 17:40:26.379844 kernel: raid6: using neon recovery algorithm Mar 17 17:40:26.385796 kernel: xor: measuring software checksum speed Mar 17 17:40:26.385865 kernel: 8regs : 21647 MB/sec Mar 17 17:40:26.387089 kernel: 32regs : 21636 MB/sec Mar 17 17:40:26.387408 kernel: arm64_neon : 27804 MB/sec Mar 17 17:40:26.387901 kernel: xor: using function: arm64_neon (27804 MB/sec) Mar 17 17:40:26.441800 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 17 17:40:26.458902 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:40:26.465952 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:40:26.492244 systemd-udevd[458]: Using default interface naming scheme 'v255'. Mar 17 17:40:26.496470 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:40:26.506145 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 17 17:40:26.524811 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation Mar 17 17:40:26.559038 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:40:26.566992 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:40:26.621193 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:40:26.632081 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 17 17:40:26.653054 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 17 17:40:26.655293 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:40:26.657656 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:40:26.659107 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:40:26.664879 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 17 17:40:26.691454 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Mar 17 17:40:26.756261 kernel: scsi host0: Virtio SCSI HBA Mar 17 17:40:26.765511 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 17 17:40:26.765577 kernel: ACPI: bus type USB registered Mar 17 17:40:26.765590 kernel: usbcore: registered new interface driver usbfs Mar 17 17:40:26.765600 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Mar 17 17:40:26.765619 kernel: usbcore: registered new interface driver hub Mar 17 17:40:26.764731 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:40:26.768005 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:40:26.771157 kernel: usbcore: registered new device driver usb Mar 17 17:40:26.770827 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:40:26.773725 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:40:26.774386 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:40:26.779803 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:40:26.785993 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:40:26.799267 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:40:26.808878 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:40:26.826657 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 17 17:40:26.848814 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Mar 17 17:40:26.848942 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 17 17:40:26.849067 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 17 17:40:26.849173 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Mar 17 17:40:26.849265 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Mar 17 17:40:26.849356 kernel: hub 1-0:1.0: USB hub found Mar 17 17:40:26.849469 kernel: sr 0:0:0:0: Power-on or device reset occurred Mar 17 17:40:26.849689 kernel: hub 1-0:1.0: 4 ports detected Mar 17 17:40:26.849811 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Mar 17 17:40:26.849942 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 17 17:40:26.849961 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Mar 17 17:40:26.850126 kernel: hub 2-0:1.0: USB hub found Mar 17 17:40:26.850239 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Mar 17 17:40:26.850341 kernel: hub 2-0:1.0: 4 ports detected Mar 17 17:40:26.845819 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:40:26.858992 kernel: sd 0:0:0:1: Power-on or device reset occurred Mar 17 17:40:26.868833 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Mar 17 17:40:26.868996 kernel: sd 0:0:0:1: [sda] Write Protect is off Mar 17 17:40:26.869087 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Mar 17 17:40:26.869180 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Mar 17 17:40:26.869269 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 17 17:40:26.869280 kernel: GPT:17805311 != 80003071 Mar 17 17:40:26.869290 kernel: GPT:Alternate GPT header not at the end of the disk. 
Mar 17 17:40:26.869299 kernel: GPT:17805311 != 80003071 Mar 17 17:40:26.869312 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 17:40:26.869321 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:40:26.869332 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Mar 17 17:40:26.903653 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (529) Mar 17 17:40:26.910671 kernel: BTRFS: device fsid 5ecee764-de70-4de1-8711-3798360e0d13 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (532) Mar 17 17:40:26.919488 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Mar 17 17:40:26.940360 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Mar 17 17:40:26.955770 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Mar 17 17:40:26.957840 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Mar 17 17:40:26.971769 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Mar 17 17:40:26.982921 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 17 17:40:26.993676 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:40:26.994777 disk-uuid[577]: Primary Header is updated. Mar 17 17:40:26.994777 disk-uuid[577]: Secondary Entries is updated. Mar 17 17:40:26.994777 disk-uuid[577]: Secondary Header is updated. Mar 17 17:40:27.076977 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 17 17:40:27.321676 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Mar 17 17:40:27.455791 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Mar 17 17:40:27.457997 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Mar 17 17:40:27.458241 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Mar 17 17:40:27.511677 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Mar 17 17:40:27.511917 kernel: usbcore: registered new interface driver usbhid Mar 17 17:40:27.511930 kernel: usbhid: USB HID core driver Mar 17 17:40:28.018973 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 17:40:28.019031 disk-uuid[578]: The operation has completed successfully. Mar 17 17:40:28.085623 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 17:40:28.085840 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 17 17:40:28.107841 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 17 17:40:28.114545 sh[592]: Success Mar 17 17:40:28.130676 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 17 17:40:28.195812 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 17 17:40:28.205748 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 17 17:40:28.209598 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
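The GPT complaints in this stretch are typical first-boot noise: a valid alternate GPT header must sit on the disk's last LBA, but the flashed image carries headers built for a much smaller disk, so the kernel finds 17805311 where it expects 80003071. The "Primary Header is updated" / "Secondary Header is updated" lines from disk-uuid above are the fix being applied. The arithmetic, as a minimal sketch:

# Minimal sketch of the arithmetic behind "GPT:17805311 != 80003071".
SECTOR = 512
disk_sectors = 80_003_072    # from the sd 0:0:0:1 probe: 41.0 GB QEMU disk
image_alt_lba = 17_805_311   # where the flashed image left its alternate header

expected_alt_lba = disk_sectors - 1  # alternate header belongs on the last LBA
print(expected_alt_lba)              # 80003071

# Back out the disk size the image was originally built for: ~8.5 GiB.
print((image_alt_lba + 1) * SECTOR / 2**30)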
Mar 17 17:40:28.236100 kernel: BTRFS info (device dm-0): first mount of filesystem 5ecee764-de70-4de1-8711-3798360e0d13 Mar 17 17:40:28.236223 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 17 17:40:28.236256 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 17 17:40:28.236797 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 17 17:40:28.237707 kernel: BTRFS info (device dm-0): using free space tree Mar 17 17:40:28.245730 kernel: BTRFS info (device dm-0): enabling ssd optimizations Mar 17 17:40:28.248256 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 17 17:40:28.251222 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 17 17:40:28.263063 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 17 17:40:28.269870 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 17 17:40:28.291478 kernel: BTRFS info (device sda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45 Mar 17 17:40:28.291554 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 17:40:28.291566 kernel: BTRFS info (device sda6): using free space tree Mar 17 17:40:28.295653 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 17 17:40:28.295732 kernel: BTRFS info (device sda6): auto enabling async discard Mar 17 17:40:28.309791 kernel: BTRFS info (device sda6): last unmount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45 Mar 17 17:40:28.310338 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 17:40:28.317221 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 17 17:40:28.323840 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 17 17:40:28.379926 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:40:28.393777 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:40:28.424799 systemd-networkd[777]: lo: Link UP Mar 17 17:40:28.424811 systemd-networkd[777]: lo: Gained carrier Mar 17 17:40:28.432083 systemd-networkd[777]: Enumeration completed Mar 17 17:40:28.432298 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:40:28.433716 systemd[1]: Reached target network.target - Network. Mar 17 17:40:28.435282 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:40:28.435286 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:40:28.435930 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:40:28.435933 systemd-networkd[777]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:40:28.436382 systemd-networkd[777]: eth0: Link UP Mar 17 17:40:28.436385 systemd-networkd[777]: eth0: Gained carrier Mar 17 17:40:28.436392 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 17 17:40:28.443294 systemd-networkd[777]: eth1: Link UP Mar 17 17:40:28.443297 systemd-networkd[777]: eth1: Gained carrier Mar 17 17:40:28.450546 ignition[691]: Ignition 2.20.0 Mar 17 17:40:28.443306 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:40:28.450554 ignition[691]: Stage: fetch-offline Mar 17 17:40:28.452612 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:40:28.450594 ignition[691]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:40:28.450602 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 17:40:28.450769 ignition[691]: parsed url from cmdline: "" Mar 17 17:40:28.450772 ignition[691]: no config URL provided Mar 17 17:40:28.450777 ignition[691]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 17:40:28.450784 ignition[691]: no config at "/usr/lib/ignition/user.ign" Mar 17 17:40:28.450789 ignition[691]: failed to fetch config: resource requires networking Mar 17 17:40:28.451117 ignition[691]: Ignition finished successfully Mar 17 17:40:28.464960 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 17 17:40:28.477716 systemd-networkd[777]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 17:40:28.483555 ignition[788]: Ignition 2.20.0 Mar 17 17:40:28.484276 ignition[788]: Stage: fetch Mar 17 17:40:28.484914 ignition[788]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:40:28.485465 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 17:40:28.486227 ignition[788]: parsed url from cmdline: "" Mar 17 17:40:28.486231 ignition[788]: no config URL provided Mar 17 17:40:28.486237 ignition[788]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 17:40:28.486248 ignition[788]: no config at "/usr/lib/ignition/user.ign" Mar 17 17:40:28.486347 ignition[788]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Mar 17 17:40:28.487147 ignition[788]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Mar 17 17:40:28.516777 systemd-networkd[777]: eth0: DHCPv4 address 128.140.94.11/32, gateway 172.31.1.1 acquired from 172.31.1.1 Mar 17 17:40:28.687531 ignition[788]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Mar 17 17:40:28.697826 ignition[788]: GET result: OK Mar 17 17:40:28.698001 ignition[788]: parsing config with SHA512: 43e3b4ebc21c11b3910e8d5b4180a8c8a6f9d2c84dbe8e86c5016e0224fa49ff0d41933a0ad7ed40e34b28e0f8985eda48a016e9cf0e7fc43fc6fc602c99e108 Mar 17 17:40:28.706446 unknown[788]: fetched base config from "system" Mar 17 17:40:28.706458 unknown[788]: fetched base config from "system" Mar 17 17:40:28.706944 ignition[788]: fetch: fetch complete Mar 17 17:40:28.706465 unknown[788]: fetched user config from "hetzner" Mar 17 17:40:28.706951 ignition[788]: fetch: fetch passed Mar 17 17:40:28.710003 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 17 17:40:28.707002 ignition[788]: Ignition finished successfully Mar 17 17:40:28.721808 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
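The fetch stage here shows the expected first-boot race: attempt #1 runs before DHCP has configured an interface and fails with "network is unreachable"; once eth0 acquires 128.140.94.11 the retry succeeds, and Ignition logs a SHA512 of the config before acting on it. A minimal sketch of that fetch-retry-digest pattern (the URL and failure mode are from the log; the attempt count and backoff schedule are illustrative assumptions, not Ignition's real timing):

import hashlib
import time
import urllib.request

URL = "http://169.254.169.254/hetzner/v1/userdata"

def fetch_userdata(attempts: int = 5, delay: float = 1.0) -> bytes:
    """Poll the metadata endpoint until the network is up."""
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(URL, timeout=5) as resp:
                return resp.read()  # "GET result: OK"
        except OSError as err:      # e.g. "network is unreachable" on attempt #1
            print(f"GET attempt #{attempt} failed: {err}")
            time.sleep(delay)
            delay *= 2              # illustrative backoff, not Ignition's
    raise RuntimeError("metadata service never became reachable")

if __name__ == "__main__":
    config = fetch_userdata()
    print("parsing config with SHA512:", hashlib.sha512(config).hexdigest())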
Mar 17 17:40:28.739327 ignition[796]: Ignition 2.20.0 Mar 17 17:40:28.740032 ignition[796]: Stage: kargs Mar 17 17:40:28.740227 ignition[796]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:40:28.740239 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 17:40:28.741169 ignition[796]: kargs: kargs passed Mar 17 17:40:28.743092 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 17 17:40:28.741222 ignition[796]: Ignition finished successfully Mar 17 17:40:28.748856 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 17 17:40:28.761646 ignition[802]: Ignition 2.20.0 Mar 17 17:40:28.761660 ignition[802]: Stage: disks Mar 17 17:40:28.761851 ignition[802]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:40:28.761860 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 17:40:28.764118 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 17 17:40:28.762926 ignition[802]: disks: disks passed Mar 17 17:40:28.762984 ignition[802]: Ignition finished successfully Mar 17 17:40:28.767145 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 17 17:40:28.767861 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 17 17:40:28.769131 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:40:28.770257 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:40:28.771207 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:40:28.778881 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 17 17:40:28.797982 systemd-fsck[810]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Mar 17 17:40:28.802992 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 17 17:40:29.243941 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 17 17:40:29.303656 kernel: EXT4-fs (sda9): mounted filesystem 3914ef65-c5cd-468c-8ee7-964383d8e9e2 r/w with ordered data mode. Quota mode: none. Mar 17 17:40:29.305170 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 17 17:40:29.307428 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 17 17:40:29.324828 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 17:40:29.329782 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 17 17:40:29.334985 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 17 17:40:29.339410 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 17:40:29.341871 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (818) Mar 17 17:40:29.341048 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:40:29.345302 kernel: BTRFS info (device sda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45 Mar 17 17:40:29.345340 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 17:40:29.345353 kernel: BTRFS info (device sda6): using free space tree Mar 17 17:40:29.345879 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
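The fsck summary above is a used/total report: 14 of 1,628,000 inodes and 120,691 of 1,617,920 blocks in use, i.e. the ext4 root is still essentially empty at first boot. The same numbers, as a minimal sketch:

# Minimal sketch of "ROOT: clean, 14/1628000 files, 120691/1617920 blocks".
inodes_used, inodes_total = 14, 1_628_000
blocks_used, blocks_total = 120_691, 1_617_920
print(f"inodes in use: {inodes_used / inodes_total:.4%}")  # 0.0009%
print(f"blocks in use: {blocks_used / blocks_total:.1%}")  # 7.5%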
Mar 17 17:40:29.350917 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 17 17:40:29.350973 kernel: BTRFS info (device sda6): auto enabling async discard Mar 17 17:40:29.354002 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 17 17:40:29.361577 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 17 17:40:29.409109 coreos-metadata[820]: Mar 17 17:40:29.408 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Mar 17 17:40:29.411842 coreos-metadata[820]: Mar 17 17:40:29.411 INFO Fetch successful Mar 17 17:40:29.412563 coreos-metadata[820]: Mar 17 17:40:29.412 INFO wrote hostname ci-4230-1-0-9-a82243c43d to /sysroot/etc/hostname Mar 17 17:40:29.415201 initrd-setup-root[845]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 17:40:29.417619 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 17 17:40:29.422075 initrd-setup-root[853]: cut: /sysroot/etc/group: No such file or directory Mar 17 17:40:29.428008 initrd-setup-root[860]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 17:40:29.434515 initrd-setup-root[867]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 17:40:29.554270 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 17 17:40:29.560806 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 17 17:40:29.565798 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 17 17:40:29.570716 kernel: BTRFS info (device sda6): last unmount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45 Mar 17 17:40:29.596452 ignition[935]: INFO : Ignition 2.20.0 Mar 17 17:40:29.596452 ignition[935]: INFO : Stage: mount Mar 17 17:40:29.597552 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:40:29.597552 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 17:40:29.601203 ignition[935]: INFO : mount: mount passed Mar 17 17:40:29.601203 ignition[935]: INFO : Ignition finished successfully Mar 17 17:40:29.599534 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 17 17:40:29.602684 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 17 17:40:29.610078 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 17 17:40:29.808913 systemd-networkd[777]: eth0: Gained IPv6LL Mar 17 17:40:30.237938 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 17 17:40:30.244973 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 17:40:30.255708 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (947) Mar 17 17:40:30.259219 kernel: BTRFS info (device sda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45 Mar 17 17:40:30.259283 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 17:40:30.259309 kernel: BTRFS info (device sda6): using free space tree Mar 17 17:40:30.263655 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 17 17:40:30.263721 kernel: BTRFS info (device sda6): auto enabling async discard Mar 17 17:40:30.266243 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 17 17:40:30.289797 ignition[964]: INFO : Ignition 2.20.0 Mar 17 17:40:30.290811 ignition[964]: INFO : Stage: files Mar 17 17:40:30.291208 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:40:30.291208 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 17:40:30.292421 ignition[964]: DEBUG : files: compiled without relabeling support, skipping Mar 17 17:40:30.293713 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 17:40:30.293713 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 17:40:30.296892 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 17:40:30.298997 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 17:40:30.298997 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 17:40:30.297344 unknown[964]: wrote ssh authorized keys file for user: core Mar 17 17:40:30.302034 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Mar 17 17:40:30.302034 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Mar 17 17:40:30.321143 systemd-networkd[777]: eth1: Gained IPv6LL Mar 17 17:40:30.799448 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 17 17:40:31.313957 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Mar 17 17:40:31.316307 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 17:40:31.316307 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Mar 17 17:40:31.879995 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 17 17:40:31.974838 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 17:40:31.974838 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 17 17:40:31.977301 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 17:40:31.977301 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 17 17:40:31.977301 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 17 17:40:31.977301 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 17:40:31.977301 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 17:40:31.977301 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 17:40:31.977301 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Mar 17 17:40:31.977301 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:40:31.977301 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:40:31.977301 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 17 17:40:31.977301 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 17 17:40:31.977301 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 17 17:40:31.977301 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 Mar 17 17:40:32.487889 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 17 17:40:32.812477 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 17 17:40:32.812477 ignition[964]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Mar 17 17:40:32.814874 ignition[964]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 17:40:32.814874 ignition[964]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 17:40:32.814874 ignition[964]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 17 17:40:32.814874 ignition[964]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Mar 17 17:40:32.814874 ignition[964]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Mar 17 17:40:32.814874 ignition[964]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Mar 17 17:40:32.814874 ignition[964]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Mar 17 17:40:32.814874 ignition[964]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Mar 17 17:40:32.814874 ignition[964]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Mar 17 17:40:32.814874 ignition[964]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:40:32.814874 ignition[964]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:40:32.814874 ignition[964]: INFO : files: files passed Mar 17 17:40:32.814874 ignition[964]: INFO : Ignition finished successfully Mar 17 17:40:32.817504 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 17 17:40:32.826992 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Mar 17 17:40:32.828858 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 17 17:40:32.836925 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 17:40:32.837682 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 17 17:40:32.847902 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:40:32.847902 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:40:32.851092 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:40:32.854107 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:40:32.855442 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 17 17:40:32.861876 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 17 17:40:32.897116 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 17:40:32.897244 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 17 17:40:32.899753 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 17 17:40:32.902266 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 17 17:40:32.903359 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 17 17:40:32.907907 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 17 17:40:32.924139 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:40:32.928884 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 17 17:40:32.942599 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:40:32.943418 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:40:32.944899 systemd[1]: Stopped target timers.target - Timer Units. Mar 17 17:40:32.946274 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 17:40:32.946413 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:40:32.948388 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 17 17:40:32.949132 systemd[1]: Stopped target basic.target - Basic System. Mar 17 17:40:32.950304 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 17 17:40:32.951535 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:40:32.952818 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 17 17:40:32.954039 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 17 17:40:32.955219 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:40:32.957661 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 17 17:40:32.958303 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 17 17:40:32.959300 systemd[1]: Stopped target swap.target - Swaps. Mar 17 17:40:32.960361 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 17:40:32.960586 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:40:32.961926 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Mar 17 17:40:32.962587 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:40:32.963810 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 17 17:40:32.963893 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:40:32.965015 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 17:40:32.965146 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 17 17:40:32.966867 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 17:40:32.966999 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:40:32.968565 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 17:40:32.968708 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 17 17:40:32.969603 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 17 17:40:32.969731 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 17 17:40:32.981024 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 17 17:40:32.986912 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 17 17:40:32.987429 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 17:40:32.987602 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:40:32.991786 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 17:40:32.991937 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:40:33.000205 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 17:40:33.001379 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 17 17:40:33.003404 ignition[1017]: INFO : Ignition 2.20.0 Mar 17 17:40:33.003404 ignition[1017]: INFO : Stage: umount Mar 17 17:40:33.003404 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:40:33.003404 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 17:40:33.003404 ignition[1017]: INFO : umount: umount passed Mar 17 17:40:33.003404 ignition[1017]: INFO : Ignition finished successfully Mar 17 17:40:33.009217 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 17:40:33.009341 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 17 17:40:33.016902 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 17:40:33.017899 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 17:40:33.018059 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 17 17:40:33.019236 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 17:40:33.019290 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 17 17:40:33.022298 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 17 17:40:33.022350 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 17 17:40:33.024218 systemd[1]: Stopped target network.target - Network. Mar 17 17:40:33.025210 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 17:40:33.025283 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:40:33.026605 systemd[1]: Stopped target paths.target - Path Units. Mar 17 17:40:33.028234 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Mar 17 17:40:33.032982 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:40:33.035644 systemd[1]: Stopped target slices.target - Slice Units. Mar 17 17:40:33.037592 systemd[1]: Stopped target sockets.target - Socket Units. Mar 17 17:40:33.039102 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 17:40:33.039150 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:40:33.040624 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 17:40:33.040694 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:40:33.041590 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 17:40:33.041672 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 17 17:40:33.049087 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 17 17:40:33.049151 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 17 17:40:33.051959 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 17 17:40:33.053419 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 17 17:40:33.062551 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 17:40:33.062925 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 17 17:40:33.069492 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 17 17:40:33.069840 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 17:40:33.069959 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 17 17:40:33.074381 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 17 17:40:33.074739 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 17:40:33.075939 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 17 17:40:33.077697 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 17:40:33.077764 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:40:33.079016 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 17:40:33.079079 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 17 17:40:33.084872 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 17 17:40:33.085720 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 17:40:33.085818 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:40:33.088336 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:40:33.088397 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:40:33.091225 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 17:40:33.091282 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 17 17:40:33.093285 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 17 17:40:33.093352 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:40:33.095545 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:40:33.097353 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 17:40:33.097427 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. 
Mar 17 17:40:33.110157 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 17:40:33.110287 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 17 17:40:33.118551 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 17:40:33.119087 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:40:33.124179 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 17:40:33.124314 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 17 17:40:33.126553 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 17:40:33.126672 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:40:33.128088 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 17:40:33.128148 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:40:33.129842 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 17:40:33.129897 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 17 17:40:33.131635 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:40:33.131687 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:40:33.138842 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 17 17:40:33.139509 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 17:40:33.139575 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:40:33.142592 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 17 17:40:33.142658 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:40:33.143315 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 17:40:33.143359 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:40:33.144131 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:40:33.144173 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:40:33.148410 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 17 17:40:33.148479 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 17 17:40:33.150863 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 17:40:33.152841 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 17 17:40:33.153810 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 17 17:40:33.158850 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 17 17:40:33.167544 systemd[1]: Switching root. Mar 17 17:40:33.204524 systemd-journald[237]: Journal stopped Mar 17 17:40:34.218768 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
Mar 17 17:40:34.218843 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 17:40:34.218860 kernel: SELinux: policy capability open_perms=1 Mar 17 17:40:34.218869 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 17:40:34.218878 kernel: SELinux: policy capability always_check_network=0 Mar 17 17:40:34.218888 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 17:40:34.218901 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 17:40:34.218918 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 17:40:34.218927 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 17:40:34.218936 kernel: audit: type=1403 audit(1742233233.322:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 17:40:34.218946 systemd[1]: Successfully loaded SELinux policy in 35.235ms. Mar 17 17:40:34.218963 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.881ms. Mar 17 17:40:34.218974 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 17 17:40:34.218985 systemd[1]: Detected virtualization kvm. Mar 17 17:40:34.218995 systemd[1]: Detected architecture arm64. Mar 17 17:40:34.219004 systemd[1]: Detected first boot. Mar 17 17:40:34.219016 systemd[1]: Hostname set to <ci-4230-1-0-9-a82243c43d>. Mar 17 17:40:34.219026 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:40:34.219036 zram_generator::config[1061]: No configuration found. Mar 17 17:40:34.219049 kernel: NET: Registered PF_VSOCK protocol family Mar 17 17:40:34.219059 systemd[1]: Populated /etc with preset unit settings. Mar 17 17:40:34.219069 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 17 17:40:34.219079 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 17:40:34.219089 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 17 17:40:34.219101 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 17:40:34.219112 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 17 17:40:34.221267 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 17 17:40:34.221324 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 17 17:40:34.221336 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 17 17:40:34.221346 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 17 17:40:34.221357 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 17 17:40:34.221367 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 17 17:40:34.221385 systemd[1]: Created slice user.slice - User and Session Slice. Mar 17 17:40:34.221396 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:40:34.221409 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:40:34.221419 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 17 17:40:34.221429 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
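The long parenthesized string in the systemd version line above is its compile-time feature list: a leading + means the feature is built in, - means it was compiled out. A minimal sketch that splits the string from this boot:

# Minimal sketch splitting the feature string logged by systemd 256.8 above.
features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
            "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
            "+IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK "
            "+PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ "
            "+ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT "
            "+LIBARCHIVE").split()

built_in = [f[1:] for f in features if f.startswith("+")]
compiled_out = [f[1:] for f in features if f.startswith("-")]
print(f"{len(built_in)} built in; compiled out: {', '.join(compiled_out)}")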
Mar 17 17:40:34.221439 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 17 17:40:34.221466 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:40:34.221479 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 17 17:40:34.221492 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:40:34.221502 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 17 17:40:34.221512 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 17 17:40:34.221523 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 17 17:40:34.221533 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 17 17:40:34.221543 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:40:34.221553 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:40:34.221564 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:40:34.221575 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:40:34.221588 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 17 17:40:34.221600 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 17 17:40:34.221613 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 17 17:40:34.221623 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:40:34.222725 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:40:34.222749 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:40:34.222760 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 17 17:40:34.222771 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 17 17:40:34.222781 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 17 17:40:34.222791 systemd[1]: Mounting media.mount - External Media Directory... Mar 17 17:40:34.222801 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 17 17:40:34.222811 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 17 17:40:34.222821 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 17 17:40:34.222835 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 17:40:34.222846 systemd[1]: Reached target machines.target - Containers. Mar 17 17:40:34.222856 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 17 17:40:34.222867 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:40:34.222877 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:40:34.222887 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 17 17:40:34.222897 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:40:34.222908 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:40:34.222918 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Mar 17 17:40:34.222930 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 17 17:40:34.222941 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:40:34.222951 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 17:40:34.222963 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 17:40:34.222973 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 17 17:40:34.222983 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 17:40:34.222993 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 17:40:34.223004 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:40:34.223016 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:40:34.223026 kernel: fuse: init (API version 7.39) Mar 17 17:40:34.223038 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:40:34.223049 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 17 17:40:34.223058 kernel: ACPI: bus type drm_connector registered Mar 17 17:40:34.223070 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 17 17:40:34.223080 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 17 17:40:34.223091 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:40:34.223101 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 17:40:34.223112 systemd[1]: Stopped verity-setup.service. Mar 17 17:40:34.223127 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 17 17:40:34.223137 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 17 17:40:34.223149 systemd[1]: Mounted media.mount - External Media Directory. Mar 17 17:40:34.223160 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 17 17:40:34.223170 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 17 17:40:34.223181 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 17 17:40:34.223191 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:40:34.223202 kernel: loop: module loaded Mar 17 17:40:34.223211 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 17:40:34.223223 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 17 17:40:34.223234 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:40:34.223244 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:40:34.223255 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:40:34.223265 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:40:34.223276 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:40:34.223288 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:40:34.223299 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 17:40:34.223311 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Mar 17 17:40:34.223321 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:40:34.223332 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:40:34.223342 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:40:34.223353 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 17 17:40:34.223364 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 17 17:40:34.223374 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 17 17:40:34.223385 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 17 17:40:34.223397 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 17 17:40:34.223408 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 17 17:40:34.223419 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 17:40:34.223429 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:40:34.223441 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 17 17:40:34.223500 systemd-journald[1129]: Collecting audit messages is disabled. Mar 17 17:40:34.223524 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 17 17:40:34.223536 systemd-journald[1129]: Journal started Mar 17 17:40:34.223561 systemd-journald[1129]: Runtime Journal (/run/log/journal/80468e8807e246619d43671cd571b1d0) is 8M, max 76.6M, 68.6M free. Mar 17 17:40:33.928153 systemd[1]: Queued start job for default target multi-user.target. Mar 17 17:40:33.938238 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Mar 17 17:40:33.939146 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 17:40:34.232672 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 17 17:40:34.235694 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:40:34.247409 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 17 17:40:34.247491 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:40:34.254655 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 17 17:40:34.257430 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:40:34.263591 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:40:34.267576 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 17 17:40:34.279171 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:40:34.279249 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:40:34.281705 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 17 17:40:34.283363 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 17 17:40:34.285683 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Mar 17 17:40:34.287231 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 17 17:40:34.293088 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 17 17:40:34.315299 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:40:34.325717 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 17 17:40:34.331964 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 17 17:40:34.335724 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 17 17:40:34.339326 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 17 17:40:34.341795 kernel: loop0: detected capacity change from 0 to 201592 Mar 17 17:40:34.357872 systemd-journald[1129]: Time spent on flushing to /var/log/journal/80468e8807e246619d43671cd571b1d0 is 31.502ms for 1151 entries. Mar 17 17:40:34.357872 systemd-journald[1129]: System Journal (/var/log/journal/80468e8807e246619d43671cd571b1d0) is 8M, max 584.8M, 576.8M free. Mar 17 17:40:34.403602 systemd-journald[1129]: Received client request to flush runtime journal. Mar 17 17:40:34.404039 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 17:40:34.361265 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:40:34.386935 systemd-tmpfiles[1163]: ACLs are not supported, ignoring. Mar 17 17:40:34.386946 systemd-tmpfiles[1163]: ACLs are not supported, ignoring. Mar 17 17:40:34.389672 udevadm[1191]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 17 17:40:34.403167 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:40:34.408715 kernel: loop1: detected capacity change from 0 to 113512 Mar 17 17:40:34.419782 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 17 17:40:34.424705 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 17 17:40:34.427726 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 17 17:40:34.454736 kernel: loop2: detected capacity change from 0 to 8 Mar 17 17:40:34.480659 kernel: loop3: detected capacity change from 0 to 123192 Mar 17 17:40:34.493889 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 17 17:40:34.505956 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:40:34.520705 kernel: loop4: detected capacity change from 0 to 201592 Mar 17 17:40:34.537953 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. Mar 17 17:40:34.537973 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. Mar 17 17:40:34.548773 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:40:34.556663 kernel: loop5: detected capacity change from 0 to 113512 Mar 17 17:40:34.586657 kernel: loop6: detected capacity change from 0 to 8 Mar 17 17:40:34.588869 kernel: loop7: detected capacity change from 0 to 123192 Mar 17 17:40:34.599841 (sd-merge)[1210]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Mar 17 17:40:34.600292 (sd-merge)[1210]: Merged extensions into '/usr'. 
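Two details in this stretch are worth decoding. The journald flush line works out to roughly 27 µs per entry, and the eight loopN capacity lines report only four distinct sizes, each seen twice, which lines up with the four extension images sd-merge names just above (plausibly one pass to scan each image and one to merge it; the /etc/extensions/kubernetes.raw link written by the Ignition files stage earlier is how the kubernetes image got here). As a minimal sketch:

# Minimal sketch of the journald flush numbers above.
flush_ms, entries = 31.502, 1151
print(f"{flush_ms / entries * 1000:.1f} µs per journal entry")  # ~27.4

# The loopN sizes pair up: loop0/4, loop1/5, loop2/6 and loop3/7 report
# 201592, 113512, 8 and 123192 -- four images, each seen twice.
sizes = [201592, 113512, 8, 123192, 201592, 113512, 8, 123192]
print(f"{len(set(sizes))} distinct extension images")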
Mar 17 17:40:34.605108 systemd[1]: Reload requested from client PID 1162 ('systemd-sysext') (unit systemd-sysext.service)... Mar 17 17:40:34.605126 systemd[1]: Reloading... Mar 17 17:40:34.668666 zram_generator::config[1235]: No configuration found. Mar 17 17:40:34.840427 ldconfig[1158]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 17:40:34.853342 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:40:34.923000 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 17:40:34.923759 systemd[1]: Reloading finished in 318 ms. Mar 17 17:40:34.937231 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 17 17:40:34.940744 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 17 17:40:34.956011 systemd[1]: Starting ensure-sysext.service... Mar 17 17:40:34.960866 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:40:34.986571 systemd[1]: Reload requested from client PID 1276 ('systemctl') (unit ensure-sysext.service)... Mar 17 17:40:34.986592 systemd[1]: Reloading... Mar 17 17:40:35.005380 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 17:40:35.005605 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 17 17:40:35.006297 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 17:40:35.006521 systemd-tmpfiles[1277]: ACLs are not supported, ignoring. Mar 17 17:40:35.006570 systemd-tmpfiles[1277]: ACLs are not supported, ignoring. Mar 17 17:40:35.010226 systemd-tmpfiles[1277]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:40:35.010245 systemd-tmpfiles[1277]: Skipping /boot Mar 17 17:40:35.034311 systemd-tmpfiles[1277]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:40:35.034538 systemd-tmpfiles[1277]: Skipping /boot Mar 17 17:40:35.103683 zram_generator::config[1309]: No configuration found. Mar 17 17:40:35.203374 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:40:35.273537 systemd[1]: Reloading finished in 285 ms. Mar 17 17:40:35.286468 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 17 17:40:35.298269 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:40:35.311001 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:40:35.317299 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 17 17:40:35.325979 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 17 17:40:35.330939 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:40:35.339053 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:40:35.343891 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Mar 17 17:40:35.348389 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:40:35.352905 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:40:35.357023 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:40:35.360924 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:40:35.362104 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:40:35.362241 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:40:35.365800 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 17 17:40:35.370475 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:40:35.370666 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:40:35.370750 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:40:35.375053 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:40:35.385957 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:40:35.386695 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:40:35.386812 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:40:35.401219 systemd[1]: Finished ensure-sysext.service. Mar 17 17:40:35.419108 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 17 17:40:35.426226 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 17 17:40:35.428140 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:40:35.429909 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:40:35.438729 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 17 17:40:35.440998 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:40:35.441246 systemd-udevd[1355]: Using default interface naming scheme 'v255'. Mar 17 17:40:35.443134 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:40:35.445870 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:40:35.446084 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:40:35.447679 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:40:35.447876 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Mar 17 17:40:35.452690 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:40:35.452769 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:40:35.460773 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 17 17:40:35.466341 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 17 17:40:35.472034 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 17:40:35.478737 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 17 17:40:35.487190 augenrules[1386]: No rules Mar 17 17:40:35.489260 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:40:35.489884 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:40:35.494613 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:40:35.495778 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 17 17:40:35.514863 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:40:35.623950 systemd-networkd[1399]: lo: Link UP Mar 17 17:40:35.623960 systemd-networkd[1399]: lo: Gained carrier Mar 17 17:40:35.625860 systemd-networkd[1399]: Enumeration completed Mar 17 17:40:35.625969 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:40:35.629933 systemd-resolved[1354]: Positive Trust Anchors: Mar 17 17:40:35.629963 systemd-resolved[1354]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:40:35.629995 systemd-resolved[1354]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:40:35.639614 systemd-resolved[1354]: Using system hostname 'ci-4230-1-0-9-a82243c43d'. Mar 17 17:40:35.647018 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 17 17:40:35.650656 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 17 17:40:35.651666 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 17 17:40:35.653831 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:40:35.655315 systemd[1]: Reached target network.target - Network. Mar 17 17:40:35.656057 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:40:35.657356 systemd[1]: Reached target time-set.target - System Time Set. Mar 17 17:40:35.677386 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 17 17:40:35.678609 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
Mar 17 17:40:35.744296 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:40:35.744621 systemd-networkd[1399]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:40:35.745741 systemd-networkd[1399]: eth0: Link UP Mar 17 17:40:35.745930 systemd-networkd[1399]: eth0: Gained carrier Mar 17 17:40:35.746001 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:40:35.774660 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1398) Mar 17 17:40:35.776085 systemd-networkd[1399]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:40:35.776094 systemd-networkd[1399]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:40:35.776770 systemd-networkd[1399]: eth1: Link UP Mar 17 17:40:35.776774 systemd-networkd[1399]: eth1: Gained carrier Mar 17 17:40:35.776793 systemd-networkd[1399]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:40:35.809789 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 17:40:35.809943 systemd-networkd[1399]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 17:40:35.816749 systemd-networkd[1399]: eth0: DHCPv4 address 128.140.94.11/32, gateway 172.31.1.1 acquired from 172.31.1.1 Mar 17 17:40:35.817282 systemd-timesyncd[1368]: Network configuration changed, trying to establish connection. Mar 17 17:40:35.854183 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Mar 17 17:40:35.856939 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Mar 17 17:40:35.857074 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:40:35.861844 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:40:35.864934 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:40:35.869292 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:40:35.870140 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:40:35.873863 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 17 17:40:35.874513 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:40:35.874548 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 17:40:35.877065 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:40:35.877265 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:40:35.887183 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Mar 17 17:40:35.889507 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:40:35.890573 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:40:35.904257 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 17 17:40:35.910070 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:40:35.912671 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:40:35.915426 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:40:35.934668 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Mar 17 17:40:35.934753 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Mar 17 17:40:35.934769 kernel: [drm] features: -context_init Mar 17 17:40:35.935658 kernel: [drm] number of scanouts: 1 Mar 17 17:40:35.935726 kernel: [drm] number of cap sets: 0 Mar 17 17:40:35.939734 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Mar 17 17:40:35.943817 kernel: Console: switching to colour frame buffer device 160x50 Mar 17 17:40:35.943777 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:40:35.947893 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Mar 17 17:40:35.958754 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:40:35.959696 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:40:35.975003 systemd-timesyncd[1368]: Contacted time server 158.220.97.17:123 (1.flatcar.pool.ntp.org). Mar 17 17:40:35.975140 systemd-timesyncd[1368]: Initial clock synchronization to Mon 2025-03-17 17:40:36.197624 UTC. Mar 17 17:40:35.977003 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:40:36.039833 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:40:36.089911 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 17 17:40:36.099090 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 17 17:40:36.109314 lvm[1466]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:40:36.139131 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 17 17:40:36.140357 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:40:36.141157 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:40:36.142016 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 17 17:40:36.143057 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 17 17:40:36.144179 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 17 17:40:36.145314 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 17 17:40:36.146908 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 17 17:40:36.147700 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Mar 17 17:40:36.147746 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:40:36.148342 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:40:36.150550 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 17:40:36.153538 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 17 17:40:36.157237 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 17 17:40:36.158454 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 17 17:40:36.159335 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 17 17:40:36.162873 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 17:40:36.164311 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 17 17:40:36.166759 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 17 17:40:36.168217 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 17 17:40:36.169230 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:40:36.169965 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:40:36.170621 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:40:36.170682 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:40:36.178640 systemd[1]: Starting containerd.service - containerd container runtime... Mar 17 17:40:36.182912 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 17 17:40:36.183924 lvm[1470]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:40:36.190893 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 17 17:40:36.197872 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 17 17:40:36.201015 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 17 17:40:36.203245 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 17 17:40:36.213013 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 17 17:40:36.223831 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 17 17:40:36.229897 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Mar 17 17:40:36.240932 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 17 17:40:36.246299 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 17 17:40:36.254417 dbus-daemon[1473]: [system] SELinux support is enabled Mar 17 17:40:36.260102 jq[1474]: false Mar 17 17:40:36.261940 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 17 17:40:36.265106 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 17:40:36.266823 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 17:40:36.269997 systemd[1]: Starting update-engine.service - Update Engine... Mar 17 17:40:36.274721 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Mar 17 17:40:36.276525 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 17 17:40:36.284321 extend-filesystems[1477]: Found loop4 Mar 17 17:40:36.284321 extend-filesystems[1477]: Found loop5 Mar 17 17:40:36.284321 extend-filesystems[1477]: Found loop6 Mar 17 17:40:36.284321 extend-filesystems[1477]: Found loop7 Mar 17 17:40:36.284321 extend-filesystems[1477]: Found sda Mar 17 17:40:36.284321 extend-filesystems[1477]: Found sda1 Mar 17 17:40:36.284321 extend-filesystems[1477]: Found sda2 Mar 17 17:40:36.284321 extend-filesystems[1477]: Found sda3 Mar 17 17:40:36.284321 extend-filesystems[1477]: Found usr Mar 17 17:40:36.284321 extend-filesystems[1477]: Found sda4 Mar 17 17:40:36.284321 extend-filesystems[1477]: Found sda6 Mar 17 17:40:36.284321 extend-filesystems[1477]: Found sda7 Mar 17 17:40:36.284321 extend-filesystems[1477]: Found sda9 Mar 17 17:40:36.284321 extend-filesystems[1477]: Checking size of /dev/sda9 Mar 17 17:40:36.284706 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 17:40:36.295338 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 17:40:36.296773 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 17:40:36.297186 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 17:40:36.297428 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 17 17:40:36.312086 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 17:40:36.312749 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 17 17:40:36.335939 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 17:40:36.335995 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 17 17:40:36.337878 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 17:40:36.337908 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Mar 17 17:40:36.345674 jq[1492]: true Mar 17 17:40:36.350163 extend-filesystems[1477]: Resized partition /dev/sda9 Mar 17 17:40:36.356170 coreos-metadata[1472]: Mar 17 17:40:36.356 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Mar 17 17:40:36.356170 coreos-metadata[1472]: Mar 17 17:40:36.356 INFO Fetch successful Mar 17 17:40:36.356170 coreos-metadata[1472]: Mar 17 17:40:36.356 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Mar 17 17:40:36.356170 coreos-metadata[1472]: Mar 17 17:40:36.356 INFO Fetch successful Mar 17 17:40:36.360248 extend-filesystems[1514]: resize2fs 1.47.1 (20-May-2024) Mar 17 17:40:36.358073 (ntainerd)[1502]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 17:40:36.380101 tar[1501]: linux-arm64/LICENSE Mar 17 17:40:36.380101 tar[1501]: linux-arm64/helm Mar 17 17:40:36.381670 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Mar 17 17:40:36.413434 update_engine[1490]: I20250317 17:40:36.413280 1490 main.cc:92] Flatcar Update Engine starting Mar 17 17:40:36.417146 update_engine[1490]: I20250317 17:40:36.417083 1490 update_check_scheduler.cc:74] Next update check in 5m29s Mar 17 17:40:36.420979 systemd[1]: Started update-engine.service - Update Engine. Mar 17 17:40:36.427509 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 17 17:40:36.442864 jq[1515]: true Mar 17 17:40:36.467060 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 17 17:40:36.472510 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 17:40:36.499742 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1423) Mar 17 17:40:36.535518 bash[1543]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:40:36.544112 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 17:40:36.557902 systemd[1]: Starting sshkeys.service... Mar 17 17:40:36.598427 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Mar 17 17:40:36.603649 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 17 17:40:36.617589 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 17 17:40:36.622889 extend-filesystems[1514]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Mar 17 17:40:36.622889 extend-filesystems[1514]: old_desc_blocks = 1, new_desc_blocks = 5 Mar 17 17:40:36.622889 extend-filesystems[1514]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Mar 17 17:40:36.640879 extend-filesystems[1477]: Resized filesystem in /dev/sda9 Mar 17 17:40:36.640879 extend-filesystems[1477]: Found sr0 Mar 17 17:40:36.625141 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 17:40:36.625332 systemd-logind[1486]: New seat seat0. Mar 17 17:40:36.625456 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 17 17:40:36.638675 systemd-logind[1486]: Watching system buttons on /dev/input/event0 (Power Button) Mar 17 17:40:36.638697 systemd-logind[1486]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Mar 17 17:40:36.639192 systemd[1]: Started systemd-logind.service - User Login Management. 
Mar 17 17:40:36.695784 coreos-metadata[1546]: Mar 17 17:40:36.693 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Mar 17 17:40:36.698405 coreos-metadata[1546]: Mar 17 17:40:36.698 INFO Fetch successful Mar 17 17:40:36.708305 unknown[1546]: wrote ssh authorized keys file for user: core Mar 17 17:40:36.759118 update-ssh-keys[1558]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:40:36.754713 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 17 17:40:36.760941 systemd[1]: Finished sshkeys.service. Mar 17 17:40:36.827794 containerd[1502]: time="2025-03-17T17:40:36.827682553Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 17:40:36.855817 locksmithd[1521]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 17:40:36.897001 containerd[1502]: time="2025-03-17T17:40:36.896949814Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:40:36.908990 containerd[1502]: time="2025-03-17T17:40:36.908928055Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:40:36.911672 containerd[1502]: time="2025-03-17T17:40:36.909692817Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 17:40:36.911672 containerd[1502]: time="2025-03-17T17:40:36.909728874Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 17:40:36.911672 containerd[1502]: time="2025-03-17T17:40:36.909917629Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 17 17:40:36.911672 containerd[1502]: time="2025-03-17T17:40:36.909939337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 17:40:36.911672 containerd[1502]: time="2025-03-17T17:40:36.910014165Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:40:36.911672 containerd[1502]: time="2025-03-17T17:40:36.910026869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:40:36.911672 containerd[1502]: time="2025-03-17T17:40:36.910332716Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:40:36.911672 containerd[1502]: time="2025-03-17T17:40:36.910356850Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 17:40:36.911672 containerd[1502]: time="2025-03-17T17:40:36.910373666Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:40:36.911672 containerd[1502]: time="2025-03-17T17:40:36.910383327Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Mar 17 17:40:36.911672 containerd[1502]: time="2025-03-17T17:40:36.910479246Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:40:36.912822 containerd[1502]: time="2025-03-17T17:40:36.912786853Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:40:36.913139 containerd[1502]: time="2025-03-17T17:40:36.913111202Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:40:36.913708 containerd[1502]: time="2025-03-17T17:40:36.913686962Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 17:40:36.913901 containerd[1502]: time="2025-03-17T17:40:36.913880281Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 17:40:36.914038 containerd[1502]: time="2025-03-17T17:40:36.914021754Z" level=info msg="metadata content store policy set" policy=shared Mar 17 17:40:36.924928 containerd[1502]: time="2025-03-17T17:40:36.924881489Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 17:40:36.925196 containerd[1502]: time="2025-03-17T17:40:36.925130146Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 17:40:36.925380 containerd[1502]: time="2025-03-17T17:40:36.925364414Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 17 17:40:36.925720 containerd[1502]: time="2025-03-17T17:40:36.925694066Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 17 17:40:36.926724 containerd[1502]: time="2025-03-17T17:40:36.926697495Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 17:40:36.927126 containerd[1502]: time="2025-03-17T17:40:36.927103085Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 17:40:36.927512 containerd[1502]: time="2025-03-17T17:40:36.927488734Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 17:40:36.929226 containerd[1502]: time="2025-03-17T17:40:36.928870536Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 17 17:40:36.929226 containerd[1502]: time="2025-03-17T17:40:36.928897301Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 17 17:40:36.929226 containerd[1502]: time="2025-03-17T17:40:36.928924683Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 17 17:40:36.929226 containerd[1502]: time="2025-03-17T17:40:36.928940471Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 17:40:36.929226 containerd[1502]: time="2025-03-17T17:40:36.928953586Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Mar 17 17:40:36.929226 containerd[1502]: time="2025-03-17T17:40:36.928967852Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 17:40:36.929226 containerd[1502]: time="2025-03-17T17:40:36.928983065Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 17:40:36.929226 containerd[1502]: time="2025-03-17T17:40:36.929008514Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 17:40:36.929226 containerd[1502]: time="2025-03-17T17:40:36.929023192Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 17:40:36.929226 containerd[1502]: time="2025-03-17T17:40:36.929036225Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 17:40:36.929226 containerd[1502]: time="2025-03-17T17:40:36.929076311Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 17:40:36.929226 containerd[1502]: time="2025-03-17T17:40:36.929103488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 17:40:36.929226 containerd[1502]: time="2025-03-17T17:40:36.929119851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 17:40:36.929226 containerd[1502]: time="2025-03-17T17:40:36.929132226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 17:40:36.929520 containerd[1502]: time="2025-03-17T17:40:36.929153770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 17:40:36.929520 containerd[1502]: time="2025-03-17T17:40:36.929167420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 17:40:36.929520 containerd[1502]: time="2025-03-17T17:40:36.929180576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 17:40:36.929520 containerd[1502]: time="2025-03-17T17:40:36.929193486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 17:40:36.929520 containerd[1502]: time="2025-03-17T17:40:36.929207753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 17:40:36.929720 containerd[1502]: time="2025-03-17T17:40:36.929677645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 17 17:40:36.929798 containerd[1502]: time="2025-03-17T17:40:36.929768055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 17 17:40:36.929849 containerd[1502]: time="2025-03-17T17:40:36.929835399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 17:40:36.930675 containerd[1502]: time="2025-03-17T17:40:36.929935800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 17:40:36.930675 containerd[1502]: time="2025-03-17T17:40:36.929965978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Mar 17 17:40:36.930790 containerd[1502]: time="2025-03-17T17:40:36.929982670Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 17:40:36.930937 containerd[1502]: time="2025-03-17T17:40:36.930917808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 17 17:40:36.931087 containerd[1502]: time="2025-03-17T17:40:36.931068162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 17:40:36.931164 containerd[1502]: time="2025-03-17T17:40:36.931152117Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 17:40:36.933678 containerd[1502]: time="2025-03-17T17:40:36.931574317Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 17:40:36.933678 containerd[1502]: time="2025-03-17T17:40:36.931608729Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 17:40:36.933678 containerd[1502]: time="2025-03-17T17:40:36.931621228Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 17:40:36.933678 containerd[1502]: time="2025-03-17T17:40:36.931644129Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 17:40:36.933678 containerd[1502]: time="2025-03-17T17:40:36.931671099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 17:40:36.933678 containerd[1502]: time="2025-03-17T17:40:36.931691944Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 17 17:40:36.933678 containerd[1502]: time="2025-03-17T17:40:36.931704936Z" level=info msg="NRI interface is disabled by configuration." Mar 17 17:40:36.933678 containerd[1502]: time="2025-03-17T17:40:36.931737170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 17 17:40:36.933900 containerd[1502]: time="2025-03-17T17:40:36.932196002Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:40:36.933900 containerd[1502]: time="2025-03-17T17:40:36.932257262Z" level=info msg="Connect containerd service" Mar 17 17:40:36.933900 containerd[1502]: time="2025-03-17T17:40:36.932302940Z" level=info msg="using legacy CRI server" Mar 17 17:40:36.933900 containerd[1502]: time="2025-03-17T17:40:36.932311820Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:40:36.933900 containerd[1502]: time="2025-03-17T17:40:36.932589093Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:40:36.937819 containerd[1502]: time="2025-03-17T17:40:36.937689329Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:40:36.938236 
containerd[1502]: time="2025-03-17T17:40:36.938155521Z" level=info msg="Start subscribing containerd event" Mar 17 17:40:36.938236 containerd[1502]: time="2025-03-17T17:40:36.938232363Z" level=info msg="Start recovering state" Mar 17 17:40:36.938485 containerd[1502]: time="2025-03-17T17:40:36.938320224Z" level=info msg="Start event monitor" Mar 17 17:40:36.938485 containerd[1502]: time="2025-03-17T17:40:36.938341356Z" level=info msg="Start snapshots syncer" Mar 17 17:40:36.938485 containerd[1502]: time="2025-03-17T17:40:36.938353814Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:40:36.938485 containerd[1502]: time="2025-03-17T17:40:36.938363311Z" level=info msg="Start streaming server" Mar 17 17:40:36.941812 containerd[1502]: time="2025-03-17T17:40:36.941681418Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:40:36.941812 containerd[1502]: time="2025-03-17T17:40:36.941770348Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:40:36.942017 containerd[1502]: time="2025-03-17T17:40:36.942001902Z" level=info msg="containerd successfully booted in 0.117952s" Mar 17 17:40:36.942139 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:40:37.104804 systemd-networkd[1399]: eth1: Gained IPv6LL Mar 17 17:40:37.115472 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 17:40:37.119961 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 17:40:37.131928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:40:37.137476 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 17:40:37.182728 tar[1501]: linux-arm64/README.md Mar 17 17:40:37.200698 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 17 17:40:37.204971 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 17 17:40:37.552843 systemd-networkd[1399]: eth0: Gained IPv6LL Mar 17 17:40:37.566925 sshd_keygen[1498]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 17:40:37.591752 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 17:40:37.602256 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 17:40:37.608467 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 17:40:37.609569 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:40:37.618055 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 17:40:37.629203 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 17:40:37.641038 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 17:40:37.644997 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 17 17:40:37.646839 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 17:40:37.993844 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:40:37.996968 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:40:37.998035 (kubelet)[1605]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:40:38.000927 systemd[1]: Startup finished in 795ms (kernel) + 7.642s (initrd) + 4.714s (userspace) = 13.151s. 
Mar 17 17:40:38.519443 kubelet[1605]: E0317 17:40:38.519340 1605 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:40:38.521193 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:40:38.521348 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:40:38.521749 systemd[1]: kubelet.service: Consumed 844ms CPU time, 247M memory peak. Mar 17 17:40:48.772757 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 17:40:48.784108 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:40:48.899565 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:40:48.904949 (kubelet)[1624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:40:48.963054 kubelet[1624]: E0317 17:40:48.962978 1624 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:40:48.967804 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:40:48.968137 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:40:48.969797 systemd[1]: kubelet.service: Consumed 162ms CPU time, 102.4M memory peak. Mar 17 17:40:59.218712 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 17:40:59.225932 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:40:59.351619 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:40:59.356718 (kubelet)[1639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:40:59.403519 kubelet[1639]: E0317 17:40:59.403417 1639 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:40:59.408510 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:40:59.408832 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:40:59.409605 systemd[1]: kubelet.service: Consumed 152ms CPU time, 102.3M memory peak. Mar 17 17:41:09.659654 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 17 17:41:09.671978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:41:09.806081 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:41:09.806548 (kubelet)[1655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:41:09.859772 kubelet[1655]: E0317 17:41:09.859670 1655 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:41:09.862025 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:41:09.862247 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:41:09.862710 systemd[1]: kubelet.service: Consumed 153ms CPU time, 99.2M memory peak. Mar 17 17:41:20.113496 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 17 17:41:20.118981 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:41:20.251346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:41:20.257021 (kubelet)[1669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:41:20.309091 kubelet[1669]: E0317 17:41:20.309017 1669 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:41:20.312040 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:41:20.312439 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:41:20.312937 systemd[1]: kubelet.service: Consumed 160ms CPU time, 101.7M memory peak. Mar 17 17:41:21.895834 update_engine[1490]: I20250317 17:41:21.894802 1490 update_attempter.cc:509] Updating boot flags... Mar 17 17:41:21.954728 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1686) Mar 17 17:41:22.028695 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1682) Mar 17 17:41:30.414184 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 17 17:41:30.420877 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:41:30.533703 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:41:30.537971 (kubelet)[1703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:41:30.582498 kubelet[1703]: E0317 17:41:30.582420 1703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:41:30.585457 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:41:30.585654 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:41:30.586472 systemd[1]: kubelet.service: Consumed 150ms CPU time, 102.2M memory peak. 
Mar 17 17:41:40.664754 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Mar 17 17:41:40.682064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:41:40.803892 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:41:40.805775 (kubelet)[1717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:41:40.848136 kubelet[1717]: E0317 17:41:40.848090 1717 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:41:40.850105 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:41:40.850246 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:41:40.850528 systemd[1]: kubelet.service: Consumed 146ms CPU time, 101.9M memory peak. Mar 17 17:41:50.914510 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Mar 17 17:41:50.926138 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:41:51.057127 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:41:51.062308 (kubelet)[1732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:41:51.112860 kubelet[1732]: E0317 17:41:51.112734 1732 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:41:51.115622 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:41:51.116022 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:41:51.116360 systemd[1]: kubelet.service: Consumed 160ms CPU time, 102.1M memory peak. Mar 17 17:42:01.165248 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Mar 17 17:42:01.174215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:42:01.314028 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:42:01.314059 (kubelet)[1747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:42:01.371739 kubelet[1747]: E0317 17:42:01.371695 1747 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:42:01.374458 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:42:01.374702 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:42:01.375944 systemd[1]: kubelet.service: Consumed 163ms CPU time, 102.4M memory peak. Mar 17 17:42:11.414489 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. 
Mar 17 17:42:11.425349 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:42:11.548720 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:42:11.562651 (kubelet)[1763]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:42:11.611910 kubelet[1763]: E0317 17:42:11.611824 1763 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:42:11.615119 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:42:11.615422 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:42:11.616680 systemd[1]: kubelet.service: Consumed 164ms CPU time, 102.3M memory peak. Mar 17 17:42:21.664944 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Mar 17 17:42:21.675364 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:42:21.787014 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:42:21.792210 (kubelet)[1778]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:42:21.840596 kubelet[1778]: E0317 17:42:21.840467 1778 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:42:21.844831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:42:21.845121 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:42:21.845908 systemd[1]: kubelet.service: Consumed 158ms CPU time, 102.7M memory peak. Mar 17 17:42:30.440835 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:42:30.450019 systemd[1]: Started sshd@0-128.140.94.11:22-139.178.89.65:46934.service - OpenSSH per-connection server daemon (139.178.89.65:46934). Mar 17 17:42:31.459045 sshd[1786]: Accepted publickey for core from 139.178.89.65 port 46934 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8 Mar 17 17:42:31.462189 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:42:31.477901 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:42:31.490561 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:42:31.495578 systemd-logind[1486]: New session 1 of user core. Mar 17 17:42:31.507543 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:42:31.516709 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:42:31.534323 (systemd)[1790]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:42:31.537784 systemd-logind[1486]: New session c1 of user core. Mar 17 17:42:31.678155 systemd[1790]: Queued start job for default target default.target. Mar 17 17:42:31.689853 systemd[1790]: Created slice app.slice - User Application Slice. 
Mar 17 17:42:31.690172 systemd[1790]: Reached target paths.target - Paths. Mar 17 17:42:31.690279 systemd[1790]: Reached target timers.target - Timers. Mar 17 17:42:31.693013 systemd[1790]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:42:31.708115 systemd[1790]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:42:31.708243 systemd[1790]: Reached target sockets.target - Sockets. Mar 17 17:42:31.708289 systemd[1790]: Reached target basic.target - Basic System. Mar 17 17:42:31.708316 systemd[1790]: Reached target default.target - Main User Target. Mar 17 17:42:31.708342 systemd[1790]: Startup finished in 161ms. Mar 17 17:42:31.708848 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:42:31.716966 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:42:31.914038 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Mar 17 17:42:31.927080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:42:32.080534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:42:32.086573 (kubelet)[1807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:42:32.136648 kubelet[1807]: E0317 17:42:32.135860 1807 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:42:32.139085 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:42:32.139393 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:42:32.139957 systemd[1]: kubelet.service: Consumed 155ms CPU time, 101.8M memory peak. Mar 17 17:42:32.419146 systemd[1]: Started sshd@1-128.140.94.11:22-139.178.89.65:35362.service - OpenSSH per-connection server daemon (139.178.89.65:35362). Mar 17 17:42:33.395512 sshd[1816]: Accepted publickey for core from 139.178.89.65 port 35362 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8 Mar 17 17:42:33.398467 sshd-session[1816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:42:33.403769 systemd-logind[1486]: New session 2 of user core. Mar 17 17:42:33.411901 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 17:42:34.071449 sshd[1818]: Connection closed by 139.178.89.65 port 35362 Mar 17 17:42:34.072062 sshd-session[1816]: pam_unix(sshd:session): session closed for user core Mar 17 17:42:34.077547 systemd[1]: sshd@1-128.140.94.11:22-139.178.89.65:35362.service: Deactivated successfully. Mar 17 17:42:34.079803 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:42:34.081682 systemd-logind[1486]: Session 2 logged out. Waiting for processes to exit. Mar 17 17:42:34.082919 systemd-logind[1486]: Removed session 2. Mar 17 17:42:34.256535 systemd[1]: Started sshd@2-128.140.94.11:22-139.178.89.65:35370.service - OpenSSH per-connection server daemon (139.178.89.65:35370). 
Mar 17 17:42:35.237777 sshd[1824]: Accepted publickey for core from 139.178.89.65 port 35370 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8
Mar 17 17:42:35.240540 sshd-session[1824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:42:35.247677 systemd-logind[1486]: New session 3 of user core.
Mar 17 17:42:35.253553 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 17 17:42:35.912987 sshd[1826]: Connection closed by 139.178.89.65 port 35370
Mar 17 17:42:35.912861 sshd-session[1824]: pam_unix(sshd:session): session closed for user core
Mar 17 17:42:35.918204 systemd[1]: sshd@2-128.140.94.11:22-139.178.89.65:35370.service: Deactivated successfully.
Mar 17 17:42:35.920496 systemd[1]: session-3.scope: Deactivated successfully.
Mar 17 17:42:35.922067 systemd-logind[1486]: Session 3 logged out. Waiting for processes to exit.
Mar 17 17:42:35.923722 systemd-logind[1486]: Removed session 3.
Mar 17 17:42:36.094554 systemd[1]: Started sshd@3-128.140.94.11:22-139.178.89.65:35382.service - OpenSSH per-connection server daemon (139.178.89.65:35382).
Mar 17 17:42:37.081965 sshd[1832]: Accepted publickey for core from 139.178.89.65 port 35382 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8
Mar 17 17:42:37.084419 sshd-session[1832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:42:37.090340 systemd-logind[1486]: New session 4 of user core.
Mar 17 17:42:37.105964 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 17 17:42:37.764738 sshd[1834]: Connection closed by 139.178.89.65 port 35382
Mar 17 17:42:37.766234 sshd-session[1832]: pam_unix(sshd:session): session closed for user core
Mar 17 17:42:37.769941 systemd[1]: session-4.scope: Deactivated successfully.
Mar 17 17:42:37.772829 systemd[1]: sshd@3-128.140.94.11:22-139.178.89.65:35382.service: Deactivated successfully.
Mar 17 17:42:37.775879 systemd-logind[1486]: Session 4 logged out. Waiting for processes to exit.
Mar 17 17:42:37.776967 systemd-logind[1486]: Removed session 4.
Mar 17 17:42:37.941204 systemd[1]: Started sshd@4-128.140.94.11:22-139.178.89.65:35394.service - OpenSSH per-connection server daemon (139.178.89.65:35394).
Mar 17 17:42:38.921515 sshd[1840]: Accepted publickey for core from 139.178.89.65 port 35394 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8
Mar 17 17:42:38.923498 sshd-session[1840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:42:38.928787 systemd-logind[1486]: New session 5 of user core.
Mar 17 17:42:38.935368 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 17 17:42:39.453095 sudo[1843]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 17 17:42:39.453509 sudo[1843]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:42:39.471196 sudo[1843]: pam_unix(sudo:session): session closed for user root
Mar 17 17:42:39.629682 sshd[1842]: Connection closed by 139.178.89.65 port 35394
Mar 17 17:42:39.631078 sshd-session[1840]: pam_unix(sshd:session): session closed for user core
Mar 17 17:42:39.638213 systemd-logind[1486]: Session 5 logged out. Waiting for processes to exit.
Mar 17 17:42:39.638523 systemd[1]: sshd@4-128.140.94.11:22-139.178.89.65:35394.service: Deactivated successfully.
Mar 17 17:42:39.642602 systemd[1]: session-5.scope: Deactivated successfully.
Mar 17 17:42:39.644198 systemd-logind[1486]: Removed session 5.
Mar 17 17:42:39.805047 systemd[1]: Started sshd@5-128.140.94.11:22-139.178.89.65:35396.service - OpenSSH per-connection server daemon (139.178.89.65:35396).
Mar 17 17:42:40.806794 sshd[1849]: Accepted publickey for core from 139.178.89.65 port 35396 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8
Mar 17 17:42:40.810833 sshd-session[1849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:42:40.818091 systemd-logind[1486]: New session 6 of user core.
Mar 17 17:42:40.825061 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 17 17:42:41.334001 sudo[1853]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 17 17:42:41.334329 sudo[1853]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:42:41.338906 sudo[1853]: pam_unix(sudo:session): session closed for user root
Mar 17 17:42:41.345519 sudo[1852]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 17 17:42:41.345906 sudo[1852]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:42:41.369871 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:42:41.399885 augenrules[1875]: No rules
Mar 17 17:42:41.400624 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:42:41.400889 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:42:41.402453 sudo[1852]: pam_unix(sudo:session): session closed for user root
Mar 17 17:42:41.561782 sshd[1851]: Connection closed by 139.178.89.65 port 35396
Mar 17 17:42:41.562540 sshd-session[1849]: pam_unix(sshd:session): session closed for user core
Mar 17 17:42:41.566857 systemd[1]: sshd@5-128.140.94.11:22-139.178.89.65:35396.service: Deactivated successfully.
Mar 17 17:42:41.569823 systemd[1]: session-6.scope: Deactivated successfully.
Mar 17 17:42:41.571618 systemd-logind[1486]: Session 6 logged out. Waiting for processes to exit.
Mar 17 17:42:41.573076 systemd-logind[1486]: Removed session 6.
Mar 17 17:42:41.753143 systemd[1]: Started sshd@6-128.140.94.11:22-139.178.89.65:39266.service - OpenSSH per-connection server daemon (139.178.89.65:39266).
Mar 17 17:42:42.164819 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Mar 17 17:42:42.174091 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:42:42.303934 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:42:42.305979 (kubelet)[1894]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:42:42.350655 kubelet[1894]: E0317 17:42:42.350542 1894 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:42:42.353250 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:42:42.353446 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:42:42.354089 systemd[1]: kubelet.service: Consumed 157ms CPU time, 102.2M memory peak.
Mar 17 17:42:42.734255 sshd[1884]: Accepted publickey for core from 139.178.89.65 port 39266 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8
Mar 17 17:42:42.736203 sshd-session[1884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:42:42.743349 systemd-logind[1486]: New session 7 of user core.
Mar 17 17:42:42.749898 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 17 17:42:43.250936 sudo[1902]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 17 17:42:43.251223 sudo[1902]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:42:43.604065 (dockerd)[1919]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 17 17:42:43.606115 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 17 17:42:43.846114 dockerd[1919]: time="2025-03-17T17:42:43.845943145Z" level=info msg="Starting up"
Mar 17 17:42:43.922364 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3267283782-merged.mount: Deactivated successfully.
Mar 17 17:42:43.943818 dockerd[1919]: time="2025-03-17T17:42:43.943533565Z" level=info msg="Loading containers: start."
Mar 17 17:42:44.096751 kernel: Initializing XFRM netlink socket
Mar 17 17:42:44.189856 systemd-networkd[1399]: docker0: Link UP
Mar 17 17:42:44.217327 dockerd[1919]: time="2025-03-17T17:42:44.217246164Z" level=info msg="Loading containers: done."
Mar 17 17:42:44.241025 dockerd[1919]: time="2025-03-17T17:42:44.240372564Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 17 17:42:44.241025 dockerd[1919]: time="2025-03-17T17:42:44.240506045Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Mar 17 17:42:44.241025 dockerd[1919]: time="2025-03-17T17:42:44.240832888Z" level=info msg="Daemon has completed initialization"
Mar 17 17:42:44.289253 dockerd[1919]: time="2025-03-17T17:42:44.289206703Z" level=info msg="API listen on /run/docker.sock"
Mar 17 17:42:44.289333 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 17 17:42:44.919740 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3074232911-merged.mount: Deactivated successfully.
Mar 17 17:42:45.072334 containerd[1502]: time="2025-03-17T17:42:45.071851734Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\""
Mar 17 17:42:45.685234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3737462252.mount: Deactivated successfully.
Mar 17 17:42:46.534000 containerd[1502]: time="2025-03-17T17:42:46.533938712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:46.535961 containerd[1502]: time="2025-03-17T17:42:46.535875889Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.3: active requests=0, bytes read=26232042"
Mar 17 17:42:46.535961 containerd[1502]: time="2025-03-17T17:42:46.535917529Z" level=info msg="ImageCreate event name:\"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:46.540665 containerd[1502]: time="2025-03-17T17:42:46.539606841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:46.542158 containerd[1502]: time="2025-03-17T17:42:46.541907142Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.3\" with image id \"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\", size \"26228750\" in 1.470012448s"
Mar 17 17:42:46.542158 containerd[1502]: time="2025-03-17T17:42:46.541955542Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\" returns image reference \"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\""
Mar 17 17:42:46.543179 containerd[1502]: time="2025-03-17T17:42:46.543145593Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\""
Mar 17 17:42:47.961696 containerd[1502]: time="2025-03-17T17:42:47.961300244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:47.963358 containerd[1502]: time="2025-03-17T17:42:47.963208862Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.3: active requests=0, bytes read=22530052"
Mar 17 17:42:47.964587 containerd[1502]: time="2025-03-17T17:42:47.964503715Z" level=info msg="ImageCreate event name:\"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:47.968415 containerd[1502]: time="2025-03-17T17:42:47.968324832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:47.970476 containerd[1502]: time="2025-03-17T17:42:47.969918447Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.3\" with image id \"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\", size \"23970828\" in 1.426570333s"
Mar 17 17:42:47.970476 containerd[1502]: time="2025-03-17T17:42:47.969966848Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\" returns image reference \"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\""
Mar 17 17:42:47.971088 containerd[1502]: time="2025-03-17T17:42:47.971010378Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\""
Mar 17 17:42:48.945706 containerd[1502]: time="2025-03-17T17:42:48.944467958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:48.946842 containerd[1502]: time="2025-03-17T17:42:48.946766222Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.3: active requests=0, bytes read=17482581"
Mar 17 17:42:48.948974 containerd[1502]: time="2025-03-17T17:42:48.948920365Z" level=info msg="ImageCreate event name:\"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:48.954960 containerd[1502]: time="2025-03-17T17:42:48.954891108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:48.956810 containerd[1502]: time="2025-03-17T17:42:48.956657127Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.3\" with image id \"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\", size \"18923375\" in 985.583228ms"
Mar 17 17:42:48.956810 containerd[1502]: time="2025-03-17T17:42:48.956696127Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\" returns image reference \"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\""
Mar 17 17:42:48.957579 containerd[1502]: time="2025-03-17T17:42:48.957511336Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\""
Mar 17 17:42:49.983857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3174171198.mount: Deactivated successfully.
Mar 17 17:42:50.387751 containerd[1502]: time="2025-03-17T17:42:50.386639843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:50.388085 containerd[1502]: time="2025-03-17T17:42:50.387779817Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.3: active requests=0, bytes read=27370121"
Mar 17 17:42:50.388688 containerd[1502]: time="2025-03-17T17:42:50.388491825Z" level=info msg="ImageCreate event name:\"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:50.392342 containerd[1502]: time="2025-03-17T17:42:50.392260872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:50.394242 containerd[1502]: time="2025-03-17T17:42:50.393709689Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.3\" with image id \"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\", repo tag \"registry.k8s.io/kube-proxy:v1.32.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\", size \"27369114\" in 1.436150913s"
Mar 17 17:42:50.394242 containerd[1502]: time="2025-03-17T17:42:50.393749010Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\""
Mar 17 17:42:50.394607 containerd[1502]: time="2025-03-17T17:42:50.394559380Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Mar 17 17:42:51.021509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3130226684.mount: Deactivated successfully.
Mar 17 17:42:51.726662 containerd[1502]: time="2025-03-17T17:42:51.725148420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:51.727058 containerd[1502]: time="2025-03-17T17:42:51.726877442Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714"
Mar 17 17:42:51.727672 containerd[1502]: time="2025-03-17T17:42:51.727340448Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:51.732491 containerd[1502]: time="2025-03-17T17:42:51.732420395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:51.734346 containerd[1502]: time="2025-03-17T17:42:51.733649891Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.33901971s"
Mar 17 17:42:51.734346 containerd[1502]: time="2025-03-17T17:42:51.733694491Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Mar 17 17:42:51.734717 containerd[1502]: time="2025-03-17T17:42:51.734690304Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 17 17:42:52.283870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3895815013.mount: Deactivated successfully.
Mar 17 17:42:52.292146 containerd[1502]: time="2025-03-17T17:42:52.292069597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:52.293896 containerd[1502]: time="2025-03-17T17:42:52.293826542Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Mar 17 17:42:52.295264 containerd[1502]: time="2025-03-17T17:42:52.295130200Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:52.298316 containerd[1502]: time="2025-03-17T17:42:52.298229482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:52.299588 containerd[1502]: time="2025-03-17T17:42:52.298952092Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 564.153427ms"
Mar 17 17:42:52.299588 containerd[1502]: time="2025-03-17T17:42:52.298989693Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Mar 17 17:42:52.300141 containerd[1502]: time="2025-03-17T17:42:52.300107868Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Mar 17 17:42:52.414460 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Mar 17 17:42:52.419918 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:42:52.561903 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:42:52.564117 (kubelet)[2241]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:42:52.606768 kubelet[2241]: E0317 17:42:52.606195 2241 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:42:52.610495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:42:52.610858 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:42:52.611492 systemd[1]: kubelet.service: Consumed 159ms CPU time, 101.6M memory peak.
Mar 17 17:42:52.857309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount893795334.mount: Deactivated successfully.
Mar 17 17:42:54.326702 containerd[1502]: time="2025-03-17T17:42:54.326564209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:54.330469 containerd[1502]: time="2025-03-17T17:42:54.329165808Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812491"
Mar 17 17:42:54.332419 containerd[1502]: time="2025-03-17T17:42:54.332370857Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:54.337832 containerd[1502]: time="2025-03-17T17:42:54.336790645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:54.338557 containerd[1502]: time="2025-03-17T17:42:54.338519231Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.03818468s"
Mar 17 17:42:54.338713 containerd[1502]: time="2025-03-17T17:42:54.338693194Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Mar 17 17:42:59.337425 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:42:59.337588 systemd[1]: kubelet.service: Consumed 159ms CPU time, 101.6M memory peak.
Mar 17 17:42:59.346840 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:42:59.394985 systemd[1]: Reload requested from client PID 2331 ('systemctl') (unit session-7.scope)...
Mar 17 17:42:59.395010 systemd[1]: Reloading...
Mar 17 17:42:59.563621 zram_generator::config[2376]: No configuration found.
Mar 17 17:42:59.684408 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:42:59.789233 systemd[1]: Reloading finished in 393 ms.
Mar 17 17:42:59.839286 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:42:59.849014 (kubelet)[2417]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 17:42:59.850482 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:42:59.851067 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 17:42:59.851452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:42:59.851515 systemd[1]: kubelet.service: Consumed 105ms CPU time, 90.1M memory peak.
Mar 17 17:42:59.860042 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:42:59.983192 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:42:59.992104 (kubelet)[2429]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 17:43:00.042331 kubelet[2429]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:43:00.042331 kubelet[2429]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 17 17:43:00.042331 kubelet[2429]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:43:00.042331 kubelet[2429]: I0317 17:43:00.040335 2429 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 17:43:00.539136 kubelet[2429]: I0317 17:43:00.539096 2429 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Mar 17 17:43:00.540656 kubelet[2429]: I0317 17:43:00.539271 2429 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 17:43:00.540656 kubelet[2429]: I0317 17:43:00.539585 2429 server.go:954] "Client rotation is on, will bootstrap in background"
Mar 17 17:43:00.573296 kubelet[2429]: E0317 17:43:00.573233 2429 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://128.140.94.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 128.140.94.11:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:43:00.576887 kubelet[2429]: I0317 17:43:00.576786 2429 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 17:43:00.587426 kubelet[2429]: E0317 17:43:00.586933 2429 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 17 17:43:00.587426 kubelet[2429]: I0317 17:43:00.586991 2429 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 17 17:43:00.590785 kubelet[2429]: I0317 17:43:00.590757 2429 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 17:43:00.591229 kubelet[2429]: I0317 17:43:00.591191 2429 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 17:43:00.591528 kubelet[2429]: I0317 17:43:00.591306 2429 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-1-0-9-a82243c43d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 17 17:43:00.591826 kubelet[2429]: I0317 17:43:00.591808 2429 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 17:43:00.591898 kubelet[2429]: I0317 17:43:00.591888 2429 container_manager_linux.go:304] "Creating device plugin manager"
Mar 17 17:43:00.592172 kubelet[2429]: I0317 17:43:00.592156 2429 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:43:00.596652 kubelet[2429]: I0317 17:43:00.596613 2429 kubelet.go:446] "Attempting to sync node with API server"
Mar 17 17:43:00.596873 kubelet[2429]: I0317 17:43:00.596768 2429 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 17:43:00.596873 kubelet[2429]: I0317 17:43:00.596798 2429 kubelet.go:352] "Adding apiserver pod source"
Mar 17 17:43:00.596873 kubelet[2429]: I0317 17:43:00.596810 2429 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 17:43:00.600865 kubelet[2429]: W0317 17:43:00.600702 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://128.140.94.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-0-9-a82243c43d&limit=500&resourceVersion=0": dial tcp 128.140.94.11:6443: connect: connection refused
Mar 17 17:43:00.600865 kubelet[2429]: E0317 17:43:00.600853 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://128.140.94.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-0-9-a82243c43d&limit=500&resourceVersion=0\": dial tcp 128.140.94.11:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:43:00.602672 kubelet[2429]: W0317 17:43:00.602159 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://128.140.94.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 128.140.94.11:6443: connect: connection refused
Mar 17 17:43:00.602672 kubelet[2429]: E0317 17:43:00.602224 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://128.140.94.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 128.140.94.11:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:43:00.604658 kubelet[2429]: I0317 17:43:00.602940 2429 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 17 17:43:00.604658 kubelet[2429]: I0317 17:43:00.603586 2429 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 17:43:00.604658 kubelet[2429]: W0317 17:43:00.603845 2429 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 17 17:43:00.606523 kubelet[2429]: I0317 17:43:00.606484 2429 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 17 17:43:00.606692 kubelet[2429]: I0317 17:43:00.606681 2429 server.go:1287] "Started kubelet"
Mar 17 17:43:00.613425 kubelet[2429]: E0317 17:43:00.613029 2429 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://128.140.94.11:6443/api/v1/namespaces/default/events\": dial tcp 128.140.94.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-1-0-9-a82243c43d.182da80766e33db5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-1-0-9-a82243c43d,UID:ci-4230-1-0-9-a82243c43d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-1-0-9-a82243c43d,},FirstTimestamp:2025-03-17 17:43:00.606655925 +0000 UTC m=+0.609951686,LastTimestamp:2025-03-17 17:43:00.606655925 +0000 UTC m=+0.609951686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-1-0-9-a82243c43d,}"
Mar 17 17:43:00.614523 kubelet[2429]: I0317 17:43:00.614448 2429 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 17:43:00.615312 kubelet[2429]: I0317 17:43:00.615280 2429 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 17:43:00.615439 kubelet[2429]: I0317 17:43:00.615365 2429 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 17:43:00.615570 kubelet[2429]: I0317 17:43:00.615553 2429 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 17:43:00.617572 kubelet[2429]: I0317 17:43:00.617523 2429 server.go:490] "Adding debug handlers to kubelet server"
Mar 17 17:43:00.621267 kubelet[2429]: I0317 17:43:00.620834 2429 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 17 17:43:00.622856 kubelet[2429]: I0317 17:43:00.622829 2429 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 17 17:43:00.625699 kubelet[2429]: I0317 17:43:00.625149 2429 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 17:43:00.625699 kubelet[2429]: I0317 17:43:00.625215 2429 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 17:43:00.625699 kubelet[2429]: E0317 17:43:00.625278 2429 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-1-0-9-a82243c43d\" not found"
Mar 17 17:43:00.626516 kubelet[2429]: W0317 17:43:00.626468 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://128.140.94.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 128.140.94.11:6443: connect: connection refused
Mar 17 17:43:00.626737 kubelet[2429]: E0317 17:43:00.626712 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://128.140.94.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 128.140.94.11:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:43:00.626919 kubelet[2429]: E0317 17:43:00.626886 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://128.140.94.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-0-9-a82243c43d?timeout=10s\": dial tcp 128.140.94.11:6443: connect: connection refused" interval="200ms"
Mar 17 17:43:00.627936 kubelet[2429]: E0317 17:43:00.627913 2429 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 17:43:00.628365 kubelet[2429]: I0317 17:43:00.628347 2429 factory.go:221] Registration of the systemd container factory successfully
Mar 17 17:43:00.628577 kubelet[2429]: I0317 17:43:00.628557 2429 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 17:43:00.631657 kubelet[2429]: I0317 17:43:00.629913 2429 factory.go:221] Registration of the containerd container factory successfully
Mar 17 17:43:00.640241 kubelet[2429]: I0317 17:43:00.640185 2429 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 17:43:00.641430 kubelet[2429]: I0317 17:43:00.641354 2429 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 17:43:00.641430 kubelet[2429]: I0317 17:43:00.641430 2429 status_manager.go:227] "Starting to sync pod status with apiserver"
Mar 17 17:43:00.641538 kubelet[2429]: I0317 17:43:00.641455 2429 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 17 17:43:00.641538 kubelet[2429]: I0317 17:43:00.641462 2429 kubelet.go:2388] "Starting kubelet main sync loop"
Mar 17 17:43:00.641538 kubelet[2429]: E0317 17:43:00.641513 2429 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 17:43:00.649759 kubelet[2429]: W0317 17:43:00.649613 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://128.140.94.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 128.140.94.11:6443: connect: connection refused
Mar 17 17:43:00.649759 kubelet[2429]: E0317 17:43:00.649714 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://128.140.94.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 128.140.94.11:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:43:00.665279 kubelet[2429]: I0317 17:43:00.665255 2429 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 17 17:43:00.665808 kubelet[2429]: I0317 17:43:00.665500 2429 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 17 17:43:00.665808 kubelet[2429]: I0317 17:43:00.665546 2429 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:43:00.669242 kubelet[2429]: I0317 17:43:00.668807 2429 policy_none.go:49] "None policy: Start"
Mar 17 17:43:00.669242 kubelet[2429]: I0317 17:43:00.668848 2429 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 17 17:43:00.669242 kubelet[2429]: I0317 17:43:00.668874 2429 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 17:43:00.678068 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 17 17:43:00.695045 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 17 17:43:00.700647 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 17 17:43:00.713606 kubelet[2429]: I0317 17:43:00.713574 2429 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 17:43:00.714018 kubelet[2429]: I0317 17:43:00.713997 2429 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 17 17:43:00.714147 kubelet[2429]: I0317 17:43:00.714105 2429 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 17:43:00.714595 kubelet[2429]: I0317 17:43:00.714569 2429 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 17:43:00.716214 kubelet[2429]: E0317 17:43:00.716190 2429 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 17 17:43:00.716406 kubelet[2429]: E0317 17:43:00.716390 2429 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-1-0-9-a82243c43d\" not found"
Mar 17 17:43:00.757182 systemd[1]: Created slice kubepods-burstable-podfa20ffa41a9aa87f33dec9c34f934696.slice - libcontainer container kubepods-burstable-podfa20ffa41a9aa87f33dec9c34f934696.slice.
Mar 17 17:43:00.793287 kubelet[2429]: E0317 17:43:00.791148 2429 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-0-9-a82243c43d\" not found" node="ci-4230-1-0-9-a82243c43d"
Mar 17 17:43:00.796288 systemd[1]: Created slice kubepods-burstable-pod5bbb65e0063ae97e117c8d233b03d1e2.slice - libcontainer container kubepods-burstable-pod5bbb65e0063ae97e117c8d233b03d1e2.slice.
Mar 17 17:43:00.798605 kubelet[2429]: E0317 17:43:00.798505 2429 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-0-9-a82243c43d\" not found" node="ci-4230-1-0-9-a82243c43d"
Mar 17 17:43:00.811313 systemd[1]: Created slice kubepods-burstable-pod62b1657b587aed5bbd30c4b58139b275.slice - libcontainer container kubepods-burstable-pod62b1657b587aed5bbd30c4b58139b275.slice.
Mar 17 17:43:00.815022 kubelet[2429]: E0317 17:43:00.814813 2429 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-0-9-a82243c43d\" not found" node="ci-4230-1-0-9-a82243c43d"
Mar 17 17:43:00.816364 kubelet[2429]: I0317 17:43:00.816317 2429 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-1-0-9-a82243c43d"
Mar 17 17:43:00.816854 kubelet[2429]: E0317 17:43:00.816824 2429 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://128.140.94.11:6443/api/v1/nodes\": dial tcp 128.140.94.11:6443: connect: connection refused" node="ci-4230-1-0-9-a82243c43d"
Mar 17 17:43:00.827809 kubelet[2429]: E0317 17:43:00.827758 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://128.140.94.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-0-9-a82243c43d?timeout=10s\": dial tcp 128.140.94.11:6443: connect: connection refused" interval="400ms"
Mar 17 17:43:00.927768 kubelet[2429]: I0317 17:43:00.927425 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa20ffa41a9aa87f33dec9c34f934696-k8s-certs\") pod \"kube-apiserver-ci-4230-1-0-9-a82243c43d\" (UID: \"fa20ffa41a9aa87f33dec9c34f934696\") " pod="kube-system/kube-apiserver-ci-4230-1-0-9-a82243c43d"
Mar 17 17:43:00.927768 kubelet[2429]: I0317 17:43:00.927506 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbb65e0063ae97e117c8d233b03d1e2-ca-certs\") pod \"kube-controller-manager-ci-4230-1-0-9-a82243c43d\" (UID: \"5bbb65e0063ae97e117c8d233b03d1e2\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-9-a82243c43d"
Mar 17 17:43:00.927768 kubelet[2429]: I0317 17:43:00.927549 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbb65e0063ae97e117c8d233b03d1e2-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-0-9-a82243c43d\" (UID: \"5bbb65e0063ae97e117c8d233b03d1e2\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-9-a82243c43d"
Mar 17 17:43:00.927768 kubelet[2429]: I0317 17:43:00.927598 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/62b1657b587aed5bbd30c4b58139b275-kubeconfig\") pod \"kube-scheduler-ci-4230-1-0-9-a82243c43d\" (UID: \"62b1657b587aed5bbd30c4b58139b275\") " pod="kube-system/kube-scheduler-ci-4230-1-0-9-a82243c43d"
Mar 17 17:43:00.927768 kubelet[2429]: I0317 17:43:00.927688 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa20ffa41a9aa87f33dec9c34f934696-ca-certs\") pod \"kube-apiserver-ci-4230-1-0-9-a82243c43d\" (UID: \"fa20ffa41a9aa87f33dec9c34f934696\") " pod="kube-system/kube-apiserver-ci-4230-1-0-9-a82243c43d"
Mar 17 17:43:00.928108 kubelet[2429]: I0317 17:43:00.927752 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa20ffa41a9aa87f33dec9c34f934696-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-0-9-a82243c43d\" (UID: \"fa20ffa41a9aa87f33dec9c34f934696\") " pod="kube-system/kube-apiserver-ci-4230-1-0-9-a82243c43d"
Mar 17 17:43:00.928108 kubelet[2429]: I0317 17:43:00.927790 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbb65e0063ae97e117c8d233b03d1e2-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-1-0-9-a82243c43d\" (UID: \"5bbb65e0063ae97e117c8d233b03d1e2\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-9-a82243c43d"
Mar 17 17:43:00.928108 kubelet[2429]: I0317 17:43:00.927824 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbb65e0063ae97e117c8d233b03d1e2-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-0-9-a82243c43d\" (UID: \"5bbb65e0063ae97e117c8d233b03d1e2\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-9-a82243c43d"
Mar 17 17:43:00.928108 kubelet[2429]: I0317 17:43:00.927862 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbb65e0063ae97e117c8d233b03d1e2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-1-0-9-a82243c43d\" (UID: \"5bbb65e0063ae97e117c8d233b03d1e2\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-9-a82243c43d"
Mar 17 17:43:01.020957 kubelet[2429]: I0317 17:43:01.020482 2429 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-1-0-9-a82243c43d"
Mar 17 17:43:01.021116 kubelet[2429]: E0317 17:43:01.021036 2429 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://128.140.94.11:6443/api/v1/nodes\": dial tcp 128.140.94.11:6443: connect: connection refused" node="ci-4230-1-0-9-a82243c43d"
Mar 17 17:43:01.096676 containerd[1502]: time="2025-03-17T17:43:01.095925196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-0-9-a82243c43d,Uid:fa20ffa41a9aa87f33dec9c34f934696,Namespace:kube-system,Attempt:0,}"
Mar 17 17:43:01.100528 containerd[1502]: time="2025-03-17T17:43:01.100484446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-0-9-a82243c43d,Uid:5bbb65e0063ae97e117c8d233b03d1e2,Namespace:kube-system,Attempt:0,}"
Mar 17 17:43:01.116486 containerd[1502]: time="2025-03-17T17:43:01.116422920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-0-9-a82243c43d,Uid:62b1657b587aed5bbd30c4b58139b275,Namespace:kube-system,Attempt:0,}"
Mar 17 17:43:01.229524 kubelet[2429]: E0317 17:43:01.229465 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://128.140.94.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-0-9-a82243c43d?timeout=10s\": dial tcp 128.140.94.11:6443: connect: connection refused" interval="800ms"
Mar 17 17:43:01.424785 kubelet[2429]: I0317 17:43:01.424719 2429 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-1-0-9-a82243c43d"
Mar 17 17:43:01.425321 kubelet[2429]: E0317 17:43:01.425222 2429 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://128.140.94.11:6443/api/v1/nodes\": dial tcp 128.140.94.11:6443: connect: connection refused" node="ci-4230-1-0-9-a82243c43d"
Mar 17 17:43:01.618534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1469316.mount: Deactivated successfully.
Mar 17 17:43:01.627656 containerd[1502]: time="2025-03-17T17:43:01.625607096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:43:01.632701 containerd[1502]: time="2025-03-17T17:43:01.632623714Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193"
Mar 17 17:43:01.634238 containerd[1502]: time="2025-03-17T17:43:01.634176544Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:43:01.636468 containerd[1502]: time="2025-03-17T17:43:01.636420148Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:43:01.636694 containerd[1502]: time="2025-03-17T17:43:01.636510070Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 17 17:43:01.638313 containerd[1502]: time="2025-03-17T17:43:01.638243784Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 17 17:43:01.642035 containerd[1502]: time="2025-03-17T17:43:01.641982858Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:43:01.643331 containerd[1502]: time="2025-03-17T17:43:01.643287083Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 526.734361ms"
Mar 17 17:43:01.646204 containerd[1502]: time="2025-03-17T17:43:01.646144740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:43:01.649354 containerd[1502]: time="2025-03-17T17:43:01.649277081Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 552.723553ms"
Mar 17 17:43:01.652673 containerd[1502]: time="2025-03-17T17:43:01.652360182Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 551.789694ms"
Mar 17 17:43:01.767807 containerd[1502]: time="2025-03-17T17:43:01.765774213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:43:01.767807 containerd[1502]: time="2025-03-17T17:43:01.765877895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:43:01.767807 containerd[1502]: time="2025-03-17T17:43:01.765897215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:43:01.767807 containerd[1502]: time="2025-03-17T17:43:01.765994897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:43:01.782143 containerd[1502]: time="2025-03-17T17:43:01.779730607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:43:01.782143 containerd[1502]: time="2025-03-17T17:43:01.781868489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:43:01.782143 containerd[1502]: time="2025-03-17T17:43:01.781905650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:43:01.785129 containerd[1502]: time="2025-03-17T17:43:01.783189595Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:43:01.785129 containerd[1502]: time="2025-03-17T17:43:01.783001472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:43:01.786910 containerd[1502]: time="2025-03-17T17:43:01.785492561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:43:01.786910 containerd[1502]: time="2025-03-17T17:43:01.785564002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:43:01.786910 containerd[1502]: time="2025-03-17T17:43:01.786764346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:43:01.812842 systemd[1]: Started cri-containerd-4078df2880d99cb88168b3e8afcaa20fec11ca5ee5ff458b952c3a96c7d6b229.scope - libcontainer container 4078df2880d99cb88168b3e8afcaa20fec11ca5ee5ff458b952c3a96c7d6b229.
Mar 17 17:43:01.814679 systemd[1]: Started cri-containerd-a010718dab6fafad41ba477d33b414ae76b65e09fb54becf20ba7fc8ca63af57.scope - libcontainer container a010718dab6fafad41ba477d33b414ae76b65e09fb54becf20ba7fc8ca63af57.
Mar 17 17:43:01.828671 kubelet[2429]: W0317 17:43:01.827315 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://128.140.94.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-0-9-a82243c43d&limit=500&resourceVersion=0": dial tcp 128.140.94.11:6443: connect: connection refused
Mar 17 17:43:01.828671 kubelet[2429]: E0317 17:43:01.827385 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://128.140.94.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-0-9-a82243c43d&limit=500&resourceVersion=0\": dial tcp 128.140.94.11:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:43:01.840895 systemd[1]: Started cri-containerd-2c91e758c46bb5561b10662cae4b5fbe8246276a3397ae32e005ff85125fca08.scope - libcontainer container 2c91e758c46bb5561b10662cae4b5fbe8246276a3397ae32e005ff85125fca08.
Mar 17 17:43:01.859652 kubelet[2429]: W0317 17:43:01.859079 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://128.140.94.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 128.140.94.11:6443: connect: connection refused
Mar 17 17:43:01.859652 kubelet[2429]: E0317 17:43:01.859127 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://128.140.94.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 128.140.94.11:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:43:01.899101 containerd[1502]: time="2025-03-17T17:43:01.899059715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-0-9-a82243c43d,Uid:5bbb65e0063ae97e117c8d233b03d1e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"a010718dab6fafad41ba477d33b414ae76b65e09fb54becf20ba7fc8ca63af57\""
Mar 17 17:43:01.916175 containerd[1502]: time="2025-03-17T17:43:01.916050609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-0-9-a82243c43d,Uid:62b1657b587aed5bbd30c4b58139b275,Namespace:kube-system,Attempt:0,} returns sandbox id \"4078df2880d99cb88168b3e8afcaa20fec11ca5ee5ff458b952c3a96c7d6b229\""
Mar 17 17:43:01.916591 containerd[1502]: time="2025-03-17T17:43:01.916480817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-0-9-a82243c43d,Uid:fa20ffa41a9aa87f33dec9c34f934696,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c91e758c46bb5561b10662cae4b5fbe8246276a3397ae32e005ff85125fca08\""
Mar 17 17:43:01.917561 containerd[1502]: time="2025-03-17T17:43:01.917237272Z" level=info msg="CreateContainer within sandbox \"a010718dab6fafad41ba477d33b414ae76b65e09fb54becf20ba7fc8ca63af57\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 17 17:43:01.920384 containerd[1502]: time="2025-03-17T17:43:01.920319813Z" level=info msg="CreateContainer within sandbox \"4078df2880d99cb88168b3e8afcaa20fec11ca5ee5ff458b952c3a96c7d6b229\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 17 17:43:01.922085 containerd[1502]: time="2025-03-17T17:43:01.922044367Z" level=info msg="CreateContainer within sandbox \"2c91e758c46bb5561b10662cae4b5fbe8246276a3397ae32e005ff85125fca08\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 17 17:43:01.941642 containerd[1502]: time="2025-03-17T17:43:01.941577991Z" level=info msg="CreateContainer within sandbox \"a010718dab6fafad41ba477d33b414ae76b65e09fb54becf20ba7fc8ca63af57\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e516feddfc2e35937dd812da6fba73f3a3ad743f0f5f158b3b2a3d326262f1c1\""
Mar 17 17:43:01.943185 containerd[1502]: time="2025-03-17T17:43:01.943147462Z" level=info msg="StartContainer for \"e516feddfc2e35937dd812da6fba73f3a3ad743f0f5f158b3b2a3d326262f1c1\""
Mar 17 17:43:01.945215 containerd[1502]: time="2025-03-17T17:43:01.945113181Z" level=info msg="CreateContainer within sandbox \"4078df2880d99cb88168b3e8afcaa20fec11ca5ee5ff458b952c3a96c7d6b229\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"90a8c4241d1705f890adaa7837754a77524a1d81fef55f52d2e91a91149edca0\""
Mar 17 17:43:01.947231 containerd[1502]: time="2025-03-17T17:43:01.945997998Z" level=info msg="StartContainer for \"90a8c4241d1705f890adaa7837754a77524a1d81fef55f52d2e91a91149edca0\""
Mar 17 17:43:01.948293 containerd[1502]: time="2025-03-17T17:43:01.948227442Z" level=info msg="CreateContainer within sandbox \"2c91e758c46bb5561b10662cae4b5fbe8246276a3397ae32e005ff85125fca08\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5e19998894eabb22d7b58de8fa343b33105f9a4ba9c7ee49bb91f4ef12eb0951\""
Mar 17 17:43:01.948738 containerd[1502]: time="2025-03-17T17:43:01.948712051Z" level=info msg="StartContainer for \"5e19998894eabb22d7b58de8fa343b33105f9a4ba9c7ee49bb91f4ef12eb0951\""
Mar 17 17:43:01.952455 kubelet[2429]: W0317 17:43:01.952358 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://128.140.94.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 128.140.94.11:6443: connect: connection refused
Mar 17 17:43:01.952564 kubelet[2429]: E0317 17:43:01.952466 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://128.140.94.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 128.140.94.11:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:43:01.955287 kubelet[2429]: W0317 17:43:01.955220 2429 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://128.140.94.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 128.140.94.11:6443: connect: connection refused
Mar 17 17:43:01.955388 kubelet[2429]: E0317 17:43:01.955302 2429 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://128.140.94.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 128.140.94.11:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:43:01.987857 systemd[1]: Started cri-containerd-90a8c4241d1705f890adaa7837754a77524a1d81fef55f52d2e91a91149edca0.scope - libcontainer container 90a8c4241d1705f890adaa7837754a77524a1d81fef55f52d2e91a91149edca0.
Mar 17 17:43:01.989484 systemd[1]: Started cri-containerd-e516feddfc2e35937dd812da6fba73f3a3ad743f0f5f158b3b2a3d326262f1c1.scope - libcontainer container e516feddfc2e35937dd812da6fba73f3a3ad743f0f5f158b3b2a3d326262f1c1.
Mar 17 17:43:01.999020 systemd[1]: Started cri-containerd-5e19998894eabb22d7b58de8fa343b33105f9a4ba9c7ee49bb91f4ef12eb0951.scope - libcontainer container 5e19998894eabb22d7b58de8fa343b33105f9a4ba9c7ee49bb91f4ef12eb0951. Mar 17 17:43:02.031282 kubelet[2429]: E0317 17:43:02.030388 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://128.140.94.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-0-9-a82243c43d?timeout=10s\": dial tcp 128.140.94.11:6443: connect: connection refused" interval="1.6s" Mar 17 17:43:02.051299 containerd[1502]: time="2025-03-17T17:43:02.051220535Z" level=info msg="StartContainer for \"e516feddfc2e35937dd812da6fba73f3a3ad743f0f5f158b3b2a3d326262f1c1\" returns successfully" Mar 17 17:43:02.066184 containerd[1502]: time="2025-03-17T17:43:02.065806390Z" level=info msg="StartContainer for \"5e19998894eabb22d7b58de8fa343b33105f9a4ba9c7ee49bb91f4ef12eb0951\" returns successfully" Mar 17 17:43:02.087843 containerd[1502]: time="2025-03-17T17:43:02.087708393Z" level=info msg="StartContainer for \"90a8c4241d1705f890adaa7837754a77524a1d81fef55f52d2e91a91149edca0\" returns successfully" Mar 17 17:43:02.229007 kubelet[2429]: I0317 17:43:02.228974 2429 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-1-0-9-a82243c43d" Mar 17 17:43:02.669078 kubelet[2429]: E0317 17:43:02.669008 2429 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-0-9-a82243c43d\" not found" node="ci-4230-1-0-9-a82243c43d" Mar 17 17:43:02.680963 kubelet[2429]: E0317 17:43:02.680929 2429 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-0-9-a82243c43d\" not found" node="ci-4230-1-0-9-a82243c43d" Mar 17 17:43:02.683090 kubelet[2429]: E0317 17:43:02.683034 2429 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-0-9-a82243c43d\" not found" node="ci-4230-1-0-9-a82243c43d" Mar 17 17:43:03.683943 kubelet[2429]: E0317 17:43:03.683907 2429 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-0-9-a82243c43d\" not found" node="ci-4230-1-0-9-a82243c43d" Mar 17 17:43:03.684391 kubelet[2429]: E0317 17:43:03.684339 2429 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-0-9-a82243c43d\" not found" node="ci-4230-1-0-9-a82243c43d" Mar 17 17:43:05.767021 kubelet[2429]: E0317 17:43:05.766965 2429 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-1-0-9-a82243c43d\" not found" node="ci-4230-1-0-9-a82243c43d" Mar 17 17:43:05.884593 kubelet[2429]: I0317 17:43:05.884552 2429 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230-1-0-9-a82243c43d" Mar 17 17:43:05.925897 kubelet[2429]: I0317 17:43:05.925839 2429 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-1-0-9-a82243c43d" Mar 17 17:43:05.967578 kubelet[2429]: E0317 17:43:05.967495 2429 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-1-0-9-a82243c43d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230-1-0-9-a82243c43d" Mar 17 17:43:05.967578 kubelet[2429]: I0317 17:43:05.967535 2429 kubelet.go:3200] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ci-4230-1-0-9-a82243c43d" Mar 17 17:43:05.969647 kubelet[2429]: E0317 17:43:05.969575 2429 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-1-0-9-a82243c43d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230-1-0-9-a82243c43d" Mar 17 17:43:05.969647 kubelet[2429]: I0317 17:43:05.969620 2429 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-1-0-9-a82243c43d" Mar 17 17:43:05.981506 kubelet[2429]: E0317 17:43:05.981415 2429 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-1-0-9-a82243c43d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230-1-0-9-a82243c43d" Mar 17 17:43:06.610677 kubelet[2429]: I0317 17:43:06.609791 2429 apiserver.go:52] "Watching apiserver" Mar 17 17:43:06.626130 kubelet[2429]: I0317 17:43:06.625943 2429 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:43:06.740034 kubelet[2429]: I0317 17:43:06.739842 2429 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-1-0-9-a82243c43d" Mar 17 17:43:07.610231 kubelet[2429]: I0317 17:43:07.610177 2429 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-1-0-9-a82243c43d" Mar 17 17:43:07.944286 systemd[1]: Reload requested from client PID 2701 ('systemctl') (unit session-7.scope)... Mar 17 17:43:07.944302 systemd[1]: Reloading... Mar 17 17:43:08.057662 zram_generator::config[2746]: No configuration found. Mar 17 17:43:08.179703 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:43:08.306822 systemd[1]: Reloading finished in 362 ms. Mar 17 17:43:08.336773 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:43:08.351130 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:43:08.352742 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:43:08.352836 systemd[1]: kubelet.service: Consumed 1.071s CPU time, 123.3M memory peak. Mar 17 17:43:08.359148 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:43:08.504610 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:43:08.518105 (kubelet)[2791]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:43:08.590616 kubelet[2791]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:43:08.591649 kubelet[2791]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 17 17:43:08.591649 kubelet[2791]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 17:43:08.591649 kubelet[2791]: I0317 17:43:08.591451 2791 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:43:08.602434 kubelet[2791]: I0317 17:43:08.602383 2791 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 17 17:43:08.602716 kubelet[2791]: I0317 17:43:08.602585 2791 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:43:08.603265 kubelet[2791]: I0317 17:43:08.603167 2791 server.go:954] "Client rotation is on, will bootstrap in background" Mar 17 17:43:08.605216 kubelet[2791]: I0317 17:43:08.605186 2791 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 17:43:08.608547 kubelet[2791]: I0317 17:43:08.608243 2791 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:43:08.617273 kubelet[2791]: E0317 17:43:08.617219 2791 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:43:08.617273 kubelet[2791]: I0317 17:43:08.617260 2791 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 17:43:08.621009 kubelet[2791]: I0317 17:43:08.620969 2791 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 17:43:08.621216 kubelet[2791]: I0317 17:43:08.621184 2791 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:43:08.621395 kubelet[2791]: I0317 17:43:08.621217 2791 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-1-0-9-a82243c43d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 17:43:08.621481 kubelet[2791]: I0317 17:43:08.621404 2791 
topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:43:08.621481 kubelet[2791]: I0317 17:43:08.621415 2791 container_manager_linux.go:304] "Creating device plugin manager" Mar 17 17:43:08.621481 kubelet[2791]: I0317 17:43:08.621458 2791 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:43:08.621755 kubelet[2791]: I0317 17:43:08.621738 2791 kubelet.go:446] "Attempting to sync node with API server" Mar 17 17:43:08.622158 kubelet[2791]: I0317 17:43:08.621762 2791 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:43:08.622158 kubelet[2791]: I0317 17:43:08.621784 2791 kubelet.go:352] "Adding apiserver pod source" Mar 17 17:43:08.622158 kubelet[2791]: I0317 17:43:08.621794 2791 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:43:08.625844 kubelet[2791]: I0317 17:43:08.625816 2791 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:43:08.626716 kubelet[2791]: I0317 17:43:08.626689 2791 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:43:08.628273 kubelet[2791]: I0317 17:43:08.628252 2791 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 17 17:43:08.628452 kubelet[2791]: I0317 17:43:08.628441 2791 server.go:1287] "Started kubelet" Mar 17 17:43:08.634119 kubelet[2791]: I0317 17:43:08.634042 2791 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:43:08.644650 kubelet[2791]: I0317 17:43:08.640682 2791 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 17 17:43:08.644650 kubelet[2791]: E0317 17:43:08.640943 2791 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-1-0-9-a82243c43d\" not found" Mar 17 17:43:08.644650 kubelet[2791]: I0317 17:43:08.641304 2791 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:43:08.644650 kubelet[2791]: I0317 17:43:08.642200 2791 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:43:08.644650 kubelet[2791]: I0317 17:43:08.642388 2791 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:43:08.646657 kubelet[2791]: I0317 17:43:08.645743 2791 server.go:490] "Adding debug handlers to kubelet server" Mar 17 17:43:08.652877 kubelet[2791]: I0317 17:43:08.645891 2791 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:43:08.653242 kubelet[2791]: I0317 17:43:08.653221 2791 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:43:08.653356 kubelet[2791]: I0317 17:43:08.646351 2791 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:43:08.655107 kubelet[2791]: I0317 17:43:08.655065 2791 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:43:08.661098 kubelet[2791]: I0317 17:43:08.661065 2791 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:43:08.661098 kubelet[2791]: I0317 17:43:08.661089 2791 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:43:08.674578 kubelet[2791]: I0317 17:43:08.674335 2791 kubelet_network_linux.go:50] 
"Initialized iptables rules." protocol="IPv4" Mar 17 17:43:08.692163 kubelet[2791]: I0317 17:43:08.689001 2791 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:43:08.692163 kubelet[2791]: I0317 17:43:08.689036 2791 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 17 17:43:08.692163 kubelet[2791]: I0317 17:43:08.689054 2791 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 17 17:43:08.692163 kubelet[2791]: I0317 17:43:08.689075 2791 kubelet.go:2388] "Starting kubelet main sync loop" Mar 17 17:43:08.692163 kubelet[2791]: E0317 17:43:08.689124 2791 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:43:08.742576 kubelet[2791]: I0317 17:43:08.741412 2791 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 17 17:43:08.742576 kubelet[2791]: I0317 17:43:08.741434 2791 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 17 17:43:08.742576 kubelet[2791]: I0317 17:43:08.741455 2791 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:43:08.742576 kubelet[2791]: I0317 17:43:08.741702 2791 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 17:43:08.742576 kubelet[2791]: I0317 17:43:08.741719 2791 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 17:43:08.742576 kubelet[2791]: I0317 17:43:08.741737 2791 policy_none.go:49] "None policy: Start" Mar 17 17:43:08.742576 kubelet[2791]: I0317 17:43:08.741746 2791 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 17 17:43:08.742576 kubelet[2791]: I0317 17:43:08.741758 2791 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:43:08.742576 kubelet[2791]: I0317 17:43:08.741864 2791 state_mem.go:75] "Updated machine memory state" Mar 17 17:43:08.748998 kubelet[2791]: I0317 17:43:08.748969 2791 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:43:08.749460 kubelet[2791]: I0317 17:43:08.749438 2791 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:43:08.750190 kubelet[2791]: I0317 17:43:08.750146 2791 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:43:08.750964 kubelet[2791]: I0317 17:43:08.750937 2791 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:43:08.752901 kubelet[2791]: E0317 17:43:08.752872 2791 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 17 17:43:08.791480 kubelet[2791]: I0317 17:43:08.791398 2791 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-1-0-9-a82243c43d" Mar 17 17:43:08.792108 kubelet[2791]: I0317 17:43:08.792067 2791 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-1-0-9-a82243c43d" Mar 17 17:43:08.792908 kubelet[2791]: I0317 17:43:08.792852 2791 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-1-0-9-a82243c43d" Mar 17 17:43:08.806516 kubelet[2791]: E0317 17:43:08.806434 2791 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-1-0-9-a82243c43d\" already exists" pod="kube-system/kube-scheduler-ci-4230-1-0-9-a82243c43d" Mar 17 17:43:08.808771 kubelet[2791]: E0317 17:43:08.808611 2791 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-1-0-9-a82243c43d\" already exists" pod="kube-system/kube-controller-manager-ci-4230-1-0-9-a82243c43d" Mar 17 17:43:08.843579 kubelet[2791]: I0317 17:43:08.843225 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/62b1657b587aed5bbd30c4b58139b275-kubeconfig\") pod \"kube-scheduler-ci-4230-1-0-9-a82243c43d\" (UID: \"62b1657b587aed5bbd30c4b58139b275\") " pod="kube-system/kube-scheduler-ci-4230-1-0-9-a82243c43d" Mar 17 17:43:08.843579 kubelet[2791]: I0317 17:43:08.843278 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa20ffa41a9aa87f33dec9c34f934696-ca-certs\") pod \"kube-apiserver-ci-4230-1-0-9-a82243c43d\" (UID: \"fa20ffa41a9aa87f33dec9c34f934696\") " pod="kube-system/kube-apiserver-ci-4230-1-0-9-a82243c43d" Mar 17 17:43:08.843579 kubelet[2791]: I0317 17:43:08.843305 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa20ffa41a9aa87f33dec9c34f934696-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-0-9-a82243c43d\" (UID: \"fa20ffa41a9aa87f33dec9c34f934696\") " pod="kube-system/kube-apiserver-ci-4230-1-0-9-a82243c43d" Mar 17 17:43:08.843579 kubelet[2791]: I0317 17:43:08.843332 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbb65e0063ae97e117c8d233b03d1e2-ca-certs\") pod \"kube-controller-manager-ci-4230-1-0-9-a82243c43d\" (UID: \"5bbb65e0063ae97e117c8d233b03d1e2\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-9-a82243c43d" Mar 17 17:43:08.843579 kubelet[2791]: I0317 17:43:08.843359 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbb65e0063ae97e117c8d233b03d1e2-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-0-9-a82243c43d\" (UID: \"5bbb65e0063ae97e117c8d233b03d1e2\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-9-a82243c43d" Mar 17 17:43:08.843974 kubelet[2791]: I0317 17:43:08.843384 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbb65e0063ae97e117c8d233b03d1e2-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-0-9-a82243c43d\" (UID: 
\"5bbb65e0063ae97e117c8d233b03d1e2\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-9-a82243c43d" Mar 17 17:43:08.843974 kubelet[2791]: I0317 17:43:08.843405 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa20ffa41a9aa87f33dec9c34f934696-k8s-certs\") pod \"kube-apiserver-ci-4230-1-0-9-a82243c43d\" (UID: \"fa20ffa41a9aa87f33dec9c34f934696\") " pod="kube-system/kube-apiserver-ci-4230-1-0-9-a82243c43d" Mar 17 17:43:08.843974 kubelet[2791]: I0317 17:43:08.843435 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbb65e0063ae97e117c8d233b03d1e2-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-1-0-9-a82243c43d\" (UID: \"5bbb65e0063ae97e117c8d233b03d1e2\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-9-a82243c43d" Mar 17 17:43:08.843974 kubelet[2791]: I0317 17:43:08.843476 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbb65e0063ae97e117c8d233b03d1e2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-1-0-9-a82243c43d\" (UID: \"5bbb65e0063ae97e117c8d233b03d1e2\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-9-a82243c43d" Mar 17 17:43:08.860240 kubelet[2791]: I0317 17:43:08.860202 2791 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-1-0-9-a82243c43d" Mar 17 17:43:08.873984 kubelet[2791]: I0317 17:43:08.873642 2791 kubelet_node_status.go:125] "Node was previously registered" node="ci-4230-1-0-9-a82243c43d" Mar 17 17:43:08.873984 kubelet[2791]: I0317 17:43:08.873731 2791 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230-1-0-9-a82243c43d" Mar 17 17:43:08.942066 sudo[2825]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 17:43:08.942378 sudo[2825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 17 17:43:09.399260 sudo[2825]: pam_unix(sudo:session): session closed for user root Mar 17 17:43:09.623684 kubelet[2791]: I0317 17:43:09.623240 2791 apiserver.go:52] "Watching apiserver" Mar 17 17:43:09.643201 kubelet[2791]: I0317 17:43:09.643122 2791 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:43:09.717971 kubelet[2791]: I0317 17:43:09.716578 2791 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-1-0-9-a82243c43d" Mar 17 17:43:09.718840 kubelet[2791]: I0317 17:43:09.718815 2791 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-1-0-9-a82243c43d" Mar 17 17:43:09.732465 kubelet[2791]: E0317 17:43:09.732110 2791 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-1-0-9-a82243c43d\" already exists" pod="kube-system/kube-scheduler-ci-4230-1-0-9-a82243c43d" Mar 17 17:43:09.737848 kubelet[2791]: E0317 17:43:09.737595 2791 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-1-0-9-a82243c43d\" already exists" pod="kube-system/kube-apiserver-ci-4230-1-0-9-a82243c43d" Mar 17 17:43:09.778160 kubelet[2791]: I0317 17:43:09.778090 2791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-1-0-9-a82243c43d" podStartSLOduration=1.77806613 
podStartE2EDuration="1.77806613s" podCreationTimestamp="2025-03-17 17:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:43:09.759716976 +0000 UTC m=+1.234505724" watchObservedRunningTime="2025-03-17 17:43:09.77806613 +0000 UTC m=+1.252854878" Mar 17 17:43:09.779914 kubelet[2791]: I0317 17:43:09.779726 2791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-1-0-9-a82243c43d" podStartSLOduration=3.779696328 podStartE2EDuration="3.779696328s" podCreationTimestamp="2025-03-17 17:43:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:43:09.777720442 +0000 UTC m=+1.252509190" watchObservedRunningTime="2025-03-17 17:43:09.779696328 +0000 UTC m=+1.254485076" Mar 17 17:43:09.794992 kubelet[2791]: I0317 17:43:09.794891 2791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-1-0-9-a82243c43d" podStartSLOduration=2.794872368 podStartE2EDuration="2.794872368s" podCreationTimestamp="2025-03-17 17:43:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:43:09.794268113 +0000 UTC m=+1.269056861" watchObservedRunningTime="2025-03-17 17:43:09.794872368 +0000 UTC m=+1.269661116" Mar 17 17:43:11.561303 sudo[1902]: pam_unix(sudo:session): session closed for user root Mar 17 17:43:11.718318 sshd[1901]: Connection closed by 139.178.89.65 port 39266 Mar 17 17:43:11.719605 sshd-session[1884]: pam_unix(sshd:session): session closed for user core Mar 17 17:43:11.725118 systemd[1]: sshd@6-128.140.94.11:22-139.178.89.65:39266.service: Deactivated successfully. Mar 17 17:43:11.728064 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 17:43:11.728404 systemd[1]: session-7.scope: Consumed 7.586s CPU time, 264.5M memory peak. Mar 17 17:43:11.729865 systemd-logind[1486]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:43:11.731297 systemd-logind[1486]: Removed session 7. Mar 17 17:43:14.646623 kubelet[2791]: I0317 17:43:14.646534 2791 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 17:43:14.647704 containerd[1502]: time="2025-03-17T17:43:14.647525902Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 17:43:14.648853 kubelet[2791]: I0317 17:43:14.648056 2791 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 17:43:15.445286 systemd[1]: Created slice kubepods-besteffort-pod64e1f699_90d5_49ab_b8e7_ba90dc577706.slice - libcontainer container kubepods-besteffort-pod64e1f699_90d5_49ab_b8e7_ba90dc577706.slice. Mar 17 17:43:15.462908 systemd[1]: Created slice kubepods-burstable-pod349e066e_c1be_47c7_b7f8_9a38bec4202a.slice - libcontainer container kubepods-burstable-pod349e066e_c1be_47c7_b7f8_9a38bec4202a.slice. 
Mar 17 17:43:15.484407 kubelet[2791]: I0317 17:43:15.483937 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-cilium-run\") pod \"cilium-t4pf6\" (UID: \"349e066e-c1be-47c7-b7f8-9a38bec4202a\") " pod="kube-system/cilium-t4pf6" Mar 17 17:43:15.484407 kubelet[2791]: I0317 17:43:15.484010 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-bpf-maps\") pod \"cilium-t4pf6\" (UID: \"349e066e-c1be-47c7-b7f8-9a38bec4202a\") " pod="kube-system/cilium-t4pf6" Mar 17 17:43:15.484407 kubelet[2791]: I0317 17:43:15.484031 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/349e066e-c1be-47c7-b7f8-9a38bec4202a-cilium-config-path\") pod \"cilium-t4pf6\" (UID: \"349e066e-c1be-47c7-b7f8-9a38bec4202a\") " pod="kube-system/cilium-t4pf6" Mar 17 17:43:15.484407 kubelet[2791]: I0317 17:43:15.484049 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-xtables-lock\") pod \"cilium-t4pf6\" (UID: \"349e066e-c1be-47c7-b7f8-9a38bec4202a\") " pod="kube-system/cilium-t4pf6" Mar 17 17:43:15.484407 kubelet[2791]: I0317 17:43:15.484064 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-host-proc-sys-net\") pod \"cilium-t4pf6\" (UID: \"349e066e-c1be-47c7-b7f8-9a38bec4202a\") " pod="kube-system/cilium-t4pf6" Mar 17 17:43:15.484407 kubelet[2791]: I0317 17:43:15.484079 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64e1f699-90d5-49ab-b8e7-ba90dc577706-xtables-lock\") pod \"kube-proxy-tp4mk\" (UID: \"64e1f699-90d5-49ab-b8e7-ba90dc577706\") " pod="kube-system/kube-proxy-tp4mk" Mar 17 17:43:15.484834 kubelet[2791]: I0317 17:43:15.484094 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64e1f699-90d5-49ab-b8e7-ba90dc577706-lib-modules\") pod \"kube-proxy-tp4mk\" (UID: \"64e1f699-90d5-49ab-b8e7-ba90dc577706\") " pod="kube-system/kube-proxy-tp4mk" Mar 17 17:43:15.484834 kubelet[2791]: I0317 17:43:15.484110 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw7pn\" (UniqueName: \"kubernetes.io/projected/349e066e-c1be-47c7-b7f8-9a38bec4202a-kube-api-access-vw7pn\") pod \"cilium-t4pf6\" (UID: \"349e066e-c1be-47c7-b7f8-9a38bec4202a\") " pod="kube-system/cilium-t4pf6" Mar 17 17:43:15.484834 kubelet[2791]: I0317 17:43:15.484126 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg9kh\" (UniqueName: \"kubernetes.io/projected/64e1f699-90d5-49ab-b8e7-ba90dc577706-kube-api-access-mg9kh\") pod \"kube-proxy-tp4mk\" (UID: \"64e1f699-90d5-49ab-b8e7-ba90dc577706\") " pod="kube-system/kube-proxy-tp4mk" Mar 17 17:43:15.484834 kubelet[2791]: I0317 17:43:15.484143 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-hostproc\") pod \"cilium-t4pf6\" (UID: \"349e066e-c1be-47c7-b7f8-9a38bec4202a\") " pod="kube-system/cilium-t4pf6" Mar 17 17:43:15.484834 kubelet[2791]: I0317 17:43:15.484157 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-cni-path\") pod \"cilium-t4pf6\" (UID: \"349e066e-c1be-47c7-b7f8-9a38bec4202a\") " pod="kube-system/cilium-t4pf6" Mar 17 17:43:15.484834 kubelet[2791]: I0317 17:43:15.484171 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-etc-cni-netd\") pod \"cilium-t4pf6\" (UID: \"349e066e-c1be-47c7-b7f8-9a38bec4202a\") " pod="kube-system/cilium-t4pf6" Mar 17 17:43:15.484981 kubelet[2791]: I0317 17:43:15.484185 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-lib-modules\") pod \"cilium-t4pf6\" (UID: \"349e066e-c1be-47c7-b7f8-9a38bec4202a\") " pod="kube-system/cilium-t4pf6" Mar 17 17:43:15.484981 kubelet[2791]: I0317 17:43:15.484201 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/349e066e-c1be-47c7-b7f8-9a38bec4202a-clustermesh-secrets\") pod \"cilium-t4pf6\" (UID: \"349e066e-c1be-47c7-b7f8-9a38bec4202a\") " pod="kube-system/cilium-t4pf6" Mar 17 17:43:15.484981 kubelet[2791]: I0317 17:43:15.484219 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/64e1f699-90d5-49ab-b8e7-ba90dc577706-kube-proxy\") pod \"kube-proxy-tp4mk\" (UID: \"64e1f699-90d5-49ab-b8e7-ba90dc577706\") " pod="kube-system/kube-proxy-tp4mk" Mar 17 17:43:15.484981 kubelet[2791]: I0317 17:43:15.484234 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-cilium-cgroup\") pod \"cilium-t4pf6\" (UID: \"349e066e-c1be-47c7-b7f8-9a38bec4202a\") " pod="kube-system/cilium-t4pf6" Mar 17 17:43:15.484981 kubelet[2791]: I0317 17:43:15.484250 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/349e066e-c1be-47c7-b7f8-9a38bec4202a-hubble-tls\") pod \"cilium-t4pf6\" (UID: \"349e066e-c1be-47c7-b7f8-9a38bec4202a\") " pod="kube-system/cilium-t4pf6" Mar 17 17:43:15.484981 kubelet[2791]: I0317 17:43:15.484270 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-host-proc-sys-kernel\") pod \"cilium-t4pf6\" (UID: \"349e066e-c1be-47c7-b7f8-9a38bec4202a\") " pod="kube-system/cilium-t4pf6" Mar 17 17:43:15.613095 kubelet[2791]: E0317 17:43:15.611306 2791 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 17 17:43:15.613095 kubelet[2791]: E0317 17:43:15.611341 2791 projected.go:194] Error preparing data for projected volume kube-api-access-mg9kh for pod 
kube-system/kube-proxy-tp4mk: configmap "kube-root-ca.crt" not found Mar 17 17:43:15.613095 kubelet[2791]: E0317 17:43:15.611443 2791 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/64e1f699-90d5-49ab-b8e7-ba90dc577706-kube-api-access-mg9kh podName:64e1f699-90d5-49ab-b8e7-ba90dc577706 nodeName:}" failed. No retries permitted until 2025-03-17 17:43:16.111417292 +0000 UTC m=+7.586206040 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mg9kh" (UniqueName: "kubernetes.io/projected/64e1f699-90d5-49ab-b8e7-ba90dc577706-kube-api-access-mg9kh") pod "kube-proxy-tp4mk" (UID: "64e1f699-90d5-49ab-b8e7-ba90dc577706") : configmap "kube-root-ca.crt" not found Mar 17 17:43:15.614977 kubelet[2791]: E0317 17:43:15.614830 2791 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 17 17:43:15.614977 kubelet[2791]: E0317 17:43:15.614865 2791 projected.go:194] Error preparing data for projected volume kube-api-access-vw7pn for pod kube-system/cilium-t4pf6: configmap "kube-root-ca.crt" not found Mar 17 17:43:15.614977 kubelet[2791]: E0317 17:43:15.614925 2791 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/349e066e-c1be-47c7-b7f8-9a38bec4202a-kube-api-access-vw7pn podName:349e066e-c1be-47c7-b7f8-9a38bec4202a nodeName:}" failed. No retries permitted until 2025-03-17 17:43:16.114905383 +0000 UTC m=+7.589694131 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vw7pn" (UniqueName: "kubernetes.io/projected/349e066e-c1be-47c7-b7f8-9a38bec4202a-kube-api-access-vw7pn") pod "cilium-t4pf6" (UID: "349e066e-c1be-47c7-b7f8-9a38bec4202a") : configmap "kube-root-ca.crt" not found Mar 17 17:43:15.762416 systemd[1]: Created slice kubepods-besteffort-pod4954672b_ad5c_4662_bae7_9b2f2cb140a9.slice - libcontainer container kubepods-besteffort-pod4954672b_ad5c_4662_bae7_9b2f2cb140a9.slice. Mar 17 17:43:15.786387 kubelet[2791]: I0317 17:43:15.786251 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4954672b-ad5c-4662-bae7-9b2f2cb140a9-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-zjdhr\" (UID: \"4954672b-ad5c-4662-bae7-9b2f2cb140a9\") " pod="kube-system/cilium-operator-6c4d7847fc-zjdhr" Mar 17 17:43:15.786387 kubelet[2791]: I0317 17:43:15.786316 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twrgp\" (UniqueName: \"kubernetes.io/projected/4954672b-ad5c-4662-bae7-9b2f2cb140a9-kube-api-access-twrgp\") pod \"cilium-operator-6c4d7847fc-zjdhr\" (UID: \"4954672b-ad5c-4662-bae7-9b2f2cb140a9\") " pod="kube-system/cilium-operator-6c4d7847fc-zjdhr" Mar 17 17:43:16.068223 containerd[1502]: time="2025-03-17T17:43:16.068017256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zjdhr,Uid:4954672b-ad5c-4662-bae7-9b2f2cb140a9,Namespace:kube-system,Attempt:0,}" Mar 17 17:43:16.100131 containerd[1502]: time="2025-03-17T17:43:16.099987540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:43:16.102045 containerd[1502]: time="2025-03-17T17:43:16.100060422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:43:16.102045 containerd[1502]: time="2025-03-17T17:43:16.100492674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:16.102045 containerd[1502]: time="2025-03-17T17:43:16.100717559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:16.122893 systemd[1]: Started cri-containerd-7677ad36438c85a5405c0fda856bc0d4d5f95248f4c06a2e7efb65ce2251fc97.scope - libcontainer container 7677ad36438c85a5405c0fda856bc0d4d5f95248f4c06a2e7efb65ce2251fc97. Mar 17 17:43:16.165917 containerd[1502]: time="2025-03-17T17:43:16.165849640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zjdhr,Uid:4954672b-ad5c-4662-bae7-9b2f2cb140a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"7677ad36438c85a5405c0fda856bc0d4d5f95248f4c06a2e7efb65ce2251fc97\"" Mar 17 17:43:16.168986 containerd[1502]: time="2025-03-17T17:43:16.168946402Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 17:43:16.356347 containerd[1502]: time="2025-03-17T17:43:16.356198069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tp4mk,Uid:64e1f699-90d5-49ab-b8e7-ba90dc577706,Namespace:kube-system,Attempt:0,}" Mar 17 17:43:16.373695 containerd[1502]: time="2025-03-17T17:43:16.373201158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t4pf6,Uid:349e066e-c1be-47c7-b7f8-9a38bec4202a,Namespace:kube-system,Attempt:0,}" Mar 17 17:43:16.387577 containerd[1502]: time="2025-03-17T17:43:16.386982882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:43:16.387577 containerd[1502]: time="2025-03-17T17:43:16.387051524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:43:16.387577 containerd[1502]: time="2025-03-17T17:43:16.387066804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:16.387577 containerd[1502]: time="2025-03-17T17:43:16.387145086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:16.408895 systemd[1]: Started cri-containerd-b5d50d2b9736cff4307bfcecf5183773caeccc340c8ec29976e2f8d677a32059.scope - libcontainer container b5d50d2b9736cff4307bfcecf5183773caeccc340c8ec29976e2f8d677a32059. Mar 17 17:43:16.416497 containerd[1502]: time="2025-03-17T17:43:16.414881379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:43:16.416497 containerd[1502]: time="2025-03-17T17:43:16.414940501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:43:16.416497 containerd[1502]: time="2025-03-17T17:43:16.414960901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:16.416497 containerd[1502]: time="2025-03-17T17:43:16.415050104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:16.439852 systemd[1]: Started cri-containerd-0222937b0bfd9f0c024739c5358d94f8962ac5fd1c3392e33cc97856e2598552.scope - libcontainer container 0222937b0bfd9f0c024739c5358d94f8962ac5fd1c3392e33cc97856e2598552. Mar 17 17:43:16.450357 containerd[1502]: time="2025-03-17T17:43:16.450313235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tp4mk,Uid:64e1f699-90d5-49ab-b8e7-ba90dc577706,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5d50d2b9736cff4307bfcecf5183773caeccc340c8ec29976e2f8d677a32059\"" Mar 17 17:43:16.458789 containerd[1502]: time="2025-03-17T17:43:16.458675336Z" level=info msg="CreateContainer within sandbox \"b5d50d2b9736cff4307bfcecf5183773caeccc340c8ec29976e2f8d677a32059\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:43:16.475821 containerd[1502]: time="2025-03-17T17:43:16.475771868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t4pf6,Uid:349e066e-c1be-47c7-b7f8-9a38bec4202a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0222937b0bfd9f0c024739c5358d94f8962ac5fd1c3392e33cc97856e2598552\"" Mar 17 17:43:16.482078 containerd[1502]: time="2025-03-17T17:43:16.482015633Z" level=info msg="CreateContainer within sandbox \"b5d50d2b9736cff4307bfcecf5183773caeccc340c8ec29976e2f8d677a32059\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9d9af724fa9f4c1d8dd8b78fa8080b6f287d69b6100ddda674ecaa84680a82dc\"" Mar 17 17:43:16.482971 containerd[1502]: time="2025-03-17T17:43:16.482827414Z" level=info msg="StartContainer for \"9d9af724fa9f4c1d8dd8b78fa8080b6f287d69b6100ddda674ecaa84680a82dc\"" Mar 17 17:43:16.510774 systemd[1]: Started cri-containerd-9d9af724fa9f4c1d8dd8b78fa8080b6f287d69b6100ddda674ecaa84680a82dc.scope - libcontainer container 9d9af724fa9f4c1d8dd8b78fa8080b6f287d69b6100ddda674ecaa84680a82dc. Mar 17 17:43:16.546836 containerd[1502]: time="2025-03-17T17:43:16.546788184Z" level=info msg="StartContainer for \"9d9af724fa9f4c1d8dd8b78fa8080b6f287d69b6100ddda674ecaa84680a82dc\" returns successfully" Mar 17 17:43:16.763847 kubelet[2791]: I0317 17:43:16.763738 2791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tp4mk" podStartSLOduration=1.763588791 podStartE2EDuration="1.763588791s" podCreationTimestamp="2025-03-17 17:43:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:43:16.763308344 +0000 UTC m=+8.238097692" watchObservedRunningTime="2025-03-17 17:43:16.763588791 +0000 UTC m=+8.238377579" Mar 17 17:43:21.731011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2780387320.mount: Deactivated successfully. 
Mar 17 17:43:22.202624 containerd[1502]: time="2025-03-17T17:43:22.202567879Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:43:22.203976 containerd[1502]: time="2025-03-17T17:43:22.203805234Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Mar 17 17:43:22.204884 containerd[1502]: time="2025-03-17T17:43:22.204845863Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:43:22.207657 containerd[1502]: time="2025-03-17T17:43:22.207534980Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 6.038417773s" Mar 17 17:43:22.207657 containerd[1502]: time="2025-03-17T17:43:22.207578301Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 17 17:43:22.207657 containerd[1502]: time="2025-03-17T17:43:22.209870646Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 17:43:22.216612 containerd[1502]: time="2025-03-17T17:43:22.216563235Z" level=info msg="CreateContainer within sandbox \"7677ad36438c85a5405c0fda856bc0d4d5f95248f4c06a2e7efb65ce2251fc97\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 17:43:22.238125 containerd[1502]: time="2025-03-17T17:43:22.238062485Z" level=info msg="CreateContainer within sandbox \"7677ad36438c85a5405c0fda856bc0d4d5f95248f4c06a2e7efb65ce2251fc97\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d831a42c2a5e69e2f6a1d357301ef0a8849e16d7f0c0777191e45bdc1d047f8a\"" Mar 17 17:43:22.240539 containerd[1502]: time="2025-03-17T17:43:22.240495834Z" level=info msg="StartContainer for \"d831a42c2a5e69e2f6a1d357301ef0a8849e16d7f0c0777191e45bdc1d047f8a\"" Mar 17 17:43:22.277977 systemd[1]: Started cri-containerd-d831a42c2a5e69e2f6a1d357301ef0a8849e16d7f0c0777191e45bdc1d047f8a.scope - libcontainer container d831a42c2a5e69e2f6a1d357301ef0a8849e16d7f0c0777191e45bdc1d047f8a. Mar 17 17:43:22.309124 containerd[1502]: time="2025-03-17T17:43:22.308593603Z" level=info msg="StartContainer for \"d831a42c2a5e69e2f6a1d357301ef0a8849e16d7f0c0777191e45bdc1d047f8a\" returns successfully" Mar 17 17:43:26.708100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1873311208.mount: Deactivated successfully. 
Mar 17 17:43:28.241851 containerd[1502]: time="2025-03-17T17:43:28.241790862Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:43:28.243204 containerd[1502]: time="2025-03-17T17:43:28.242926096Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Mar 17 17:43:28.245662 containerd[1502]: time="2025-03-17T17:43:28.244580906Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:43:28.247656 containerd[1502]: time="2025-03-17T17:43:28.247596356Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.037683029s" Mar 17 17:43:28.247786 containerd[1502]: time="2025-03-17T17:43:28.247659838Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 17 17:43:28.251673 containerd[1502]: time="2025-03-17T17:43:28.251586075Z" level=info msg="CreateContainer within sandbox \"0222937b0bfd9f0c024739c5358d94f8962ac5fd1c3392e33cc97856e2598552\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 17:43:28.266911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2447063260.mount: Deactivated successfully. Mar 17 17:43:28.272107 containerd[1502]: time="2025-03-17T17:43:28.271771839Z" level=info msg="CreateContainer within sandbox \"0222937b0bfd9f0c024739c5358d94f8962ac5fd1c3392e33cc97856e2598552\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"78552d99eb9c936dd099919f0caae899245f4bfd93e3f1d0e8c68943b366c8a7\"" Mar 17 17:43:28.273962 containerd[1502]: time="2025-03-17T17:43:28.273874982Z" level=info msg="StartContainer for \"78552d99eb9c936dd099919f0caae899245f4bfd93e3f1d0e8c68943b366c8a7\"" Mar 17 17:43:28.312887 systemd[1]: Started cri-containerd-78552d99eb9c936dd099919f0caae899245f4bfd93e3f1d0e8c68943b366c8a7.scope - libcontainer container 78552d99eb9c936dd099919f0caae899245f4bfd93e3f1d0e8c68943b366c8a7. Mar 17 17:43:28.353535 containerd[1502]: time="2025-03-17T17:43:28.353366201Z" level=info msg="StartContainer for \"78552d99eb9c936dd099919f0caae899245f4bfd93e3f1d0e8c68943b366c8a7\" returns successfully" Mar 17 17:43:28.366321 systemd[1]: cri-containerd-78552d99eb9c936dd099919f0caae899245f4bfd93e3f1d0e8c68943b366c8a7.scope: Deactivated successfully. 
Mar 17 17:43:28.466593 containerd[1502]: time="2025-03-17T17:43:28.466250979Z" level=info msg="shim disconnected" id=78552d99eb9c936dd099919f0caae899245f4bfd93e3f1d0e8c68943b366c8a7 namespace=k8s.io Mar 17 17:43:28.466593 containerd[1502]: time="2025-03-17T17:43:28.466340941Z" level=warning msg="cleaning up after shim disconnected" id=78552d99eb9c936dd099919f0caae899245f4bfd93e3f1d0e8c68943b366c8a7 namespace=k8s.io Mar 17 17:43:28.466593 containerd[1502]: time="2025-03-17T17:43:28.466354542Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:43:28.820182 containerd[1502]: time="2025-03-17T17:43:28.819083336Z" level=info msg="CreateContainer within sandbox \"0222937b0bfd9f0c024739c5358d94f8962ac5fd1c3392e33cc97856e2598552\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 17:43:28.838247 containerd[1502]: time="2025-03-17T17:43:28.838188188Z" level=info msg="CreateContainer within sandbox \"0222937b0bfd9f0c024739c5358d94f8962ac5fd1c3392e33cc97856e2598552\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a7d8aa286ea384d4450a5826cbfe211f3fd39567d22f4968be85ff8a3532b3b9\"" Mar 17 17:43:28.839730 containerd[1502]: time="2025-03-17T17:43:28.839046254Z" level=info msg="StartContainer for \"a7d8aa286ea384d4450a5826cbfe211f3fd39567d22f4968be85ff8a3532b3b9\"" Mar 17 17:43:28.844654 kubelet[2791]: I0317 17:43:28.843675 2791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-zjdhr" podStartSLOduration=7.802813191 podStartE2EDuration="13.843653911s" podCreationTimestamp="2025-03-17 17:43:15 +0000 UTC" firstStartedPulling="2025-03-17 17:43:16.168013137 +0000 UTC m=+7.642801885" lastFinishedPulling="2025-03-17 17:43:22.208853857 +0000 UTC m=+13.683642605" observedRunningTime="2025-03-17 17:43:22.880489769 +0000 UTC m=+14.355278517" watchObservedRunningTime="2025-03-17 17:43:28.843653911 +0000 UTC m=+20.318442659" Mar 17 17:43:28.871844 systemd[1]: Started cri-containerd-a7d8aa286ea384d4450a5826cbfe211f3fd39567d22f4968be85ff8a3532b3b9.scope - libcontainer container a7d8aa286ea384d4450a5826cbfe211f3fd39567d22f4968be85ff8a3532b3b9. Mar 17 17:43:28.899996 containerd[1502]: time="2025-03-17T17:43:28.899898234Z" level=info msg="StartContainer for \"a7d8aa286ea384d4450a5826cbfe211f3fd39567d22f4968be85ff8a3532b3b9\" returns successfully" Mar 17 17:43:28.917165 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:43:28.917489 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:43:28.917785 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:43:28.927196 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:43:28.927476 systemd[1]: cri-containerd-a7d8aa286ea384d4450a5826cbfe211f3fd39567d22f4968be85ff8a3532b3b9.scope: Deactivated successfully. Mar 17 17:43:28.948833 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 17 17:43:28.963577 containerd[1502]: time="2025-03-17T17:43:28.963453416Z" level=info msg="shim disconnected" id=a7d8aa286ea384d4450a5826cbfe211f3fd39567d22f4968be85ff8a3532b3b9 namespace=k8s.io Mar 17 17:43:28.963577 containerd[1502]: time="2025-03-17T17:43:28.963552219Z" level=warning msg="cleaning up after shim disconnected" id=a7d8aa286ea384d4450a5826cbfe211f3fd39567d22f4968be85ff8a3532b3b9 namespace=k8s.io Mar 17 17:43:28.963577 containerd[1502]: time="2025-03-17T17:43:28.963577220Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:43:29.266105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78552d99eb9c936dd099919f0caae899245f4bfd93e3f1d0e8c68943b366c8a7-rootfs.mount: Deactivated successfully. Mar 17 17:43:29.821023 containerd[1502]: time="2025-03-17T17:43:29.820596057Z" level=info msg="CreateContainer within sandbox \"0222937b0bfd9f0c024739c5358d94f8962ac5fd1c3392e33cc97856e2598552\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 17:43:29.851710 containerd[1502]: time="2025-03-17T17:43:29.851166779Z" level=info msg="CreateContainer within sandbox \"0222937b0bfd9f0c024739c5358d94f8962ac5fd1c3392e33cc97856e2598552\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d7bc1c4a0eef05cd78135d968a37dd2ed939ba4f1231339cf6790007d85e6e50\"" Mar 17 17:43:29.857359 containerd[1502]: time="2025-03-17T17:43:29.856414537Z" level=info msg="StartContainer for \"d7bc1c4a0eef05cd78135d968a37dd2ed939ba4f1231339cf6790007d85e6e50\"" Mar 17 17:43:29.897906 systemd[1]: Started cri-containerd-d7bc1c4a0eef05cd78135d968a37dd2ed939ba4f1231339cf6790007d85e6e50.scope - libcontainer container d7bc1c4a0eef05cd78135d968a37dd2ed939ba4f1231339cf6790007d85e6e50. Mar 17 17:43:29.933682 containerd[1502]: time="2025-03-17T17:43:29.932275425Z" level=info msg="StartContainer for \"d7bc1c4a0eef05cd78135d968a37dd2ed939ba4f1231339cf6790007d85e6e50\" returns successfully" Mar 17 17:43:29.937265 systemd[1]: cri-containerd-d7bc1c4a0eef05cd78135d968a37dd2ed939ba4f1231339cf6790007d85e6e50.scope: Deactivated successfully. Mar 17 17:43:29.969382 containerd[1502]: time="2025-03-17T17:43:29.969308622Z" level=info msg="shim disconnected" id=d7bc1c4a0eef05cd78135d968a37dd2ed939ba4f1231339cf6790007d85e6e50 namespace=k8s.io Mar 17 17:43:29.969836 containerd[1502]: time="2025-03-17T17:43:29.969420545Z" level=warning msg="cleaning up after shim disconnected" id=d7bc1c4a0eef05cd78135d968a37dd2ed939ba4f1231339cf6790007d85e6e50 namespace=k8s.io Mar 17 17:43:29.969836 containerd[1502]: time="2025-03-17T17:43:29.969431746Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:43:30.267429 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7bc1c4a0eef05cd78135d968a37dd2ed939ba4f1231339cf6790007d85e6e50-rootfs.mount: Deactivated successfully. 
Mar 17 17:43:30.827947 containerd[1502]: time="2025-03-17T17:43:30.827773941Z" level=info msg="CreateContainer within sandbox \"0222937b0bfd9f0c024739c5358d94f8962ac5fd1c3392e33cc97856e2598552\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 17:43:30.856561 containerd[1502]: time="2025-03-17T17:43:30.856503654Z" level=info msg="CreateContainer within sandbox \"0222937b0bfd9f0c024739c5358d94f8962ac5fd1c3392e33cc97856e2598552\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"68d32643dd327894166fd6de7105ed46bc6f7aa79eb8088a3a204f49c51db96b\"" Mar 17 17:43:30.860385 containerd[1502]: time="2025-03-17T17:43:30.858820965Z" level=info msg="StartContainer for \"68d32643dd327894166fd6de7105ed46bc6f7aa79eb8088a3a204f49c51db96b\"" Mar 17 17:43:30.894942 systemd[1]: run-containerd-runc-k8s.io-68d32643dd327894166fd6de7105ed46bc6f7aa79eb8088a3a204f49c51db96b-runc.6U2O9w.mount: Deactivated successfully. Mar 17 17:43:30.902821 systemd[1]: Started cri-containerd-68d32643dd327894166fd6de7105ed46bc6f7aa79eb8088a3a204f49c51db96b.scope - libcontainer container 68d32643dd327894166fd6de7105ed46bc6f7aa79eb8088a3a204f49c51db96b. Mar 17 17:43:30.930521 systemd[1]: cri-containerd-68d32643dd327894166fd6de7105ed46bc6f7aa79eb8088a3a204f49c51db96b.scope: Deactivated successfully. Mar 17 17:43:30.936924 containerd[1502]: time="2025-03-17T17:43:30.936787414Z" level=info msg="StartContainer for \"68d32643dd327894166fd6de7105ed46bc6f7aa79eb8088a3a204f49c51db96b\" returns successfully" Mar 17 17:43:30.969221 containerd[1502]: time="2025-03-17T17:43:30.969150717Z" level=info msg="shim disconnected" id=68d32643dd327894166fd6de7105ed46bc6f7aa79eb8088a3a204f49c51db96b namespace=k8s.io Mar 17 17:43:30.969221 containerd[1502]: time="2025-03-17T17:43:30.969218399Z" level=warning msg="cleaning up after shim disconnected" id=68d32643dd327894166fd6de7105ed46bc6f7aa79eb8088a3a204f49c51db96b namespace=k8s.io Mar 17 17:43:30.969565 containerd[1502]: time="2025-03-17T17:43:30.969247600Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:43:31.264087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68d32643dd327894166fd6de7105ed46bc6f7aa79eb8088a3a204f49c51db96b-rootfs.mount: Deactivated successfully. Mar 17 17:43:31.834395 containerd[1502]: time="2025-03-17T17:43:31.834213628Z" level=info msg="CreateContainer within sandbox \"0222937b0bfd9f0c024739c5358d94f8962ac5fd1c3392e33cc97856e2598552\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 17:43:31.863547 containerd[1502]: time="2025-03-17T17:43:31.863459124Z" level=info msg="CreateContainer within sandbox \"0222937b0bfd9f0c024739c5358d94f8962ac5fd1c3392e33cc97856e2598552\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c8b2d5890f60d91fc92df388c963d6791ad4179fa65c9ff4bbed997a5ecb8533\"" Mar 17 17:43:31.865297 containerd[1502]: time="2025-03-17T17:43:31.865250218Z" level=info msg="StartContainer for \"c8b2d5890f60d91fc92df388c963d6791ad4179fa65c9ff4bbed997a5ecb8533\"" Mar 17 17:43:31.901871 systemd[1]: Started cri-containerd-c8b2d5890f60d91fc92df388c963d6791ad4179fa65c9ff4bbed997a5ecb8533.scope - libcontainer container c8b2d5890f60d91fc92df388c963d6791ad4179fa65c9ff4bbed997a5ecb8533. 
Mar 17 17:43:31.939500 containerd[1502]: time="2025-03-17T17:43:31.939438089Z" level=info msg="StartContainer for \"c8b2d5890f60d91fc92df388c963d6791ad4179fa65c9ff4bbed997a5ecb8533\" returns successfully" Mar 17 17:43:32.057703 kubelet[2791]: I0317 17:43:32.057052 2791 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Mar 17 17:43:32.106018 kubelet[2791]: I0317 17:43:32.104898 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgqd2\" (UniqueName: \"kubernetes.io/projected/0a135f8f-6e25-41a1-a5c0-8034a27ccdf7-kube-api-access-sgqd2\") pod \"coredns-668d6bf9bc-crwkx\" (UID: \"0a135f8f-6e25-41a1-a5c0-8034a27ccdf7\") " pod="kube-system/coredns-668d6bf9bc-crwkx" Mar 17 17:43:32.106018 kubelet[2791]: I0317 17:43:32.104938 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a135f8f-6e25-41a1-a5c0-8034a27ccdf7-config-volume\") pod \"coredns-668d6bf9bc-crwkx\" (UID: \"0a135f8f-6e25-41a1-a5c0-8034a27ccdf7\") " pod="kube-system/coredns-668d6bf9bc-crwkx" Mar 17 17:43:32.112309 systemd[1]: Created slice kubepods-burstable-pod0a135f8f_6e25_41a1_a5c0_8034a27ccdf7.slice - libcontainer container kubepods-burstable-pod0a135f8f_6e25_41a1_a5c0_8034a27ccdf7.slice. Mar 17 17:43:32.123446 systemd[1]: Created slice kubepods-burstable-pod3b506da1_c77f_4c00_8a5e_2df66def19e0.slice - libcontainer container kubepods-burstable-pod3b506da1_c77f_4c00_8a5e_2df66def19e0.slice. Mar 17 17:43:32.205749 kubelet[2791]: I0317 17:43:32.205710 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b506da1-c77f-4c00-8a5e-2df66def19e0-config-volume\") pod \"coredns-668d6bf9bc-vwhjp\" (UID: \"3b506da1-c77f-4c00-8a5e-2df66def19e0\") " pod="kube-system/coredns-668d6bf9bc-vwhjp" Mar 17 17:43:32.206574 kubelet[2791]: I0317 17:43:32.206544 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl2c7\" (UniqueName: \"kubernetes.io/projected/3b506da1-c77f-4c00-8a5e-2df66def19e0-kube-api-access-hl2c7\") pod \"coredns-668d6bf9bc-vwhjp\" (UID: \"3b506da1-c77f-4c00-8a5e-2df66def19e0\") " pod="kube-system/coredns-668d6bf9bc-vwhjp" Mar 17 17:43:32.422982 containerd[1502]: time="2025-03-17T17:43:32.422926179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-crwkx,Uid:0a135f8f-6e25-41a1-a5c0-8034a27ccdf7,Namespace:kube-system,Attempt:0,}" Mar 17 17:43:32.431458 containerd[1502]: time="2025-03-17T17:43:32.430982267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vwhjp,Uid:3b506da1-c77f-4c00-8a5e-2df66def19e0,Namespace:kube-system,Attempt:0,}" Mar 17 17:43:32.864389 kubelet[2791]: I0317 17:43:32.864233 2791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-t4pf6" podStartSLOduration=6.094039732 podStartE2EDuration="17.864213741s" podCreationTimestamp="2025-03-17 17:43:15 +0000 UTC" firstStartedPulling="2025-03-17 17:43:16.478797788 +0000 UTC m=+7.953586496" lastFinishedPulling="2025-03-17 17:43:28.248971757 +0000 UTC m=+19.723760505" observedRunningTime="2025-03-17 17:43:32.858220116 +0000 UTC m=+24.333008864" watchObservedRunningTime="2025-03-17 17:43:32.864213741 +0000 UTC m=+24.339002489" Mar 17 17:43:34.204423 systemd-networkd[1399]: cilium_host: Link UP Mar 17 17:43:34.204551 
systemd-networkd[1399]: cilium_net: Link UP Mar 17 17:43:34.204708 systemd-networkd[1399]: cilium_net: Gained carrier Mar 17 17:43:34.207905 systemd-networkd[1399]: cilium_host: Gained carrier Mar 17 17:43:34.344225 systemd-networkd[1399]: cilium_vxlan: Link UP Mar 17 17:43:34.344236 systemd-networkd[1399]: cilium_vxlan: Gained carrier Mar 17 17:43:34.408902 systemd-networkd[1399]: cilium_host: Gained IPv6LL Mar 17 17:43:34.569030 systemd-networkd[1399]: cilium_net: Gained IPv6LL Mar 17 17:43:34.656661 kernel: NET: Registered PF_ALG protocol family Mar 17 17:43:35.457878 systemd-networkd[1399]: lxc_health: Link UP Mar 17 17:43:35.472810 systemd-networkd[1399]: cilium_vxlan: Gained IPv6LL Mar 17 17:43:35.473756 systemd-networkd[1399]: lxc_health: Gained carrier Mar 17 17:43:36.022581 kernel: eth0: renamed from tmpc1abf Mar 17 17:43:36.027236 systemd-networkd[1399]: lxc5ecc64c0f299: Link UP Mar 17 17:43:36.036719 kernel: eth0: renamed from tmpb8057 Mar 17 17:43:36.043300 systemd-networkd[1399]: lxc88098ded54eb: Link UP Mar 17 17:43:36.044396 systemd-networkd[1399]: lxc5ecc64c0f299: Gained carrier Mar 17 17:43:36.044548 systemd-networkd[1399]: lxc88098ded54eb: Gained carrier Mar 17 17:43:36.754193 systemd-networkd[1399]: lxc_health: Gained IPv6LL Mar 17 17:43:37.138138 systemd-networkd[1399]: lxc88098ded54eb: Gained IPv6LL Mar 17 17:43:38.035597 systemd-networkd[1399]: lxc5ecc64c0f299: Gained IPv6LL Mar 17 17:43:40.196058 containerd[1502]: time="2025-03-17T17:43:40.195923692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:43:40.198593 containerd[1502]: time="2025-03-17T17:43:40.195990454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:43:40.198593 containerd[1502]: time="2025-03-17T17:43:40.198310729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:40.198593 containerd[1502]: time="2025-03-17T17:43:40.198425333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:40.234170 systemd[1]: run-containerd-runc-k8s.io-b8057eaad7c9fb06223f3f315136732d463fec888524ce3c8f77992532f10acf-runc.d4K7ba.mount: Deactivated successfully. Mar 17 17:43:40.241951 systemd[1]: Started cri-containerd-b8057eaad7c9fb06223f3f315136732d463fec888524ce3c8f77992532f10acf.scope - libcontainer container b8057eaad7c9fb06223f3f315136732d463fec888524ce3c8f77992532f10acf. Mar 17 17:43:40.258276 containerd[1502]: time="2025-03-17T17:43:40.257516562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:43:40.258276 containerd[1502]: time="2025-03-17T17:43:40.258173344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:43:40.258849 containerd[1502]: time="2025-03-17T17:43:40.258736722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:40.259051 containerd[1502]: time="2025-03-17T17:43:40.258984570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:40.293959 systemd[1]: Started cri-containerd-c1abff1af4db38529f281576a7fb17de28b8942db9ebed14ee96d4364abf3359.scope - libcontainer container c1abff1af4db38529f281576a7fb17de28b8942db9ebed14ee96d4364abf3359. Mar 17 17:43:40.338293 containerd[1502]: time="2025-03-17T17:43:40.338251052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vwhjp,Uid:3b506da1-c77f-4c00-8a5e-2df66def19e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8057eaad7c9fb06223f3f315136732d463fec888524ce3c8f77992532f10acf\"" Mar 17 17:43:40.345406 containerd[1502]: time="2025-03-17T17:43:40.345323520Z" level=info msg="CreateContainer within sandbox \"b8057eaad7c9fb06223f3f315136732d463fec888524ce3c8f77992532f10acf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:43:40.372712 containerd[1502]: time="2025-03-17T17:43:40.371488246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-crwkx,Uid:0a135f8f-6e25-41a1-a5c0-8034a27ccdf7,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1abff1af4db38529f281576a7fb17de28b8942db9ebed14ee96d4364abf3359\"" Mar 17 17:43:40.374112 containerd[1502]: time="2025-03-17T17:43:40.373814481Z" level=info msg="CreateContainer within sandbox \"b8057eaad7c9fb06223f3f315136732d463fec888524ce3c8f77992532f10acf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d788f57ae06f025e295c318d5fc88fe589418d37789e04bd7715b953e663d88a\"" Mar 17 17:43:40.374737 containerd[1502]: time="2025-03-17T17:43:40.374509503Z" level=info msg="StartContainer for \"d788f57ae06f025e295c318d5fc88fe589418d37789e04bd7715b953e663d88a\"" Mar 17 17:43:40.379426 containerd[1502]: time="2025-03-17T17:43:40.379176534Z" level=info msg="CreateContainer within sandbox \"c1abff1af4db38529f281576a7fb17de28b8942db9ebed14ee96d4364abf3359\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:43:40.413524 containerd[1502]: time="2025-03-17T17:43:40.412893824Z" level=info msg="CreateContainer within sandbox \"c1abff1af4db38529f281576a7fb17de28b8942db9ebed14ee96d4364abf3359\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6d3888cc64531efb4bbc44e22740f26f0cbeac03d75f23fdde9651fdf2d19d26\"" Mar 17 17:43:40.415262 containerd[1502]: time="2025-03-17T17:43:40.415183298Z" level=info msg="StartContainer for \"6d3888cc64531efb4bbc44e22740f26f0cbeac03d75f23fdde9651fdf2d19d26\"" Mar 17 17:43:40.417519 systemd[1]: Started cri-containerd-d788f57ae06f025e295c318d5fc88fe589418d37789e04bd7715b953e663d88a.scope - libcontainer container d788f57ae06f025e295c318d5fc88fe589418d37789e04bd7715b953e663d88a. Mar 17 17:43:40.452841 systemd[1]: Started cri-containerd-6d3888cc64531efb4bbc44e22740f26f0cbeac03d75f23fdde9651fdf2d19d26.scope - libcontainer container 6d3888cc64531efb4bbc44e22740f26f0cbeac03d75f23fdde9651fdf2d19d26. 
Mar 17 17:43:40.499344 containerd[1502]: time="2025-03-17T17:43:40.498972125Z" level=info msg="StartContainer for \"d788f57ae06f025e295c318d5fc88fe589418d37789e04bd7715b953e663d88a\" returns successfully" Mar 17 17:43:40.525273 containerd[1502]: time="2025-03-17T17:43:40.525036168Z" level=info msg="StartContainer for \"6d3888cc64531efb4bbc44e22740f26f0cbeac03d75f23fdde9651fdf2d19d26\" returns successfully" Mar 17 17:43:40.882144 kubelet[2791]: I0317 17:43:40.880522 2791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-crwkx" podStartSLOduration=25.880503095 podStartE2EDuration="25.880503095s" podCreationTimestamp="2025-03-17 17:43:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:43:40.87694006 +0000 UTC m=+32.351728888" watchObservedRunningTime="2025-03-17 17:43:40.880503095 +0000 UTC m=+32.355291843" Mar 17 17:43:40.901831 kubelet[2791]: I0317 17:43:40.901719 2791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vwhjp" podStartSLOduration=25.90169818 podStartE2EDuration="25.90169818s" podCreationTimestamp="2025-03-17 17:43:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:43:40.899481628 +0000 UTC m=+32.374270376" watchObservedRunningTime="2025-03-17 17:43:40.90169818 +0000 UTC m=+32.376486928" Mar 17 17:46:05.947979 update_engine[1490]: I20250317 17:46:05.947894 1490 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 17 17:46:05.952653 update_engine[1490]: I20250317 17:46:05.948261 1490 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 17 17:46:05.952653 update_engine[1490]: I20250317 17:46:05.948539 1490 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 17 17:46:05.952653 update_engine[1490]: I20250317 17:46:05.949419 1490 omaha_request_params.cc:62] Current group set to beta Mar 17 17:46:05.952653 update_engine[1490]: I20250317 17:46:05.949538 1490 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 17 17:46:05.952653 update_engine[1490]: I20250317 17:46:05.949592 1490 update_attempter.cc:643] Scheduling an action processor start. 
Mar 17 17:46:05.952653 update_engine[1490]: I20250317 17:46:05.949616 1490 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 17 17:46:05.952653 update_engine[1490]: I20250317 17:46:05.949676 1490 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 17 17:46:05.952653 update_engine[1490]: I20250317 17:46:05.949745 1490 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 17 17:46:05.952653 update_engine[1490]: I20250317 17:46:05.949753 1490 omaha_request_action.cc:272] Request: Mar 17 17:46:05.952653 update_engine[1490]: I20250317 17:46:05.949759 1490 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 17:46:05.952653 update_engine[1490]: I20250317 17:46:05.952246 1490 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 17:46:05.953145 locksmithd[1521]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 17 17:46:05.953466 update_engine[1490]: I20250317 17:46:05.952729 1490 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 17 17:46:05.953679 update_engine[1490]: E20250317 17:46:05.953562 1490 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 17:46:05.953752 update_engine[1490]: I20250317 17:46:05.953691 1490 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 17 17:46:15.856888 update_engine[1490]: I20250317 17:46:15.856772 1490 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 17:46:15.857736 update_engine[1490]: I20250317 17:46:15.857177 1490 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 17:46:15.857736 update_engine[1490]: I20250317 17:46:15.857517 1490 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 17 17:46:15.858168 update_engine[1490]: E20250317 17:46:15.857939 1490 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 17:46:15.858168 update_engine[1490]: I20250317 17:46:15.858052 1490 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 17 17:46:25.854499 update_engine[1490]: I20250317 17:46:25.853816 1490 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 17:46:25.854499 update_engine[1490]: I20250317 17:46:25.854079 1490 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 17:46:25.854499 update_engine[1490]: I20250317 17:46:25.854341 1490 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 17 17:46:25.855671 update_engine[1490]: E20250317 17:46:25.855381 1490 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 17:46:25.855671 update_engine[1490]: I20250317 17:46:25.855592 1490 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 17 17:46:35.859509 update_engine[1490]: I20250317 17:46:35.858304 1490 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 17:46:35.859509 update_engine[1490]: I20250317 17:46:35.858613 1490 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 17:46:35.859509 update_engine[1490]: I20250317 17:46:35.858987 1490 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 17 17:46:35.860673 update_engine[1490]: E20250317 17:46:35.860414 1490 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 17:46:35.860673 update_engine[1490]: I20250317 17:46:35.860615 1490 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 17 17:46:35.861742 update_engine[1490]: I20250317 17:46:35.860870 1490 omaha_request_action.cc:617] Omaha request response: Mar 17 17:46:35.861742 update_engine[1490]: E20250317 17:46:35.860976 1490 omaha_request_action.cc:636] Omaha request network transfer failed. Mar 17 17:46:35.861742 update_engine[1490]: I20250317 17:46:35.860998 1490 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Mar 17 17:46:35.861742 update_engine[1490]: I20250317 17:46:35.861006 1490 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 17 17:46:35.861742 update_engine[1490]: I20250317 17:46:35.861011 1490 update_attempter.cc:306] Processing Done. Mar 17 17:46:35.861742 update_engine[1490]: E20250317 17:46:35.861027 1490 update_attempter.cc:619] Update failed. Mar 17 17:46:35.861742 update_engine[1490]: I20250317 17:46:35.861033 1490 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 17 17:46:35.861742 update_engine[1490]: I20250317 17:46:35.861038 1490 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 17 17:46:35.861742 update_engine[1490]: I20250317 17:46:35.861044 1490 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Mar 17 17:46:35.861742 update_engine[1490]: I20250317 17:46:35.861120 1490 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 17 17:46:35.861742 update_engine[1490]: I20250317 17:46:35.861145 1490 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 17 17:46:35.861742 update_engine[1490]: I20250317 17:46:35.861152 1490 omaha_request_action.cc:272] Request: Mar 17 17:46:35.861742 update_engine[1490]: I20250317 17:46:35.861161 1490 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 17:46:35.862922 update_engine[1490]: I20250317 17:46:35.861358 1490 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 17:46:35.862922 update_engine[1490]: I20250317 17:46:35.861604 1490 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
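[Annotation] update_engine's fetcher retries the Omaha POST a bounded number of times ("No HTTP response, retry 1" through "retry 3" above) because the update server URL on this image is literally "disabled" and never resolves; after the last retry it reports the failure and reschedules rather than aborting. The same bounded-retry shape in plain Go; the URL, content type, and retry interval are illustrative, not update_engine's actual values:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

// postWithRetries POSTs body to url, retrying up to maxRetries times on
// transport errors, mirroring libcurl_http_fetcher's bounded retry loop.
func postWithRetries(url string, body []byte, maxRetries int) error {
	var err error
	for attempt := 1; attempt <= maxRetries; attempt++ {
		var resp *http.Response
		resp, err = http.Post(url, "text/xml", bytes.NewReader(body))
		if err == nil {
			resp.Body.Close()
			return nil
		}
		fmt.Printf("no HTTP response, retry %d\n", attempt)
		time.Sleep(time.Second) // fixed backoff between attempts (interval is illustrative)
	}
	return fmt.Errorf("transfer failed after %d attempts: %w", maxRetries, err)
}

func main() {
	// "disabled" is not a resolvable host, mirroring the log's
	// "Could not resolve host: disabled".
	if err := postWithRetries("http://disabled/update", nil, 3); err != nil {
		fmt.Println(err)
		fmt.Println("scheduling next update check") // the log's "Next update check in 49m16s"
	}
}
```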
Mar 17 17:46:35.862987 locksmithd[1521]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Mar 17 17:46:35.863339 update_engine[1490]: E20250317 17:46:35.862912 1490 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 17:46:35.863339 update_engine[1490]: I20250317 17:46:35.863034 1490 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 17 17:46:35.863339 update_engine[1490]: I20250317 17:46:35.863044 1490 omaha_request_action.cc:617] Omaha request response: Mar 17 17:46:35.863339 update_engine[1490]: I20250317 17:46:35.863054 1490 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 17 17:46:35.863339 update_engine[1490]: I20250317 17:46:35.863060 1490 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 17 17:46:35.863339 update_engine[1490]: I20250317 17:46:35.863066 1490 update_attempter.cc:306] Processing Done. Mar 17 17:46:35.863339 update_engine[1490]: I20250317 17:46:35.863073 1490 update_attempter.cc:310] Error event sent. Mar 17 17:46:35.863339 update_engine[1490]: I20250317 17:46:35.863085 1490 update_check_scheduler.cc:74] Next update check in 49m16s Mar 17 17:46:35.863905 locksmithd[1521]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Mar 17 17:47:58.935180 systemd[1]: Started sshd@7-128.140.94.11:22-139.178.89.65:41832.service - OpenSSH per-connection server daemon (139.178.89.65:41832). Mar 17 17:47:59.937418 sshd[4212]: Accepted publickey for core from 139.178.89.65 port 41832 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8 Mar 17 17:47:59.939565 sshd-session[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:47:59.946170 systemd-logind[1486]: New session 8 of user core. Mar 17 17:47:59.952847 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 17 17:48:00.726013 sshd[4214]: Connection closed by 139.178.89.65 port 41832 Mar 17 17:48:00.726602 sshd-session[4212]: pam_unix(sshd:session): session closed for user core Mar 17 17:48:00.732766 systemd-logind[1486]: Session 8 logged out. Waiting for processes to exit. Mar 17 17:48:00.733325 systemd[1]: sshd@7-128.140.94.11:22-139.178.89.65:41832.service: Deactivated successfully. Mar 17 17:48:00.736297 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 17:48:00.738186 systemd-logind[1486]: Removed session 8. Mar 17 17:48:05.903085 systemd[1]: Started sshd@8-128.140.94.11:22-139.178.89.65:54242.service - OpenSSH per-connection server daemon (139.178.89.65:54242). Mar 17 17:48:06.903103 sshd[4227]: Accepted publickey for core from 139.178.89.65 port 54242 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8 Mar 17 17:48:06.905987 sshd-session[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:48:06.914324 systemd-logind[1486]: New session 9 of user core. Mar 17 17:48:06.919911 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 17 17:48:07.657401 sshd[4229]: Connection closed by 139.178.89.65 port 54242 Mar 17 17:48:07.656300 sshd-session[4227]: pam_unix(sshd:session): session closed for user core Mar 17 17:48:07.663462 systemd[1]: sshd@8-128.140.94.11:22-139.178.89.65:54242.service: Deactivated successfully. Mar 17 17:48:07.668662 systemd[1]: session-9.scope: Deactivated successfully. 
Mar 17 17:48:07.670193 systemd-logind[1486]: Session 9 logged out. Waiting for processes to exit. Mar 17 17:48:07.671366 systemd-logind[1486]: Removed session 9. Mar 17 17:48:12.836072 systemd[1]: Started sshd@9-128.140.94.11:22-139.178.89.65:42336.service - OpenSSH per-connection server daemon (139.178.89.65:42336). Mar 17 17:48:13.821674 sshd[4244]: Accepted publickey for core from 139.178.89.65 port 42336 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8 Mar 17 17:48:13.823896 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:48:13.829768 systemd-logind[1486]: New session 10 of user core. Mar 17 17:48:13.842089 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 17 17:48:14.579034 sshd[4246]: Connection closed by 139.178.89.65 port 42336 Mar 17 17:48:14.579768 sshd-session[4244]: pam_unix(sshd:session): session closed for user core Mar 17 17:48:14.585416 systemd[1]: sshd@9-128.140.94.11:22-139.178.89.65:42336.service: Deactivated successfully. Mar 17 17:48:14.590573 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 17:48:14.592341 systemd-logind[1486]: Session 10 logged out. Waiting for processes to exit. Mar 17 17:48:14.594131 systemd-logind[1486]: Removed session 10. Mar 17 17:48:19.761046 systemd[1]: Started sshd@10-128.140.94.11:22-139.178.89.65:42346.service - OpenSSH per-connection server daemon (139.178.89.65:42346). Mar 17 17:48:20.770651 sshd[4260]: Accepted publickey for core from 139.178.89.65 port 42346 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8 Mar 17 17:48:20.772529 sshd-session[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:48:20.779791 systemd-logind[1486]: New session 11 of user core. Mar 17 17:48:20.787847 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 17 17:48:21.538751 sshd[4262]: Connection closed by 139.178.89.65 port 42346 Mar 17 17:48:21.539905 sshd-session[4260]: pam_unix(sshd:session): session closed for user core Mar 17 17:48:21.545137 systemd[1]: sshd@10-128.140.94.11:22-139.178.89.65:42346.service: Deactivated successfully. Mar 17 17:48:21.547564 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 17:48:21.551774 systemd-logind[1486]: Session 11 logged out. Waiting for processes to exit. Mar 17 17:48:21.553384 systemd-logind[1486]: Removed session 11. Mar 17 17:48:21.717041 systemd[1]: Started sshd@11-128.140.94.11:22-139.178.89.65:42998.service - OpenSSH per-connection server daemon (139.178.89.65:42998). Mar 17 17:48:22.722916 sshd[4275]: Accepted publickey for core from 139.178.89.65 port 42998 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8 Mar 17 17:48:22.724921 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:48:22.730962 systemd-logind[1486]: New session 12 of user core. Mar 17 17:48:22.741317 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 17 17:48:23.523677 sshd[4277]: Connection closed by 139.178.89.65 port 42998 Mar 17 17:48:23.522734 sshd-session[4275]: pam_unix(sshd:session): session closed for user core Mar 17 17:48:23.529157 systemd[1]: sshd@11-128.140.94.11:22-139.178.89.65:42998.service: Deactivated successfully. Mar 17 17:48:23.533243 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 17:48:23.536514 systemd-logind[1486]: Session 12 logged out. Waiting for processes to exit. Mar 17 17:48:23.538255 systemd-logind[1486]: Removed session 12. 
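[Annotation] Each block above is one complete sshd session lifecycle: a publickey accept for user core, a PAM session open, a systemd session-N.scope for the login, then teardown on disconnect. The client side of that exchange, sketched with golang.org/x/crypto/ssh; the host address is taken from the log, while the key path and command are placeholders:

```go
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/core/.ssh/id_rsa") // placeholder key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "core",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)}, // "Accepted publickey for core"
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),              // demo only; verify host keys in practice
	}

	client, err := ssh.Dial("tcp", "128.140.94.11:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close() // server then logs "Connection closed" / "session closed"

	session, err := client.NewSession() // server logs "New session N of user core"
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("hostname")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("%s", out)
}
```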
Mar 17 17:48:23.701159 systemd[1]: Started sshd@12-128.140.94.11:22-139.178.89.65:43014.service - OpenSSH per-connection server daemon (139.178.89.65:43014). Mar 17 17:48:24.690520 sshd[4287]: Accepted publickey for core from 139.178.89.65 port 43014 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8 Mar 17 17:48:24.691934 sshd-session[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:48:24.698869 systemd-logind[1486]: New session 13 of user core. Mar 17 17:48:24.705225 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 17 17:48:25.454380 sshd[4289]: Connection closed by 139.178.89.65 port 43014 Mar 17 17:48:25.455192 sshd-session[4287]: pam_unix(sshd:session): session closed for user core Mar 17 17:48:25.459903 systemd-logind[1486]: Session 13 logged out. Waiting for processes to exit. Mar 17 17:48:25.460916 systemd[1]: sshd@12-128.140.94.11:22-139.178.89.65:43014.service: Deactivated successfully. Mar 17 17:48:25.464220 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 17:48:25.465866 systemd-logind[1486]: Removed session 13. Mar 17 17:48:30.628031 systemd[1]: Started sshd@13-128.140.94.11:22-139.178.89.65:43022.service - OpenSSH per-connection server daemon (139.178.89.65:43022). Mar 17 17:48:31.626240 sshd[4301]: Accepted publickey for core from 139.178.89.65 port 43022 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8 Mar 17 17:48:31.628706 sshd-session[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:48:31.634984 systemd-logind[1486]: New session 14 of user core. Mar 17 17:48:31.641000 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 17 17:48:32.394481 sshd[4303]: Connection closed by 139.178.89.65 port 43022 Mar 17 17:48:32.395698 sshd-session[4301]: pam_unix(sshd:session): session closed for user core Mar 17 17:48:32.400703 systemd-logind[1486]: Session 14 logged out. Waiting for processes to exit. Mar 17 17:48:32.401456 systemd[1]: sshd@13-128.140.94.11:22-139.178.89.65:43022.service: Deactivated successfully. Mar 17 17:48:32.403481 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 17:48:32.406850 systemd-logind[1486]: Removed session 14. Mar 17 17:48:32.573940 systemd[1]: Started sshd@14-128.140.94.11:22-139.178.89.65:53160.service - OpenSSH per-connection server daemon (139.178.89.65:53160). Mar 17 17:48:33.566518 sshd[4316]: Accepted publickey for core from 139.178.89.65 port 53160 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8 Mar 17 17:48:33.566322 sshd-session[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:48:33.575424 systemd-logind[1486]: New session 15 of user core. Mar 17 17:48:33.581878 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 17 17:48:34.392983 sshd[4318]: Connection closed by 139.178.89.65 port 53160 Mar 17 17:48:34.392823 sshd-session[4316]: pam_unix(sshd:session): session closed for user core Mar 17 17:48:34.400022 systemd[1]: sshd@14-128.140.94.11:22-139.178.89.65:53160.service: Deactivated successfully. Mar 17 17:48:34.402587 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 17:48:34.405532 systemd-logind[1486]: Session 15 logged out. Waiting for processes to exit. Mar 17 17:48:34.407431 systemd-logind[1486]: Removed session 15. Mar 17 17:48:34.579536 systemd[1]: Started sshd@15-128.140.94.11:22-139.178.89.65:53172.service - OpenSSH per-connection server daemon (139.178.89.65:53172). 
Mar 17 17:48:35.580686 sshd[4328]: Accepted publickey for core from 139.178.89.65 port 53172 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8 Mar 17 17:48:35.581766 sshd-session[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:48:35.587203 systemd-logind[1486]: New session 16 of user core. Mar 17 17:48:35.596535 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 17 17:48:37.302696 sshd[4331]: Connection closed by 139.178.89.65 port 53172 Mar 17 17:48:37.304036 sshd-session[4328]: pam_unix(sshd:session): session closed for user core Mar 17 17:48:37.308838 systemd-logind[1486]: Session 16 logged out. Waiting for processes to exit. Mar 17 17:48:37.310758 systemd[1]: sshd@15-128.140.94.11:22-139.178.89.65:53172.service: Deactivated successfully. Mar 17 17:48:37.315866 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 17:48:37.320137 systemd-logind[1486]: Removed session 16. Mar 17 17:48:37.482041 systemd[1]: Started sshd@16-128.140.94.11:22-139.178.89.65:53184.service - OpenSSH per-connection server daemon (139.178.89.65:53184). Mar 17 17:48:38.470575 sshd[4348]: Accepted publickey for core from 139.178.89.65 port 53184 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8 Mar 17 17:48:38.473185 sshd-session[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:48:38.479792 systemd-logind[1486]: New session 17 of user core. Mar 17 17:48:38.483898 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 17 17:48:39.366736 sshd[4350]: Connection closed by 139.178.89.65 port 53184 Mar 17 17:48:39.367840 sshd-session[4348]: pam_unix(sshd:session): session closed for user core Mar 17 17:48:39.372704 systemd[1]: sshd@16-128.140.94.11:22-139.178.89.65:53184.service: Deactivated successfully. Mar 17 17:48:39.375492 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 17:48:39.376421 systemd-logind[1486]: Session 17 logged out. Waiting for processes to exit. Mar 17 17:48:39.377929 systemd-logind[1486]: Removed session 17. Mar 17 17:48:39.543965 systemd[1]: Started sshd@17-128.140.94.11:22-139.178.89.65:53190.service - OpenSSH per-connection server daemon (139.178.89.65:53190). Mar 17 17:48:40.540239 sshd[4360]: Accepted publickey for core from 139.178.89.65 port 53190 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8 Mar 17 17:48:40.542844 sshd-session[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:48:40.550074 systemd-logind[1486]: New session 18 of user core. Mar 17 17:48:40.554884 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 17 17:48:41.299994 sshd[4362]: Connection closed by 139.178.89.65 port 53190 Mar 17 17:48:41.300608 sshd-session[4360]: pam_unix(sshd:session): session closed for user core Mar 17 17:48:41.305471 systemd-logind[1486]: Session 18 logged out. Waiting for processes to exit. Mar 17 17:48:41.305765 systemd[1]: sshd@17-128.140.94.11:22-139.178.89.65:53190.service: Deactivated successfully. Mar 17 17:48:41.309253 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 17:48:41.311570 systemd-logind[1486]: Removed session 18. Mar 17 17:48:46.489796 systemd[1]: Started sshd@18-128.140.94.11:22-139.178.89.65:33010.service - OpenSSH per-connection server daemon (139.178.89.65:33010). 
Mar 17 17:48:47.493019 sshd[4376]: Accepted publickey for core from 139.178.89.65 port 33010 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8 Mar 17 17:48:47.494569 sshd-session[4376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:48:47.502693 systemd-logind[1486]: New session 19 of user core. Mar 17 17:48:47.507859 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 17 17:48:48.260703 sshd[4380]: Connection closed by 139.178.89.65 port 33010 Mar 17 17:48:48.260553 sshd-session[4376]: pam_unix(sshd:session): session closed for user core Mar 17 17:48:48.266216 systemd[1]: sshd@18-128.140.94.11:22-139.178.89.65:33010.service: Deactivated successfully. Mar 17 17:48:48.269409 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 17:48:48.272224 systemd-logind[1486]: Session 19 logged out. Waiting for processes to exit. Mar 17 17:48:48.273829 systemd-logind[1486]: Removed session 19. Mar 17 17:48:53.439643 systemd[1]: Started sshd@19-128.140.94.11:22-139.178.89.65:60820.service - OpenSSH per-connection server daemon (139.178.89.65:60820). Mar 17 17:48:54.429873 sshd[4392]: Accepted publickey for core from 139.178.89.65 port 60820 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8 Mar 17 17:48:54.432028 sshd-session[4392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:48:54.439051 systemd-logind[1486]: New session 20 of user core. Mar 17 17:48:54.441855 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 17 17:48:55.178665 sshd[4394]: Connection closed by 139.178.89.65 port 60820 Mar 17 17:48:55.179489 sshd-session[4392]: pam_unix(sshd:session): session closed for user core Mar 17 17:48:55.184458 systemd-logind[1486]: Session 20 logged out. Waiting for processes to exit. Mar 17 17:48:55.184524 systemd[1]: sshd@19-128.140.94.11:22-139.178.89.65:60820.service: Deactivated successfully. Mar 17 17:48:55.187829 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 17:48:55.190876 systemd-logind[1486]: Removed session 20. Mar 17 17:48:55.357985 systemd[1]: Started sshd@20-128.140.94.11:22-139.178.89.65:60828.service - OpenSSH per-connection server daemon (139.178.89.65:60828). Mar 17 17:48:56.347219 sshd[4406]: Accepted publickey for core from 139.178.89.65 port 60828 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8 Mar 17 17:48:56.350022 sshd-session[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:48:56.358462 systemd-logind[1486]: New session 21 of user core. Mar 17 17:48:56.363859 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 17 17:48:58.461977 containerd[1502]: time="2025-03-17T17:48:58.460413694Z" level=info msg="StopContainer for \"d831a42c2a5e69e2f6a1d357301ef0a8849e16d7f0c0777191e45bdc1d047f8a\" with timeout 30 (s)" Mar 17 17:48:58.465810 containerd[1502]: time="2025-03-17T17:48:58.465163863Z" level=info msg="Stop container \"d831a42c2a5e69e2f6a1d357301ef0a8849e16d7f0c0777191e45bdc1d047f8a\" with signal terminated" Mar 17 17:48:58.467250 systemd[1]: run-containerd-runc-k8s.io-c8b2d5890f60d91fc92df388c963d6791ad4179fa65c9ff4bbed997a5ecb8533-runc.UqG5eK.mount: Deactivated successfully. Mar 17 17:48:58.487782 systemd[1]: cri-containerd-d831a42c2a5e69e2f6a1d357301ef0a8849e16d7f0c0777191e45bdc1d047f8a.scope: Deactivated successfully. 
Mar 17 17:48:58.490607 containerd[1502]: time="2025-03-17T17:48:58.490538855Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:48:58.507143 containerd[1502]: time="2025-03-17T17:48:58.507098746Z" level=info msg="StopContainer for \"c8b2d5890f60d91fc92df388c963d6791ad4179fa65c9ff4bbed997a5ecb8533\" with timeout 2 (s)" Mar 17 17:48:58.507822 containerd[1502]: time="2025-03-17T17:48:58.507766302Z" level=info msg="Stop container \"c8b2d5890f60d91fc92df388c963d6791ad4179fa65c9ff4bbed997a5ecb8533\" with signal terminated" Mar 17 17:48:58.518074 systemd-networkd[1399]: lxc_health: Link DOWN Mar 17 17:48:58.518082 systemd-networkd[1399]: lxc_health: Lost carrier Mar 17 17:48:58.536179 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d831a42c2a5e69e2f6a1d357301ef0a8849e16d7f0c0777191e45bdc1d047f8a-rootfs.mount: Deactivated successfully. Mar 17 17:48:58.544866 systemd[1]: cri-containerd-c8b2d5890f60d91fc92df388c963d6791ad4179fa65c9ff4bbed997a5ecb8533.scope: Deactivated successfully. Mar 17 17:48:58.546198 systemd[1]: cri-containerd-c8b2d5890f60d91fc92df388c963d6791ad4179fa65c9ff4bbed997a5ecb8533.scope: Consumed 8.462s CPU time, 123.8M memory peak, 144K read from disk, 12.9M written to disk. Mar 17 17:48:58.549980 containerd[1502]: time="2025-03-17T17:48:58.549618066Z" level=info msg="shim disconnected" id=d831a42c2a5e69e2f6a1d357301ef0a8849e16d7f0c0777191e45bdc1d047f8a namespace=k8s.io Mar 17 17:48:58.549980 containerd[1502]: time="2025-03-17T17:48:58.549824984Z" level=warning msg="cleaning up after shim disconnected" id=d831a42c2a5e69e2f6a1d357301ef0a8849e16d7f0c0777191e45bdc1d047f8a namespace=k8s.io Mar 17 17:48:58.549980 containerd[1502]: time="2025-03-17T17:48:58.549834384Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:48:58.575052 containerd[1502]: time="2025-03-17T17:48:58.574755820Z" level=info msg="StopContainer for \"d831a42c2a5e69e2f6a1d357301ef0a8849e16d7f0c0777191e45bdc1d047f8a\" returns successfully" Mar 17 17:48:58.578099 containerd[1502]: time="2025-03-17T17:48:58.577772240Z" level=info msg="StopPodSandbox for \"7677ad36438c85a5405c0fda856bc0d4d5f95248f4c06a2e7efb65ce2251fc97\"" Mar 17 17:48:58.578099 containerd[1502]: time="2025-03-17T17:48:58.578045078Z" level=info msg="Container to stop \"d831a42c2a5e69e2f6a1d357301ef0a8849e16d7f0c0777191e45bdc1d047f8a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:48:58.580584 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7677ad36438c85a5405c0fda856bc0d4d5f95248f4c06a2e7efb65ce2251fc97-shm.mount: Deactivated successfully. Mar 17 17:48:58.585668 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8b2d5890f60d91fc92df388c963d6791ad4179fa65c9ff4bbed997a5ecb8533-rootfs.mount: Deactivated successfully. 
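[Annotation] The "failed to reload cni configuration after receiving fs change event" error above is containerd's CRI plugin reacting to a filesystem watch: stopping the Cilium pod removes /etc/cni/net.d/05-cilium.conf, the REMOVE event triggers a reload, and the reload finds no network config left. A sketch of that watch-and-reload shape using fsnotify; this is an assumption about the mechanism's general form, not containerd's actual code:

```go
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	// Watch the CNI configuration directory from the log message.
	if err := watcher.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case event := <-watcher.Events:
			// A REMOVE of 05-cilium.conf would land here; a reload that then
			// finds no conf files fails with "no network config found".
			if event.Op&fsnotify.Remove != 0 {
				log.Printf("cni config changed (%s), reloading", event)
			}
		case err := <-watcher.Errors:
			log.Println("watch error:", err)
		}
	}
}
```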
Mar 17 17:48:58.594093 containerd[1502]: time="2025-03-17T17:48:58.594021893Z" level=info msg="shim disconnected" id=c8b2d5890f60d91fc92df388c963d6791ad4179fa65c9ff4bbed997a5ecb8533 namespace=k8s.io Mar 17 17:48:58.594334 containerd[1502]: time="2025-03-17T17:48:58.594315691Z" level=warning msg="cleaning up after shim disconnected" id=c8b2d5890f60d91fc92df388c963d6791ad4179fa65c9ff4bbed997a5ecb8533 namespace=k8s.io Mar 17 17:48:58.594404 containerd[1502]: time="2025-03-17T17:48:58.594390971Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:48:58.595459 systemd[1]: cri-containerd-7677ad36438c85a5405c0fda856bc0d4d5f95248f4c06a2e7efb65ce2251fc97.scope: Deactivated successfully. Mar 17 17:48:58.618787 containerd[1502]: time="2025-03-17T17:48:58.618656291Z" level=info msg="StopContainer for \"c8b2d5890f60d91fc92df388c963d6791ad4179fa65c9ff4bbed997a5ecb8533\" returns successfully" Mar 17 17:48:58.619228 containerd[1502]: time="2025-03-17T17:48:58.619196207Z" level=info msg="StopPodSandbox for \"0222937b0bfd9f0c024739c5358d94f8962ac5fd1c3392e33cc97856e2598552\"" Mar 17 17:48:58.619286 containerd[1502]: time="2025-03-17T17:48:58.619246207Z" level=info msg="Container to stop \"78552d99eb9c936dd099919f0caae899245f4bfd93e3f1d0e8c68943b366c8a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:48:58.619286 containerd[1502]: time="2025-03-17T17:48:58.619258527Z" level=info msg="Container to stop \"a7d8aa286ea384d4450a5826cbfe211f3fd39567d22f4968be85ff8a3532b3b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:48:58.619286 containerd[1502]: time="2025-03-17T17:48:58.619267407Z" level=info msg="Container to stop \"d7bc1c4a0eef05cd78135d968a37dd2ed939ba4f1231339cf6790007d85e6e50\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:48:58.619286 containerd[1502]: time="2025-03-17T17:48:58.619275367Z" level=info msg="Container to stop \"68d32643dd327894166fd6de7105ed46bc6f7aa79eb8088a3a204f49c51db96b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:48:58.619286 containerd[1502]: time="2025-03-17T17:48:58.619283407Z" level=info msg="Container to stop \"c8b2d5890f60d91fc92df388c963d6791ad4179fa65c9ff4bbed997a5ecb8533\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:48:58.626380 systemd[1]: cri-containerd-0222937b0bfd9f0c024739c5358d94f8962ac5fd1c3392e33cc97856e2598552.scope: Deactivated successfully. 
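[Annotation] "StopContainer ... with timeout 2 (s)" followed by "Stop container ... with signal terminated" describes the usual two-phase stop: send SIGTERM, wait up to the timeout, then escalate to SIGKILL. Roughly the same flow against containerd's Go client; the container ID is the cilium-agent container from the log, the socket path and timeout handling are illustrative:

```go
package main

import (
	"context"
	"log"
	"syscall"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// The cilium-agent container ID from the log above.
	container, err := client.LoadContainer(ctx, "c8b2d5890f60d91fc92df388c963d6791ad4179fa65c9ff4bbed997a5ecb8533")
	if err != nil {
		log.Fatal(err)
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Subscribe to the exit status before signalling, so the exit is not missed.
	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// Phase 1: ask nicely ("with signal terminated").
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		log.Fatal(err)
	}

	// Phase 2: escalate if the task outlives the stop timeout.
	select {
	case status := <-exitCh:
		log.Printf("exited with status %d", status.ExitCode())
	case <-time.After(2 * time.Second): // the log's "timeout 2 (s)"
		_ = task.Kill(ctx, syscall.SIGKILL)
		<-exitCh
	}
}
```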
Mar 17 17:48:58.634354 containerd[1502]: time="2025-03-17T17:48:58.634187828Z" level=info msg="shim disconnected" id=7677ad36438c85a5405c0fda856bc0d4d5f95248f4c06a2e7efb65ce2251fc97 namespace=k8s.io Mar 17 17:48:58.634354 containerd[1502]: time="2025-03-17T17:48:58.634241308Z" level=warning msg="cleaning up after shim disconnected" id=7677ad36438c85a5405c0fda856bc0d4d5f95248f4c06a2e7efb65ce2251fc97 namespace=k8s.io Mar 17 17:48:58.634354 containerd[1502]: time="2025-03-17T17:48:58.634249388Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:48:58.652214 containerd[1502]: time="2025-03-17T17:48:58.650853078Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:48:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 17:48:58.652214 containerd[1502]: time="2025-03-17T17:48:58.652048591Z" level=info msg="TearDown network for sandbox \"7677ad36438c85a5405c0fda856bc0d4d5f95248f4c06a2e7efb65ce2251fc97\" successfully" Mar 17 17:48:58.652214 containerd[1502]: time="2025-03-17T17:48:58.652075070Z" level=info msg="StopPodSandbox for \"7677ad36438c85a5405c0fda856bc0d4d5f95248f4c06a2e7efb65ce2251fc97\" returns successfully" Mar 17 17:48:58.661437 containerd[1502]: time="2025-03-17T17:48:58.661361449Z" level=info msg="shim disconnected" id=0222937b0bfd9f0c024739c5358d94f8962ac5fd1c3392e33cc97856e2598552 namespace=k8s.io Mar 17 17:48:58.661437 containerd[1502]: time="2025-03-17T17:48:58.661414649Z" level=warning msg="cleaning up after shim disconnected" id=0222937b0bfd9f0c024739c5358d94f8962ac5fd1c3392e33cc97856e2598552 namespace=k8s.io Mar 17 17:48:58.661437 containerd[1502]: time="2025-03-17T17:48:58.661424129Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:48:58.677411 containerd[1502]: time="2025-03-17T17:48:58.677355384Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:48:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 17:48:58.678691 containerd[1502]: time="2025-03-17T17:48:58.678548656Z" level=info msg="TearDown network for sandbox \"0222937b0bfd9f0c024739c5358d94f8962ac5fd1c3392e33cc97856e2598552\" successfully" Mar 17 17:48:58.678691 containerd[1502]: time="2025-03-17T17:48:58.678583416Z" level=info msg="StopPodSandbox for \"0222937b0bfd9f0c024739c5358d94f8962ac5fd1c3392e33cc97856e2598552\" returns successfully" Mar 17 17:48:58.738151 kubelet[2791]: I0317 17:48:58.737998 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-cilium-run\") pod \"349e066e-c1be-47c7-b7f8-9a38bec4202a\" (UID: \"349e066e-c1be-47c7-b7f8-9a38bec4202a\") " Mar 17 17:48:58.739037 kubelet[2791]: I0317 17:48:58.738905 2791 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "349e066e-c1be-47c7-b7f8-9a38bec4202a" (UID: "349e066e-c1be-47c7-b7f8-9a38bec4202a"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:48:58.739304 kubelet[2791]: I0317 17:48:58.739126 2791 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "349e066e-c1be-47c7-b7f8-9a38bec4202a" (UID: "349e066e-c1be-47c7-b7f8-9a38bec4202a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:48:58.739304 kubelet[2791]: I0317 17:48:58.738074 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-xtables-lock\") pod \"349e066e-c1be-47c7-b7f8-9a38bec4202a\" (UID: \"349e066e-c1be-47c7-b7f8-9a38bec4202a\") " Mar 17 17:48:58.739304 kubelet[2791]: I0317 17:48:58.739273 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-host-proc-sys-net\") pod \"349e066e-c1be-47c7-b7f8-9a38bec4202a\" (UID: \"349e066e-c1be-47c7-b7f8-9a38bec4202a\") " Mar 17 17:48:58.739304 kubelet[2791]: I0317 17:48:58.739305 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-lib-modules\") pod \"349e066e-c1be-47c7-b7f8-9a38bec4202a\" (UID: \"349e066e-c1be-47c7-b7f8-9a38bec4202a\") " Mar 17 17:48:58.742807 kubelet[2791]: I0317 17:48:58.739331 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-host-proc-sys-kernel\") pod \"349e066e-c1be-47c7-b7f8-9a38bec4202a\" (UID: \"349e066e-c1be-47c7-b7f8-9a38bec4202a\") " Mar 17 17:48:58.742807 kubelet[2791]: I0317 17:48:58.739363 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/349e066e-c1be-47c7-b7f8-9a38bec4202a-clustermesh-secrets\") pod \"349e066e-c1be-47c7-b7f8-9a38bec4202a\" (UID: \"349e066e-c1be-47c7-b7f8-9a38bec4202a\") " Mar 17 17:48:58.742807 kubelet[2791]: I0317 17:48:58.739384 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-hostproc\") pod \"349e066e-c1be-47c7-b7f8-9a38bec4202a\" (UID: \"349e066e-c1be-47c7-b7f8-9a38bec4202a\") " Mar 17 17:48:58.742807 kubelet[2791]: I0317 17:48:58.739385 2791 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "349e066e-c1be-47c7-b7f8-9a38bec4202a" (UID: "349e066e-c1be-47c7-b7f8-9a38bec4202a"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:48:58.742807 kubelet[2791]: I0317 17:48:58.739403 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-cilium-cgroup\") pod \"349e066e-c1be-47c7-b7f8-9a38bec4202a\" (UID: \"349e066e-c1be-47c7-b7f8-9a38bec4202a\") " Mar 17 17:48:58.742807 kubelet[2791]: I0317 17:48:58.739428 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twrgp\" (UniqueName: \"kubernetes.io/projected/4954672b-ad5c-4662-bae7-9b2f2cb140a9-kube-api-access-twrgp\") pod \"4954672b-ad5c-4662-bae7-9b2f2cb140a9\" (UID: \"4954672b-ad5c-4662-bae7-9b2f2cb140a9\") " Mar 17 17:48:58.743236 kubelet[2791]: I0317 17:48:58.739453 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/349e066e-c1be-47c7-b7f8-9a38bec4202a-cilium-config-path\") pod \"349e066e-c1be-47c7-b7f8-9a38bec4202a\" (UID: \"349e066e-c1be-47c7-b7f8-9a38bec4202a\") " Mar 17 17:48:58.743236 kubelet[2791]: I0317 17:48:58.739461 2791 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "349e066e-c1be-47c7-b7f8-9a38bec4202a" (UID: "349e066e-c1be-47c7-b7f8-9a38bec4202a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:48:58.743236 kubelet[2791]: I0317 17:48:58.739474 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-etc-cni-netd\") pod \"349e066e-c1be-47c7-b7f8-9a38bec4202a\" (UID: \"349e066e-c1be-47c7-b7f8-9a38bec4202a\") " Mar 17 17:48:58.743236 kubelet[2791]: I0317 17:48:58.739501 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-bpf-maps\") pod \"349e066e-c1be-47c7-b7f8-9a38bec4202a\" (UID: \"349e066e-c1be-47c7-b7f8-9a38bec4202a\") " Mar 17 17:48:58.743236 kubelet[2791]: I0317 17:48:58.739499 2791 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-hostproc" (OuterVolumeSpecName: "hostproc") pod "349e066e-c1be-47c7-b7f8-9a38bec4202a" (UID: "349e066e-c1be-47c7-b7f8-9a38bec4202a"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:48:58.743382 kubelet[2791]: I0317 17:48:58.739524 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw7pn\" (UniqueName: \"kubernetes.io/projected/349e066e-c1be-47c7-b7f8-9a38bec4202a-kube-api-access-vw7pn\") pod \"349e066e-c1be-47c7-b7f8-9a38bec4202a\" (UID: \"349e066e-c1be-47c7-b7f8-9a38bec4202a\") " Mar 17 17:48:58.743382 kubelet[2791]: I0317 17:48:58.739597 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-cni-path\") pod \"349e066e-c1be-47c7-b7f8-9a38bec4202a\" (UID: \"349e066e-c1be-47c7-b7f8-9a38bec4202a\") " Mar 17 17:48:58.743382 kubelet[2791]: I0317 17:48:58.739678 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/349e066e-c1be-47c7-b7f8-9a38bec4202a-hubble-tls\") pod \"349e066e-c1be-47c7-b7f8-9a38bec4202a\" (UID: \"349e066e-c1be-47c7-b7f8-9a38bec4202a\") " Mar 17 17:48:58.743382 kubelet[2791]: I0317 17:48:58.739746 2791 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4954672b-ad5c-4662-bae7-9b2f2cb140a9-cilium-config-path\") pod \"4954672b-ad5c-4662-bae7-9b2f2cb140a9\" (UID: \"4954672b-ad5c-4662-bae7-9b2f2cb140a9\") " Mar 17 17:48:58.743382 kubelet[2791]: I0317 17:48:58.739872 2791 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-cilium-run\") on node \"ci-4230-1-0-9-a82243c43d\" DevicePath \"\"" Mar 17 17:48:58.743382 kubelet[2791]: I0317 17:48:58.739898 2791 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-xtables-lock\") on node \"ci-4230-1-0-9-a82243c43d\" DevicePath \"\"" Mar 17 17:48:58.743382 kubelet[2791]: I0317 17:48:58.739950 2791 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-host-proc-sys-net\") on node \"ci-4230-1-0-9-a82243c43d\" DevicePath \"\"" Mar 17 17:48:58.743533 kubelet[2791]: I0317 17:48:58.739979 2791 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-lib-modules\") on node \"ci-4230-1-0-9-a82243c43d\" DevicePath \"\"" Mar 17 17:48:58.743533 kubelet[2791]: I0317 17:48:58.740003 2791 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-hostproc\") on node \"ci-4230-1-0-9-a82243c43d\" DevicePath \"\"" Mar 17 17:48:58.743533 kubelet[2791]: I0317 17:48:58.742691 2791 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/349e066e-c1be-47c7-b7f8-9a38bec4202a-kube-api-access-vw7pn" (OuterVolumeSpecName: "kube-api-access-vw7pn") pod "349e066e-c1be-47c7-b7f8-9a38bec4202a" (UID: "349e066e-c1be-47c7-b7f8-9a38bec4202a"). InnerVolumeSpecName "kube-api-access-vw7pn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 17:48:58.743533 kubelet[2791]: I0317 17:48:58.742751 2791 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "349e066e-c1be-47c7-b7f8-9a38bec4202a" (UID: "349e066e-c1be-47c7-b7f8-9a38bec4202a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:48:58.747066 kubelet[2791]: I0317 17:48:58.746463 2791 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/349e066e-c1be-47c7-b7f8-9a38bec4202a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "349e066e-c1be-47c7-b7f8-9a38bec4202a" (UID: "349e066e-c1be-47c7-b7f8-9a38bec4202a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 17 17:48:58.747066 kubelet[2791]: I0317 17:48:58.746535 2791 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "349e066e-c1be-47c7-b7f8-9a38bec4202a" (UID: "349e066e-c1be-47c7-b7f8-9a38bec4202a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:48:58.747066 kubelet[2791]: I0317 17:48:58.746554 2791 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "349e066e-c1be-47c7-b7f8-9a38bec4202a" (UID: "349e066e-c1be-47c7-b7f8-9a38bec4202a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:48:58.748065 kubelet[2791]: I0317 17:48:58.747564 2791 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "349e066e-c1be-47c7-b7f8-9a38bec4202a" (UID: "349e066e-c1be-47c7-b7f8-9a38bec4202a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:48:58.748065 kubelet[2791]: I0317 17:48:58.747776 2791 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-cni-path" (OuterVolumeSpecName: "cni-path") pod "349e066e-c1be-47c7-b7f8-9a38bec4202a" (UID: "349e066e-c1be-47c7-b7f8-9a38bec4202a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:48:58.748589 kubelet[2791]: I0317 17:48:58.748547 2791 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4954672b-ad5c-4662-bae7-9b2f2cb140a9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4954672b-ad5c-4662-bae7-9b2f2cb140a9" (UID: "4954672b-ad5c-4662-bae7-9b2f2cb140a9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 17 17:48:58.749155 kubelet[2791]: I0317 17:48:58.749119 2791 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4954672b-ad5c-4662-bae7-9b2f2cb140a9-kube-api-access-twrgp" (OuterVolumeSpecName: "kube-api-access-twrgp") pod "4954672b-ad5c-4662-bae7-9b2f2cb140a9" (UID: "4954672b-ad5c-4662-bae7-9b2f2cb140a9"). InnerVolumeSpecName "kube-api-access-twrgp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 17:48:58.751656 kubelet[2791]: I0317 17:48:58.751613 2791 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/349e066e-c1be-47c7-b7f8-9a38bec4202a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "349e066e-c1be-47c7-b7f8-9a38bec4202a" (UID: "349e066e-c1be-47c7-b7f8-9a38bec4202a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 17:48:58.754177 kubelet[2791]: I0317 17:48:58.753990 2791 scope.go:117] "RemoveContainer" containerID="d831a42c2a5e69e2f6a1d357301ef0a8849e16d7f0c0777191e45bdc1d047f8a" Mar 17 17:48:58.756322 kubelet[2791]: I0317 17:48:58.755949 2791 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/349e066e-c1be-47c7-b7f8-9a38bec4202a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "349e066e-c1be-47c7-b7f8-9a38bec4202a" (UID: "349e066e-c1be-47c7-b7f8-9a38bec4202a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 17 17:48:58.761851 containerd[1502]: time="2025-03-17T17:48:58.760349477Z" level=info msg="RemoveContainer for \"d831a42c2a5e69e2f6a1d357301ef0a8849e16d7f0c0777191e45bdc1d047f8a\"" Mar 17 17:48:58.765347 systemd[1]: Removed slice kubepods-besteffort-pod4954672b_ad5c_4662_bae7_9b2f2cb140a9.slice - libcontainer container kubepods-besteffort-pod4954672b_ad5c_4662_bae7_9b2f2cb140a9.slice. Mar 17 17:48:58.769034 containerd[1502]: time="2025-03-17T17:48:58.768905340Z" level=info msg="RemoveContainer for \"d831a42c2a5e69e2f6a1d357301ef0a8849e16d7f0c0777191e45bdc1d047f8a\" returns successfully" Mar 17 17:48:58.770133 kubelet[2791]: I0317 17:48:58.770032 2791 scope.go:117] "RemoveContainer" containerID="d831a42c2a5e69e2f6a1d357301ef0a8849e16d7f0c0777191e45bdc1d047f8a" Mar 17 17:48:58.770781 containerd[1502]: time="2025-03-17T17:48:58.770592569Z" level=error msg="ContainerStatus for \"d831a42c2a5e69e2f6a1d357301ef0a8849e16d7f0c0777191e45bdc1d047f8a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d831a42c2a5e69e2f6a1d357301ef0a8849e16d7f0c0777191e45bdc1d047f8a\": not found" Mar 17 17:48:58.772308 kubelet[2791]: E0317 17:48:58.772205 2791 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d831a42c2a5e69e2f6a1d357301ef0a8849e16d7f0c0777191e45bdc1d047f8a\": not found" containerID="d831a42c2a5e69e2f6a1d357301ef0a8849e16d7f0c0777191e45bdc1d047f8a" Mar 17 17:48:58.772491 kubelet[2791]: I0317 17:48:58.772253 2791 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d831a42c2a5e69e2f6a1d357301ef0a8849e16d7f0c0777191e45bdc1d047f8a"} err="failed to get container status \"d831a42c2a5e69e2f6a1d357301ef0a8849e16d7f0c0777191e45bdc1d047f8a\": rpc error: code = NotFound desc = an error occurred when try to find container \"d831a42c2a5e69e2f6a1d357301ef0a8849e16d7f0c0777191e45bdc1d047f8a\": not found" Mar 17 17:48:58.772491 kubelet[2791]: I0317 17:48:58.772447 2791 scope.go:117] "RemoveContainer" containerID="c8b2d5890f60d91fc92df388c963d6791ad4179fa65c9ff4bbed997a5ecb8533" Mar 17 17:48:58.775735 systemd[1]: Removed slice kubepods-burstable-pod349e066e_c1be_47c7_b7f8_9a38bec4202a.slice - libcontainer container kubepods-burstable-pod349e066e_c1be_47c7_b7f8_9a38bec4202a.slice. 
Mar 17 17:48:58.775867 systemd[1]: kubepods-burstable-pod349e066e_c1be_47c7_b7f8_9a38bec4202a.slice: Consumed 8.559s CPU time, 124.2M memory peak, 144K read from disk, 12.9M written to disk. Mar 17 17:48:58.780291 containerd[1502]: time="2025-03-17T17:48:58.779948587Z" level=info msg="RemoveContainer for \"c8b2d5890f60d91fc92df388c963d6791ad4179fa65c9ff4bbed997a5ecb8533\"" Mar 17 17:48:58.785716 containerd[1502]: time="2025-03-17T17:48:58.785501551Z" level=info msg="RemoveContainer for \"c8b2d5890f60d91fc92df388c963d6791ad4179fa65c9ff4bbed997a5ecb8533\" returns successfully" Mar 17 17:48:58.787986 kubelet[2791]: I0317 17:48:58.787941 2791 scope.go:117] "RemoveContainer" containerID="68d32643dd327894166fd6de7105ed46bc6f7aa79eb8088a3a204f49c51db96b" Mar 17 17:48:58.793977 containerd[1502]: time="2025-03-17T17:48:58.793333539Z" level=info msg="RemoveContainer for \"68d32643dd327894166fd6de7105ed46bc6f7aa79eb8088a3a204f49c51db96b\"" Mar 17 17:48:58.802319 containerd[1502]: time="2025-03-17T17:48:58.802265040Z" level=info msg="RemoveContainer for \"68d32643dd327894166fd6de7105ed46bc6f7aa79eb8088a3a204f49c51db96b\" returns successfully" Mar 17 17:48:58.803059 kubelet[2791]: I0317 17:48:58.802838 2791 scope.go:117] "RemoveContainer" containerID="d7bc1c4a0eef05cd78135d968a37dd2ed939ba4f1231339cf6790007d85e6e50" Mar 17 17:48:58.805813 containerd[1502]: time="2025-03-17T17:48:58.805767097Z" level=info msg="RemoveContainer for \"d7bc1c4a0eef05cd78135d968a37dd2ed939ba4f1231339cf6790007d85e6e50\"" Mar 17 17:48:58.811166 containerd[1502]: time="2025-03-17T17:48:58.811120182Z" level=info msg="RemoveContainer for \"d7bc1c4a0eef05cd78135d968a37dd2ed939ba4f1231339cf6790007d85e6e50\" returns successfully" Mar 17 17:48:58.811768 kubelet[2791]: I0317 17:48:58.811742 2791 scope.go:117] "RemoveContainer" containerID="a7d8aa286ea384d4450a5826cbfe211f3fd39567d22f4968be85ff8a3532b3b9" Mar 17 17:48:58.815205 containerd[1502]: time="2025-03-17T17:48:58.815079076Z" level=info msg="RemoveContainer for \"a7d8aa286ea384d4450a5826cbfe211f3fd39567d22f4968be85ff8a3532b3b9\"" Mar 17 17:48:58.824870 containerd[1502]: time="2025-03-17T17:48:58.822587306Z" level=info msg="RemoveContainer for \"a7d8aa286ea384d4450a5826cbfe211f3fd39567d22f4968be85ff8a3532b3b9\" returns successfully" Mar 17 17:48:58.825044 kubelet[2791]: I0317 17:48:58.824221 2791 scope.go:117] "RemoveContainer" containerID="78552d99eb9c936dd099919f0caae899245f4bfd93e3f1d0e8c68943b366c8a7" Mar 17 17:48:58.826472 containerd[1502]: time="2025-03-17T17:48:58.826384681Z" level=info msg="RemoveContainer for \"78552d99eb9c936dd099919f0caae899245f4bfd93e3f1d0e8c68943b366c8a7\"" Mar 17 17:48:58.830284 containerd[1502]: time="2025-03-17T17:48:58.830223376Z" level=info msg="RemoveContainer for \"78552d99eb9c936dd099919f0caae899245f4bfd93e3f1d0e8c68943b366c8a7\" returns successfully" Mar 17 17:48:58.830712 kubelet[2791]: I0317 17:48:58.830484 2791 scope.go:117] "RemoveContainer" containerID="c8b2d5890f60d91fc92df388c963d6791ad4179fa65c9ff4bbed997a5ecb8533" Mar 17 17:48:58.830860 containerd[1502]: time="2025-03-17T17:48:58.830742132Z" level=error msg="ContainerStatus for \"c8b2d5890f60d91fc92df388c963d6791ad4179fa65c9ff4bbed997a5ecb8533\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c8b2d5890f60d91fc92df388c963d6791ad4179fa65c9ff4bbed997a5ecb8533\": not found" Mar 17 17:48:58.831149 kubelet[2791]: E0317 17:48:58.831101 2791 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"c8b2d5890f60d91fc92df388c963d6791ad4179fa65c9ff4bbed997a5ecb8533\": not found" containerID="c8b2d5890f60d91fc92df388c963d6791ad4179fa65c9ff4bbed997a5ecb8533" Mar 17 17:48:58.831443 kubelet[2791]: I0317 17:48:58.831292 2791 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c8b2d5890f60d91fc92df388c963d6791ad4179fa65c9ff4bbed997a5ecb8533"} err="failed to get container status \"c8b2d5890f60d91fc92df388c963d6791ad4179fa65c9ff4bbed997a5ecb8533\": rpc error: code = NotFound desc = an error occurred when try to find container \"c8b2d5890f60d91fc92df388c963d6791ad4179fa65c9ff4bbed997a5ecb8533\": not found" Mar 17 17:48:58.831659 kubelet[2791]: I0317 17:48:58.831328 2791 scope.go:117] "RemoveContainer" containerID="68d32643dd327894166fd6de7105ed46bc6f7aa79eb8088a3a204f49c51db96b" Mar 17 17:48:58.832032 containerd[1502]: time="2025-03-17T17:48:58.831866685Z" level=error msg="ContainerStatus for \"68d32643dd327894166fd6de7105ed46bc6f7aa79eb8088a3a204f49c51db96b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"68d32643dd327894166fd6de7105ed46bc6f7aa79eb8088a3a204f49c51db96b\": not found" Mar 17 17:48:58.832294 kubelet[2791]: E0317 17:48:58.832146 2791 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"68d32643dd327894166fd6de7105ed46bc6f7aa79eb8088a3a204f49c51db96b\": not found" containerID="68d32643dd327894166fd6de7105ed46bc6f7aa79eb8088a3a204f49c51db96b" Mar 17 17:48:58.832294 kubelet[2791]: I0317 17:48:58.832194 2791 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"68d32643dd327894166fd6de7105ed46bc6f7aa79eb8088a3a204f49c51db96b"} err="failed to get container status \"68d32643dd327894166fd6de7105ed46bc6f7aa79eb8088a3a204f49c51db96b\": rpc error: code = NotFound desc = an error occurred when try to find container \"68d32643dd327894166fd6de7105ed46bc6f7aa79eb8088a3a204f49c51db96b\": not found" Mar 17 17:48:58.832294 kubelet[2791]: I0317 17:48:58.832223 2791 scope.go:117] "RemoveContainer" containerID="d7bc1c4a0eef05cd78135d968a37dd2ed939ba4f1231339cf6790007d85e6e50" Mar 17 17:48:58.833078 containerd[1502]: time="2025-03-17T17:48:58.832700680Z" level=error msg="ContainerStatus for \"d7bc1c4a0eef05cd78135d968a37dd2ed939ba4f1231339cf6790007d85e6e50\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d7bc1c4a0eef05cd78135d968a37dd2ed939ba4f1231339cf6790007d85e6e50\": not found" Mar 17 17:48:58.833162 kubelet[2791]: E0317 17:48:58.832887 2791 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d7bc1c4a0eef05cd78135d968a37dd2ed939ba4f1231339cf6790007d85e6e50\": not found" containerID="d7bc1c4a0eef05cd78135d968a37dd2ed939ba4f1231339cf6790007d85e6e50" Mar 17 17:48:58.833162 kubelet[2791]: I0317 17:48:58.832934 2791 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d7bc1c4a0eef05cd78135d968a37dd2ed939ba4f1231339cf6790007d85e6e50"} err="failed to get container status \"d7bc1c4a0eef05cd78135d968a37dd2ed939ba4f1231339cf6790007d85e6e50\": rpc error: code = NotFound desc = an error occurred when try to find container \"d7bc1c4a0eef05cd78135d968a37dd2ed939ba4f1231339cf6790007d85e6e50\": not found" Mar 17 17:48:58.833162 kubelet[2791]: I0317 
17:48:58.832965 2791 scope.go:117] "RemoveContainer" containerID="a7d8aa286ea384d4450a5826cbfe211f3fd39567d22f4968be85ff8a3532b3b9" Mar 17 17:48:58.833319 containerd[1502]: time="2025-03-17T17:48:58.833225276Z" level=error msg="ContainerStatus for \"a7d8aa286ea384d4450a5826cbfe211f3fd39567d22f4968be85ff8a3532b3b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a7d8aa286ea384d4450a5826cbfe211f3fd39567d22f4968be85ff8a3532b3b9\": not found" Mar 17 17:48:58.833653 kubelet[2791]: E0317 17:48:58.833526 2791 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a7d8aa286ea384d4450a5826cbfe211f3fd39567d22f4968be85ff8a3532b3b9\": not found" containerID="a7d8aa286ea384d4450a5826cbfe211f3fd39567d22f4968be85ff8a3532b3b9" Mar 17 17:48:58.833653 kubelet[2791]: I0317 17:48:58.833567 2791 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a7d8aa286ea384d4450a5826cbfe211f3fd39567d22f4968be85ff8a3532b3b9"} err="failed to get container status \"a7d8aa286ea384d4450a5826cbfe211f3fd39567d22f4968be85ff8a3532b3b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"a7d8aa286ea384d4450a5826cbfe211f3fd39567d22f4968be85ff8a3532b3b9\": not found" Mar 17 17:48:58.833653 kubelet[2791]: I0317 17:48:58.833590 2791 scope.go:117] "RemoveContainer" containerID="78552d99eb9c936dd099919f0caae899245f4bfd93e3f1d0e8c68943b366c8a7" Mar 17 17:48:58.834241 containerd[1502]: time="2025-03-17T17:48:58.833969071Z" level=error msg="ContainerStatus for \"78552d99eb9c936dd099919f0caae899245f4bfd93e3f1d0e8c68943b366c8a7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"78552d99eb9c936dd099919f0caae899245f4bfd93e3f1d0e8c68943b366c8a7\": not found" Mar 17 17:48:58.834327 kubelet[2791]: E0317 17:48:58.834123 2791 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"78552d99eb9c936dd099919f0caae899245f4bfd93e3f1d0e8c68943b366c8a7\": not found" containerID="78552d99eb9c936dd099919f0caae899245f4bfd93e3f1d0e8c68943b366c8a7" Mar 17 17:48:58.834327 kubelet[2791]: I0317 17:48:58.834189 2791 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"78552d99eb9c936dd099919f0caae899245f4bfd93e3f1d0e8c68943b366c8a7"} err="failed to get container status \"78552d99eb9c936dd099919f0caae899245f4bfd93e3f1d0e8c68943b366c8a7\": rpc error: code = NotFound desc = an error occurred when try to find container \"78552d99eb9c936dd099919f0caae899245f4bfd93e3f1d0e8c68943b366c8a7\": not found" Mar 17 17:48:58.841134 kubelet[2791]: I0317 17:48:58.840754 2791 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-host-proc-sys-kernel\") on node \"ci-4230-1-0-9-a82243c43d\" DevicePath \"\"" Mar 17 17:48:58.841134 kubelet[2791]: I0317 17:48:58.840868 2791 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/349e066e-c1be-47c7-b7f8-9a38bec4202a-clustermesh-secrets\") on node \"ci-4230-1-0-9-a82243c43d\" DevicePath \"\"" Mar 17 17:48:58.841134 kubelet[2791]: I0317 17:48:58.840944 2791 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twrgp\" (UniqueName: 
\"kubernetes.io/projected/4954672b-ad5c-4662-bae7-9b2f2cb140a9-kube-api-access-twrgp\") on node \"ci-4230-1-0-9-a82243c43d\" DevicePath \"\"" Mar 17 17:48:58.841134 kubelet[2791]: I0317 17:48:58.840968 2791 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-cilium-cgroup\") on node \"ci-4230-1-0-9-a82243c43d\" DevicePath \"\"" Mar 17 17:48:58.841134 kubelet[2791]: I0317 17:48:58.840987 2791 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/349e066e-c1be-47c7-b7f8-9a38bec4202a-cilium-config-path\") on node \"ci-4230-1-0-9-a82243c43d\" DevicePath \"\"" Mar 17 17:48:58.841134 kubelet[2791]: I0317 17:48:58.841006 2791 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-etc-cni-netd\") on node \"ci-4230-1-0-9-a82243c43d\" DevicePath \"\"" Mar 17 17:48:58.841134 kubelet[2791]: I0317 17:48:58.841025 2791 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-cni-path\") on node \"ci-4230-1-0-9-a82243c43d\" DevicePath \"\"" Mar 17 17:48:58.841134 kubelet[2791]: I0317 17:48:58.841042 2791 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/349e066e-c1be-47c7-b7f8-9a38bec4202a-hubble-tls\") on node \"ci-4230-1-0-9-a82243c43d\" DevicePath \"\"" Mar 17 17:48:58.841743 kubelet[2791]: I0317 17:48:58.841060 2791 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4954672b-ad5c-4662-bae7-9b2f2cb140a9-cilium-config-path\") on node \"ci-4230-1-0-9-a82243c43d\" DevicePath \"\"" Mar 17 17:48:58.841743 kubelet[2791]: I0317 17:48:58.841075 2791 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/349e066e-c1be-47c7-b7f8-9a38bec4202a-bpf-maps\") on node \"ci-4230-1-0-9-a82243c43d\" DevicePath \"\"" Mar 17 17:48:58.841743 kubelet[2791]: I0317 17:48:58.841094 2791 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vw7pn\" (UniqueName: \"kubernetes.io/projected/349e066e-c1be-47c7-b7f8-9a38bec4202a-kube-api-access-vw7pn\") on node \"ci-4230-1-0-9-a82243c43d\" DevicePath \"\"" Mar 17 17:48:58.873985 kubelet[2791]: E0317 17:48:58.873811 2791 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 17:48:59.455379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0222937b0bfd9f0c024739c5358d94f8962ac5fd1c3392e33cc97856e2598552-rootfs.mount: Deactivated successfully. Mar 17 17:48:59.455727 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0222937b0bfd9f0c024739c5358d94f8962ac5fd1c3392e33cc97856e2598552-shm.mount: Deactivated successfully. Mar 17 17:48:59.455901 systemd[1]: var-lib-kubelet-pods-349e066e\x2dc1be\x2d47c7\x2db7f8\x2d9a38bec4202a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvw7pn.mount: Deactivated successfully. Mar 17 17:48:59.456157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7677ad36438c85a5405c0fda856bc0d4d5f95248f4c06a2e7efb65ce2251fc97-rootfs.mount: Deactivated successfully. 
Mar 17 17:48:59.456300 systemd[1]: var-lib-kubelet-pods-4954672b\x2dad5c\x2d4662\x2dbae7\x2d9b2f2cb140a9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtwrgp.mount: Deactivated successfully. Mar 17 17:48:59.456435 systemd[1]: var-lib-kubelet-pods-349e066e\x2dc1be\x2d47c7\x2db7f8\x2d9a38bec4202a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 17:48:59.456576 systemd[1]: var-lib-kubelet-pods-349e066e\x2dc1be\x2d47c7\x2db7f8\x2d9a38bec4202a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 17:49:00.525776 sshd[4408]: Connection closed by 139.178.89.65 port 60828 Mar 17 17:49:00.526912 sshd-session[4406]: pam_unix(sshd:session): session closed for user core Mar 17 17:49:00.533136 systemd-logind[1486]: Session 21 logged out. Waiting for processes to exit. Mar 17 17:49:00.533888 systemd[1]: sshd@20-128.140.94.11:22-139.178.89.65:60828.service: Deactivated successfully. Mar 17 17:49:00.539048 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 17:49:00.540802 systemd-logind[1486]: Removed session 21. Mar 17 17:49:00.695567 kubelet[2791]: I0317 17:49:00.695445 2791 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="349e066e-c1be-47c7-b7f8-9a38bec4202a" path="/var/lib/kubelet/pods/349e066e-c1be-47c7-b7f8-9a38bec4202a/volumes" Mar 17 17:49:00.696424 kubelet[2791]: I0317 17:49:00.696387 2791 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4954672b-ad5c-4662-bae7-9b2f2cb140a9" path="/var/lib/kubelet/pods/4954672b-ad5c-4662-bae7-9b2f2cb140a9/volumes" Mar 17 17:49:00.705968 systemd[1]: Started sshd@21-128.140.94.11:22-139.178.89.65:60842.service - OpenSSH per-connection server daemon (139.178.89.65:60842). Mar 17 17:49:01.687504 sshd[4568]: Accepted publickey for core from 139.178.89.65 port 60842 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8 Mar 17 17:49:01.691512 sshd-session[4568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:01.703201 systemd-logind[1486]: New session 22 of user core. Mar 17 17:49:01.708295 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 17 17:49:02.930715 kubelet[2791]: I0317 17:49:02.930678 2791 memory_manager.go:355] "RemoveStaleState removing state" podUID="4954672b-ad5c-4662-bae7-9b2f2cb140a9" containerName="cilium-operator" Mar 17 17:49:02.931906 kubelet[2791]: I0317 17:49:02.931713 2791 memory_manager.go:355] "RemoveStaleState removing state" podUID="349e066e-c1be-47c7-b7f8-9a38bec4202a" containerName="cilium-agent" Mar 17 17:49:02.943087 systemd[1]: Created slice kubepods-burstable-pod209646b1_8e20_408d_ad2c_f4235360a9ef.slice - libcontainer container kubepods-burstable-pod209646b1_8e20_408d_ad2c_f4235360a9ef.slice. 
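
The var-lib-kubelet mount units deactivated above carry systemd's escaped form of the volume paths under /var/lib/kubelet/pods: "/" maps to "-", while reserved bytes such as "-" and "~" become \x2d and \x7e. A rough sketch of the rule, close to systemd-escape --path but not byte-for-byte identical (leading dots and repeated slashes are not handled here):

```go
package main

import "fmt"

// escapePath approximates systemd's path escaping as seen in the mount
// unit names above: keep [A-Za-z0-9:_.], turn "/" into "-", and encode
// every other byte as \xXX (so "-" itself becomes \x2d and "~" \x7e).
func escapePath(p string) string {
	for len(p) > 0 && p[0] == '/' {
		p = p[1:] // the unit name is relative to the filesystem root
	}
	out := make([]byte, 0, len(p))
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			out = append(out, '-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			out = append(out, c)
		default:
			out = append(out, fmt.Sprintf(`\x%02x`, c)...)
		}
	}
	return string(out)
}

func main() {
	p := "/var/lib/kubelet/pods/349e066e-c1be-47c7-b7f8-9a38bec4202a/volumes/kubernetes.io~projected/hubble-tls"
	fmt.Println(escapePath(p) + ".mount")
	// var-lib-kubelet-pods-349e066e\x2dc1be\x2d47c7\x2db7f8\x2d9a38bec4202a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount
}
```
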
Mar 17 17:49:02.945939 kubelet[2791]: W0317 17:49:02.944826 2791 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4230-1-0-9-a82243c43d" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-1-0-9-a82243c43d' and this object Mar 17 17:49:02.945939 kubelet[2791]: E0317 17:49:02.944894 2791 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4230-1-0-9-a82243c43d\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-1-0-9-a82243c43d' and this object" logger="UnhandledError" Mar 17 17:49:02.945939 kubelet[2791]: I0317 17:49:02.944949 2791 status_manager.go:890] "Failed to get status for pod" podUID="209646b1-8e20-408d-ad2c-f4235360a9ef" pod="kube-system/cilium-8qf58" err="pods \"cilium-8qf58\" is forbidden: User \"system:node:ci-4230-1-0-9-a82243c43d\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-1-0-9-a82243c43d' and this object" Mar 17 17:49:02.945939 kubelet[2791]: W0317 17:49:02.945000 2791 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4230-1-0-9-a82243c43d" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-1-0-9-a82243c43d' and this object Mar 17 17:49:02.945939 kubelet[2791]: E0317 17:49:02.945014 2791 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-4230-1-0-9-a82243c43d\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-1-0-9-a82243c43d' and this object" logger="UnhandledError" Mar 17 17:49:02.946096 kubelet[2791]: W0317 17:49:02.945056 2791 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4230-1-0-9-a82243c43d" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-1-0-9-a82243c43d' and this object Mar 17 17:49:02.946096 kubelet[2791]: E0317 17:49:02.945067 2791 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4230-1-0-9-a82243c43d\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-1-0-9-a82243c43d' and this object" logger="UnhandledError" Mar 17 17:49:02.946096 kubelet[2791]: W0317 17:49:02.945115 2791 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4230-1-0-9-a82243c43d" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-1-0-9-a82243c43d' and this object Mar 17 17:49:02.946096 kubelet[2791]: E0317 17:49:02.945125 
2791 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4230-1-0-9-a82243c43d\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-1-0-9-a82243c43d' and this object" logger="UnhandledError" Mar 17 17:49:03.071901 kubelet[2791]: I0317 17:49:03.071784 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/209646b1-8e20-408d-ad2c-f4235360a9ef-hubble-tls\") pod \"cilium-8qf58\" (UID: \"209646b1-8e20-408d-ad2c-f4235360a9ef\") " pod="kube-system/cilium-8qf58" Mar 17 17:49:03.072818 kubelet[2791]: I0317 17:49:03.072334 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgbsz\" (UniqueName: \"kubernetes.io/projected/209646b1-8e20-408d-ad2c-f4235360a9ef-kube-api-access-hgbsz\") pod \"cilium-8qf58\" (UID: \"209646b1-8e20-408d-ad2c-f4235360a9ef\") " pod="kube-system/cilium-8qf58" Mar 17 17:49:03.072818 kubelet[2791]: I0317 17:49:03.072515 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/209646b1-8e20-408d-ad2c-f4235360a9ef-etc-cni-netd\") pod \"cilium-8qf58\" (UID: \"209646b1-8e20-408d-ad2c-f4235360a9ef\") " pod="kube-system/cilium-8qf58" Mar 17 17:49:03.072818 kubelet[2791]: I0317 17:49:03.072591 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/209646b1-8e20-408d-ad2c-f4235360a9ef-cilium-config-path\") pod \"cilium-8qf58\" (UID: \"209646b1-8e20-408d-ad2c-f4235360a9ef\") " pod="kube-system/cilium-8qf58" Mar 17 17:49:03.072818 kubelet[2791]: I0317 17:49:03.072674 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/209646b1-8e20-408d-ad2c-f4235360a9ef-host-proc-sys-kernel\") pod \"cilium-8qf58\" (UID: \"209646b1-8e20-408d-ad2c-f4235360a9ef\") " pod="kube-system/cilium-8qf58" Mar 17 17:49:03.072818 kubelet[2791]: I0317 17:49:03.072756 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/209646b1-8e20-408d-ad2c-f4235360a9ef-cilium-cgroup\") pod \"cilium-8qf58\" (UID: \"209646b1-8e20-408d-ad2c-f4235360a9ef\") " pod="kube-system/cilium-8qf58" Mar 17 17:49:03.073719 kubelet[2791]: I0317 17:49:03.073366 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/209646b1-8e20-408d-ad2c-f4235360a9ef-cilium-run\") pod \"cilium-8qf58\" (UID: \"209646b1-8e20-408d-ad2c-f4235360a9ef\") " pod="kube-system/cilium-8qf58" Mar 17 17:49:03.073719 kubelet[2791]: I0317 17:49:03.073535 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/209646b1-8e20-408d-ad2c-f4235360a9ef-lib-modules\") pod \"cilium-8qf58\" (UID: \"209646b1-8e20-408d-ad2c-f4235360a9ef\") " pod="kube-system/cilium-8qf58" Mar 17 17:49:03.073719 kubelet[2791]: I0317 17:49:03.073675 2791 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/209646b1-8e20-408d-ad2c-f4235360a9ef-xtables-lock\") pod \"cilium-8qf58\" (UID: \"209646b1-8e20-408d-ad2c-f4235360a9ef\") " pod="kube-system/cilium-8qf58" Mar 17 17:49:03.074362 kubelet[2791]: I0317 17:49:03.073759 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/209646b1-8e20-408d-ad2c-f4235360a9ef-hostproc\") pod \"cilium-8qf58\" (UID: \"209646b1-8e20-408d-ad2c-f4235360a9ef\") " pod="kube-system/cilium-8qf58" Mar 17 17:49:03.074362 kubelet[2791]: I0317 17:49:03.073814 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/209646b1-8e20-408d-ad2c-f4235360a9ef-cni-path\") pod \"cilium-8qf58\" (UID: \"209646b1-8e20-408d-ad2c-f4235360a9ef\") " pod="kube-system/cilium-8qf58" Mar 17 17:49:03.074362 kubelet[2791]: I0317 17:49:03.073851 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/209646b1-8e20-408d-ad2c-f4235360a9ef-host-proc-sys-net\") pod \"cilium-8qf58\" (UID: \"209646b1-8e20-408d-ad2c-f4235360a9ef\") " pod="kube-system/cilium-8qf58" Mar 17 17:49:03.074362 kubelet[2791]: I0317 17:49:03.073915 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/209646b1-8e20-408d-ad2c-f4235360a9ef-bpf-maps\") pod \"cilium-8qf58\" (UID: \"209646b1-8e20-408d-ad2c-f4235360a9ef\") " pod="kube-system/cilium-8qf58" Mar 17 17:49:03.074362 kubelet[2791]: I0317 17:49:03.073942 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/209646b1-8e20-408d-ad2c-f4235360a9ef-clustermesh-secrets\") pod \"cilium-8qf58\" (UID: \"209646b1-8e20-408d-ad2c-f4235360a9ef\") " pod="kube-system/cilium-8qf58" Mar 17 17:49:03.074362 kubelet[2791]: I0317 17:49:03.073965 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/209646b1-8e20-408d-ad2c-f4235360a9ef-cilium-ipsec-secrets\") pod \"cilium-8qf58\" (UID: \"209646b1-8e20-408d-ad2c-f4235360a9ef\") " pod="kube-system/cilium-8qf58" Mar 17 17:49:03.083705 sshd[4570]: Connection closed by 139.178.89.65 port 60842 Mar 17 17:49:03.085038 sshd-session[4568]: pam_unix(sshd:session): session closed for user core Mar 17 17:49:03.090023 systemd[1]: sshd@21-128.140.94.11:22-139.178.89.65:60842.service: Deactivated successfully. Mar 17 17:49:03.094218 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 17:49:03.096458 systemd-logind[1486]: Session 22 logged out. Waiting for processes to exit. Mar 17 17:49:03.098341 systemd-logind[1486]: Removed session 22. Mar 17 17:49:03.266021 systemd[1]: Started sshd@22-128.140.94.11:22-139.178.89.65:55144.service - OpenSSH per-connection server daemon (139.178.89.65:55144). 
Mar 17 17:49:03.876217 kubelet[2791]: E0317 17:49:03.875800 2791 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 17:49:04.176753 kubelet[2791]: E0317 17:49:04.176501 2791 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Mar 17 17:49:04.176753 kubelet[2791]: E0317 17:49:04.176683 2791 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/209646b1-8e20-408d-ad2c-f4235360a9ef-cilium-config-path podName:209646b1-8e20-408d-ad2c-f4235360a9ef nodeName:}" failed. No retries permitted until 2025-03-17 17:49:04.676611723 +0000 UTC m=+356.151400471 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/209646b1-8e20-408d-ad2c-f4235360a9ef-cilium-config-path") pod "cilium-8qf58" (UID: "209646b1-8e20-408d-ad2c-f4235360a9ef") : failed to sync configmap cache: timed out waiting for the condition Mar 17 17:49:04.177805 kubelet[2791]: E0317 17:49:04.177369 2791 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Mar 17 17:49:04.177805 kubelet[2791]: E0317 17:49:04.177417 2791 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-8qf58: failed to sync secret cache: timed out waiting for the condition Mar 17 17:49:04.177805 kubelet[2791]: E0317 17:49:04.177514 2791 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/209646b1-8e20-408d-ad2c-f4235360a9ef-hubble-tls podName:209646b1-8e20-408d-ad2c-f4235360a9ef nodeName:}" failed. No retries permitted until 2025-03-17 17:49:04.677483039 +0000 UTC m=+356.152271827 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/209646b1-8e20-408d-ad2c-f4235360a9ef-hubble-tls") pod "cilium-8qf58" (UID: "209646b1-8e20-408d-ad2c-f4235360a9ef") : failed to sync secret cache: timed out waiting for the condition Mar 17 17:49:04.257569 sshd[4581]: Accepted publickey for core from 139.178.89.65 port 55144 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8 Mar 17 17:49:04.259620 sshd-session[4581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:04.264772 systemd-logind[1486]: New session 23 of user core. Mar 17 17:49:04.275173 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 17 17:49:04.750498 containerd[1502]: time="2025-03-17T17:49:04.750092541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8qf58,Uid:209646b1-8e20-408d-ad2c-f4235360a9ef,Namespace:kube-system,Attempt:0,}" Mar 17 17:49:04.778830 containerd[1502]: time="2025-03-17T17:49:04.778312394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:49:04.778830 containerd[1502]: time="2025-03-17T17:49:04.778369434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:49:04.778830 containerd[1502]: time="2025-03-17T17:49:04.778382353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:04.778830 containerd[1502]: time="2025-03-17T17:49:04.778608712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:04.805869 systemd[1]: Started cri-containerd-52598367c54673eebb4584f771254f245940774d047d6caa735fffbaf72ae0f7.scope - libcontainer container 52598367c54673eebb4584f771254f245940774d047d6caa735fffbaf72ae0f7. Mar 17 17:49:04.832945 containerd[1502]: time="2025-03-17T17:49:04.832901710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8qf58,Uid:209646b1-8e20-408d-ad2c-f4235360a9ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"52598367c54673eebb4584f771254f245940774d047d6caa735fffbaf72ae0f7\"" Mar 17 17:49:04.836933 containerd[1502]: time="2025-03-17T17:49:04.836865169Z" level=info msg="CreateContainer within sandbox \"52598367c54673eebb4584f771254f245940774d047d6caa735fffbaf72ae0f7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 17:49:04.851898 containerd[1502]: time="2025-03-17T17:49:04.851699572Z" level=info msg="CreateContainer within sandbox \"52598367c54673eebb4584f771254f245940774d047d6caa735fffbaf72ae0f7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a0949fbe859d1fff33db1384af6c8a287384403f55a22e0b62a45003abb632f4\"" Mar 17 17:49:04.852734 containerd[1502]: time="2025-03-17T17:49:04.852680087Z" level=info msg="StartContainer for \"a0949fbe859d1fff33db1384af6c8a287384403f55a22e0b62a45003abb632f4\"" Mar 17 17:49:04.879874 systemd[1]: Started cri-containerd-a0949fbe859d1fff33db1384af6c8a287384403f55a22e0b62a45003abb632f4.scope - libcontainer container a0949fbe859d1fff33db1384af6c8a287384403f55a22e0b62a45003abb632f4. Mar 17 17:49:04.909864 containerd[1502]: time="2025-03-17T17:49:04.909797310Z" level=info msg="StartContainer for \"a0949fbe859d1fff33db1384af6c8a287384403f55a22e0b62a45003abb632f4\" returns successfully" Mar 17 17:49:04.917774 systemd[1]: cri-containerd-a0949fbe859d1fff33db1384af6c8a287384403f55a22e0b62a45003abb632f4.scope: Deactivated successfully. Mar 17 17:49:04.937777 sshd[4585]: Connection closed by 139.178.89.65 port 55144 Mar 17 17:49:04.939673 sshd-session[4581]: pam_unix(sshd:session): session closed for user core Mar 17 17:49:04.945888 systemd[1]: sshd@22-128.140.94.11:22-139.178.89.65:55144.service: Deactivated successfully. Mar 17 17:49:04.949540 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 17:49:04.951935 systemd-logind[1486]: Session 23 logged out. Waiting for processes to exit. Mar 17 17:49:04.955990 systemd-logind[1486]: Removed session 23. Mar 17 17:49:04.973854 containerd[1502]: time="2025-03-17T17:49:04.973705818Z" level=info msg="shim disconnected" id=a0949fbe859d1fff33db1384af6c8a287384403f55a22e0b62a45003abb632f4 namespace=k8s.io Mar 17 17:49:04.973854 containerd[1502]: time="2025-03-17T17:49:04.973801897Z" level=warning msg="cleaning up after shim disconnected" id=a0949fbe859d1fff33db1384af6c8a287384403f55a22e0b62a45003abb632f4 namespace=k8s.io Mar 17 17:49:04.973854 containerd[1502]: time="2025-03-17T17:49:04.973814857Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:49:05.114215 systemd[1]: Started sshd@23-128.140.94.11:22-139.178.89.65:55160.service - OpenSSH per-connection server daemon (139.178.89.65:55160). 
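
The MountVolume.SetUp failures a few entries back are retried on an exponential schedule; the logged "durationBeforeRetry 500ms" is the first step, and each further failure roughly doubles the wait up to a cap. The constants below are illustrative, not kubelet's exact tuning:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond // the "durationBeforeRetry 500ms" above
	maxDelay := 2 * time.Minute     // assumed cap for this sketch
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: no retries permitted for %s\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```
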
Mar 17 17:49:05.317079 kubelet[2791]: I0317 17:49:05.315742 2791 setters.go:602] "Node became not ready" node="ci-4230-1-0-9-a82243c43d" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T17:49:05Z","lastTransitionTime":"2025-03-17T17:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 17 17:49:05.802215 containerd[1502]: time="2025-03-17T17:49:05.802095250Z" level=info msg="CreateContainer within sandbox \"52598367c54673eebb4584f771254f245940774d047d6caa735fffbaf72ae0f7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 17:49:05.835908 containerd[1502]: time="2025-03-17T17:49:05.835843402Z" level=info msg="CreateContainer within sandbox \"52598367c54673eebb4584f771254f245940774d047d6caa735fffbaf72ae0f7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ae2f60d6af282fe54cfde55c5d49c5a5ad4895a8f2f9f7434fd44b1948f81807\"" Mar 17 17:49:05.837052 containerd[1502]: time="2025-03-17T17:49:05.836823477Z" level=info msg="StartContainer for \"ae2f60d6af282fe54cfde55c5d49c5a5ad4895a8f2f9f7434fd44b1948f81807\"" Mar 17 17:49:05.872015 systemd[1]: Started cri-containerd-ae2f60d6af282fe54cfde55c5d49c5a5ad4895a8f2f9f7434fd44b1948f81807.scope - libcontainer container ae2f60d6af282fe54cfde55c5d49c5a5ad4895a8f2f9f7434fd44b1948f81807. Mar 17 17:49:05.917695 containerd[1502]: time="2025-03-17T17:49:05.917227757Z" level=info msg="StartContainer for \"ae2f60d6af282fe54cfde55c5d49c5a5ad4895a8f2f9f7434fd44b1948f81807\" returns successfully" Mar 17 17:49:05.927365 systemd[1]: cri-containerd-ae2f60d6af282fe54cfde55c5d49c5a5ad4895a8f2f9f7434fd44b1948f81807.scope: Deactivated successfully. Mar 17 17:49:05.955576 containerd[1502]: time="2025-03-17T17:49:05.955271847Z" level=info msg="shim disconnected" id=ae2f60d6af282fe54cfde55c5d49c5a5ad4895a8f2f9f7434fd44b1948f81807 namespace=k8s.io Mar 17 17:49:05.955576 containerd[1502]: time="2025-03-17T17:49:05.955340927Z" level=warning msg="cleaning up after shim disconnected" id=ae2f60d6af282fe54cfde55c5d49c5a5ad4895a8f2f9f7434fd44b1948f81807 namespace=k8s.io Mar 17 17:49:05.955576 containerd[1502]: time="2025-03-17T17:49:05.955352327Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:49:06.095002 sshd[4698]: Accepted publickey for core from 139.178.89.65 port 55160 ssh2: RSA SHA256:Jttd1rZ+ulYi7GH+BRtc3021KMKgFEk4z8ruhpXqUv8 Mar 17 17:49:06.099398 sshd-session[4698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:06.106267 systemd-logind[1486]: New session 24 of user core. Mar 17 17:49:06.109888 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 17 17:49:06.701002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae2f60d6af282fe54cfde55c5d49c5a5ad4895a8f2f9f7434fd44b1948f81807-rootfs.mount: Deactivated successfully. 
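
Every kubelet[2791] entry shares klog's header layout: a severity letter (I/W/E/F) fused to an MMDD date, a wall-clock time with microseconds, the thread/PID field, and the source file:line, followed by the message. A small parser for that prefix, handy when sifting through logs like these:

```go
package main

import (
	"fmt"
	"regexp"
)

// klogHeader matches klog-formatted lines such as
//   E0317 17:49:03.875800 2791 kubelet.go:3008] "Container runtime network not ready" ...
var klogHeader = regexp.MustCompile(
	`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.]+):(\d+)\] (.*)$`)

func main() {
	line := `E0317 17:49:03.875800 2791 kubelet.go:3008] "Container runtime network not ready"`
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		panic("not a klog line")
	}
	fmt.Printf("severity=%s month=%s day=%s time=%s pid=%s source=%s:%s\nmsg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
}
```
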
Mar 17 17:49:06.807707 containerd[1502]: time="2025-03-17T17:49:06.805865675Z" level=info msg="CreateContainer within sandbox \"52598367c54673eebb4584f771254f245940774d047d6caa735fffbaf72ae0f7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 17:49:06.836491 containerd[1502]: time="2025-03-17T17:49:06.836439330Z" level=info msg="CreateContainer within sandbox \"52598367c54673eebb4584f771254f245940774d047d6caa735fffbaf72ae0f7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"731a85a93cc87a429e321a58f6dcf627d54d8ba9d16a24fe7da7a5c99d02e25d\"" Mar 17 17:49:06.840605 containerd[1502]: time="2025-03-17T17:49:06.838851399Z" level=info msg="StartContainer for \"731a85a93cc87a429e321a58f6dcf627d54d8ba9d16a24fe7da7a5c99d02e25d\"" Mar 17 17:49:06.895565 systemd[1]: Started cri-containerd-731a85a93cc87a429e321a58f6dcf627d54d8ba9d16a24fe7da7a5c99d02e25d.scope - libcontainer container 731a85a93cc87a429e321a58f6dcf627d54d8ba9d16a24fe7da7a5c99d02e25d. Mar 17 17:49:06.938556 containerd[1502]: time="2025-03-17T17:49:06.937810008Z" level=info msg="StartContainer for \"731a85a93cc87a429e321a58f6dcf627d54d8ba9d16a24fe7da7a5c99d02e25d\" returns successfully" Mar 17 17:49:06.942439 systemd[1]: cri-containerd-731a85a93cc87a429e321a58f6dcf627d54d8ba9d16a24fe7da7a5c99d02e25d.scope: Deactivated successfully. Mar 17 17:49:06.970457 containerd[1502]: time="2025-03-17T17:49:06.970166095Z" level=info msg="shim disconnected" id=731a85a93cc87a429e321a58f6dcf627d54d8ba9d16a24fe7da7a5c99d02e25d namespace=k8s.io Mar 17 17:49:06.970457 containerd[1502]: time="2025-03-17T17:49:06.970226174Z" level=warning msg="cleaning up after shim disconnected" id=731a85a93cc87a429e321a58f6dcf627d54d8ba9d16a24fe7da7a5c99d02e25d namespace=k8s.io Mar 17 17:49:06.970457 containerd[1502]: time="2025-03-17T17:49:06.970234734Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:49:07.697666 systemd[1]: run-containerd-runc-k8s.io-731a85a93cc87a429e321a58f6dcf627d54d8ba9d16a24fe7da7a5c99d02e25d-runc.d2rVuP.mount: Deactivated successfully. Mar 17 17:49:07.697945 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-731a85a93cc87a429e321a58f6dcf627d54d8ba9d16a24fe7da7a5c99d02e25d-rootfs.mount: Deactivated successfully. Mar 17 17:49:07.817624 containerd[1502]: time="2025-03-17T17:49:07.817584289Z" level=info msg="CreateContainer within sandbox \"52598367c54673eebb4584f771254f245940774d047d6caa735fffbaf72ae0f7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 17:49:07.837095 containerd[1502]: time="2025-03-17T17:49:07.836947681Z" level=info msg="CreateContainer within sandbox \"52598367c54673eebb4584f771254f245940774d047d6caa735fffbaf72ae0f7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ec278e2c2dfe5690a5242413faa8400006322cf06070858aca7b1ce31ce0703d\"" Mar 17 17:49:07.839202 containerd[1502]: time="2025-03-17T17:49:07.838921112Z" level=info msg="StartContainer for \"ec278e2c2dfe5690a5242413faa8400006322cf06070858aca7b1ce31ce0703d\"" Mar 17 17:49:07.873871 systemd[1]: Started cri-containerd-ec278e2c2dfe5690a5242413faa8400006322cf06070858aca7b1ce31ce0703d.scope - libcontainer container ec278e2c2dfe5690a5242413faa8400006322cf06070858aca7b1ce31ce0703d. Mar 17 17:49:07.903657 systemd[1]: cri-containerd-ec278e2c2dfe5690a5242413faa8400006322cf06070858aca7b1ce31ce0703d.scope: Deactivated successfully. 
Mar 17 17:49:07.908251 containerd[1502]: time="2025-03-17T17:49:07.908203559Z" level=info msg="StartContainer for \"ec278e2c2dfe5690a5242413faa8400006322cf06070858aca7b1ce31ce0703d\" returns successfully" Mar 17 17:49:07.931076 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec278e2c2dfe5690a5242413faa8400006322cf06070858aca7b1ce31ce0703d-rootfs.mount: Deactivated successfully. Mar 17 17:49:07.935889 containerd[1502]: time="2025-03-17T17:49:07.935781194Z" level=info msg="shim disconnected" id=ec278e2c2dfe5690a5242413faa8400006322cf06070858aca7b1ce31ce0703d namespace=k8s.io Mar 17 17:49:07.935889 containerd[1502]: time="2025-03-17T17:49:07.935876353Z" level=warning msg="cleaning up after shim disconnected" id=ec278e2c2dfe5690a5242413faa8400006322cf06070858aca7b1ce31ce0703d namespace=k8s.io Mar 17 17:49:07.935889 containerd[1502]: time="2025-03-17T17:49:07.935890233Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:49:08.697514 containerd[1502]: time="2025-03-17T17:49:08.697456537Z" level=info msg="StopPodSandbox for \"0222937b0bfd9f0c024739c5358d94f8962ac5fd1c3392e33cc97856e2598552\"" Mar 17 17:49:08.697699 containerd[1502]: time="2025-03-17T17:49:08.697573097Z" level=info msg="TearDown network for sandbox \"0222937b0bfd9f0c024739c5358d94f8962ac5fd1c3392e33cc97856e2598552\" successfully" Mar 17 17:49:08.697699 containerd[1502]: time="2025-03-17T17:49:08.697586337Z" level=info msg="StopPodSandbox for \"0222937b0bfd9f0c024739c5358d94f8962ac5fd1c3392e33cc97856e2598552\" returns successfully" Mar 17 17:49:08.698202 containerd[1502]: time="2025-03-17T17:49:08.698144974Z" level=info msg="RemovePodSandbox for \"0222937b0bfd9f0c024739c5358d94f8962ac5fd1c3392e33cc97856e2598552\"" Mar 17 17:49:08.698202 containerd[1502]: time="2025-03-17T17:49:08.698182094Z" level=info msg="Forcibly stopping sandbox \"0222937b0bfd9f0c024739c5358d94f8962ac5fd1c3392e33cc97856e2598552\"" Mar 17 17:49:08.698345 containerd[1502]: time="2025-03-17T17:49:08.698237134Z" level=info msg="TearDown network for sandbox \"0222937b0bfd9f0c024739c5358d94f8962ac5fd1c3392e33cc97856e2598552\" successfully" Mar 17 17:49:08.705662 containerd[1502]: time="2025-03-17T17:49:08.704426027Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0222937b0bfd9f0c024739c5358d94f8962ac5fd1c3392e33cc97856e2598552\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:49:08.705662 containerd[1502]: time="2025-03-17T17:49:08.704540107Z" level=info msg="RemovePodSandbox \"0222937b0bfd9f0c024739c5358d94f8962ac5fd1c3392e33cc97856e2598552\" returns successfully" Mar 17 17:49:08.706321 containerd[1502]: time="2025-03-17T17:49:08.706020420Z" level=info msg="StopPodSandbox for \"7677ad36438c85a5405c0fda856bc0d4d5f95248f4c06a2e7efb65ce2251fc97\"" Mar 17 17:49:08.706321 containerd[1502]: time="2025-03-17T17:49:08.706109660Z" level=info msg="TearDown network for sandbox \"7677ad36438c85a5405c0fda856bc0d4d5f95248f4c06a2e7efb65ce2251fc97\" successfully" Mar 17 17:49:08.706321 containerd[1502]: time="2025-03-17T17:49:08.706119060Z" level=info msg="StopPodSandbox for \"7677ad36438c85a5405c0fda856bc0d4d5f95248f4c06a2e7efb65ce2251fc97\" returns successfully" Mar 17 17:49:08.706903 containerd[1502]: time="2025-03-17T17:49:08.706737257Z" level=info msg="RemovePodSandbox for \"7677ad36438c85a5405c0fda856bc0d4d5f95248f4c06a2e7efb65ce2251fc97\"" Mar 17 17:49:08.706903 containerd[1502]: time="2025-03-17T17:49:08.706784777Z" level=info msg="Forcibly stopping sandbox \"7677ad36438c85a5405c0fda856bc0d4d5f95248f4c06a2e7efb65ce2251fc97\"" Mar 17 17:49:08.706903 containerd[1502]: time="2025-03-17T17:49:08.706856097Z" level=info msg="TearDown network for sandbox \"7677ad36438c85a5405c0fda856bc0d4d5f95248f4c06a2e7efb65ce2251fc97\" successfully" Mar 17 17:49:08.710511 containerd[1502]: time="2025-03-17T17:49:08.710363722Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7677ad36438c85a5405c0fda856bc0d4d5f95248f4c06a2e7efb65ce2251fc97\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:49:08.710511 containerd[1502]: time="2025-03-17T17:49:08.710420881Z" level=info msg="RemovePodSandbox \"7677ad36438c85a5405c0fda856bc0d4d5f95248f4c06a2e7efb65ce2251fc97\" returns successfully" Mar 17 17:49:08.822326 containerd[1502]: time="2025-03-17T17:49:08.820138888Z" level=info msg="CreateContainer within sandbox \"52598367c54673eebb4584f771254f245940774d047d6caa735fffbaf72ae0f7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 17:49:08.856448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2974427890.mount: Deactivated successfully. Mar 17 17:49:08.857214 containerd[1502]: time="2025-03-17T17:49:08.856444772Z" level=info msg="CreateContainer within sandbox \"52598367c54673eebb4584f771254f245940774d047d6caa735fffbaf72ae0f7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b234e76759f3428f17a5017faaf3e7ea5528cf420179facba5acb337fb3cfcc9\"" Mar 17 17:49:08.858349 containerd[1502]: time="2025-03-17T17:49:08.858312564Z" level=info msg="StartContainer for \"b234e76759f3428f17a5017faaf3e7ea5528cf420179facba5acb337fb3cfcc9\"" Mar 17 17:49:08.877535 kubelet[2791]: E0317 17:49:08.877424 2791 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 17:49:08.896062 systemd[1]: Started cri-containerd-b234e76759f3428f17a5017faaf3e7ea5528cf420179facba5acb337fb3cfcc9.scope - libcontainer container b234e76759f3428f17a5017faaf3e7ea5528cf420179facba5acb337fb3cfcc9. 
Mar 17 17:49:08.933022 containerd[1502]: time="2025-03-17T17:49:08.932857723Z" level=info msg="StartContainer for \"b234e76759f3428f17a5017faaf3e7ea5528cf420179facba5acb337fb3cfcc9\" returns successfully" Mar 17 17:49:09.281696 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Mar 17 17:49:12.394134 systemd-networkd[1399]: lxc_health: Link UP Mar 17 17:49:12.411768 systemd-networkd[1399]: lxc_health: Gained carrier Mar 17 17:49:12.777433 kubelet[2791]: I0317 17:49:12.777354 2791 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8qf58" podStartSLOduration=10.777330412 podStartE2EDuration="10.777330412s" podCreationTimestamp="2025-03-17 17:49:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:49:09.851107912 +0000 UTC m=+361.325896700" watchObservedRunningTime="2025-03-17 17:49:12.777330412 +0000 UTC m=+364.252119160" Mar 17 17:49:13.115160 systemd[1]: run-containerd-runc-k8s.io-b234e76759f3428f17a5017faaf3e7ea5528cf420179facba5acb337fb3cfcc9-runc.MGA3s8.mount: Deactivated successfully. Mar 17 17:49:13.584981 systemd-networkd[1399]: lxc_health: Gained IPv6LL Mar 17 17:49:17.684923 sshd[4762]: Connection closed by 139.178.89.65 port 55160 Mar 17 17:49:17.684814 sshd-session[4698]: pam_unix(sshd:session): session closed for user core Mar 17 17:49:17.692418 systemd-logind[1486]: Session 24 logged out. Waiting for processes to exit. Mar 17 17:49:17.693187 systemd[1]: sshd@23-128.140.94.11:22-139.178.89.65:55160.service: Deactivated successfully. Mar 17 17:49:17.698943 systemd[1]: session-24.scope: Deactivated successfully. Mar 17 17:49:17.702513 systemd-logind[1486]: Removed session 24.
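
The podStartSLOduration figure in the final kubelet entry is simply the observed running time minus the pod's creation timestamp; since both pull timestamps are the zero value (no image pull was needed), the SLO duration and the end-to-end duration coincide:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2025-03-17T17:49:02Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2025-03-17T17:49:12.777330412Z")
	fmt.Println(observed.Sub(created)) // 10.777330412s, the logged podStartSLOduration
}
```
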