Mar 17 18:47:48.895872 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Mar 17 18:47:48.895899 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Mon Mar 17 16:11:40 -00 2025 Mar 17 18:47:48.895910 kernel: KASLR enabled Mar 17 18:47:48.895916 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Mar 17 18:47:48.895921 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390b8118 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218 Mar 17 18:47:48.895927 kernel: random: crng init done Mar 17 18:47:48.895934 kernel: secureboot: Secure boot disabled Mar 17 18:47:48.895939 kernel: ACPI: Early table checksum verification disabled Mar 17 18:47:48.895945 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Mar 17 18:47:48.895953 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Mar 17 18:47:48.895965 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 18:47:48.895971 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 18:47:48.895976 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 18:47:48.898017 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 18:47:48.898033 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 18:47:48.898047 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 18:47:48.898053 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 18:47:48.898059 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 18:47:48.898065 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 18:47:48.898071 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Mar 17 18:47:48.898078 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Mar 17 18:47:48.898084 kernel: NUMA: Failed to initialise from firmware Mar 17 18:47:48.898090 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Mar 17 18:47:48.898096 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff] Mar 17 18:47:48.898102 kernel: Zone ranges: Mar 17 18:47:48.898110 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Mar 17 18:47:48.898117 kernel: DMA32 empty Mar 17 18:47:48.898123 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Mar 17 18:47:48.898129 kernel: Movable zone start for each node Mar 17 18:47:48.898135 kernel: Early memory node ranges Mar 17 18:47:48.898141 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff] Mar 17 18:47:48.898148 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff] Mar 17 18:47:48.898154 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff] Mar 17 18:47:48.898160 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Mar 17 18:47:48.898166 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Mar 17 18:47:48.898172 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Mar 17 18:47:48.898178 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Mar 17 18:47:48.898186 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Mar 17 18:47:48.898192 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Mar 17 18:47:48.898199 kernel: Initmem setup node 0 
[mem 0x0000000040000000-0x0000000139ffffff] Mar 17 18:47:48.898208 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Mar 17 18:47:48.898214 kernel: psci: probing for conduit method from ACPI. Mar 17 18:47:48.898221 kernel: psci: PSCIv1.1 detected in firmware. Mar 17 18:47:48.898229 kernel: psci: Using standard PSCI v0.2 function IDs Mar 17 18:47:48.898235 kernel: psci: Trusted OS migration not required Mar 17 18:47:48.898242 kernel: psci: SMC Calling Convention v1.1 Mar 17 18:47:48.898249 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Mar 17 18:47:48.898255 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Mar 17 18:47:48.898262 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Mar 17 18:47:48.898268 kernel: pcpu-alloc: [0] 0 [0] 1 Mar 17 18:47:48.898275 kernel: Detected PIPT I-cache on CPU0 Mar 17 18:47:48.898281 kernel: CPU features: detected: GIC system register CPU interface Mar 17 18:47:48.898287 kernel: CPU features: detected: Hardware dirty bit management Mar 17 18:47:48.898296 kernel: CPU features: detected: Spectre-v4 Mar 17 18:47:48.898302 kernel: CPU features: detected: Spectre-BHB Mar 17 18:47:48.898309 kernel: CPU features: kernel page table isolation forced ON by KASLR Mar 17 18:47:48.898315 kernel: CPU features: detected: Kernel page table isolation (KPTI) Mar 17 18:47:48.898322 kernel: CPU features: detected: ARM erratum 1418040 Mar 17 18:47:48.898328 kernel: CPU features: detected: SSBS not fully self-synchronizing Mar 17 18:47:48.898335 kernel: alternatives: applying boot alternatives Mar 17 18:47:48.898343 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=f8298a09e890fc732131b7281e24befaf65b596eb5216e969c8eca4cab4a2b3a Mar 17 18:47:48.898350 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 17 18:47:48.898357 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 17 18:47:48.898363 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 17 18:47:48.898371 kernel: Fallback order for Node 0: 0 Mar 17 18:47:48.898378 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Mar 17 18:47:48.898384 kernel: Policy zone: Normal Mar 17 18:47:48.898390 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 17 18:47:48.898397 kernel: software IO TLB: area num 2. Mar 17 18:47:48.898403 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Mar 17 18:47:48.898410 kernel: Memory: 3883896K/4096000K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38336K init, 897K bss, 212104K reserved, 0K cma-reserved) Mar 17 18:47:48.898416 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 17 18:47:48.898423 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 17 18:47:48.898430 kernel: rcu: RCU event tracing is enabled. Mar 17 18:47:48.898500 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 17 18:47:48.898508 kernel: Trampoline variant of Tasks RCU enabled. Mar 17 18:47:48.898518 kernel: Tracing variant of Tasks RCU enabled. 
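The kernel command line recorded above is a flat, space-separated list of switches and key=value parameters (mount.usr=, verity.usrhash=, flatcar.oem.id=hetzner, and so on). A minimal sketch of splitting such a line into a dictionary for inspection, using the exact string from the log; parse_cmdline is an illustrative helper, not a kernel or Flatcar API, and it ignores quoting, which this particular line does not use:

    # Split a kernel command line (as logged above) into bare switches and
    # key=value parameters. Quoted values are not handled; the line in this
    # log does not contain any.
    def parse_cmdline(cmdline: str) -> dict:
        params = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else True
        return params

    cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
               "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
               "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
               "console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force "
               "flatcar.oem.id=hetzner "
               "verity.usrhash=f8298a09e890fc732131b7281e24befaf65b596eb5216e969c8eca4cab4a2b3a")
    params = parse_cmdline(cmdline)
    print(params["root"], params["flatcar.oem.id"])   # LABEL=ROOT hetzner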
Mar 17 18:47:48.898525 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 17 18:47:48.898532 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 17 18:47:48.898538 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Mar 17 18:47:48.898545 kernel: GICv3: 256 SPIs implemented Mar 17 18:47:48.898552 kernel: GICv3: 0 Extended SPIs implemented Mar 17 18:47:48.898558 kernel: Root IRQ handler: gic_handle_irq Mar 17 18:47:48.898564 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Mar 17 18:47:48.898571 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Mar 17 18:47:48.898578 kernel: ITS [mem 0x08080000-0x0809ffff] Mar 17 18:47:48.898584 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Mar 17 18:47:48.898593 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Mar 17 18:47:48.898600 kernel: GICv3: using LPI property table @0x00000001000e0000 Mar 17 18:47:48.898607 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Mar 17 18:47:48.898613 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 17 18:47:48.898620 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 17 18:47:48.898627 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Mar 17 18:47:48.898634 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Mar 17 18:47:48.898640 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Mar 17 18:47:48.898647 kernel: Console: colour dummy device 80x25 Mar 17 18:47:48.898654 kernel: ACPI: Core revision 20230628 Mar 17 18:47:48.898661 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Mar 17 18:47:48.898670 kernel: pid_max: default: 32768 minimum: 301 Mar 17 18:47:48.898677 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 17 18:47:48.898684 kernel: landlock: Up and running. Mar 17 18:47:48.898691 kernel: SELinux: Initializing. Mar 17 18:47:48.898698 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 18:47:48.898705 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 18:47:48.898712 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 17 18:47:48.898719 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 17 18:47:48.898725 kernel: rcu: Hierarchical SRCU implementation. Mar 17 18:47:48.898734 kernel: rcu: Max phase no-delay instances is 400. Mar 17 18:47:48.898741 kernel: Platform MSI: ITS@0x8080000 domain created Mar 17 18:47:48.898748 kernel: PCI/MSI: ITS@0x8080000 domain created Mar 17 18:47:48.898755 kernel: Remapping and enabling EFI services. Mar 17 18:47:48.898762 kernel: smp: Bringing up secondary CPUs ... 
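The timer figures above are internally consistent: a 25.00 MHz counter gives the logged 40 ns resolution, and with the conventional BogoMIPS formula (lpj * HZ / 500000) the logged lpj=25000 and 50.00 BogoMIPS imply a 1000 Hz tick. HZ itself is never printed in this log, so that last value is an inference, not a logged fact:

    # Cross-check of the timer lines above. freq, lpj and the BogoMIPS value
    # are taken from the log; HZ = 1000 is inferred from them, not logged.
    freq_hz = 25_000_000                  # arch_timer frequency from the log
    print(1e9 / freq_hz)                  # 40.0 ns, matches "resolution 40ns"
    lpj, hz = 25_000, 1000
    print(lpj * hz / 500_000)             # 50.0, matches "50.00 BogoMIPS"
    print(freq_hz // hz == lpj)           # True: lpj calculated from the timer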
Mar 17 18:47:48.898769 kernel: Detected PIPT I-cache on CPU1 Mar 17 18:47:48.898776 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Mar 17 18:47:48.898791 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Mar 17 18:47:48.898800 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 17 18:47:48.898809 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Mar 17 18:47:48.898816 kernel: smp: Brought up 1 node, 2 CPUs Mar 17 18:47:48.898840 kernel: SMP: Total of 2 processors activated. Mar 17 18:47:48.898851 kernel: CPU features: detected: 32-bit EL0 Support Mar 17 18:47:48.898858 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Mar 17 18:47:48.898865 kernel: CPU features: detected: Common not Private translations Mar 17 18:47:48.898872 kernel: CPU features: detected: CRC32 instructions Mar 17 18:47:48.898879 kernel: CPU features: detected: Enhanced Virtualization Traps Mar 17 18:47:48.898886 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Mar 17 18:47:48.898895 kernel: CPU features: detected: LSE atomic instructions Mar 17 18:47:48.898902 kernel: CPU features: detected: Privileged Access Never Mar 17 18:47:48.898909 kernel: CPU features: detected: RAS Extension Support Mar 17 18:47:48.898916 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Mar 17 18:47:48.898923 kernel: CPU: All CPU(s) started at EL1 Mar 17 18:47:48.898930 kernel: alternatives: applying system-wide alternatives Mar 17 18:47:48.898937 kernel: devtmpfs: initialized Mar 17 18:47:48.898945 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 17 18:47:48.898954 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 17 18:47:48.898961 kernel: pinctrl core: initialized pinctrl subsystem Mar 17 18:47:48.898968 kernel: SMBIOS 3.0.0 present. Mar 17 18:47:48.898975 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Mar 17 18:47:48.899001 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 17 18:47:48.899009 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Mar 17 18:47:48.899016 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Mar 17 18:47:48.899023 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Mar 17 18:47:48.899031 kernel: audit: initializing netlink subsys (disabled) Mar 17 18:47:48.899040 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1 Mar 17 18:47:48.899047 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 17 18:47:48.899055 kernel: cpuidle: using governor menu Mar 17 18:47:48.899062 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
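Several of the CPU features detected above are also visible from userspace as hwcap flags in /proc/cpuinfo; the sketch below checks a few of them on a running system. The flag names crc32, atomics (LSE), lrcpc and ssbs are the standard arm64 hwcap strings for those features; kernel-internal detections such as PAN or the RAS extension do not appear in that list:

    # Sketch: confirm a few of the CPU features above from /proc/cpuinfo.
    # Only hwcap-backed features show up there; kernel-internal ones do not.
    def cpu_flags(path="/proc/cpuinfo"):
        with open(path) as f:
            for line in f:
                if line.startswith("Features"):
                    return set(line.split(":", 1)[1].split())
        return set()

    flags = cpu_flags()
    for flag in ("crc32", "atomics", "lrcpc", "ssbs"):
        print(flag, flag in flags)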
Mar 17 18:47:48.899095 kernel: ASID allocator initialised with 32768 entries Mar 17 18:47:48.899104 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 17 18:47:48.899111 kernel: Serial: AMBA PL011 UART driver Mar 17 18:47:48.899118 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Mar 17 18:47:48.899125 kernel: Modules: 0 pages in range for non-PLT usage Mar 17 18:47:48.899136 kernel: Modules: 509280 pages in range for PLT usage Mar 17 18:47:48.899143 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 17 18:47:48.899150 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Mar 17 18:47:48.899157 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Mar 17 18:47:48.899164 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Mar 17 18:47:48.899171 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 17 18:47:48.899183 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Mar 17 18:47:48.899191 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Mar 17 18:47:48.899198 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Mar 17 18:47:48.899208 kernel: ACPI: Added _OSI(Module Device) Mar 17 18:47:48.899215 kernel: ACPI: Added _OSI(Processor Device) Mar 17 18:47:48.899222 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 17 18:47:48.899229 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 17 18:47:48.899236 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 17 18:47:48.899262 kernel: ACPI: Interpreter enabled Mar 17 18:47:48.899271 kernel: ACPI: Using GIC for interrupt routing Mar 17 18:47:48.899278 kernel: ACPI: MCFG table detected, 1 entries Mar 17 18:47:48.899285 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Mar 17 18:47:48.899295 kernel: printk: console [ttyAMA0] enabled Mar 17 18:47:48.899302 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 17 18:47:48.899478 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 17 18:47:48.899561 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Mar 17 18:47:48.899627 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Mar 17 18:47:48.899692 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Mar 17 18:47:48.899755 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Mar 17 18:47:48.899768 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Mar 17 18:47:48.899776 kernel: PCI host bridge to bus 0000:00 Mar 17 18:47:48.899927 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Mar 17 18:47:48.902114 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Mar 17 18:47:48.902207 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Mar 17 18:47:48.902267 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 17 18:47:48.902354 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Mar 17 18:47:48.902444 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Mar 17 18:47:48.902513 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Mar 17 18:47:48.902584 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Mar 17 18:47:48.902666 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Mar 17 18:47:48.902733 kernel: pci 
0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Mar 17 18:47:48.902810 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Mar 17 18:47:48.902938 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Mar 17 18:47:48.903061 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Mar 17 18:47:48.903136 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Mar 17 18:47:48.903221 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Mar 17 18:47:48.903290 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Mar 17 18:47:48.903372 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Mar 17 18:47:48.903439 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Mar 17 18:47:48.903519 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Mar 17 18:47:48.903586 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Mar 17 18:47:48.903658 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Mar 17 18:47:48.903725 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Mar 17 18:47:48.903801 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Mar 17 18:47:48.903888 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Mar 17 18:47:48.903972 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Mar 17 18:47:48.905252 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Mar 17 18:47:48.905369 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Mar 17 18:47:48.905446 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] Mar 17 18:47:48.905526 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Mar 17 18:47:48.905600 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Mar 17 18:47:48.905678 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Mar 17 18:47:48.905747 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Mar 17 18:47:48.905846 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Mar 17 18:47:48.905924 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Mar 17 18:47:48.906021 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Mar 17 18:47:48.906093 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Mar 17 18:47:48.906162 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Mar 17 18:47:48.906245 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Mar 17 18:47:48.906314 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Mar 17 18:47:48.906395 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Mar 17 18:47:48.906465 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff] Mar 17 18:47:48.906543 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Mar 17 18:47:48.906621 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Mar 17 18:47:48.906693 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Mar 17 18:47:48.906762 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Mar 17 18:47:48.906858 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Mar 17 18:47:48.906935 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Mar 17 18:47:48.909176 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Mar 17 18:47:48.909280 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Mar 17 18:47:48.909362 kernel: pci 0000:00:02.0: bridge 
window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Mar 17 18:47:48.909430 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Mar 17 18:47:48.909496 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Mar 17 18:47:48.909567 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Mar 17 18:47:48.909633 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Mar 17 18:47:48.909699 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Mar 17 18:47:48.909770 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Mar 17 18:47:48.909885 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Mar 17 18:47:48.909969 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Mar 17 18:47:48.911456 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Mar 17 18:47:48.911590 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Mar 17 18:47:48.911657 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Mar 17 18:47:48.911731 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Mar 17 18:47:48.911795 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Mar 17 18:47:48.911882 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Mar 17 18:47:48.911965 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Mar 17 18:47:48.912048 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Mar 17 18:47:48.912115 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Mar 17 18:47:48.912185 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Mar 17 18:47:48.912251 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Mar 17 18:47:48.912316 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Mar 17 18:47:48.912387 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Mar 17 18:47:48.912796 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Mar 17 18:47:48.912946 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Mar 17 18:47:48.913065 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Mar 17 18:47:48.913136 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Mar 17 18:47:48.913202 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Mar 17 18:47:48.913272 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 
0x10000000-0x101fffff] Mar 17 18:47:48.913339 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] Mar 17 18:47:48.913409 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Mar 17 18:47:48.913484 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Mar 17 18:47:48.913554 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Mar 17 18:47:48.913621 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Mar 17 18:47:48.913690 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Mar 17 18:47:48.913758 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Mar 17 18:47:48.913845 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Mar 17 18:47:48.913919 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Mar 17 18:47:48.915154 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Mar 17 18:47:48.915268 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Mar 17 18:47:48.915340 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Mar 17 18:47:48.915407 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Mar 17 18:47:48.915475 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Mar 17 18:47:48.915541 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Mar 17 18:47:48.915618 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Mar 17 18:47:48.915684 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Mar 17 18:47:48.915756 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Mar 17 18:47:48.915862 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Mar 17 18:47:48.915944 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Mar 17 18:47:48.916166 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Mar 17 18:47:48.916248 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Mar 17 18:47:48.916314 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Mar 17 18:47:48.916388 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Mar 17 18:47:48.916453 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Mar 17 18:47:48.916522 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Mar 17 18:47:48.916586 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Mar 17 18:47:48.916654 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Mar 17 18:47:48.916718 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Mar 17 18:47:48.916785 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Mar 17 18:47:48.916870 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Mar 17 18:47:48.916945 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Mar 17 18:47:48.917081 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Mar 17 18:47:48.917153 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Mar 17 18:47:48.917217 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Mar 17 18:47:48.917284 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Mar 17 18:47:48.917350 kernel: pci 0000:00:03.0: BAR 13: assigned [io 
0x9000-0x9fff] Mar 17 18:47:48.917421 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Mar 17 18:47:48.917494 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Mar 17 18:47:48.917567 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Mar 17 18:47:48.917635 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Mar 17 18:47:48.917700 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Mar 17 18:47:48.917765 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Mar 17 18:47:48.917860 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Mar 17 18:47:48.917933 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Mar 17 18:47:48.918032 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Mar 17 18:47:48.918110 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Mar 17 18:47:48.918175 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Mar 17 18:47:48.918242 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Mar 17 18:47:48.918309 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Mar 17 18:47:48.918383 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Mar 17 18:47:48.918454 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Mar 17 18:47:48.918521 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Mar 17 18:47:48.918587 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Mar 17 18:47:48.918652 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Mar 17 18:47:48.918716 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Mar 17 18:47:48.918790 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Mar 17 18:47:48.918903 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Mar 17 18:47:48.918976 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Mar 17 18:47:48.919105 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Mar 17 18:47:48.919170 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Mar 17 18:47:48.919245 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Mar 17 18:47:48.919312 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff] Mar 17 18:47:48.919377 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Mar 17 18:47:48.919442 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Mar 17 18:47:48.919506 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Mar 17 18:47:48.919569 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Mar 17 18:47:48.919645 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Mar 17 18:47:48.919712 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Mar 17 18:47:48.919778 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Mar 17 18:47:48.919862 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Mar 17 18:47:48.919930 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Mar 17 18:47:48.920005 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Mar 17 18:47:48.920079 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Mar 17 18:47:48.920148 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Mar 17 18:47:48.920241 kernel: pci 
0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Mar 17 18:47:48.920312 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Mar 17 18:47:48.920378 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Mar 17 18:47:48.920443 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Mar 17 18:47:48.920509 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Mar 17 18:47:48.920577 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Mar 17 18:47:48.920641 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Mar 17 18:47:48.920709 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Mar 17 18:47:48.920773 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Mar 17 18:47:48.920853 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Mar 17 18:47:48.920921 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Mar 17 18:47:48.920995 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Mar 17 18:47:48.921079 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Mar 17 18:47:48.921149 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Mar 17 18:47:48.921209 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Mar 17 18:47:48.921271 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Mar 17 18:47:48.921347 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Mar 17 18:47:48.921409 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Mar 17 18:47:48.921469 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Mar 17 18:47:48.921538 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Mar 17 18:47:48.921601 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Mar 17 18:47:48.921665 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Mar 17 18:47:48.921746 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Mar 17 18:47:48.921809 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Mar 17 18:47:48.921915 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Mar 17 18:47:48.922024 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Mar 17 18:47:48.922107 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Mar 17 18:47:48.922169 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Mar 17 18:47:48.922248 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Mar 17 18:47:48.922309 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Mar 17 18:47:48.922371 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Mar 17 18:47:48.922440 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Mar 17 18:47:48.922505 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Mar 17 18:47:48.922570 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Mar 17 18:47:48.922642 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Mar 17 18:47:48.922704 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Mar 17 18:47:48.922767 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Mar 17 18:47:48.922856 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Mar 17 18:47:48.922919 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Mar 17 18:47:48.923625 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Mar 17 18:47:48.923764 kernel: 
pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Mar 17 18:47:48.923880 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Mar 17 18:47:48.923954 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Mar 17 18:47:48.923964 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Mar 17 18:47:48.923973 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Mar 17 18:47:48.923995 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Mar 17 18:47:48.924025 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Mar 17 18:47:48.924040 kernel: iommu: Default domain type: Translated Mar 17 18:47:48.924048 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 17 18:47:48.924055 kernel: efivars: Registered efivars operations Mar 17 18:47:48.924062 kernel: vgaarb: loaded Mar 17 18:47:48.924070 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 17 18:47:48.924079 kernel: VFS: Disk quotas dquot_6.6.0 Mar 17 18:47:48.924087 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 17 18:47:48.924095 kernel: pnp: PnP ACPI init Mar 17 18:47:48.924182 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Mar 17 18:47:48.924196 kernel: pnp: PnP ACPI: found 1 devices Mar 17 18:47:48.924203 kernel: NET: Registered PF_INET protocol family Mar 17 18:47:48.924211 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 17 18:47:48.924219 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 17 18:47:48.924227 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 17 18:47:48.924234 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 17 18:47:48.924242 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 17 18:47:48.924249 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 17 18:47:48.924258 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 18:47:48.924266 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 18:47:48.924273 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 17 18:47:48.924360 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Mar 17 18:47:48.924373 kernel: PCI: CLS 0 bytes, default 64 Mar 17 18:47:48.924381 kernel: kvm [1]: HYP mode not available Mar 17 18:47:48.924388 kernel: Initialise system trusted keyrings Mar 17 18:47:48.924396 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 17 18:47:48.924404 kernel: Key type asymmetric registered Mar 17 18:47:48.924414 kernel: Asymmetric key parser 'x509' registered Mar 17 18:47:48.924421 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 17 18:47:48.924429 kernel: io scheduler mq-deadline registered Mar 17 18:47:48.924436 kernel: io scheduler kyber registered Mar 17 18:47:48.924443 kernel: io scheduler bfq registered Mar 17 18:47:48.924452 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Mar 17 18:47:48.924522 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Mar 17 18:47:48.924589 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Mar 17 18:47:48.924658 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 18:47:48.924727 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Mar 17 18:47:48.924793 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 51 Mar 17 18:47:48.924878 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 18:47:48.924950 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Mar 17 18:47:48.925584 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Mar 17 18:47:48.925687 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 18:47:48.925760 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Mar 17 18:47:48.925845 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Mar 17 18:47:48.925926 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 18:47:48.926065 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Mar 17 18:47:48.926136 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Mar 17 18:47:48.926206 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 18:47:48.926276 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Mar 17 18:47:48.926340 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Mar 17 18:47:48.926406 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 18:47:48.926473 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Mar 17 18:47:48.926539 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Mar 17 18:47:48.926607 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 18:47:48.926676 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Mar 17 18:47:48.926741 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Mar 17 18:47:48.926806 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 18:47:48.926816 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Mar 17 18:47:48.926941 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Mar 17 18:47:48.927035 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Mar 17 18:47:48.927106 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 18:47:48.927117 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Mar 17 18:47:48.927125 kernel: ACPI: button: Power Button [PWRB] Mar 17 18:47:48.927133 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 17 18:47:48.927207 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Mar 17 18:47:48.927282 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Mar 17 18:47:48.927293 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 17 18:47:48.927305 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Mar 17 18:47:48.927374 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Mar 17 18:47:48.927385 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Mar 17 18:47:48.927393 kernel: thunder_xcv, ver 1.0 Mar 17 18:47:48.927400 kernel: thunder_bgx, ver 1.0 Mar 17 18:47:48.927407 kernel: nicpf, ver 1.0 Mar 17 18:47:48.927415 kernel: nicvf, ver 
1.0 Mar 17 18:47:48.927497 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 17 18:47:48.927563 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T18:47:48 UTC (1742237268) Mar 17 18:47:48.927576 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 17 18:47:48.927583 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Mar 17 18:47:48.927591 kernel: watchdog: Delayed init of the lockup detector failed: -19 Mar 17 18:47:48.927601 kernel: watchdog: Hard watchdog permanently disabled Mar 17 18:47:48.927609 kernel: NET: Registered PF_INET6 protocol family Mar 17 18:47:48.927618 kernel: Segment Routing with IPv6 Mar 17 18:47:48.927627 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 18:47:48.927636 kernel: NET: Registered PF_PACKET protocol family Mar 17 18:47:48.927646 kernel: Key type dns_resolver registered Mar 17 18:47:48.927656 kernel: registered taskstats version 1 Mar 17 18:47:48.927663 kernel: Loading compiled-in X.509 certificates Mar 17 18:47:48.927671 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: f4ff2820cf7379ce82b759137d15b536f0a99b51' Mar 17 18:47:48.927679 kernel: Key type .fscrypt registered Mar 17 18:47:48.927686 kernel: Key type fscrypt-provisioning registered Mar 17 18:47:48.927694 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 17 18:47:48.927701 kernel: ima: Allocated hash algorithm: sha1 Mar 17 18:47:48.927710 kernel: ima: No architecture policies found Mar 17 18:47:48.927721 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 17 18:47:48.927730 kernel: clk: Disabling unused clocks Mar 17 18:47:48.927739 kernel: Freeing unused kernel memory: 38336K Mar 17 18:47:48.927748 kernel: Run /init as init process Mar 17 18:47:48.927756 kernel: with arguments: Mar 17 18:47:48.927766 kernel: /init Mar 17 18:47:48.927773 kernel: with environment: Mar 17 18:47:48.927782 kernel: HOME=/ Mar 17 18:47:48.927790 kernel: TERM=linux Mar 17 18:47:48.927800 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 18:47:48.927809 systemd[1]: Successfully made /usr/ read-only. Mar 17 18:47:48.927819 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 17 18:47:48.927842 systemd[1]: Detected virtualization kvm. Mar 17 18:47:48.927850 systemd[1]: Detected architecture arm64. Mar 17 18:47:48.927858 systemd[1]: Running in initrd. Mar 17 18:47:48.927866 systemd[1]: No hostname configured, using default hostname. Mar 17 18:47:48.927876 systemd[1]: Hostname set to . Mar 17 18:47:48.927884 systemd[1]: Initializing machine ID from VM UUID. Mar 17 18:47:48.927892 systemd[1]: Queued start job for default target initrd.target. Mar 17 18:47:48.927900 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 18:47:48.927910 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 18:47:48.927919 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 17 18:47:48.927927 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
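In the rtc-efi line above, the value in parentheses is the same instant as the ISO timestamp, expressed in Unix epoch seconds; a quick conversion confirms it:

    # The epoch value logged by rtc-efi matches the ISO timestamp next to it.
    from datetime import datetime, timezone
    print(datetime.fromtimestamp(1742237268, tz=timezone.utc).isoformat())
    # -> 2025-03-17T18:47:48+00:00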
Mar 17 18:47:48.927936 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 17 18:47:48.927946 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 17 18:47:48.927955 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 17 18:47:48.927963 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 17 18:47:48.927971 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 18:47:48.927979 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 18:47:48.929870 systemd[1]: Reached target paths.target - Path Units. Mar 17 18:47:48.929880 systemd[1]: Reached target slices.target - Slice Units. Mar 17 18:47:48.929895 systemd[1]: Reached target swap.target - Swaps. Mar 17 18:47:48.929903 systemd[1]: Reached target timers.target - Timer Units. Mar 17 18:47:48.929911 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 18:47:48.929919 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 18:47:48.929927 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 17 18:47:48.929935 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 17 18:47:48.929944 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 18:47:48.929952 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 18:47:48.929960 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 18:47:48.929970 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 18:47:48.929978 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 17 18:47:48.930024 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 18:47:48.930032 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 17 18:47:48.930040 systemd[1]: Starting systemd-fsck-usr.service... Mar 17 18:47:48.930048 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 18:47:48.930056 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 18:47:48.930064 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 18:47:48.930075 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 17 18:47:48.930082 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 18:47:48.930091 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 18:47:48.930141 systemd-journald[235]: Collecting audit messages is disabled. Mar 17 18:47:48.930165 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 18:47:48.930173 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 18:47:48.930182 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 17 18:47:48.930191 kernel: Bridge firewalling registered Mar 17 18:47:48.930199 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 18:47:48.930209 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Mar 17 18:47:48.930217 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 18:47:48.930225 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 18:47:48.930234 systemd-journald[235]: Journal started Mar 17 18:47:48.930254 systemd-journald[235]: Runtime Journal (/run/log/journal/4931fcee3a444fd7b0b2f2ad836154f5) is 8M, max 76.6M, 68.6M free. Mar 17 18:47:48.887527 systemd-modules-load[237]: Inserted module 'overlay' Mar 17 18:47:48.908472 systemd-modules-load[237]: Inserted module 'br_netfilter' Mar 17 18:47:48.937007 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 18:47:48.937068 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 18:47:48.952320 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 18:47:48.957000 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 18:47:48.961371 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 18:47:48.972750 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 17 18:47:48.974747 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 18:47:48.977557 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 18:47:48.990503 dracut-cmdline[271]: dracut-dracut-053 Mar 17 18:47:48.991401 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 18:47:48.994077 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=f8298a09e890fc732131b7281e24befaf65b596eb5216e969c8eca4cab4a2b3a Mar 17 18:47:49.022413 systemd-resolved[279]: Positive Trust Anchors: Mar 17 18:47:49.023052 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 18:47:49.023087 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 18:47:49.032770 systemd-resolved[279]: Defaulting to hostname 'linux'. Mar 17 18:47:49.034016 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 18:47:49.035735 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 18:47:49.080019 kernel: SCSI subsystem initialized Mar 17 18:47:49.086017 kernel: Loading iSCSI transport class v2.0-870. Mar 17 18:47:49.094031 kernel: iscsi: registered transport (tcp) Mar 17 18:47:49.108132 kernel: iscsi: registered transport (qla4xxx) Mar 17 18:47:49.108270 kernel: QLogic iSCSI HBA Driver Mar 17 18:47:49.158365 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
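The positive trust anchor logged by systemd-resolved above is the DNSSEC DS record for the root zone; its four fields are the standard DS fields (key tag, algorithm, digest type, digest), where algorithm 8 is RSA/SHA-256 and digest type 2 is SHA-256:

    # Split the root trust anchor logged above into its DS record fields.
    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = ds.split()
    print(owner, int(key_tag), int(algorithm), int(digest_type))
    # -> . 20326 8 2  (key tag, RSA/SHA-256, SHA-256 digest)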
Mar 17 18:47:49.166292 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 17 18:47:49.186177 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 17 18:47:49.186273 kernel: device-mapper: uevent: version 1.0.3 Mar 17 18:47:49.187048 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 17 18:47:49.239023 kernel: raid6: neonx8 gen() 15624 MB/s Mar 17 18:47:49.255003 kernel: raid6: neonx4 gen() 15562 MB/s Mar 17 18:47:49.272050 kernel: raid6: neonx2 gen() 12698 MB/s Mar 17 18:47:49.289063 kernel: raid6: neonx1 gen() 10299 MB/s Mar 17 18:47:49.306152 kernel: raid6: int64x8 gen() 6657 MB/s Mar 17 18:47:49.324023 kernel: raid6: int64x4 gen() 7183 MB/s Mar 17 18:47:49.340077 kernel: raid6: int64x2 gen() 5928 MB/s Mar 17 18:47:49.357053 kernel: raid6: int64x1 gen() 5003 MB/s Mar 17 18:47:49.357143 kernel: raid6: using algorithm neonx8 gen() 15624 MB/s Mar 17 18:47:49.374046 kernel: raid6: .... xor() 11739 MB/s, rmw enabled Mar 17 18:47:49.374173 kernel: raid6: using neon recovery algorithm Mar 17 18:47:49.379271 kernel: xor: measuring software checksum speed Mar 17 18:47:49.379336 kernel: 8regs : 21647 MB/sec Mar 17 18:47:49.379364 kernel: 32regs : 17873 MB/sec Mar 17 18:47:49.379455 kernel: arm64_neon : 27993 MB/sec Mar 17 18:47:49.380016 kernel: xor: using function: arm64_neon (27993 MB/sec) Mar 17 18:47:49.436068 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 17 18:47:49.450079 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 17 18:47:49.456226 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 18:47:49.473779 systemd-udevd[457]: Using default interface naming scheme 'v255'. Mar 17 18:47:49.481067 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 18:47:49.492204 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 17 18:47:49.507112 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation Mar 17 18:47:49.546259 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 18:47:49.555380 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 18:47:49.612771 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 18:47:49.621456 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 17 18:47:49.644275 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 17 18:47:49.648294 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 18:47:49.651207 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 18:47:49.652727 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 18:47:49.662278 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 17 18:47:49.685247 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 17 18:47:49.737060 kernel: scsi host0: Virtio SCSI HBA Mar 17 18:47:49.739019 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 17 18:47:49.739138 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Mar 17 18:47:49.782452 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Mar 17 18:47:49.782587 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 18:47:49.784422 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 18:47:49.786147 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 18:47:49.789657 kernel: ACPI: bus type USB registered Mar 17 18:47:49.789681 kernel: sr 0:0:0:0: Power-on or device reset occurred Mar 17 18:47:49.790954 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Mar 17 18:47:49.791108 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 17 18:47:49.791127 kernel: usbcore: registered new interface driver usbfs Mar 17 18:47:49.791137 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Mar 17 18:47:49.786687 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 18:47:49.790318 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 18:47:49.797035 kernel: usbcore: registered new interface driver hub Mar 17 18:47:49.800605 kernel: usbcore: registered new device driver usb Mar 17 18:47:49.800760 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 18:47:49.826569 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 18:47:49.832200 kernel: sd 0:0:0:1: Power-on or device reset occurred Mar 17 18:47:49.842183 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Mar 17 18:47:49.842337 kernel: sd 0:0:0:1: [sda] Write Protect is off Mar 17 18:47:49.842439 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Mar 17 18:47:49.842584 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Mar 17 18:47:49.842689 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 17 18:47:49.842701 kernel: GPT:17805311 != 80003071 Mar 17 18:47:49.842711 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 18:47:49.842721 kernel: GPT:17805311 != 80003071 Mar 17 18:47:49.842730 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 18:47:49.842745 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 18:47:49.842756 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Mar 17 18:47:49.836301 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 18:47:49.847330 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 17 18:47:49.863085 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Mar 17 18:47:49.863239 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 17 18:47:49.863328 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 17 18:47:49.863417 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Mar 17 18:47:49.863498 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Mar 17 18:47:49.863580 kernel: hub 1-0:1.0: USB hub found Mar 17 18:47:49.863694 kernel: hub 1-0:1.0: 4 ports detected Mar 17 18:47:49.863775 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Mar 17 18:47:49.863903 kernel: hub 2-0:1.0: USB hub found Mar 17 18:47:49.864022 kernel: hub 2-0:1.0: 4 ports detected Mar 17 18:47:49.871265 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
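The GPT warnings above are the usual sign of a fixed-size image written to a larger virtual disk: the headers carried in the image place the backup GPT at LBA 17805311, i.e. at the end of a roughly 9 GB device, while the sd line in the same chunk reports 80003072 sectors (41.0 GB / 38.1 GiB). The disk-uuid step further down rewrites the headers to match. The arithmetic, using only values from the log:

    # Sizes behind "GPT:17805311 != 80003071". LBAs are zero-based, so add 1
    # to get sector counts; sector size is the logged 512 bytes.
    SECTOR = 512
    def fmt(last_lba):
        b = (last_lba + 1) * SECTOR
        return f"{b / 1e9:.1f} GB / {b / 2**30:.1f} GiB"
    print("image expects:", fmt(17805311))   # 9.1 GB / 8.5 GiB
    print("actual disk:  ", fmt(80003071))   # 41.0 GB / 38.1 GiB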
Mar 17 18:47:49.903012 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (508) Mar 17 18:47:49.905019 kernel: BTRFS: device fsid 5ecee764-de70-4de1-8711-3798360e0d13 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (509) Mar 17 18:47:49.930741 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Mar 17 18:47:49.939925 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Mar 17 18:47:49.948598 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Mar 17 18:47:49.955925 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Mar 17 18:47:49.956606 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Mar 17 18:47:49.973300 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 17 18:47:49.980217 disk-uuid[578]: Primary Header is updated. Mar 17 18:47:49.980217 disk-uuid[578]: Secondary Entries is updated. Mar 17 18:47:49.980217 disk-uuid[578]: Secondary Header is updated. Mar 17 18:47:49.987033 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 18:47:49.995468 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 18:47:50.101053 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 17 18:47:50.344070 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Mar 17 18:47:50.502059 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Mar 17 18:47:50.502155 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Mar 17 18:47:50.502444 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Mar 17 18:47:50.558870 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Mar 17 18:47:50.559368 kernel: usbcore: registered new interface driver usbhid Mar 17 18:47:50.559402 kernel: usbhid: USB HID core driver Mar 17 18:47:51.002330 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 17 18:47:51.002424 disk-uuid[579]: The operation has completed successfully. Mar 17 18:47:51.065013 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 18:47:51.065204 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 17 18:47:51.100296 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 17 18:47:51.105018 sh[593]: Success Mar 17 18:47:51.118045 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 17 18:47:51.186511 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 17 18:47:51.202209 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 17 18:47:51.205186 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
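verity-setup.service prepares /dev/mapper/usr, and the kernel picks the sha256-ce implementation for dm-verity. dm-verity authenticates every 4 KiB block of the read-only /usr image against precomputed hashes before returning it to the reader. The toy sketch below uses a flat hash list instead of the real Merkle tree, and the file path is hypothetical:

import hashlib

BLOCK = 4096

def hash_blocks(path):
    # Precompute the expected sha256 of every 4 KiB block (what the hash device stores).
    hashes = []
    with open(path, "rb") as f:
        while block := f.read(BLOCK):
            hashes.append(hashlib.sha256(block).digest())
    return hashes

def verify_block(path, index, expected_hashes):
    # On read, the block's hash must match the stored value or the I/O is rejected.
    with open(path, "rb") as f:
        f.seek(index * BLOCK)
        block = f.read(BLOCK)
    ok = hashlib.sha256(block).digest() == expected_hashes[index]
    print(f"block {index}: {'ok' if ok else 'corruption detected'}")
    return ok

if __name__ == "__main__":
    image = "/tmp/usr.img"            # hypothetical data device
    expected = hash_blocks(image)     # in reality stored on a separate hash device
    verify_block(image, 0, expected)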
Mar 17 18:47:51.235535 kernel: BTRFS info (device dm-0): first mount of filesystem 5ecee764-de70-4de1-8711-3798360e0d13 Mar 17 18:47:51.235613 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 17 18:47:51.235631 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 17 18:47:51.235648 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 17 18:47:51.236237 kernel: BTRFS info (device dm-0): using free space tree Mar 17 18:47:51.243062 kernel: BTRFS info (device dm-0): enabling ssd optimizations Mar 17 18:47:51.245566 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 17 18:47:51.247483 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 17 18:47:51.264324 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 17 18:47:51.270238 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 17 18:47:51.287192 kernel: BTRFS info (device sda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45 Mar 17 18:47:51.287255 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 18:47:51.287274 kernel: BTRFS info (device sda6): using free space tree Mar 17 18:47:51.291119 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 17 18:47:51.291182 kernel: BTRFS info (device sda6): auto enabling async discard Mar 17 18:47:51.304540 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 18:47:51.305904 kernel: BTRFS info (device sda6): last unmount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45 Mar 17 18:47:51.313073 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 17 18:47:51.318294 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 17 18:47:51.404704 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 18:47:51.418581 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 18:47:51.431613 ignition[696]: Ignition 2.20.0 Mar 17 18:47:51.431627 ignition[696]: Stage: fetch-offline Mar 17 18:47:51.433494 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 18:47:51.431685 ignition[696]: no configs at "/usr/lib/ignition/base.d" Mar 17 18:47:51.431698 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 18:47:51.431904 ignition[696]: parsed url from cmdline: "" Mar 17 18:47:51.431908 ignition[696]: no config URL provided Mar 17 18:47:51.431913 ignition[696]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:47:51.431921 ignition[696]: no config at "/usr/lib/ignition/user.ign" Mar 17 18:47:51.431927 ignition[696]: failed to fetch config: resource requires networking Mar 17 18:47:51.432364 ignition[696]: Ignition finished successfully Mar 17 18:47:51.447433 systemd-networkd[782]: lo: Link UP Mar 17 18:47:51.447445 systemd-networkd[782]: lo: Gained carrier Mar 17 18:47:51.450811 systemd-networkd[782]: Enumeration completed Mar 17 18:47:51.451291 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 18:47:51.451294 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 17 18:47:51.451781 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 18:47:51.453325 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 18:47:51.453328 systemd-networkd[782]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:47:51.453971 systemd-networkd[782]: eth0: Link UP Mar 17 18:47:51.453974 systemd-networkd[782]: eth0: Gained carrier Mar 17 18:47:51.454011 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 18:47:51.454579 systemd[1]: Reached target network.target - Network. Mar 17 18:47:51.459368 systemd-networkd[782]: eth1: Link UP Mar 17 18:47:51.459373 systemd-networkd[782]: eth1: Gained carrier Mar 17 18:47:51.459388 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 18:47:51.465247 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 17 18:47:51.481435 ignition[786]: Ignition 2.20.0 Mar 17 18:47:51.482253 ignition[786]: Stage: fetch Mar 17 18:47:51.482480 ignition[786]: no configs at "/usr/lib/ignition/base.d" Mar 17 18:47:51.482493 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 18:47:51.482607 ignition[786]: parsed url from cmdline: "" Mar 17 18:47:51.482612 ignition[786]: no config URL provided Mar 17 18:47:51.482617 ignition[786]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:47:51.482628 ignition[786]: no config at "/usr/lib/ignition/user.ign" Mar 17 18:47:51.482719 ignition[786]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Mar 17 18:47:51.483622 ignition[786]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Mar 17 18:47:51.497098 systemd-networkd[782]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 18:47:51.511086 systemd-networkd[782]: eth0: DHCPv4 address 138.201.89.219/32, gateway 172.31.1.1 acquired from 172.31.1.1 Mar 17 18:47:51.684207 ignition[786]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Mar 17 18:47:51.690731 ignition[786]: GET result: OK Mar 17 18:47:51.690941 ignition[786]: parsing config with SHA512: 2c8a94f8a7044cc28dfe222e1f2d50306132c7dc28b051de962195eecbd9516f9d7dc9acf1cc8786769315a27344ce060f6ee90d426f0c6df40ffbed637e0144 Mar 17 18:47:51.700700 unknown[786]: fetched base config from "system" Mar 17 18:47:51.700720 unknown[786]: fetched base config from "system" Mar 17 18:47:51.702492 ignition[786]: fetch: fetch complete Mar 17 18:47:51.700726 unknown[786]: fetched user config from "hetzner" Mar 17 18:47:51.702500 ignition[786]: fetch: fetch passed Mar 17 18:47:51.702587 ignition[786]: Ignition finished successfully Mar 17 18:47:51.709304 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 17 18:47:51.714229 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
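The fetch stage above fails its first GET to the Hetzner metadata service with "network is unreachable", succeeds on attempt #2 once DHCP has configured the interfaces, and then logs the SHA512 of the retrieved config. A minimal retry-and-hash sketch of that flow; the endpoint is taken from the log, while the retry count and delay are assumptions:

import hashlib
import time
import urllib.error
import urllib.request

USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"

def fetch_userdata(retries=5, delay=2.0):
    for attempt in range(1, retries + 1):
        print(f"GET {USERDATA_URL}: attempt #{attempt}")
        try:
            with urllib.request.urlopen(USERDATA_URL, timeout=10) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as err:
            # Early in boot the link-local route may not exist yet; retry after a pause.
            print(f"GET error: {err}")
            time.sleep(delay)
    raise RuntimeError("failed to fetch config: resource requires networking")

if __name__ == "__main__":
    data = fetch_userdata()
    print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())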
Mar 17 18:47:51.731968 ignition[794]: Ignition 2.20.0 Mar 17 18:47:51.732018 ignition[794]: Stage: kargs Mar 17 18:47:51.732315 ignition[794]: no configs at "/usr/lib/ignition/base.d" Mar 17 18:47:51.732335 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 18:47:51.734366 ignition[794]: kargs: kargs passed Mar 17 18:47:51.734465 ignition[794]: Ignition finished successfully Mar 17 18:47:51.738220 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 17 18:47:51.749253 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 17 18:47:51.762197 ignition[800]: Ignition 2.20.0 Mar 17 18:47:51.762207 ignition[800]: Stage: disks Mar 17 18:47:51.762408 ignition[800]: no configs at "/usr/lib/ignition/base.d" Mar 17 18:47:51.762418 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 18:47:51.764694 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 17 18:47:51.763468 ignition[800]: disks: disks passed Mar 17 18:47:51.767208 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 17 18:47:51.763530 ignition[800]: Ignition finished successfully Mar 17 18:47:51.768524 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 17 18:47:51.769134 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 18:47:51.770021 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 18:47:51.771461 systemd[1]: Reached target basic.target - Basic System. Mar 17 18:47:51.778228 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 17 18:47:51.802408 systemd-fsck[809]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Mar 17 18:47:51.805673 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 17 18:47:52.250202 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 17 18:47:52.316297 kernel: EXT4-fs (sda9): mounted filesystem 3914ef65-c5cd-468c-8ee7-964383d8e9e2 r/w with ordered data mode. Quota mode: none. Mar 17 18:47:52.317732 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 17 18:47:52.319830 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 17 18:47:52.327193 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 18:47:52.333231 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 17 18:47:52.336854 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 17 18:47:52.340643 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 18:47:52.341762 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 18:47:52.347398 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 17 18:47:52.357032 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (817) Mar 17 18:47:52.360545 kernel: BTRFS info (device sda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45 Mar 17 18:47:52.360599 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 18:47:52.360610 kernel: BTRFS info (device sda6): using free space tree Mar 17 18:47:52.360235 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 17 18:47:52.378015 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 17 18:47:52.378098 kernel: BTRFS info (device sda6): auto enabling async discard Mar 17 18:47:52.386498 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 17 18:47:52.423282 coreos-metadata[819]: Mar 17 18:47:52.422 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Mar 17 18:47:52.424848 coreos-metadata[819]: Mar 17 18:47:52.424 INFO Fetch successful Mar 17 18:47:52.425818 coreos-metadata[819]: Mar 17 18:47:52.425 INFO wrote hostname ci-4230-1-0-9-a87a0d0143 to /sysroot/etc/hostname Mar 17 18:47:52.429009 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 17 18:47:52.430349 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 18:47:52.435911 initrd-setup-root[852]: cut: /sysroot/etc/group: No such file or directory Mar 17 18:47:52.441313 initrd-setup-root[859]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 18:47:52.446461 initrd-setup-root[866]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 18:47:52.553372 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 17 18:47:52.563191 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 17 18:47:52.568207 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 17 18:47:52.579015 kernel: BTRFS info (device sda6): last unmount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45 Mar 17 18:47:52.604014 ignition[934]: INFO : Ignition 2.20.0 Mar 17 18:47:52.604014 ignition[934]: INFO : Stage: mount Mar 17 18:47:52.604014 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 18:47:52.604014 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 18:47:52.606990 ignition[934]: INFO : mount: mount passed Mar 17 18:47:52.606990 ignition[934]: INFO : Ignition finished successfully Mar 17 18:47:52.607605 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 17 18:47:52.614149 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 17 18:47:52.614929 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 17 18:47:53.001527 systemd-networkd[782]: eth1: Gained IPv6LL Mar 17 18:47:53.001909 systemd-networkd[782]: eth0: Gained IPv6LL Mar 17 18:47:53.236926 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 17 18:47:53.242345 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 18:47:53.254010 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (945) Mar 17 18:47:53.256041 kernel: BTRFS info (device sda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45 Mar 17 18:47:53.256116 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 18:47:53.256134 kernel: BTRFS info (device sda6): using free space tree Mar 17 18:47:53.259328 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 17 18:47:53.259406 kernel: BTRFS info (device sda6): auto enabling async discard Mar 17 18:47:53.262547 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
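flatcar-metadata-hostname fetches the hostname from the metadata service and writes it into the not-yet-switched root, as the "wrote hostname ci-4230-1-0-9-a87a0d0143 to /sysroot/etc/hostname" line records. A short sketch of that write step, assuming the hostname has already been fetched as in the earlier snippet:

from pathlib import Path

def write_hostname(hostname: str, sysroot: str = "/sysroot") -> None:
    # The initrd writes into the future root filesystem before switch-root.
    target = Path(sysroot) / "etc" / "hostname"
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(hostname.strip() + "\n")
    print(f"wrote hostname {hostname} to {target}")

if __name__ == "__main__":
    write_hostname("ci-4230-1-0-9-a87a0d0143")  # value taken from the log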
Mar 17 18:47:53.283722 ignition[962]: INFO : Ignition 2.20.0 Mar 17 18:47:53.285014 ignition[962]: INFO : Stage: files Mar 17 18:47:53.285014 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 18:47:53.285014 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 18:47:53.286669 ignition[962]: DEBUG : files: compiled without relabeling support, skipping Mar 17 18:47:53.288856 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 18:47:53.288856 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 18:47:53.292939 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 18:47:53.294714 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 18:47:53.294714 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 18:47:53.293507 unknown[962]: wrote ssh authorized keys file for user: core Mar 17 18:47:53.299488 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 17 18:47:53.299488 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Mar 17 18:47:53.396561 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 17 18:47:53.530459 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 17 18:47:53.530459 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 18:47:53.533499 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Mar 17 18:47:54.205804 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 17 18:47:54.546048 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 18:47:54.546048 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 17 18:47:54.546048 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 18:47:54.546048 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 17 18:47:54.546048 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 17 18:47:54.546048 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 18:47:54.546048 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 18:47:54.553639 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 18:47:54.553639 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 18:47:54.553639 
ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:47:54.553639 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:47:54.553639 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 17 18:47:54.553639 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 17 18:47:54.553639 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 17 18:47:54.553639 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Mar 17 18:47:55.157032 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 17 18:47:56.524563 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 17 18:47:56.524563 ignition[962]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Mar 17 18:47:56.528830 ignition[962]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 18:47:56.528830 ignition[962]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 18:47:56.528830 ignition[962]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 17 18:47:56.528830 ignition[962]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Mar 17 18:47:56.528830 ignition[962]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Mar 17 18:47:56.528830 ignition[962]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Mar 17 18:47:56.528830 ignition[962]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Mar 17 18:47:56.528830 ignition[962]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Mar 17 18:47:56.528830 ignition[962]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Mar 17 18:47:56.537384 ignition[962]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:47:56.537384 ignition[962]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:47:56.537384 ignition[962]: INFO : files: files passed Mar 17 18:47:56.537384 ignition[962]: INFO : Ignition finished successfully Mar 17 18:47:56.532258 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 17 18:47:56.541257 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
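The files stage above is driven by the Ignition config fetched earlier: it downloads the helm tarball, the cilium CLI and the kubernetes sysext image, creates the /etc/extensions/kubernetes.raw link, and enables prepare-helm.service. The snippet below renders, from Python, a schematic of what such a config could look like; the field names follow the general shape of an Ignition 3.x document but have not been validated against the schema, so treat them as an assumption:

import json

# Illustrative shape only; URLs and paths are copied from the log.
config = {
    "ignition": {"version": "3.4.0"},
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"},
            },
            {
                "path": "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw",
                "contents": {
                    "source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw"
                },
            },
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw",
            }
        ],
    },
    "systemd": {"units": [{"name": "prepare-helm.service", "enabled": True}]},
}

print(json.dumps(config, indent=2))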
Mar 17 18:47:56.545243 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 17 18:47:56.550937 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 18:47:56.552568 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 17 18:47:56.573089 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 18:47:56.573089 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 17 18:47:56.579600 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 18:47:56.586495 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 18:47:56.587530 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 17 18:47:56.594217 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 17 18:47:56.636650 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 18:47:56.636847 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 17 18:47:56.638852 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 17 18:47:56.640027 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 17 18:47:56.641230 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 17 18:47:56.642792 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 17 18:47:56.663801 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 18:47:56.674417 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 17 18:47:56.686953 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 17 18:47:56.688114 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 18:47:56.689369 systemd[1]: Stopped target timers.target - Timer Units. Mar 17 18:47:56.690389 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 18:47:56.690521 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 18:47:56.691971 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 17 18:47:56.692561 systemd[1]: Stopped target basic.target - Basic System. Mar 17 18:47:56.693800 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 17 18:47:56.694852 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 18:47:56.695769 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 17 18:47:56.696839 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 17 18:47:56.697873 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 18:47:56.699151 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 17 18:47:56.700213 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 17 18:47:56.701274 systemd[1]: Stopped target swap.target - Swaps. Mar 17 18:47:56.702106 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 18:47:56.702239 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 17 18:47:56.704018 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Mar 17 18:47:56.705129 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 18:47:56.706138 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 17 18:47:56.706584 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 18:47:56.707325 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 18:47:56.707456 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 17 18:47:56.709141 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 18:47:56.709280 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 18:47:56.710640 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 18:47:56.711101 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 17 18:47:56.712168 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 17 18:47:56.712279 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 17 18:47:56.720246 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 17 18:47:56.725174 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 17 18:47:56.725763 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 18:47:56.725901 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 18:47:56.730018 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 18:47:56.730129 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 18:47:56.738017 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 18:47:56.738125 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 17 18:47:56.746744 ignition[1015]: INFO : Ignition 2.20.0 Mar 17 18:47:56.749102 ignition[1015]: INFO : Stage: umount Mar 17 18:47:56.749102 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 18:47:56.749102 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 18:47:56.748554 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 18:47:56.753949 ignition[1015]: INFO : umount: umount passed Mar 17 18:47:56.753949 ignition[1015]: INFO : Ignition finished successfully Mar 17 18:47:56.756647 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 18:47:56.756845 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 17 18:47:56.758461 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 18:47:56.758570 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 17 18:47:56.761603 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 18:47:56.761744 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 17 18:47:56.762436 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 18:47:56.762499 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 17 18:47:56.763367 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 17 18:47:56.763436 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 17 18:47:56.764299 systemd[1]: Stopped target network.target - Network. Mar 17 18:47:56.765135 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 18:47:56.765214 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 17 18:47:56.766200 systemd[1]: Stopped target paths.target - Path Units. Mar 17 18:47:56.767057 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 18:47:56.771089 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 18:47:56.773415 systemd[1]: Stopped target slices.target - Slice Units. Mar 17 18:47:56.774494 systemd[1]: Stopped target sockets.target - Socket Units. Mar 17 18:47:56.776090 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 18:47:56.776172 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 18:47:56.778043 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 18:47:56.778129 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 18:47:56.779245 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 18:47:56.779315 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 17 18:47:56.780076 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 17 18:47:56.780120 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 17 18:47:56.781690 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 18:47:56.781754 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 17 18:47:56.782811 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 17 18:47:56.783510 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 17 18:47:56.791547 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 18:47:56.791722 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 17 18:47:56.796002 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 17 18:47:56.796389 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 17 18:47:56.796438 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 18:47:56.799575 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 17 18:47:56.799894 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 18:47:56.800141 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 17 18:47:56.802503 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 17 18:47:56.803311 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 18:47:56.803382 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 17 18:47:56.809165 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 17 18:47:56.809825 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 18:47:56.809898 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 18:47:56.810953 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:47:56.811031 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 18:47:56.812557 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 18:47:56.812612 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 17 18:47:56.814305 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 18:47:56.818722 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Mar 17 18:47:56.828418 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 18:47:56.828559 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 17 18:47:56.832832 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 18:47:56.833766 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 18:47:56.834874 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 18:47:56.834972 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 17 18:47:56.835763 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 18:47:56.835795 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 18:47:56.836917 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 18:47:56.836972 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 17 18:47:56.839131 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 18:47:56.839191 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 17 18:47:56.840559 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 18:47:56.840607 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 18:47:56.850652 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 17 18:47:56.851616 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 18:47:56.851701 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 18:47:56.852963 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 17 18:47:56.853054 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 18:47:56.855132 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 18:47:56.855181 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 18:47:56.856093 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 18:47:56.856139 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 18:47:56.862781 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 18:47:56.862917 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 17 18:47:56.867516 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 17 18:47:56.881186 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 17 18:47:56.890546 systemd[1]: Switching root. Mar 17 18:47:56.923357 systemd-journald[235]: Journal stopped Mar 17 18:47:58.043620 systemd-journald[235]: Received SIGTERM from PID 1 (systemd). 
Mar 17 18:47:58.043730 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 18:47:58.043745 kernel: SELinux: policy capability open_perms=1 Mar 17 18:47:58.043759 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 18:47:58.043769 kernel: SELinux: policy capability always_check_network=0 Mar 17 18:47:58.043779 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 18:47:58.043788 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 18:47:58.043798 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 18:47:58.043811 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 18:47:58.043821 kernel: audit: type=1403 audit(1742237277.093:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 18:47:58.043832 systemd[1]: Successfully loaded SELinux policy in 38.747ms. Mar 17 18:47:58.043856 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.918ms. Mar 17 18:47:58.043870 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 17 18:47:58.043881 systemd[1]: Detected virtualization kvm. Mar 17 18:47:58.043896 systemd[1]: Detected architecture arm64. Mar 17 18:47:58.043906 systemd[1]: Detected first boot. Mar 17 18:47:58.043917 systemd[1]: Hostname set to . Mar 17 18:47:58.043927 systemd[1]: Initializing machine ID from VM UUID. Mar 17 18:47:58.043938 zram_generator::config[1063]: No configuration found. Mar 17 18:47:58.043949 kernel: NET: Registered PF_VSOCK protocol family Mar 17 18:47:58.043961 systemd[1]: Populated /etc with preset unit settings. Mar 17 18:47:58.043972 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 17 18:47:58.046061 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 18:47:58.046110 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 17 18:47:58.046122 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 18:47:58.046133 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 17 18:47:58.046144 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 17 18:47:58.046155 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 17 18:47:58.046175 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 17 18:47:58.046185 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 17 18:47:58.046196 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 17 18:47:58.046206 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 17 18:47:58.046216 systemd[1]: Created slice user.slice - User and Session Slice. Mar 17 18:47:58.046227 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 18:47:58.046238 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 18:47:58.046249 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 17 18:47:58.046258 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
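After switch-root, systemd reports "Detected virtualization kvm" and "Initializing machine ID from VM UUID": on first boot in a VM the machine ID is seeded from the hypervisor-provided SMBIOS product UUID instead of being generated randomly. A small sketch of that derivation; the sysfs path is the standard DMI location, and the exact normalization systemd applies is assumed here:

from pathlib import Path

def machine_id_from_vm_uuid(dmi="/sys/class/dmi/id/product_uuid"):
    # The SMBIOS product UUID is exposed by the hypervisor (QEMU here).
    uuid = Path(dmi).read_text().strip()
    # /etc/machine-id holds 32 lowercase hex digits with no dashes.
    return uuid.replace("-", "").lower()

if __name__ == "__main__":
    print(machine_id_from_vm_uuid())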
Mar 17 18:47:58.046276 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 17 18:47:58.046287 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 18:47:58.046297 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 17 18:47:58.046308 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 18:47:58.046318 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 17 18:47:58.046328 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 17 18:47:58.046341 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 17 18:47:58.046371 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 17 18:47:58.046386 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 18:47:58.046397 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 18:47:58.046408 systemd[1]: Reached target slices.target - Slice Units. Mar 17 18:47:58.046419 systemd[1]: Reached target swap.target - Swaps. Mar 17 18:47:58.046429 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 17 18:47:58.046439 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 17 18:47:58.046456 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 17 18:47:58.046469 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 18:47:58.046480 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 18:47:58.046491 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 18:47:58.046501 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 17 18:47:58.046512 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 17 18:47:58.046522 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 17 18:47:58.046535 systemd[1]: Mounting media.mount - External Media Directory... Mar 17 18:47:58.046545 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 17 18:47:58.046555 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 17 18:47:58.046565 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 17 18:47:58.046577 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 18:47:58.046587 systemd[1]: Reached target machines.target - Containers. Mar 17 18:47:58.046597 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 17 18:47:58.046608 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 18:47:58.046623 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 18:47:58.046637 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 17 18:47:58.046648 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 18:47:58.046658 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 18:47:58.046669 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Mar 17 18:47:58.046679 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 17 18:47:58.046689 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 18:47:58.046713 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 18:47:58.046726 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 18:47:58.046740 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 17 18:47:58.046751 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 18:47:58.046761 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 18:47:58.046772 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 18:47:58.046782 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 18:47:58.046793 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 18:47:58.046809 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 17 18:47:58.046823 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 17 18:47:58.046836 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 17 18:47:58.046846 kernel: loop: module loaded Mar 17 18:47:58.046858 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 18:47:58.046878 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 18:47:58.046890 systemd[1]: Stopped verity-setup.service. Mar 17 18:47:58.046900 kernel: fuse: init (API version 7.39) Mar 17 18:47:58.046913 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 17 18:47:58.046930 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 17 18:47:58.046945 systemd[1]: Mounted media.mount - External Media Directory. Mar 17 18:47:58.046956 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 17 18:47:58.046969 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 17 18:47:58.047008 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 17 18:47:58.047022 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 18:47:58.047033 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 18:47:58.047043 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 17 18:47:58.047054 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:47:58.047064 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 18:47:58.047074 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:47:58.047085 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 18:47:58.047095 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 18:47:58.047109 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 17 18:47:58.047120 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:47:58.047132 kernel: ACPI: bus type drm_connector registered Mar 17 18:47:58.047142 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
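Several of the "was skipped because of an unmet condition check" lines above come from unit conditions such as ConditionPathExists=!/etc/nsswitch.conf, where a leading '!' negates the test. A tiny sketch of how that particular condition form evaluates:

import os

def condition_path_exists(expr: str) -> bool:
    # Mirrors the shape of ConditionPathExists=: a leading '!' negates the test.
    negate = expr.startswith("!")
    path = expr.lstrip("!")
    exists = os.path.exists(path)
    return (not exists) if negate else exists

if __name__ == "__main__":
    # setup-nsswitch.service was skipped above because /etc/nsswitch.conf already existed.
    print(condition_path_exists("!/etc/nsswitch.conf"))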
Mar 17 18:47:58.047152 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 18:47:58.047163 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:47:58.047173 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 18:47:58.047183 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 17 18:47:58.047195 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 17 18:47:58.047205 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 17 18:47:58.047216 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 18:47:58.047231 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 18:47:58.047242 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 17 18:47:58.047295 systemd-journald[1127]: Collecting audit messages is disabled. Mar 17 18:47:58.047319 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 17 18:47:58.047334 systemd-journald[1127]: Journal started Mar 17 18:47:58.047360 systemd-journald[1127]: Runtime Journal (/run/log/journal/4931fcee3a444fd7b0b2f2ad836154f5) is 8M, max 76.6M, 68.6M free. Mar 17 18:47:58.051083 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 17 18:47:57.702830 systemd[1]: Queued start job for default target multi-user.target. Mar 17 18:47:57.715644 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Mar 17 18:47:57.716191 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 18:47:58.055341 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 18:47:58.059810 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 17 18:47:58.064080 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:47:58.068011 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 17 18:47:58.070447 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 18:47:58.081518 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 18:47:58.095331 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 17 18:47:58.103529 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 18:47:58.107262 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 18:47:58.111071 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 17 18:47:58.112368 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 17 18:47:58.114511 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 17 18:47:58.118631 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 17 18:47:58.121362 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 17 18:47:58.123621 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Mar 17 18:47:58.152366 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 18:47:58.154043 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 17 18:47:58.155012 kernel: loop0: detected capacity change from 0 to 123192 Mar 17 18:47:58.173478 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 17 18:47:58.175189 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 17 18:47:58.186462 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 17 18:47:58.195389 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 18:47:58.200236 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 17 18:47:58.207782 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 17 18:47:58.212502 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 18:47:58.228600 systemd-journald[1127]: Time spent on flushing to /var/log/journal/4931fcee3a444fd7b0b2f2ad836154f5 is 50.435ms for 1152 entries. Mar 17 18:47:58.228600 systemd-journald[1127]: System Journal (/var/log/journal/4931fcee3a444fd7b0b2f2ad836154f5) is 8M, max 584.8M, 576.8M free. Mar 17 18:47:58.305790 systemd-journald[1127]: Received client request to flush runtime journal. Mar 17 18:47:58.305843 kernel: loop1: detected capacity change from 0 to 189592 Mar 17 18:47:58.305856 kernel: loop2: detected capacity change from 0 to 8 Mar 17 18:47:58.246956 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 17 18:47:58.259809 systemd-tmpfiles[1161]: ACLs are not supported, ignoring. Mar 17 18:47:58.259843 systemd-tmpfiles[1161]: ACLs are not supported, ignoring. Mar 17 18:47:58.270842 udevadm[1192]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 18:47:58.271687 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 18:47:58.284296 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 17 18:47:58.309943 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 17 18:47:58.336026 kernel: loop3: detected capacity change from 0 to 113512 Mar 17 18:47:58.354453 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 17 18:47:58.360314 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 18:47:58.387523 kernel: loop4: detected capacity change from 0 to 123192 Mar 17 18:47:58.403517 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Mar 17 18:47:58.403537 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Mar 17 18:47:58.420555 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 18:47:58.423332 kernel: loop5: detected capacity change from 0 to 189592 Mar 17 18:47:58.462022 kernel: loop6: detected capacity change from 0 to 8 Mar 17 18:47:58.465194 kernel: loop7: detected capacity change from 0 to 113512 Mar 17 18:47:58.486605 (sd-merge)[1207]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Mar 17 18:47:58.487357 (sd-merge)[1207]: Merged extensions into '/usr'. Mar 17 18:47:58.494298 systemd[1]: Reload requested from client PID 1160 ('systemd-sysext') (unit systemd-sysext.service)... 
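The (sd-merge) lines show systemd-sysext finding the containerd-flatcar, docker-flatcar, kubernetes and oem-hetzner extension images and merging them into /usr. Conceptually the merge is a read-only overlayfs mount that stacks the extensions' /usr trees above the base /usr. The sketch below only composes such a mount command without running it; the unpacked layer paths are hypothetical:

def sysext_overlay_cmd(extension_usr_dirs, base="/usr"):
    # In overlayfs, earlier lowerdir entries sit on top, so extensions are listed
    # before the base /usr to take precedence over it.
    lower = ":".join(list(extension_usr_dirs) + [base])
    return ["mount", "-t", "overlay", "overlay", "-o", f"lowerdir={lower}", base]

if __name__ == "__main__":
    cmd = sysext_overlay_cmd([
        "/run/extensions/kubernetes/usr",      # hypothetical unpacked layer paths
        "/run/extensions/docker-flatcar/usr",
    ])
    print(" ".join(cmd))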
Mar 17 18:47:58.494427 systemd[1]: Reloading... Mar 17 18:47:58.679029 zram_generator::config[1237]: No configuration found. Mar 17 18:47:58.680975 ldconfig[1156]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 18:47:58.801265 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:47:58.867270 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 18:47:58.867400 systemd[1]: Reloading finished in 370 ms. Mar 17 18:47:58.883548 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 17 18:47:58.884697 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 17 18:47:58.897252 systemd[1]: Starting ensure-sysext.service... Mar 17 18:47:58.906304 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 18:47:58.924629 systemd[1]: Reload requested from client PID 1273 ('systemctl') (unit ensure-sysext.service)... Mar 17 18:47:58.924645 systemd[1]: Reloading... Mar 17 18:47:58.948436 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 18:47:58.949404 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 17 18:47:58.950297 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 18:47:58.950659 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Mar 17 18:47:58.950803 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Mar 17 18:47:58.958480 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 18:47:58.958490 systemd-tmpfiles[1274]: Skipping /boot Mar 17 18:47:58.979237 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 18:47:58.981045 systemd-tmpfiles[1274]: Skipping /boot Mar 17 18:47:59.029021 zram_generator::config[1303]: No configuration found. Mar 17 18:47:59.149931 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:47:59.214427 systemd[1]: Reloading finished in 289 ms. Mar 17 18:47:59.229851 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 17 18:47:59.245121 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 18:47:59.258482 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 18:47:59.270093 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 17 18:47:59.275385 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 17 18:47:59.285323 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 18:47:59.290597 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 18:47:59.294840 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 17 18:47:59.299850 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Mar 17 18:47:59.308353 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 18:47:59.311346 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 18:47:59.315913 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 18:47:59.316710 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 18:47:59.316868 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 18:47:59.321339 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 17 18:47:59.328564 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:47:59.329137 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 18:47:59.331737 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 18:47:59.331932 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 18:47:59.332065 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 18:47:59.339325 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 18:47:59.346731 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 18:47:59.351317 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 18:47:59.352184 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 18:47:59.352316 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 18:47:59.359535 systemd[1]: Finished ensure-sysext.service. Mar 17 18:47:59.367657 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 17 18:47:59.369619 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 17 18:47:59.375249 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 17 18:47:59.398255 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 17 18:47:59.422209 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:47:59.422494 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 18:47:59.424271 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:47:59.424889 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 18:47:59.427833 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 17 18:47:59.428855 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:47:59.429460 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 18:47:59.430665 systemd-udevd[1352]: Using default interface naming scheme 'v255'. 
Mar 17 18:47:59.431457 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:47:59.431699 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 18:47:59.433946 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 17 18:47:59.441439 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:47:59.441540 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 18:47:59.441565 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:47:59.452136 augenrules[1385]: No rules Mar 17 18:47:59.453884 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 18:47:59.454943 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 18:47:59.477073 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 17 18:47:59.479254 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 18:47:59.491631 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 18:47:59.632391 systemd-networkd[1400]: lo: Link UP Mar 17 18:47:59.632402 systemd-networkd[1400]: lo: Gained carrier Mar 17 18:47:59.634358 systemd-networkd[1400]: Enumeration completed Mar 17 18:47:59.634505 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 18:47:59.648834 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 17 18:47:59.653194 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 17 18:47:59.653794 systemd-resolved[1346]: Positive Trust Anchors: Mar 17 18:47:59.653829 systemd-resolved[1346]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 18:47:59.653912 systemd-resolved[1346]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 18:47:59.660941 systemd-resolved[1346]: Using system hostname 'ci-4230-1-0-9-a87a0d0143'. Mar 17 18:47:59.661799 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 17 18:47:59.662904 systemd[1]: Reached target time-set.target - System Time Set. Mar 17 18:47:59.665221 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 18:47:59.665945 systemd[1]: Reached target network.target - Network. Mar 17 18:47:59.666461 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 18:47:59.685667 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 17 18:47:59.694503 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
Mar 17 18:47:59.756577 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 18:47:59.756593 systemd-networkd[1400]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:47:59.757936 systemd-networkd[1400]: eth0: Link UP Mar 17 18:47:59.757945 systemd-networkd[1400]: eth0: Gained carrier Mar 17 18:47:59.757970 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 18:47:59.780946 systemd-networkd[1400]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 18:47:59.781348 systemd-networkd[1400]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:47:59.782558 systemd-networkd[1400]: eth1: Link UP Mar 17 18:47:59.782567 systemd-networkd[1400]: eth1: Gained carrier Mar 17 18:47:59.782590 systemd-networkd[1400]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 18:47:59.803176 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 18:47:59.815024 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1404) Mar 17 18:47:59.816290 systemd-networkd[1400]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 18:47:59.842728 systemd-networkd[1400]: eth0: DHCPv4 address 138.201.89.219/32, gateway 172.31.1.1 acquired from 172.31.1.1 Mar 17 18:47:59.844332 systemd-timesyncd[1365]: Network configuration changed, trying to establish connection. Mar 17 18:47:59.882856 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Mar 17 18:47:59.883019 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 18:47:59.892603 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 18:47:59.896332 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 18:47:59.901286 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 18:47:59.902485 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 18:47:59.902541 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 18:47:59.902567 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:47:59.906311 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:47:59.906514 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 18:47:59.919820 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:47:59.920076 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
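Both DHCPv4 leases above are /32 host addresses (138.201.89.219/32 via 172.31.1.1 on eth0, 10.0.0.3/32 via 10.0.0.1 on eth1), so the gateway can never fall inside the interface prefix and has to be reached through an on-link route that systemd-networkd installs for it. A small sketch with the standard ipaddress module makes that explicit, using the addresses copied from the lease lines above.

#!/usr/bin/env python3
"""Show why a /32 DHCP lease implies an on-link (out-of-prefix) gateway.

Addresses are taken from the DHCPv4 lines in the log above.
"""
import ipaddress

leases = {
    "eth0": ("138.201.89.219/32", "172.31.1.1"),
    "eth1": ("10.0.0.3/32", "10.0.0.1"),
}

for ifname, (cidr, gw) in leases.items():
    iface = ipaddress.ip_interface(cidr)
    gateway = ipaddress.ip_address(gw)
    inside = gateway in iface.network
    print(f"{ifname}: {iface} gateway {gateway} "
          f"{'inside' if inside else 'outside'} the /{iface.network.prefixlen} prefix")

# A /32 prefix contains only the host itself, so the gateway is always
# "outside" and must be reached via an explicit on-link route.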
Mar 17 18:47:59.922484 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:47:59.926388 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:47:59.927094 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 18:47:59.929490 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 18:47:59.950540 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 18:47:59.953173 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Mar 17 18:47:59.953304 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Mar 17 18:47:59.953318 kernel: [drm] features: -context_init Mar 17 18:47:59.955031 kernel: [drm] number of scanouts: 1 Mar 17 18:47:59.955150 kernel: [drm] number of cap sets: 0 Mar 17 18:47:59.959092 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Mar 17 18:47:59.961021 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Mar 17 18:47:59.968965 kernel: Console: switching to colour frame buffer device 160x50 Mar 17 18:47:59.969273 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 17 18:47:59.971045 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Mar 17 18:47:59.986568 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 18:47:59.988209 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 18:47:59.991003 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 17 18:47:59.991615 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 17 18:48:00.001285 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 18:48:00.054235 systemd-timesyncd[1365]: Contacted time server 194.50.19.117:123 (0.flatcar.pool.ntp.org). Mar 17 18:48:00.054353 systemd-timesyncd[1365]: Initial clock synchronization to Mon 2025-03-17 18:48:00.443475 UTC. Mar 17 18:48:00.080145 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 18:48:00.106896 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 17 18:48:00.124368 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 17 18:48:00.138090 lvm[1466]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:48:00.174513 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 17 18:48:00.175840 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 18:48:00.176661 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 18:48:00.177574 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 17 18:48:00.178411 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 17 18:48:00.179827 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 17 18:48:00.180584 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
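systemd-timesyncd above reports contacting 194.50.19.117:123 (0.flatcar.pool.ntp.org) and stepping the clock once. For illustration, a bare-bones SNTP exchange is sketched below; it reads only the transmit timestamp from a single UDP round trip and is not how timesyncd is implemented internally. The pool hostname is taken from the log, everything else is generic SNTP.

#!/usr/bin/env python3
"""Minimal SNTP query, roughly what "Contacted time server ..." implies.

Bare-bones sketch: one UDP exchange on port 123, reading only the transmit
timestamp; systemd-timesyncd does considerably more (filtering, slewing).
"""
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01
SERVER = "0.flatcar.pool.ntp.org"  # pool name seen in the log above

def sntp_time(server=SERVER, timeout=5.0):
    # LI=0, VN=3, Mode=3 (client) packed into the first byte; rest zeroed.
    packet = b"\x1b" + 47 * b"\0"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(48)
    # Transmit timestamp: seconds field at bytes 40..43 (big-endian).
    seconds = struct.unpack("!I", data[40:44])[0]
    return seconds - NTP_EPOCH_OFFSET

if __name__ == "__main__":
    remote = sntp_time()
    print("server time:", time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(remote)))
    print("local offset (s):", remote - time.time())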
Mar 17 18:48:00.181403 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 17 18:48:00.182120 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 18:48:00.182159 systemd[1]: Reached target paths.target - Path Units. Mar 17 18:48:00.182606 systemd[1]: Reached target timers.target - Timer Units. Mar 17 18:48:00.184972 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 18:48:00.187644 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 17 18:48:00.191343 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 17 18:48:00.192368 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 17 18:48:00.193146 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 17 18:48:00.196497 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 18:48:00.197919 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 17 18:48:00.200473 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 17 18:48:00.201885 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 17 18:48:00.202750 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 18:48:00.203381 systemd[1]: Reached target basic.target - Basic System. Mar 17 18:48:00.203947 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 17 18:48:00.204008 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 17 18:48:00.210158 systemd[1]: Starting containerd.service - containerd container runtime... Mar 17 18:48:00.214535 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 17 18:48:00.218518 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 17 18:48:00.223011 lvm[1470]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:48:00.222257 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 17 18:48:00.225180 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 17 18:48:00.225769 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 17 18:48:00.235228 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 17 18:48:00.242201 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 17 18:48:00.249234 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Mar 17 18:48:00.253236 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 17 18:48:00.260253 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 17 18:48:00.270345 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 17 18:48:00.273578 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 18:48:00.274157 jq[1474]: false Mar 17 18:48:00.274255 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Mar 17 18:48:00.278214 systemd[1]: Starting update-engine.service - Update Engine... Mar 17 18:48:00.281122 extend-filesystems[1475]: Found loop4 Mar 17 18:48:00.282014 extend-filesystems[1475]: Found loop5 Mar 17 18:48:00.284244 extend-filesystems[1475]: Found loop6 Mar 17 18:48:00.284244 extend-filesystems[1475]: Found loop7 Mar 17 18:48:00.284244 extend-filesystems[1475]: Found sda Mar 17 18:48:00.284244 extend-filesystems[1475]: Found sda1 Mar 17 18:48:00.284244 extend-filesystems[1475]: Found sda2 Mar 17 18:48:00.284244 extend-filesystems[1475]: Found sda3 Mar 17 18:48:00.284244 extend-filesystems[1475]: Found usr Mar 17 18:48:00.284244 extend-filesystems[1475]: Found sda4 Mar 17 18:48:00.284244 extend-filesystems[1475]: Found sda6 Mar 17 18:48:00.284244 extend-filesystems[1475]: Found sda7 Mar 17 18:48:00.284244 extend-filesystems[1475]: Found sda9 Mar 17 18:48:00.284244 extend-filesystems[1475]: Checking size of /dev/sda9 Mar 17 18:48:00.284876 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 17 18:48:00.289833 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 18:48:00.290060 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 18:48:00.342551 dbus-daemon[1473]: [system] SELinux support is enabled Mar 17 18:48:00.351936 extend-filesystems[1475]: Resized partition /dev/sda9 Mar 17 18:48:00.355105 jq[1487]: true Mar 17 18:48:00.361602 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 17 18:48:00.365827 extend-filesystems[1509]: resize2fs 1.47.1 (20-May-2024) Mar 17 18:48:00.368124 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 18:48:00.373563 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 18:48:00.373849 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 17 18:48:00.374952 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 18:48:00.377247 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 17 18:48:00.385996 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Mar 17 18:48:00.391484 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 18:48:00.391524 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 17 18:48:00.392520 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 18:48:00.392551 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
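The EXT4 resize logged above grows /dev/sda9 from 1617920 to 9393147 blocks; with the 4 KiB block size reported by the resize output further below, that is roughly 6.2 GiB grown to about 35.8 GiB. A one-screen sketch of the arithmetic:

#!/usr/bin/env python3
"""Size implied by the EXT4 online resize of /dev/sda9 logged above."""

BLOCK_SIZE = 4096          # "(4k) blocks" per the resize output below
OLD_BLOCKS = 1_617_920     # before: "resizing filesystem from 1617920 ..."
NEW_BLOCKS = 9_393_147     # after: "... to 9393147 blocks"

def gib(blocks):
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {gib(OLD_BLOCKS):.2f} GiB")
print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")
print(f"grown by {gib(NEW_BLOCKS - OLD_BLOCKS):.2f} GiB")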
Mar 17 18:48:00.397918 coreos-metadata[1472]: Mar 17 18:48:00.397 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Mar 17 18:48:00.397918 coreos-metadata[1472]: Mar 17 18:48:00.397 INFO Fetch successful Mar 17 18:48:00.397918 coreos-metadata[1472]: Mar 17 18:48:00.397 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Mar 17 18:48:00.397918 coreos-metadata[1472]: Mar 17 18:48:00.397 INFO Fetch successful Mar 17 18:48:00.403094 jq[1511]: true Mar 17 18:48:00.403390 tar[1492]: linux-arm64/helm Mar 17 18:48:00.420670 (ntainerd)[1508]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 18:48:00.442051 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1413) Mar 17 18:48:00.508925 update_engine[1485]: I20250317 18:48:00.507241 1485 main.cc:92] Flatcar Update Engine starting Mar 17 18:48:00.520804 update_engine[1485]: I20250317 18:48:00.520536 1485 update_check_scheduler.cc:74] Next update check in 6m28s Mar 17 18:48:00.523288 systemd[1]: Started update-engine.service - Update Engine. Mar 17 18:48:00.529235 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 17 18:48:00.541520 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 17 18:48:00.542831 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 18:48:00.577140 bash[1539]: Updated "/home/core/.ssh/authorized_keys" Mar 17 18:48:00.582477 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 18:48:00.603241 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Mar 17 18:48:00.599773 systemd[1]: Starting sshkeys.service... Mar 17 18:48:00.627797 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 17 18:48:00.637599 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 17 18:48:00.639283 systemd-logind[1484]: New seat seat0. Mar 17 18:48:00.641489 extend-filesystems[1509]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Mar 17 18:48:00.641489 extend-filesystems[1509]: old_desc_blocks = 1, new_desc_blocks = 5 Mar 17 18:48:00.641489 extend-filesystems[1509]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Mar 17 18:48:00.647008 extend-filesystems[1475]: Resized filesystem in /dev/sda9 Mar 17 18:48:00.647008 extend-filesystems[1475]: Found sr0 Mar 17 18:48:00.643854 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 18:48:00.644388 systemd-logind[1484]: Watching system buttons on /dev/input/event0 (Power Button) Mar 17 18:48:00.644404 systemd-logind[1484]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Mar 17 18:48:00.644704 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 17 18:48:00.646541 systemd[1]: Started systemd-logind.service - User Login Management. 
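coreos-metadata above fetches the Hetzner instance metadata from the link-local endpoint. The sketch below reproduces those two HTTP GETs with urllib; the URLs are the ones in the log, it only works from inside the instance, and it does not attempt what the agent does afterwards (writing environment files for other units).

#!/usr/bin/env python3
"""Fetch the same Hetzner metadata endpoints coreos-metadata queries above.

Only works from inside the instance (169.254.169.254 is link-local); this is
a plain HTTP sketch, not a reimplementation of the coreos-metadata agent.
"""
import urllib.request

BASE = "http://169.254.169.254/hetzner/v1/metadata"
PATHS = ["", "/private-networks"]  # the two fetches seen in the log

def fetch(path):
    with urllib.request.urlopen(BASE + path, timeout=5) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    for path in PATHS:
        print(f"--- {BASE + path} ---")
        print(fetch(path))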
Mar 17 18:48:00.719532 coreos-metadata[1546]: Mar 17 18:48:00.717 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Mar 17 18:48:00.723613 coreos-metadata[1546]: Mar 17 18:48:00.722 INFO Fetch successful Mar 17 18:48:00.731809 unknown[1546]: wrote ssh authorized keys file for user: core Mar 17 18:48:00.770317 update-ssh-keys[1556]: Updated "/home/core/.ssh/authorized_keys" Mar 17 18:48:00.771196 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 17 18:48:00.774756 systemd[1]: Finished sshkeys.service. Mar 17 18:48:00.857895 sshd_keygen[1497]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 18:48:00.868361 containerd[1508]: time="2025-03-17T18:48:00.868248440Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 18:48:00.873153 systemd-networkd[1400]: eth0: Gained IPv6LL Mar 17 18:48:00.878304 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 18:48:00.883805 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 18:48:00.898257 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:48:00.908772 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 18:48:00.909974 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 18:48:00.925689 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 18:48:00.934771 containerd[1508]: time="2025-03-17T18:48:00.934685360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:48:00.942498 containerd[1508]: time="2025-03-17T18:48:00.942438640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:48:00.943068 containerd[1508]: time="2025-03-17T18:48:00.942607040Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 18:48:00.943068 containerd[1508]: time="2025-03-17T18:48:00.942634160Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 18:48:00.943068 containerd[1508]: time="2025-03-17T18:48:00.942829560Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 17 18:48:00.943068 containerd[1508]: time="2025-03-17T18:48:00.942851720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 18:48:00.943068 containerd[1508]: time="2025-03-17T18:48:00.942928360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:48:00.943068 containerd[1508]: time="2025-03-17T18:48:00.942941840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:48:00.944367 containerd[1508]: time="2025-03-17T18:48:00.943499240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:48:00.944367 containerd[1508]: time="2025-03-17T18:48:00.943554760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 18:48:00.944367 containerd[1508]: time="2025-03-17T18:48:00.943574840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:48:00.944367 containerd[1508]: time="2025-03-17T18:48:00.943584240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 18:48:00.944367 containerd[1508]: time="2025-03-17T18:48:00.943748360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:48:00.945641 containerd[1508]: time="2025-03-17T18:48:00.943970360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:48:00.947876 containerd[1508]: time="2025-03-17T18:48:00.947835240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:48:00.948370 containerd[1508]: time="2025-03-17T18:48:00.947971360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 18:48:00.948370 containerd[1508]: time="2025-03-17T18:48:00.948150480Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 18:48:00.948370 containerd[1508]: time="2025-03-17T18:48:00.948210640Z" level=info msg="metadata content store policy set" policy=shared Mar 17 18:48:00.951893 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 18:48:00.952152 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 18:48:00.961026 containerd[1508]: time="2025-03-17T18:48:00.959338160Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 18:48:00.961026 containerd[1508]: time="2025-03-17T18:48:00.959413280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 18:48:00.961026 containerd[1508]: time="2025-03-17T18:48:00.959430200Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 17 18:48:00.961026 containerd[1508]: time="2025-03-17T18:48:00.959449040Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 17 18:48:00.961026 containerd[1508]: time="2025-03-17T18:48:00.959465160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 18:48:00.961026 containerd[1508]: time="2025-03-17T18:48:00.959646840Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 18:48:00.961026 containerd[1508]: time="2025-03-17T18:48:00.959912960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Mar 17 18:48:00.961026 containerd[1508]: time="2025-03-17T18:48:00.960065200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 17 18:48:00.961026 containerd[1508]: time="2025-03-17T18:48:00.960084480Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 17 18:48:00.961026 containerd[1508]: time="2025-03-17T18:48:00.960106120Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 17 18:48:00.961026 containerd[1508]: time="2025-03-17T18:48:00.960123920Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 18:48:00.961026 containerd[1508]: time="2025-03-17T18:48:00.960138720Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 18:48:00.961026 containerd[1508]: time="2025-03-17T18:48:00.960152360Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 18:48:00.961026 containerd[1508]: time="2025-03-17T18:48:00.960167600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 18:48:00.961349 containerd[1508]: time="2025-03-17T18:48:00.960183560Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 18:48:00.961349 containerd[1508]: time="2025-03-17T18:48:00.960199920Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 18:48:00.961349 containerd[1508]: time="2025-03-17T18:48:00.960213600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 18:48:00.961349 containerd[1508]: time="2025-03-17T18:48:00.960226880Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 18:48:00.961349 containerd[1508]: time="2025-03-17T18:48:00.960289680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 18:48:00.961349 containerd[1508]: time="2025-03-17T18:48:00.960312360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 18:48:00.961349 containerd[1508]: time="2025-03-17T18:48:00.960327640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 18:48:00.961349 containerd[1508]: time="2025-03-17T18:48:00.960342720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 18:48:00.961349 containerd[1508]: time="2025-03-17T18:48:00.960355240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 18:48:00.961349 containerd[1508]: time="2025-03-17T18:48:00.960368280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 18:48:00.961349 containerd[1508]: time="2025-03-17T18:48:00.960379600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 18:48:00.961349 containerd[1508]: time="2025-03-17T18:48:00.960401760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Mar 17 18:48:00.961349 containerd[1508]: time="2025-03-17T18:48:00.960414920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 17 18:48:00.961349 containerd[1508]: time="2025-03-17T18:48:00.960431120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 17 18:48:00.961611 containerd[1508]: time="2025-03-17T18:48:00.960444160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 18:48:00.961611 containerd[1508]: time="2025-03-17T18:48:00.960456120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 18:48:00.961611 containerd[1508]: time="2025-03-17T18:48:00.960467960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 18:48:00.961611 containerd[1508]: time="2025-03-17T18:48:00.960485320Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 18:48:00.961611 containerd[1508]: time="2025-03-17T18:48:00.960510120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 17 18:48:00.961611 containerd[1508]: time="2025-03-17T18:48:00.960524920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 18:48:00.961611 containerd[1508]: time="2025-03-17T18:48:00.960537240Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 18:48:00.961611 containerd[1508]: time="2025-03-17T18:48:00.960763000Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 18:48:00.961611 containerd[1508]: time="2025-03-17T18:48:00.960785800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 18:48:00.961611 containerd[1508]: time="2025-03-17T18:48:00.960797760Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 18:48:00.961611 containerd[1508]: time="2025-03-17T18:48:00.960809960Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 18:48:00.961611 containerd[1508]: time="2025-03-17T18:48:00.960819280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 18:48:00.961611 containerd[1508]: time="2025-03-17T18:48:00.960832080Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 17 18:48:00.961611 containerd[1508]: time="2025-03-17T18:48:00.960843440Z" level=info msg="NRI interface is disabled by configuration." Mar 17 18:48:00.961845 containerd[1508]: time="2025-03-17T18:48:00.960853760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 17 18:48:00.965452 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Mar 17 18:48:00.967745 containerd[1508]: time="2025-03-17T18:48:00.966344360Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 18:48:00.967745 containerd[1508]: time="2025-03-17T18:48:00.966410120Z" level=info msg="Connect containerd service" Mar 17 18:48:00.967745 containerd[1508]: time="2025-03-17T18:48:00.966458760Z" level=info msg="using legacy CRI server" Mar 17 18:48:00.967745 containerd[1508]: time="2025-03-17T18:48:00.966467000Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 18:48:00.967745 containerd[1508]: time="2025-03-17T18:48:00.966772440Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 18:48:00.970510 containerd[1508]: time="2025-03-17T18:48:00.970264440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:48:00.973833 containerd[1508]: 
time="2025-03-17T18:48:00.970927760Z" level=info msg="Start subscribing containerd event" Mar 17 18:48:00.973833 containerd[1508]: time="2025-03-17T18:48:00.972939880Z" level=info msg="Start recovering state" Mar 17 18:48:00.973833 containerd[1508]: time="2025-03-17T18:48:00.973070360Z" level=info msg="Start event monitor" Mar 17 18:48:00.973833 containerd[1508]: time="2025-03-17T18:48:00.973088520Z" level=info msg="Start snapshots syncer" Mar 17 18:48:00.973833 containerd[1508]: time="2025-03-17T18:48:00.973099360Z" level=info msg="Start cni network conf syncer for default" Mar 17 18:48:00.973833 containerd[1508]: time="2025-03-17T18:48:00.973109600Z" level=info msg="Start streaming server" Mar 17 18:48:00.978098 containerd[1508]: time="2025-03-17T18:48:00.977242200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 18:48:00.978098 containerd[1508]: time="2025-03-17T18:48:00.977310600Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 18:48:00.978098 containerd[1508]: time="2025-03-17T18:48:00.977376360Z" level=info msg="containerd successfully booted in 0.112033s" Mar 17 18:48:00.977487 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 18:48:00.982041 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 18:48:00.992933 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 18:48:00.998957 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 17 18:48:01.000169 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 18:48:01.004173 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 17 18:48:01.011540 locksmithd[1542]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 18:48:01.266567 tar[1492]: linux-arm64/LICENSE Mar 17 18:48:01.266567 tar[1492]: linux-arm64/README.md Mar 17 18:48:01.279231 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 17 18:48:01.578284 systemd-networkd[1400]: eth1: Gained IPv6LL Mar 17 18:48:01.717256 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:48:01.719949 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 18:48:01.720116 (kubelet)[1605]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:48:01.725633 systemd[1]: Startup finished in 805ms (kernel) + 8.399s (initrd) + 4.670s (userspace) = 13.875s. Mar 17 18:48:02.309705 kubelet[1605]: E0317 18:48:02.309634 1605 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:48:02.315572 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:48:02.315744 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:48:02.316179 systemd[1]: kubelet.service: Consumed 830ms CPU time, 231.5M memory peak. Mar 17 18:48:12.567415 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 18:48:12.576504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:48:12.704323 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 18:48:12.718640 (kubelet)[1624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:48:12.780436 kubelet[1624]: E0317 18:48:12.780336 1624 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:48:12.786089 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:48:12.786498 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:48:12.787701 systemd[1]: kubelet.service: Consumed 170ms CPU time, 95.6M memory peak. Mar 17 18:48:22.950931 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 18:48:22.961380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:48:23.062763 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:48:23.075571 (kubelet)[1638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:48:23.120692 kubelet[1638]: E0317 18:48:23.120624 1638 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:48:23.124516 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:48:23.124856 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:48:23.125393 systemd[1]: kubelet.service: Consumed 145ms CPU time, 96.6M memory peak. Mar 17 18:48:33.200110 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 17 18:48:33.219284 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:48:33.369086 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:48:33.382397 (kubelet)[1653]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:48:33.433522 kubelet[1653]: E0317 18:48:33.433401 1653 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:48:33.435851 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:48:33.436015 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:48:33.436709 systemd[1]: kubelet.service: Consumed 175ms CPU time, 94.6M memory peak. Mar 17 18:48:43.450618 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 17 18:48:43.461370 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:48:43.600879 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
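The kubelet restart loop above (and below) is caused by the missing /var/lib/kubelet/config.yaml; on an image like this that file is normally written by kubeadm when the node is bootstrapped, so the loop is expected until then. Purely to illustrate what the error refers to, a sketch that writes a minimal KubeletConfiguration follows; the field values are placeholder assumptions, not what kubeadm would generate and not a recommended configuration.

#!/usr/bin/env python3
"""Write a minimal /var/lib/kubelet/config.yaml (illustration only).

On this host the file is normally produced by kubeadm during node bootstrap;
the values below are placeholder assumptions, not a recommended config.
"""
from pathlib import Path

KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false
cgroupDriver: systemd  # matches SystemdCgroup:true in the containerd config dump above
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
"""

def write_config(path="/var/lib/kubelet/config.yaml"):
    target = Path(path)
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(KUBELET_CONFIG)
    return target

if __name__ == "__main__":
    print("wrote", write_config())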
Mar 17 18:48:43.607551 (kubelet)[1669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:48:43.656102 kubelet[1669]: E0317 18:48:43.655924 1669 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:48:43.657855 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:48:43.658147 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:48:43.658890 systemd[1]: kubelet.service: Consumed 165ms CPU time, 96.6M memory peak. Mar 17 18:48:45.895680 update_engine[1485]: I20250317 18:48:45.895459 1485 update_attempter.cc:509] Updating boot flags... Mar 17 18:48:45.953092 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1685) Mar 17 18:48:46.042098 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1681) Mar 17 18:48:53.701151 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 17 18:48:53.717204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:48:53.868066 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:48:53.884060 (kubelet)[1702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:48:53.938796 kubelet[1702]: E0317 18:48:53.938614 1702 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:48:53.942290 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:48:53.942493 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:48:53.944162 systemd[1]: kubelet.service: Consumed 192ms CPU time, 96.3M memory peak. Mar 17 18:49:03.950594 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Mar 17 18:49:03.959473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:49:04.081962 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:49:04.092672 (kubelet)[1717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:49:04.139874 kubelet[1717]: E0317 18:49:04.139795 1717 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:49:04.143656 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:49:04.143869 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:49:04.145159 systemd[1]: kubelet.service: Consumed 152ms CPU time, 96.6M memory peak. 
Mar 17 18:49:14.200824 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Mar 17 18:49:14.209370 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:49:14.339194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:49:14.344100 (kubelet)[1732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:49:14.388889 kubelet[1732]: E0317 18:49:14.388819 1732 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:49:14.392363 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:49:14.392625 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:49:14.393489 systemd[1]: kubelet.service: Consumed 157ms CPU time, 94.3M memory peak. Mar 17 18:49:24.450491 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Mar 17 18:49:24.458317 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:49:24.595537 (kubelet)[1747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:49:24.596720 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:49:24.644390 kubelet[1747]: E0317 18:49:24.644286 1747 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:49:24.649232 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:49:24.649903 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:49:24.652239 systemd[1]: kubelet.service: Consumed 154ms CPU time, 91.6M memory peak. Mar 17 18:49:34.699883 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Mar 17 18:49:34.715361 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:49:34.855298 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:49:34.876238 (kubelet)[1761]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:49:34.935211 kubelet[1761]: E0317 18:49:34.935160 1761 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:49:34.937936 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:49:34.938113 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:49:34.940112 systemd[1]: kubelet.service: Consumed 179ms CPU time, 96.6M memory peak. Mar 17 18:49:44.950152 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. 
Mar 17 18:49:44.958553 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:49:45.108656 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:49:45.122677 (kubelet)[1777]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:49:45.169160 kubelet[1777]: E0317 18:49:45.169080 1777 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:49:45.173124 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:49:45.173498 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:49:45.174432 systemd[1]: kubelet.service: Consumed 170ms CPU time, 94.3M memory peak. Mar 17 18:49:54.419898 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 18:49:54.427422 systemd[1]: Started sshd@0-138.201.89.219:22-139.178.89.65:55254.service - OpenSSH per-connection server daemon (139.178.89.65:55254). Mar 17 18:49:55.200383 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Mar 17 18:49:55.211402 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:49:55.364556 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:49:55.369167 (kubelet)[1795]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:49:55.422979 kubelet[1795]: E0317 18:49:55.421191 1795 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:49:55.424384 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:49:55.424549 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:49:55.426487 systemd[1]: kubelet.service: Consumed 183ms CPU time, 94.5M memory peak. Mar 17 18:49:55.430127 sshd[1785]: Accepted publickey for core from 139.178.89.65 port 55254 ssh2: RSA SHA256:v/asyzeddMvawcqTHyrMQabrN1x7tHOvH9FvogCn6lE Mar 17 18:49:55.433731 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:49:55.445614 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 18:49:55.461529 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 18:49:55.471734 systemd-logind[1484]: New session 1 of user core. Mar 17 18:49:55.479444 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 18:49:55.488508 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 18:49:55.494089 (systemd)[1804]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:49:55.497639 systemd-logind[1484]: New session c1 of user core. Mar 17 18:49:55.638131 systemd[1804]: Queued start job for default target default.target. Mar 17 18:49:55.649012 systemd[1804]: Created slice app.slice - User Application Slice. 
Mar 17 18:49:55.649076 systemd[1804]: Reached target paths.target - Paths. Mar 17 18:49:55.649153 systemd[1804]: Reached target timers.target - Timers. Mar 17 18:49:55.651957 systemd[1804]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 18:49:55.668028 systemd[1804]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 18:49:55.668417 systemd[1804]: Reached target sockets.target - Sockets. Mar 17 18:49:55.668650 systemd[1804]: Reached target basic.target - Basic System. Mar 17 18:49:55.668797 systemd[1804]: Reached target default.target - Main User Target. Mar 17 18:49:55.669093 systemd[1804]: Startup finished in 163ms. Mar 17 18:49:55.669477 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 18:49:55.680362 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 18:49:56.378367 systemd[1]: Started sshd@1-138.201.89.219:22-139.178.89.65:55262.service - OpenSSH per-connection server daemon (139.178.89.65:55262). Mar 17 18:49:57.354631 sshd[1815]: Accepted publickey for core from 139.178.89.65 port 55262 ssh2: RSA SHA256:v/asyzeddMvawcqTHyrMQabrN1x7tHOvH9FvogCn6lE Mar 17 18:49:57.358217 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:49:57.369428 systemd-logind[1484]: New session 2 of user core. Mar 17 18:49:57.375360 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 18:49:58.032621 sshd[1817]: Connection closed by 139.178.89.65 port 55262 Mar 17 18:49:58.033420 sshd-session[1815]: pam_unix(sshd:session): session closed for user core Mar 17 18:49:58.043078 systemd[1]: sshd@1-138.201.89.219:22-139.178.89.65:55262.service: Deactivated successfully. Mar 17 18:49:58.048361 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 18:49:58.051954 systemd-logind[1484]: Session 2 logged out. Waiting for processes to exit. Mar 17 18:49:58.055497 systemd-logind[1484]: Removed session 2. Mar 17 18:49:58.217113 systemd[1]: Started sshd@2-138.201.89.219:22-139.178.89.65:55270.service - OpenSSH per-connection server daemon (139.178.89.65:55270). Mar 17 18:49:59.196412 sshd[1823]: Accepted publickey for core from 139.178.89.65 port 55270 ssh2: RSA SHA256:v/asyzeddMvawcqTHyrMQabrN1x7tHOvH9FvogCn6lE Mar 17 18:49:59.198573 sshd-session[1823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:49:59.205110 systemd-logind[1484]: New session 3 of user core. Mar 17 18:49:59.215063 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 18:49:59.868946 sshd[1825]: Connection closed by 139.178.89.65 port 55270 Mar 17 18:49:59.869832 sshd-session[1823]: pam_unix(sshd:session): session closed for user core Mar 17 18:49:59.874476 systemd[1]: sshd@2-138.201.89.219:22-139.178.89.65:55270.service: Deactivated successfully. Mar 17 18:49:59.876810 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 18:49:59.878333 systemd-logind[1484]: Session 3 logged out. Waiting for processes to exit. Mar 17 18:49:59.880844 systemd-logind[1484]: Removed session 3. Mar 17 18:50:00.047353 systemd[1]: Started sshd@3-138.201.89.219:22-139.178.89.65:55278.service - OpenSSH per-connection server daemon (139.178.89.65:55278). 
Mar 17 18:50:01.023961 sshd[1831]: Accepted publickey for core from 139.178.89.65 port 55278 ssh2: RSA SHA256:v/asyzeddMvawcqTHyrMQabrN1x7tHOvH9FvogCn6lE Mar 17 18:50:01.027713 sshd-session[1831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:50:01.037548 systemd-logind[1484]: New session 4 of user core. Mar 17 18:50:01.044207 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 18:50:01.708021 sshd[1833]: Connection closed by 139.178.89.65 port 55278 Mar 17 18:50:01.708713 sshd-session[1831]: pam_unix(sshd:session): session closed for user core Mar 17 18:50:01.716182 systemd[1]: sshd@3-138.201.89.219:22-139.178.89.65:55278.service: Deactivated successfully. Mar 17 18:50:01.721577 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 18:50:01.725820 systemd-logind[1484]: Session 4 logged out. Waiting for processes to exit. Mar 17 18:50:01.727528 systemd-logind[1484]: Removed session 4. Mar 17 18:50:01.890378 systemd[1]: Started sshd@4-138.201.89.219:22-139.178.89.65:48896.service - OpenSSH per-connection server daemon (139.178.89.65:48896). Mar 17 18:50:02.877097 sshd[1839]: Accepted publickey for core from 139.178.89.65 port 48896 ssh2: RSA SHA256:v/asyzeddMvawcqTHyrMQabrN1x7tHOvH9FvogCn6lE Mar 17 18:50:02.880967 sshd-session[1839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:50:02.893499 systemd-logind[1484]: New session 5 of user core. Mar 17 18:50:02.903401 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 17 18:50:03.410082 sudo[1842]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 18:50:03.410521 sudo[1842]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 18:50:03.430207 sudo[1842]: pam_unix(sudo:session): session closed for user root Mar 17 18:50:03.590119 sshd[1841]: Connection closed by 139.178.89.65 port 48896 Mar 17 18:50:03.591275 sshd-session[1839]: pam_unix(sshd:session): session closed for user core Mar 17 18:50:03.596549 systemd-logind[1484]: Session 5 logged out. Waiting for processes to exit. Mar 17 18:50:03.597620 systemd[1]: sshd@4-138.201.89.219:22-139.178.89.65:48896.service: Deactivated successfully. Mar 17 18:50:03.601431 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 18:50:03.605504 systemd-logind[1484]: Removed session 5. Mar 17 18:50:03.772961 systemd[1]: Started sshd@5-138.201.89.219:22-139.178.89.65:48904.service - OpenSSH per-connection server daemon (139.178.89.65:48904). Mar 17 18:50:04.760011 sshd[1848]: Accepted publickey for core from 139.178.89.65 port 48904 ssh2: RSA SHA256:v/asyzeddMvawcqTHyrMQabrN1x7tHOvH9FvogCn6lE Mar 17 18:50:04.763760 sshd-session[1848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:50:04.775175 systemd-logind[1484]: New session 6 of user core. Mar 17 18:50:04.778723 systemd[1]: Started session-6.scope - Session 6 of User core. 
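
Session 5 above uses sudo to run setenforce 1, which switches SELinux into enforcing mode for the running system only; the persistent mode is read from the SELinux config file at boot. A quick check, assuming the SELinux userland tools present on this image:

    getenforce                                 # Enforcing / Permissive / Disabled right now
    grep '^SELINUX=' /etc/selinux/config 2>/dev/null   # persistent setting, if that file exists here
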
Mar 17 18:50:05.280567 sudo[1852]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 18:50:05.283419 sudo[1852]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 18:50:05.290701 sudo[1852]: pam_unix(sudo:session): session closed for user root Mar 17 18:50:05.302908 sudo[1851]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 18:50:05.303268 sudo[1851]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 18:50:05.335614 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 18:50:05.368723 augenrules[1874]: No rules Mar 17 18:50:05.370687 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 18:50:05.371191 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 18:50:05.374378 sudo[1851]: pam_unix(sudo:session): session closed for user root Mar 17 18:50:05.450639 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Mar 17 18:50:05.460903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:50:05.533007 sshd[1850]: Connection closed by 139.178.89.65 port 48904 Mar 17 18:50:05.535083 sshd-session[1848]: pam_unix(sshd:session): session closed for user core Mar 17 18:50:05.540552 systemd-logind[1484]: Session 6 logged out. Waiting for processes to exit. Mar 17 18:50:05.541190 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 18:50:05.543607 systemd[1]: sshd@5-138.201.89.219:22-139.178.89.65:48904.service: Deactivated successfully. Mar 17 18:50:05.550825 systemd-logind[1484]: Removed session 6. Mar 17 18:50:05.598418 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:50:05.611957 (kubelet)[1890]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:50:05.665550 kubelet[1890]: E0317 18:50:05.665479 1890 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:50:05.669973 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:50:05.670149 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:50:05.670461 systemd[1]: kubelet.service: Consumed 170ms CPU time, 94.1M memory peak. Mar 17 18:50:05.708377 systemd[1]: Started sshd@6-138.201.89.219:22-139.178.89.65:48916.service - OpenSSH per-connection server daemon (139.178.89.65:48916). Mar 17 18:50:06.695720 sshd[1898]: Accepted publickey for core from 139.178.89.65 port 48916 ssh2: RSA SHA256:v/asyzeddMvawcqTHyrMQabrN1x7tHOvH9FvogCn6lE Mar 17 18:50:06.699577 sshd-session[1898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:50:06.709084 systemd-logind[1484]: New session 7 of user core. Mar 17 18:50:06.716362 systemd[1]: Started session-7.scope - Session 7 of User core. 
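
Session 6 above deletes the two shipped audit rule files and restarts audit-rules.service; augenrules then finds nothing under /etc/audit/rules.d/ and loads an empty rule set, hence the "No rules" line. A sketch for verifying that outcome, assuming the usual auditd tooling:

    ls /etc/audit/rules.d/                     # empty after the rm above
    sudo auditctl -l                           # prints "No rules" when nothing is loaded
    systemctl status audit-rules.service --no-pager
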
Mar 17 18:50:07.228084 sudo[1901]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 18:50:07.228424 sudo[1901]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 18:50:07.606786 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 17 18:50:07.609094 (dockerd)[1918]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 17 18:50:07.879226 dockerd[1918]: time="2025-03-17T18:50:07.878565012Z" level=info msg="Starting up" Mar 17 18:50:08.003543 dockerd[1918]: time="2025-03-17T18:50:08.003468090Z" level=info msg="Loading containers: start." Mar 17 18:50:08.222207 kernel: Initializing XFRM netlink socket Mar 17 18:50:08.354105 systemd-networkd[1400]: docker0: Link UP Mar 17 18:50:08.399183 dockerd[1918]: time="2025-03-17T18:50:08.399112035Z" level=info msg="Loading containers: done." Mar 17 18:50:08.420204 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1043870752-merged.mount: Deactivated successfully. Mar 17 18:50:08.427928 dockerd[1918]: time="2025-03-17T18:50:08.427830239Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 18:50:08.428180 dockerd[1918]: time="2025-03-17T18:50:08.428079762Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Mar 17 18:50:08.428408 dockerd[1918]: time="2025-03-17T18:50:08.428364325Z" level=info msg="Daemon has completed initialization" Mar 17 18:50:08.478565 dockerd[1918]: time="2025-03-17T18:50:08.476929233Z" level=info msg="API listen on /run/docker.sock" Mar 17 18:50:08.478779 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 17 18:50:09.575747 containerd[1508]: time="2025-03-17T18:50:09.575367297Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\"" Mar 17 18:50:10.348327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount249294486.mount: Deactivated successfully. 
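
Following the install.sh run above, the Docker engine comes up: dockerd 27.3.1 starts, settles on the overlay2 storage driver, brings up the docker0 bridge, and listens on /run/docker.sock, while containerd begins pulling the v1.31.7 control-plane images. A rough health check for this stage (unit and socket names are the defaults and may differ here):

    systemctl is-active docker containerd      # both should report "active"
    docker info | grep -i 'storage driver'     # expect overlay2, matching the daemon log
    docker version --format '{{.Server.Version}}'   # 27.3.1 per the log above
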
Mar 17 18:50:12.182325 containerd[1508]: time="2025-03-17T18:50:12.181969244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:50:12.184768 containerd[1508]: time="2025-03-17T18:50:12.184434696Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.7: active requests=0, bytes read=25552858" Mar 17 18:50:12.186202 containerd[1508]: time="2025-03-17T18:50:12.186144413Z" level=info msg="ImageCreate event name:\"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:50:12.190687 containerd[1508]: time="2025-03-17T18:50:12.190591348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:50:12.192892 containerd[1508]: time="2025-03-17T18:50:12.191929216Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.7\" with image id \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\", size \"25549566\" in 2.616510318s" Mar 17 18:50:12.192892 containerd[1508]: time="2025-03-17T18:50:12.192002418Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\"" Mar 17 18:50:12.193410 containerd[1508]: time="2025-03-17T18:50:12.193363327Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\"" Mar 17 18:50:13.890028 containerd[1508]: time="2025-03-17T18:50:13.888315843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:50:13.890765 containerd[1508]: time="2025-03-17T18:50:13.889972962Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.7: active requests=0, bytes read=22458998" Mar 17 18:50:13.891702 containerd[1508]: time="2025-03-17T18:50:13.891647322Z" level=info msg="ImageCreate event name:\"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:50:13.895442 containerd[1508]: time="2025-03-17T18:50:13.895376650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:50:13.897160 containerd[1508]: time="2025-03-17T18:50:13.897112851Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.7\" with image id \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\", size \"23899774\" in 1.703702723s" Mar 17 18:50:13.897342 containerd[1508]: time="2025-03-17T18:50:13.897323176Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\"" Mar 17 18:50:13.898253 
containerd[1508]: time="2025-03-17T18:50:13.898099314Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\"" Mar 17 18:50:15.447030 containerd[1508]: time="2025-03-17T18:50:15.445279142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:50:15.447589 containerd[1508]: time="2025-03-17T18:50:15.447300038Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.7: active requests=0, bytes read=17125849" Mar 17 18:50:15.448120 containerd[1508]: time="2025-03-17T18:50:15.448072580Z" level=info msg="ImageCreate event name:\"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:50:15.453416 containerd[1508]: time="2025-03-17T18:50:15.453366728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:50:15.456328 containerd[1508]: time="2025-03-17T18:50:15.456273450Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.7\" with image id \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\", size \"18566643\" in 1.557865088s" Mar 17 18:50:15.456526 containerd[1508]: time="2025-03-17T18:50:15.456505656Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\"" Mar 17 18:50:15.457567 containerd[1508]: time="2025-03-17T18:50:15.457518365Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\"" Mar 17 18:50:15.699694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Mar 17 18:50:15.711298 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:50:15.851061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:50:15.863576 (kubelet)[2177]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:50:15.916039 kubelet[2177]: E0317 18:50:15.915166 2177 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:50:15.918733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:50:15.918964 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:50:15.919532 systemd[1]: kubelet.service: Consumed 172ms CPU time, 94.3M memory peak. Mar 17 18:50:16.626474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2528453419.mount: Deactivated successfully. 
Mar 17 18:50:17.165770 containerd[1508]: time="2025-03-17T18:50:17.165593738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:50:17.168036 containerd[1508]: time="2025-03-17T18:50:17.167539560Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=26871941" Mar 17 18:50:17.169869 containerd[1508]: time="2025-03-17T18:50:17.169808193Z" level=info msg="ImageCreate event name:\"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:50:17.175034 containerd[1508]: time="2025-03-17T18:50:17.174957039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:50:17.176724 containerd[1508]: time="2025-03-17T18:50:17.176177718Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"26870934\" in 1.718353625s" Mar 17 18:50:17.176724 containerd[1508]: time="2025-03-17T18:50:17.176234240Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\"" Mar 17 18:50:17.176947 containerd[1508]: time="2025-03-17T18:50:17.176879941Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 18:50:17.795213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2773462272.mount: Deactivated successfully. 
Mar 17 18:50:18.809233 containerd[1508]: time="2025-03-17T18:50:18.807947554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:50:18.810595 containerd[1508]: time="2025-03-17T18:50:18.810533523Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Mar 17 18:50:18.812283 containerd[1508]: time="2025-03-17T18:50:18.812234621Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:50:18.817813 containerd[1508]: time="2025-03-17T18:50:18.817762449Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.640842267s" Mar 17 18:50:18.818114 containerd[1508]: time="2025-03-17T18:50:18.818084420Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Mar 17 18:50:18.818287 containerd[1508]: time="2025-03-17T18:50:18.818054099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:50:18.820065 containerd[1508]: time="2025-03-17T18:50:18.820026127Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 17 18:50:19.475411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3822218442.mount: Deactivated successfully. 
Mar 17 18:50:19.496757 containerd[1508]: time="2025-03-17T18:50:19.495683536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:50:19.499055 containerd[1508]: time="2025-03-17T18:50:19.498934333Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Mar 17 18:50:19.500779 containerd[1508]: time="2025-03-17T18:50:19.500204579Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:50:19.508024 containerd[1508]: time="2025-03-17T18:50:19.505780300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:50:19.508311 containerd[1508]: time="2025-03-17T18:50:19.508265510Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 688.046976ms" Mar 17 18:50:19.508439 containerd[1508]: time="2025-03-17T18:50:19.508417115Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Mar 17 18:50:19.509920 containerd[1508]: time="2025-03-17T18:50:19.509869367Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Mar 17 18:50:20.140051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount273347281.mount: Deactivated successfully. Mar 17 18:50:22.590012 containerd[1508]: time="2025-03-17T18:50:22.588214143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:50:22.590718 containerd[1508]: time="2025-03-17T18:50:22.590656004Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406487" Mar 17 18:50:22.591524 containerd[1508]: time="2025-03-17T18:50:22.590838732Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:50:22.598353 containerd[1508]: time="2025-03-17T18:50:22.598288321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:50:22.600635 containerd[1508]: time="2025-03-17T18:50:22.600580296Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.090654046s" Mar 17 18:50:22.600829 containerd[1508]: time="2025-03-17T18:50:22.600804945Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Mar 17 18:50:25.950688 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. 
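
By this point containerd has cached the full control-plane image set for Kubernetes v1.31.7: kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns v1.11.1, pause 3.10 and etcd 3.5.15-0, which matches what kubeadm pre-pulls before init. A sketch for cross-checking the cache (the containerd socket path is the usual default and is an assumption here):

    kubeadm config images list --kubernetes-version v1.31.7
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images | grep registry.k8s.io
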
Mar 17 18:50:25.964170 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:50:26.103487 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:50:26.113446 (kubelet)[2320]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:50:26.167061 kubelet[2320]: E0317 18:50:26.166944 2320 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:50:26.171557 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:50:26.171878 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:50:26.174185 systemd[1]: kubelet.service: Consumed 166ms CPU time, 92.7M memory peak. Mar 17 18:50:28.513252 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:50:28.513741 systemd[1]: kubelet.service: Consumed 166ms CPU time, 92.7M memory peak. Mar 17 18:50:28.530375 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:50:28.571260 systemd[1]: Reload requested from client PID 2334 ('systemctl') (unit session-7.scope)... Mar 17 18:50:28.571282 systemd[1]: Reloading... Mar 17 18:50:28.705110 zram_generator::config[2379]: No configuration found. Mar 17 18:50:28.824425 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:50:28.919907 systemd[1]: Reloading finished in 348 ms. Mar 17 18:50:28.985066 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:50:28.991833 (kubelet)[2418]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 18:50:28.993776 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:50:28.994473 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:50:28.994778 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:50:28.994832 systemd[1]: kubelet.service: Consumed 106ms CPU time, 82.3M memory peak. Mar 17 18:50:29.003074 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:50:29.140466 (kubelet)[2430]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 18:50:29.141175 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:50:29.194394 kubelet[2430]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:50:29.194394 kubelet[2430]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:50:29.194394 kubelet[2430]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:50:29.194822 kubelet[2430]: I0317 18:50:29.194607 2430 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:50:30.300030 kubelet[2430]: I0317 18:50:30.299222 2430 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 17 18:50:30.300030 kubelet[2430]: I0317 18:50:30.299263 2430 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:50:30.300030 kubelet[2430]: I0317 18:50:30.299544 2430 server.go:929] "Client rotation is on, will bootstrap in background" Mar 17 18:50:30.332219 kubelet[2430]: E0317 18:50:30.332167 2430 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://138.201.89.219:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 138.201.89.219:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:50:30.332937 kubelet[2430]: I0317 18:50:30.332702 2430 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:50:30.347273 kubelet[2430]: E0317 18:50:30.347038 2430 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 18:50:30.347273 kubelet[2430]: I0317 18:50:30.347093 2430 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 18:50:30.351164 kubelet[2430]: I0317 18:50:30.351134 2430 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 18:50:30.353505 kubelet[2430]: I0317 18:50:30.352439 2430 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 17 18:50:30.353505 kubelet[2430]: I0317 18:50:30.352650 2430 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:50:30.353505 kubelet[2430]: I0317 18:50:30.352682 2430 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-1-0-9-a87a0d0143","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 18:50:30.353505 kubelet[2430]: I0317 18:50:30.352954 2430 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 18:50:30.353753 kubelet[2430]: I0317 18:50:30.352964 2430 container_manager_linux.go:300] "Creating device plugin manager" Mar 17 18:50:30.353753 kubelet[2430]: I0317 18:50:30.353207 2430 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:50:30.355601 kubelet[2430]: I0317 18:50:30.355571 2430 kubelet.go:408] "Attempting to sync node with API server" Mar 17 18:50:30.356855 kubelet[2430]: I0317 18:50:30.356834 2430 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:50:30.357005 kubelet[2430]: I0317 18:50:30.356996 2430 kubelet.go:314] "Adding apiserver pod source" Mar 17 18:50:30.357120 kubelet[2430]: I0317 18:50:30.357108 2430 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:50:30.359633 kubelet[2430]: W0317 18:50:30.356962 2430 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.201.89.219:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-0-9-a87a0d0143&limit=500&resourceVersion=0": dial tcp 138.201.89.219:6443: connect: connection refused Mar 17 18:50:30.359832 kubelet[2430]: E0317 18:50:30.359811 2430 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://138.201.89.219:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-0-9-a87a0d0143&limit=500&resourceVersion=0\": dial tcp 138.201.89.219:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:50:30.360546 kubelet[2430]: W0317 18:50:30.360496 2430 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.201.89.219:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 138.201.89.219:6443: connect: connection refused Mar 17 18:50:30.360879 kubelet[2430]: E0317 18:50:30.360842 2430 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://138.201.89.219:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 138.201.89.219:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:50:30.361189 kubelet[2430]: I0317 18:50:30.361165 2430 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 18:50:30.363881 kubelet[2430]: I0317 18:50:30.363849 2430 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:50:30.365239 kubelet[2430]: W0317 18:50:30.365036 2430 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 18:50:30.367196 kubelet[2430]: I0317 18:50:30.366851 2430 server.go:1269] "Started kubelet" Mar 17 18:50:30.367709 kubelet[2430]: I0317 18:50:30.367316 2430 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:50:30.369484 kubelet[2430]: I0317 18:50:30.368617 2430 server.go:460] "Adding debug handlers to kubelet server" Mar 17 18:50:30.370556 kubelet[2430]: I0317 18:50:30.370516 2430 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:50:30.370944 kubelet[2430]: I0317 18:50:30.370923 2430 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:50:30.371738 kubelet[2430]: I0317 18:50:30.371696 2430 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:50:30.373259 kubelet[2430]: E0317 18:50:30.371949 2430 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://138.201.89.219:6443/api/v1/namespaces/default/events\": dial tcp 138.201.89.219:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-1-0-9-a87a0d0143.182dabb64f676ac6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-1-0-9-a87a0d0143,UID:ci-4230-1-0-9-a87a0d0143,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-1-0-9-a87a0d0143,},FirstTimestamp:2025-03-17 18:50:30.366825158 +0000 UTC m=+1.221293840,LastTimestamp:2025-03-17 18:50:30.366825158 +0000 UTC m=+1.221293840,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-1-0-9-a87a0d0143,}" Mar 17 18:50:30.376076 kubelet[2430]: I0317 18:50:30.375292 2430 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 18:50:30.379458 kubelet[2430]: E0317 
18:50:30.378372 2430 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-1-0-9-a87a0d0143\" not found" Mar 17 18:50:30.379458 kubelet[2430]: I0317 18:50:30.378490 2430 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 17 18:50:30.379458 kubelet[2430]: I0317 18:50:30.378712 2430 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 17 18:50:30.379458 kubelet[2430]: I0317 18:50:30.378777 2430 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:50:30.379458 kubelet[2430]: W0317 18:50:30.379247 2430 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.201.89.219:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.201.89.219:6443: connect: connection refused Mar 17 18:50:30.379458 kubelet[2430]: E0317 18:50:30.379302 2430 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://138.201.89.219:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 138.201.89.219:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:50:30.379698 kubelet[2430]: E0317 18:50:30.379521 2430 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:50:30.383546 kubelet[2430]: I0317 18:50:30.383514 2430 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:50:30.383546 kubelet[2430]: I0317 18:50:30.383537 2430 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:50:30.383666 kubelet[2430]: I0317 18:50:30.383619 2430 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:50:30.391813 kubelet[2430]: E0317 18:50:30.391729 2430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.201.89.219:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-0-9-a87a0d0143?timeout=10s\": dial tcp 138.201.89.219:6443: connect: connection refused" interval="200ms" Mar 17 18:50:30.408503 kubelet[2430]: I0317 18:50:30.408413 2430 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:50:30.409883 kubelet[2430]: I0317 18:50:30.409812 2430 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 18:50:30.409883 kubelet[2430]: I0317 18:50:30.409871 2430 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:50:30.410149 kubelet[2430]: I0317 18:50:30.409902 2430 kubelet.go:2321] "Starting kubelet main sync loop" Mar 17 18:50:30.410149 kubelet[2430]: E0317 18:50:30.409975 2430 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:50:30.417700 kubelet[2430]: W0317 18:50:30.417433 2430 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.201.89.219:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.201.89.219:6443: connect: connection refused Mar 17 18:50:30.417700 kubelet[2430]: E0317 18:50:30.417519 2430 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://138.201.89.219:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 138.201.89.219:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:50:30.426926 kubelet[2430]: I0317 18:50:30.426723 2430 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:50:30.426926 kubelet[2430]: I0317 18:50:30.426748 2430 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:50:30.426926 kubelet[2430]: I0317 18:50:30.426780 2430 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:50:30.429722 kubelet[2430]: I0317 18:50:30.429685 2430 policy_none.go:49] "None policy: Start" Mar 17 18:50:30.431157 kubelet[2430]: I0317 18:50:30.430715 2430 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:50:30.431157 kubelet[2430]: I0317 18:50:30.430750 2430 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:50:30.438854 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 17 18:50:30.450388 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 17 18:50:30.455431 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 17 18:50:30.469009 kubelet[2430]: I0317 18:50:30.468931 2430 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:50:30.470859 kubelet[2430]: I0317 18:50:30.469671 2430 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 18:50:30.470859 kubelet[2430]: I0317 18:50:30.470265 2430 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:50:30.470859 kubelet[2430]: I0317 18:50:30.470662 2430 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:50:30.475814 kubelet[2430]: E0317 18:50:30.475787 2430 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-1-0-9-a87a0d0143\" not found" Mar 17 18:50:30.527714 systemd[1]: Created slice kubepods-burstable-podbcc5fe08c881b0602985d6eb492a68ff.slice - libcontainer container kubepods-burstable-podbcc5fe08c881b0602985d6eb492a68ff.slice. Mar 17 18:50:30.547042 systemd[1]: Created slice kubepods-burstable-pod4a65272e81093cab3285c823f6a4a8a3.slice - libcontainer container kubepods-burstable-pod4a65272e81093cab3285c823f6a4a8a3.slice. 
Mar 17 18:50:30.563167 systemd[1]: Created slice kubepods-burstable-podd2b28ce80c6d9651a3e69426cb91d406.slice - libcontainer container kubepods-burstable-podd2b28ce80c6d9651a3e69426cb91d406.slice. Mar 17 18:50:30.573370 kubelet[2430]: I0317 18:50:30.573322 2430 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:30.574267 kubelet[2430]: E0317 18:50:30.574217 2430 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://138.201.89.219:6443/api/v1/nodes\": dial tcp 138.201.89.219:6443: connect: connection refused" node="ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:30.593274 kubelet[2430]: E0317 18:50:30.593204 2430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.201.89.219:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-0-9-a87a0d0143?timeout=10s\": dial tcp 138.201.89.219:6443: connect: connection refused" interval="400ms" Mar 17 18:50:30.679695 kubelet[2430]: I0317 18:50:30.679615 2430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bcc5fe08c881b0602985d6eb492a68ff-ca-certs\") pod \"kube-apiserver-ci-4230-1-0-9-a87a0d0143\" (UID: \"bcc5fe08c881b0602985d6eb492a68ff\") " pod="kube-system/kube-apiserver-ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:30.679884 kubelet[2430]: I0317 18:50:30.679689 2430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bcc5fe08c881b0602985d6eb492a68ff-k8s-certs\") pod \"kube-apiserver-ci-4230-1-0-9-a87a0d0143\" (UID: \"bcc5fe08c881b0602985d6eb492a68ff\") " pod="kube-system/kube-apiserver-ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:30.679884 kubelet[2430]: I0317 18:50:30.679739 2430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bcc5fe08c881b0602985d6eb492a68ff-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-0-9-a87a0d0143\" (UID: \"bcc5fe08c881b0602985d6eb492a68ff\") " pod="kube-system/kube-apiserver-ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:30.679884 kubelet[2430]: I0317 18:50:30.679769 2430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a65272e81093cab3285c823f6a4a8a3-ca-certs\") pod \"kube-controller-manager-ci-4230-1-0-9-a87a0d0143\" (UID: \"4a65272e81093cab3285c823f6a4a8a3\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:30.679884 kubelet[2430]: I0317 18:50:30.679798 2430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4a65272e81093cab3285c823f6a4a8a3-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-0-9-a87a0d0143\" (UID: \"4a65272e81093cab3285c823f6a4a8a3\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:30.679884 kubelet[2430]: I0317 18:50:30.679824 2430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d2b28ce80c6d9651a3e69426cb91d406-kubeconfig\") pod \"kube-scheduler-ci-4230-1-0-9-a87a0d0143\" (UID: \"d2b28ce80c6d9651a3e69426cb91d406\") " pod="kube-system/kube-scheduler-ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:30.680130 kubelet[2430]: I0317 
18:50:30.679848 2430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4a65272e81093cab3285c823f6a4a8a3-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-1-0-9-a87a0d0143\" (UID: \"4a65272e81093cab3285c823f6a4a8a3\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:30.680130 kubelet[2430]: I0317 18:50:30.679871 2430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4a65272e81093cab3285c823f6a4a8a3-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-0-9-a87a0d0143\" (UID: \"4a65272e81093cab3285c823f6a4a8a3\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:30.680130 kubelet[2430]: I0317 18:50:30.679913 2430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4a65272e81093cab3285c823f6a4a8a3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-1-0-9-a87a0d0143\" (UID: \"4a65272e81093cab3285c823f6a4a8a3\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:30.779054 kubelet[2430]: I0317 18:50:30.778674 2430 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:30.779395 kubelet[2430]: E0317 18:50:30.779356 2430 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://138.201.89.219:6443/api/v1/nodes\": dial tcp 138.201.89.219:6443: connect: connection refused" node="ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:30.842187 containerd[1508]: time="2025-03-17T18:50:30.841629445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-0-9-a87a0d0143,Uid:bcc5fe08c881b0602985d6eb492a68ff,Namespace:kube-system,Attempt:0,}" Mar 17 18:50:30.860752 containerd[1508]: time="2025-03-17T18:50:30.860620541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-0-9-a87a0d0143,Uid:4a65272e81093cab3285c823f6a4a8a3,Namespace:kube-system,Attempt:0,}" Mar 17 18:50:30.869948 containerd[1508]: time="2025-03-17T18:50:30.869888717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-0-9-a87a0d0143,Uid:d2b28ce80c6d9651a3e69426cb91d406,Namespace:kube-system,Attempt:0,}" Mar 17 18:50:30.995665 kubelet[2430]: E0317 18:50:30.994728 2430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.201.89.219:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-0-9-a87a0d0143?timeout=10s\": dial tcp 138.201.89.219:6443: connect: connection refused" interval="800ms" Mar 17 18:50:31.182884 kubelet[2430]: I0317 18:50:31.182011 2430 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:31.182884 kubelet[2430]: E0317 18:50:31.182496 2430 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://138.201.89.219:6443/api/v1/nodes\": dial tcp 138.201.89.219:6443: connect: connection refused" node="ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:31.310431 kubelet[2430]: W0317 18:50:31.310255 2430 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.201.89.219:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.201.89.219:6443: connect: 
connection refused Mar 17 18:50:31.310431 kubelet[2430]: E0317 18:50:31.310332 2430 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://138.201.89.219:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 138.201.89.219:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:50:31.370597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2000247508.mount: Deactivated successfully. Mar 17 18:50:31.381378 containerd[1508]: time="2025-03-17T18:50:31.379894702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 18:50:31.382535 containerd[1508]: time="2025-03-17T18:50:31.382459003Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Mar 17 18:50:31.385623 containerd[1508]: time="2025-03-17T18:50:31.385569293Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 18:50:31.388906 containerd[1508]: time="2025-03-17T18:50:31.388845993Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 18:50:31.391809 containerd[1508]: time="2025-03-17T18:50:31.391750232Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 18:50:31.395658 containerd[1508]: time="2025-03-17T18:50:31.395603643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 18:50:31.397153 containerd[1508]: time="2025-03-17T18:50:31.397099925Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 555.343594ms" Mar 17 18:50:31.401248 containerd[1508]: time="2025-03-17T18:50:31.401183829Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 18:50:31.408722 containerd[1508]: time="2025-03-17T18:50:31.407417811Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 18:50:31.408722 containerd[1508]: time="2025-03-17T18:50:31.408433547Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 538.230413ms" Mar 17 18:50:31.433666 containerd[1508]: time="2025-03-17T18:50:31.433379074Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 572.601844ms" Mar 17 18:50:31.502135 kubelet[2430]: W0317 18:50:31.501912 2430 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.201.89.219:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.201.89.219:6443: connect: connection refused Mar 17 18:50:31.502135 kubelet[2430]: E0317 18:50:31.502035 2430 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://138.201.89.219:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 138.201.89.219:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:50:31.530948 containerd[1508]: time="2025-03-17T18:50:31.530581362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:50:31.530948 containerd[1508]: time="2025-03-17T18:50:31.530677407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:50:31.530948 containerd[1508]: time="2025-03-17T18:50:31.530695048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:50:31.532229 containerd[1508]: time="2025-03-17T18:50:31.532056403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:50:31.533623 containerd[1508]: time="2025-03-17T18:50:31.533496722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:50:31.533744 containerd[1508]: time="2025-03-17T18:50:31.533666491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:50:31.537356 containerd[1508]: time="2025-03-17T18:50:31.535013285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:50:31.537356 containerd[1508]: time="2025-03-17T18:50:31.537245848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:50:31.538949 containerd[1508]: time="2025-03-17T18:50:31.538698767Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:50:31.538949 containerd[1508]: time="2025-03-17T18:50:31.538793372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:50:31.538949 containerd[1508]: time="2025-03-17T18:50:31.538821934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:50:31.539917 containerd[1508]: time="2025-03-17T18:50:31.539840670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:50:31.561277 systemd[1]: Started cri-containerd-183dbc89ca29f019db65bbc36f1c3f296fdf5c2fd00209e04e4c65b127518807.scope - libcontainer container 183dbc89ca29f019db65bbc36f1c3f296fdf5c2fd00209e04e4c65b127518807. Mar 17 18:50:31.569441 systemd[1]: Started cri-containerd-40606f2e8fb5f01c649ded7e178cf9a758ee091fcc09c981122b2a617a7e34a7.scope - libcontainer container 40606f2e8fb5f01c649ded7e178cf9a758ee091fcc09c981122b2a617a7e34a7. Mar 17 18:50:31.578752 systemd[1]: Started cri-containerd-57cba6f8de956e6fee217705e9804cb2bc54ea7ef298cdfd25dfe9cfdee7dabe.scope - libcontainer container 57cba6f8de956e6fee217705e9804cb2bc54ea7ef298cdfd25dfe9cfdee7dabe. Mar 17 18:50:31.637732 containerd[1508]: time="2025-03-17T18:50:31.637340734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-0-9-a87a0d0143,Uid:d2b28ce80c6d9651a3e69426cb91d406,Namespace:kube-system,Attempt:0,} returns sandbox id \"40606f2e8fb5f01c649ded7e178cf9a758ee091fcc09c981122b2a617a7e34a7\"" Mar 17 18:50:31.650213 containerd[1508]: time="2025-03-17T18:50:31.649863141Z" level=info msg="CreateContainer within sandbox \"40606f2e8fb5f01c649ded7e178cf9a758ee091fcc09c981122b2a617a7e34a7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 18:50:31.651235 containerd[1508]: time="2025-03-17T18:50:31.651184253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-0-9-a87a0d0143,Uid:4a65272e81093cab3285c823f6a4a8a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"183dbc89ca29f019db65bbc36f1c3f296fdf5c2fd00209e04e4c65b127518807\"" Mar 17 18:50:31.658877 containerd[1508]: time="2025-03-17T18:50:31.658821192Z" level=info msg="CreateContainer within sandbox \"183dbc89ca29f019db65bbc36f1c3f296fdf5c2fd00209e04e4c65b127518807\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 18:50:31.662494 containerd[1508]: time="2025-03-17T18:50:31.662359866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-0-9-a87a0d0143,Uid:bcc5fe08c881b0602985d6eb492a68ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"57cba6f8de956e6fee217705e9804cb2bc54ea7ef298cdfd25dfe9cfdee7dabe\"" Mar 17 18:50:31.667795 containerd[1508]: time="2025-03-17T18:50:31.667674757Z" level=info msg="CreateContainer within sandbox \"57cba6f8de956e6fee217705e9804cb2bc54ea7ef298cdfd25dfe9cfdee7dabe\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 18:50:31.686457 containerd[1508]: time="2025-03-17T18:50:31.686295858Z" level=info msg="CreateContainer within sandbox \"40606f2e8fb5f01c649ded7e178cf9a758ee091fcc09c981122b2a617a7e34a7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c17089bd3d5e96c37fa06a155fab6752557e376fddf319334bb77556b9c7a63d\"" Mar 17 18:50:31.689697 containerd[1508]: time="2025-03-17T18:50:31.689539516Z" level=info msg="StartContainer for \"c17089bd3d5e96c37fa06a155fab6752557e376fddf319334bb77556b9c7a63d\"" Mar 17 18:50:31.699815 containerd[1508]: time="2025-03-17T18:50:31.699596827Z" level=info msg="CreateContainer within sandbox \"183dbc89ca29f019db65bbc36f1c3f296fdf5c2fd00209e04e4c65b127518807\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"15f13aef1ed317be70882d02b2c0990685ddc29a09b049bde0a1ae2f4d2e958f\"" Mar 17 18:50:31.700604 containerd[1508]: time="2025-03-17T18:50:31.700423792Z" level=info msg="StartContainer for 
\"15f13aef1ed317be70882d02b2c0990685ddc29a09b049bde0a1ae2f4d2e958f\"" Mar 17 18:50:31.702054 containerd[1508]: time="2025-03-17T18:50:31.701968997Z" level=info msg="CreateContainer within sandbox \"57cba6f8de956e6fee217705e9804cb2bc54ea7ef298cdfd25dfe9cfdee7dabe\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b1f80bb834e6a6fe6809ae8ed1727b2172f4a898779b0f030a51e2cbeb188f45\"" Mar 17 18:50:31.702722 containerd[1508]: time="2025-03-17T18:50:31.702687636Z" level=info msg="StartContainer for \"b1f80bb834e6a6fe6809ae8ed1727b2172f4a898779b0f030a51e2cbeb188f45\"" Mar 17 18:50:31.738484 systemd[1]: Started cri-containerd-c17089bd3d5e96c37fa06a155fab6752557e376fddf319334bb77556b9c7a63d.scope - libcontainer container c17089bd3d5e96c37fa06a155fab6752557e376fddf319334bb77556b9c7a63d. Mar 17 18:50:31.750205 systemd[1]: Started cri-containerd-b1f80bb834e6a6fe6809ae8ed1727b2172f4a898779b0f030a51e2cbeb188f45.scope - libcontainer container b1f80bb834e6a6fe6809ae8ed1727b2172f4a898779b0f030a51e2cbeb188f45. Mar 17 18:50:31.755277 systemd[1]: Started cri-containerd-15f13aef1ed317be70882d02b2c0990685ddc29a09b049bde0a1ae2f4d2e958f.scope - libcontainer container 15f13aef1ed317be70882d02b2c0990685ddc29a09b049bde0a1ae2f4d2e958f. Mar 17 18:50:31.795612 kubelet[2430]: E0317 18:50:31.795557 2430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.201.89.219:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-0-9-a87a0d0143?timeout=10s\": dial tcp 138.201.89.219:6443: connect: connection refused" interval="1.6s" Mar 17 18:50:31.842161 containerd[1508]: time="2025-03-17T18:50:31.842021554Z" level=info msg="StartContainer for \"b1f80bb834e6a6fe6809ae8ed1727b2172f4a898779b0f030a51e2cbeb188f45\" returns successfully" Mar 17 18:50:31.842609 containerd[1508]: time="2025-03-17T18:50:31.842122880Z" level=info msg="StartContainer for \"c17089bd3d5e96c37fa06a155fab6752557e376fddf319334bb77556b9c7a63d\" returns successfully" Mar 17 18:50:31.852448 containerd[1508]: time="2025-03-17T18:50:31.852271116Z" level=info msg="StartContainer for \"15f13aef1ed317be70882d02b2c0990685ddc29a09b049bde0a1ae2f4d2e958f\" returns successfully" Mar 17 18:50:31.869392 kubelet[2430]: W0317 18:50:31.869317 2430 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.201.89.219:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 138.201.89.219:6443: connect: connection refused Mar 17 18:50:31.869541 kubelet[2430]: E0317 18:50:31.869401 2430 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://138.201.89.219:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 138.201.89.219:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:50:31.931005 kubelet[2430]: W0317 18:50:31.930822 2430 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.201.89.219:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-0-9-a87a0d0143&limit=500&resourceVersion=0": dial tcp 138.201.89.219:6443: connect: connection refused Mar 17 18:50:31.931005 kubelet[2430]: E0317 18:50:31.930935 2430 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://138.201.89.219:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-0-9-a87a0d0143&limit=500&resourceVersion=0\": dial tcp 138.201.89.219:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:50:31.988065 kubelet[2430]: I0317 18:50:31.987503 2430 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:35.044514 kubelet[2430]: E0317 18:50:35.044466 2430 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-1-0-9-a87a0d0143\" not found" node="ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:35.114808 kubelet[2430]: I0317 18:50:35.114534 2430 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:35.364556 kubelet[2430]: I0317 18:50:35.364399 2430 apiserver.go:52] "Watching apiserver" Mar 17 18:50:35.379129 kubelet[2430]: I0317 18:50:35.379076 2430 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 17 18:50:37.611069 systemd[1]: Reload requested from client PID 2705 ('systemctl') (unit session-7.scope)... Mar 17 18:50:37.611091 systemd[1]: Reloading... Mar 17 18:50:37.769068 zram_generator::config[2756]: No configuration found. Mar 17 18:50:37.874946 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:50:37.991549 systemd[1]: Reloading finished in 380 ms. Mar 17 18:50:38.013298 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:50:38.034606 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:50:38.034937 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:50:38.035024 systemd[1]: kubelet.service: Consumed 1.731s CPU time, 115.1M memory peak. Mar 17 18:50:38.040541 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:50:38.187102 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:50:38.199447 (kubelet)[2795]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 18:50:38.260294 kubelet[2795]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:50:38.260711 kubelet[2795]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:50:38.260757 kubelet[2795]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 18:50:38.260926 kubelet[2795]: I0317 18:50:38.260883 2795 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:50:38.279527 kubelet[2795]: I0317 18:50:38.279476 2795 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 17 18:50:38.279841 kubelet[2795]: I0317 18:50:38.279818 2795 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:50:38.280840 kubelet[2795]: I0317 18:50:38.280524 2795 server.go:929] "Client rotation is on, will bootstrap in background" Mar 17 18:50:38.283961 kubelet[2795]: I0317 18:50:38.283457 2795 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 18:50:38.287127 kubelet[2795]: I0317 18:50:38.287094 2795 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:50:38.292628 kubelet[2795]: E0317 18:50:38.292592 2795 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 18:50:38.292822 kubelet[2795]: I0317 18:50:38.292807 2795 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 18:50:38.295580 kubelet[2795]: I0317 18:50:38.295553 2795 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 18:50:38.295921 kubelet[2795]: I0317 18:50:38.295902 2795 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 17 18:50:38.296206 kubelet[2795]: I0317 18:50:38.296169 2795 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:50:38.296537 kubelet[2795]: I0317 18:50:38.296292 2795 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4230-1-0-9-a87a0d0143","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 18:50:38.296708 kubelet[2795]: I0317 18:50:38.296692 2795 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 18:50:38.296787 kubelet[2795]: I0317 18:50:38.296776 2795 container_manager_linux.go:300] "Creating device plugin manager" Mar 17 18:50:38.296872 kubelet[2795]: I0317 18:50:38.296863 2795 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:50:38.297103 kubelet[2795]: I0317 18:50:38.297087 2795 kubelet.go:408] "Attempting to sync node with API server" Mar 17 18:50:38.297308 kubelet[2795]: I0317 18:50:38.297184 2795 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:50:38.297308 kubelet[2795]: I0317 18:50:38.297215 2795 kubelet.go:314] "Adding apiserver pod source" Mar 17 18:50:38.297308 kubelet[2795]: I0317 18:50:38.297226 2795 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:50:38.305057 kubelet[2795]: I0317 18:50:38.304178 2795 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 18:50:38.305057 kubelet[2795]: I0317 18:50:38.304805 2795 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:50:38.306434 kubelet[2795]: I0317 18:50:38.306392 2795 server.go:1269] "Started kubelet" Mar 17 18:50:38.310239 kubelet[2795]: I0317 18:50:38.310212 2795 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:50:38.312851 kubelet[2795]: I0317 18:50:38.312805 2795 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 18:50:38.314875 kubelet[2795]: I0317 18:50:38.314848 2795 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 17 18:50:38.315287 kubelet[2795]: E0317 18:50:38.315249 2795 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-1-0-9-a87a0d0143\" not found" Mar 17 
18:50:38.316100 kubelet[2795]: I0317 18:50:38.316076 2795 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 17 18:50:38.316381 kubelet[2795]: I0317 18:50:38.316367 2795 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:50:38.326178 kubelet[2795]: I0317 18:50:38.326114 2795 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:50:38.326598 kubelet[2795]: I0317 18:50:38.326577 2795 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:50:38.328272 kubelet[2795]: I0317 18:50:38.328083 2795 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:50:38.331782 kubelet[2795]: I0317 18:50:38.331644 2795 server.go:460] "Adding debug handlers to kubelet server" Mar 17 18:50:38.335324 kubelet[2795]: I0317 18:50:38.335034 2795 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:50:38.335324 kubelet[2795]: I0317 18:50:38.335177 2795 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:50:38.341346 kubelet[2795]: I0317 18:50:38.341304 2795 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:50:38.342945 kubelet[2795]: I0317 18:50:38.342742 2795 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 18:50:38.342945 kubelet[2795]: I0317 18:50:38.342773 2795 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:50:38.342945 kubelet[2795]: I0317 18:50:38.342794 2795 kubelet.go:2321] "Starting kubelet main sync loop" Mar 17 18:50:38.342945 kubelet[2795]: E0317 18:50:38.342839 2795 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:50:38.344031 kubelet[2795]: I0317 18:50:38.341475 2795 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:50:38.372731 kubelet[2795]: E0317 18:50:38.372657 2795 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:50:38.439789 kubelet[2795]: I0317 18:50:38.438355 2795 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:50:38.439789 kubelet[2795]: I0317 18:50:38.438378 2795 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:50:38.439789 kubelet[2795]: I0317 18:50:38.438456 2795 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:50:38.439789 kubelet[2795]: I0317 18:50:38.438640 2795 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 18:50:38.439789 kubelet[2795]: I0317 18:50:38.438654 2795 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 18:50:38.439789 kubelet[2795]: I0317 18:50:38.438673 2795 policy_none.go:49] "None policy: Start" Mar 17 18:50:38.441222 kubelet[2795]: I0317 18:50:38.441181 2795 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:50:38.441222 kubelet[2795]: I0317 18:50:38.441225 2795 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:50:38.441587 kubelet[2795]: I0317 18:50:38.441563 2795 state_mem.go:75] "Updated machine memory state" Mar 17 18:50:38.443990 kubelet[2795]: E0317 18:50:38.443946 2795 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 18:50:38.451166 kubelet[2795]: I0317 18:50:38.451128 2795 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:50:38.451368 kubelet[2795]: I0317 18:50:38.451350 2795 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 18:50:38.451428 kubelet[2795]: I0317 18:50:38.451370 2795 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:50:38.452046 kubelet[2795]: I0317 18:50:38.452026 2795 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:50:38.572050 kubelet[2795]: I0317 18:50:38.570691 2795 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:38.583528 kubelet[2795]: I0317 18:50:38.583480 2795 kubelet_node_status.go:111] "Node was previously registered" node="ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:38.583737 kubelet[2795]: I0317 18:50:38.583589 2795 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:38.611278 sudo[2828]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 18:50:38.612120 sudo[2828]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 17 18:50:38.664173 kubelet[2795]: E0317 18:50:38.664116 2795 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230-1-0-9-a87a0d0143\" already exists" pod="kube-system/kube-apiserver-ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:38.719712 kubelet[2795]: I0317 18:50:38.719662 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4a65272e81093cab3285c823f6a4a8a3-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-1-0-9-a87a0d0143\" (UID: \"4a65272e81093cab3285c823f6a4a8a3\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:38.721113 kubelet[2795]: I0317 18:50:38.720069 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/4a65272e81093cab3285c823f6a4a8a3-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-0-9-a87a0d0143\" (UID: \"4a65272e81093cab3285c823f6a4a8a3\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:38.721113 kubelet[2795]: I0317 18:50:38.720104 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4a65272e81093cab3285c823f6a4a8a3-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-0-9-a87a0d0143\" (UID: \"4a65272e81093cab3285c823f6a4a8a3\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:38.721113 kubelet[2795]: I0317 18:50:38.720126 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d2b28ce80c6d9651a3e69426cb91d406-kubeconfig\") pod \"kube-scheduler-ci-4230-1-0-9-a87a0d0143\" (UID: \"d2b28ce80c6d9651a3e69426cb91d406\") " pod="kube-system/kube-scheduler-ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:38.721113 kubelet[2795]: I0317 18:50:38.720144 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bcc5fe08c881b0602985d6eb492a68ff-ca-certs\") pod \"kube-apiserver-ci-4230-1-0-9-a87a0d0143\" (UID: \"bcc5fe08c881b0602985d6eb492a68ff\") " pod="kube-system/kube-apiserver-ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:38.721113 kubelet[2795]: I0317 18:50:38.720174 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bcc5fe08c881b0602985d6eb492a68ff-k8s-certs\") pod \"kube-apiserver-ci-4230-1-0-9-a87a0d0143\" (UID: \"bcc5fe08c881b0602985d6eb492a68ff\") " pod="kube-system/kube-apiserver-ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:38.721265 kubelet[2795]: I0317 18:50:38.720210 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bcc5fe08c881b0602985d6eb492a68ff-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-0-9-a87a0d0143\" (UID: \"bcc5fe08c881b0602985d6eb492a68ff\") " pod="kube-system/kube-apiserver-ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:38.721564 kubelet[2795]: I0317 18:50:38.721359 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a65272e81093cab3285c823f6a4a8a3-ca-certs\") pod \"kube-controller-manager-ci-4230-1-0-9-a87a0d0143\" (UID: \"4a65272e81093cab3285c823f6a4a8a3\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:38.721564 kubelet[2795]: I0317 18:50:38.721490 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4a65272e81093cab3285c823f6a4a8a3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-1-0-9-a87a0d0143\" (UID: \"4a65272e81093cab3285c823f6a4a8a3\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:39.127216 sudo[2828]: pam_unix(sudo:session): session closed for user root Mar 17 18:50:39.306137 kubelet[2795]: I0317 18:50:39.304199 2795 apiserver.go:52] "Watching apiserver" Mar 17 18:50:39.316676 kubelet[2795]: I0317 18:50:39.316596 2795 desired_state_of_world_populator.go:154] "Finished populating 
initial desired state of world" Mar 17 18:50:39.439378 kubelet[2795]: E0317 18:50:39.439247 2795 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230-1-0-9-a87a0d0143\" already exists" pod="kube-system/kube-apiserver-ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:39.444782 kubelet[2795]: E0317 18:50:39.444739 2795 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230-1-0-9-a87a0d0143\" already exists" pod="kube-system/kube-controller-manager-ci-4230-1-0-9-a87a0d0143" Mar 17 18:50:39.479805 kubelet[2795]: I0317 18:50:39.479713 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-1-0-9-a87a0d0143" podStartSLOduration=2.479690046 podStartE2EDuration="2.479690046s" podCreationTimestamp="2025-03-17 18:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:50:39.462183288 +0000 UTC m=+1.256656487" watchObservedRunningTime="2025-03-17 18:50:39.479690046 +0000 UTC m=+1.274163246" Mar 17 18:50:39.493906 kubelet[2795]: I0317 18:50:39.493835 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-1-0-9-a87a0d0143" podStartSLOduration=1.493815549 podStartE2EDuration="1.493815549s" podCreationTimestamp="2025-03-17 18:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:50:39.480675669 +0000 UTC m=+1.275148868" watchObservedRunningTime="2025-03-17 18:50:39.493815549 +0000 UTC m=+1.288288708" Mar 17 18:50:39.494133 kubelet[2795]: I0317 18:50:39.494003 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-1-0-9-a87a0d0143" podStartSLOduration=1.493998441 podStartE2EDuration="1.493998441s" podCreationTimestamp="2025-03-17 18:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:50:39.492871089 +0000 UTC m=+1.287344288" watchObservedRunningTime="2025-03-17 18:50:39.493998441 +0000 UTC m=+1.288471640" Mar 17 18:50:41.162677 sudo[1901]: pam_unix(sudo:session): session closed for user root Mar 17 18:50:41.321602 sshd[1900]: Connection closed by 139.178.89.65 port 48916 Mar 17 18:50:41.323288 sshd-session[1898]: pam_unix(sshd:session): session closed for user core Mar 17 18:50:41.329919 systemd[1]: sshd@6-138.201.89.219:22-139.178.89.65:48916.service: Deactivated successfully. Mar 17 18:50:41.335809 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 18:50:41.336119 systemd[1]: session-7.scope: Consumed 8.102s CPU time, 258.8M memory peak. Mar 17 18:50:41.338671 systemd-logind[1484]: Session 7 logged out. Waiting for processes to exit. Mar 17 18:50:41.340818 systemd-logind[1484]: Removed session 7. Mar 17 18:50:41.765364 kubelet[2795]: I0317 18:50:41.765322 2795 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 18:50:41.767140 containerd[1508]: time="2025-03-17T18:50:41.765759169Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 17 18:50:41.770342 kubelet[2795]: I0317 18:50:41.767676 2795 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 18:50:41.787082 systemd[1]: Created slice kubepods-besteffort-podf9d37212_48da_4dfd_a09c_f71d3e0721c3.slice - libcontainer container kubepods-besteffort-podf9d37212_48da_4dfd_a09c_f71d3e0721c3.slice. Mar 17 18:50:41.807247 systemd[1]: Created slice kubepods-burstable-pod21bb9504_3da2_48f5_b8f5_92d2af0a3644.slice - libcontainer container kubepods-burstable-pod21bb9504_3da2_48f5_b8f5_92d2af0a3644.slice. Mar 17 18:50:41.849350 kubelet[2795]: I0317 18:50:41.849301 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/21bb9504-3da2-48f5-b8f5-92d2af0a3644-clustermesh-secrets\") pod \"cilium-kzjkv\" (UID: \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\") " pod="kube-system/cilium-kzjkv" Mar 17 18:50:41.849350 kubelet[2795]: I0317 18:50:41.849354 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9d37212-48da-4dfd-a09c-f71d3e0721c3-xtables-lock\") pod \"kube-proxy-js8z5\" (UID: \"f9d37212-48da-4dfd-a09c-f71d3e0721c3\") " pod="kube-system/kube-proxy-js8z5" Mar 17 18:50:41.849540 kubelet[2795]: I0317 18:50:41.849383 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f9d37212-48da-4dfd-a09c-f71d3e0721c3-kube-proxy\") pod \"kube-proxy-js8z5\" (UID: \"f9d37212-48da-4dfd-a09c-f71d3e0721c3\") " pod="kube-system/kube-proxy-js8z5" Mar 17 18:50:41.849540 kubelet[2795]: I0317 18:50:41.849451 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9d37212-48da-4dfd-a09c-f71d3e0721c3-lib-modules\") pod \"kube-proxy-js8z5\" (UID: \"f9d37212-48da-4dfd-a09c-f71d3e0721c3\") " pod="kube-system/kube-proxy-js8z5" Mar 17 18:50:41.849540 kubelet[2795]: I0317 18:50:41.849479 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg8cm\" (UniqueName: \"kubernetes.io/projected/f9d37212-48da-4dfd-a09c-f71d3e0721c3-kube-api-access-qg8cm\") pod \"kube-proxy-js8z5\" (UID: \"f9d37212-48da-4dfd-a09c-f71d3e0721c3\") " pod="kube-system/kube-proxy-js8z5" Mar 17 18:50:41.849540 kubelet[2795]: I0317 18:50:41.849502 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-hostproc\") pod \"cilium-kzjkv\" (UID: \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\") " pod="kube-system/cilium-kzjkv" Mar 17 18:50:41.849540 kubelet[2795]: I0317 18:50:41.849522 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkcls\" (UniqueName: \"kubernetes.io/projected/21bb9504-3da2-48f5-b8f5-92d2af0a3644-kube-api-access-xkcls\") pod \"cilium-kzjkv\" (UID: \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\") " pod="kube-system/cilium-kzjkv" Mar 17 18:50:41.849713 kubelet[2795]: I0317 18:50:41.849543 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-lib-modules\") pod \"cilium-kzjkv\" (UID: 
\"21bb9504-3da2-48f5-b8f5-92d2af0a3644\") " pod="kube-system/cilium-kzjkv" Mar 17 18:50:41.849713 kubelet[2795]: I0317 18:50:41.849567 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-cilium-run\") pod \"cilium-kzjkv\" (UID: \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\") " pod="kube-system/cilium-kzjkv" Mar 17 18:50:41.849713 kubelet[2795]: I0317 18:50:41.849623 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/21bb9504-3da2-48f5-b8f5-92d2af0a3644-cilium-config-path\") pod \"cilium-kzjkv\" (UID: \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\") " pod="kube-system/cilium-kzjkv" Mar 17 18:50:41.849713 kubelet[2795]: I0317 18:50:41.849644 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-host-proc-sys-kernel\") pod \"cilium-kzjkv\" (UID: \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\") " pod="kube-system/cilium-kzjkv" Mar 17 18:50:41.849801 kubelet[2795]: I0317 18:50:41.849716 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-cilium-cgroup\") pod \"cilium-kzjkv\" (UID: \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\") " pod="kube-system/cilium-kzjkv" Mar 17 18:50:41.849801 kubelet[2795]: I0317 18:50:41.849777 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/21bb9504-3da2-48f5-b8f5-92d2af0a3644-hubble-tls\") pod \"cilium-kzjkv\" (UID: \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\") " pod="kube-system/cilium-kzjkv" Mar 17 18:50:41.849859 kubelet[2795]: I0317 18:50:41.849800 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-bpf-maps\") pod \"cilium-kzjkv\" (UID: \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\") " pod="kube-system/cilium-kzjkv" Mar 17 18:50:41.849859 kubelet[2795]: I0317 18:50:41.849822 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-cni-path\") pod \"cilium-kzjkv\" (UID: \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\") " pod="kube-system/cilium-kzjkv" Mar 17 18:50:41.849859 kubelet[2795]: I0317 18:50:41.849841 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-etc-cni-netd\") pod \"cilium-kzjkv\" (UID: \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\") " pod="kube-system/cilium-kzjkv" Mar 17 18:50:41.849922 kubelet[2795]: I0317 18:50:41.849862 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-xtables-lock\") pod \"cilium-kzjkv\" (UID: \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\") " pod="kube-system/cilium-kzjkv" Mar 17 18:50:41.849922 kubelet[2795]: I0317 18:50:41.849886 2795 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-host-proc-sys-net\") pod \"cilium-kzjkv\" (UID: \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\") " pod="kube-system/cilium-kzjkv" Mar 17 18:50:41.974023 kubelet[2795]: E0317 18:50:41.973209 2795 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 17 18:50:41.974023 kubelet[2795]: E0317 18:50:41.973245 2795 projected.go:194] Error preparing data for projected volume kube-api-access-qg8cm for pod kube-system/kube-proxy-js8z5: configmap "kube-root-ca.crt" not found Mar 17 18:50:41.974023 kubelet[2795]: E0317 18:50:41.973324 2795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9d37212-48da-4dfd-a09c-f71d3e0721c3-kube-api-access-qg8cm podName:f9d37212-48da-4dfd-a09c-f71d3e0721c3 nodeName:}" failed. No retries permitted until 2025-03-17 18:50:42.473300151 +0000 UTC m=+4.267773350 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qg8cm" (UniqueName: "kubernetes.io/projected/f9d37212-48da-4dfd-a09c-f71d3e0721c3-kube-api-access-qg8cm") pod "kube-proxy-js8z5" (UID: "f9d37212-48da-4dfd-a09c-f71d3e0721c3") : configmap "kube-root-ca.crt" not found Mar 17 18:50:41.985324 kubelet[2795]: E0317 18:50:41.985270 2795 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 17 18:50:41.985324 kubelet[2795]: E0317 18:50:41.985304 2795 projected.go:194] Error preparing data for projected volume kube-api-access-xkcls for pod kube-system/cilium-kzjkv: configmap "kube-root-ca.crt" not found Mar 17 18:50:41.985735 kubelet[2795]: E0317 18:50:41.985565 2795 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21bb9504-3da2-48f5-b8f5-92d2af0a3644-kube-api-access-xkcls podName:21bb9504-3da2-48f5-b8f5-92d2af0a3644 nodeName:}" failed. No retries permitted until 2025-03-17 18:50:42.485540197 +0000 UTC m=+4.280013356 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xkcls" (UniqueName: "kubernetes.io/projected/21bb9504-3da2-48f5-b8f5-92d2af0a3644-kube-api-access-xkcls") pod "cilium-kzjkv" (UID: "21bb9504-3da2-48f5-b8f5-92d2af0a3644") : configmap "kube-root-ca.crt" not found Mar 17 18:50:42.701960 containerd[1508]: time="2025-03-17T18:50:42.701360757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-js8z5,Uid:f9d37212-48da-4dfd-a09c-f71d3e0721c3,Namespace:kube-system,Attempt:0,}" Mar 17 18:50:42.713039 containerd[1508]: time="2025-03-17T18:50:42.712939289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kzjkv,Uid:21bb9504-3da2-48f5-b8f5-92d2af0a3644,Namespace:kube-system,Attempt:0,}" Mar 17 18:50:42.738338 containerd[1508]: time="2025-03-17T18:50:42.738126210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:50:42.738692 containerd[1508]: time="2025-03-17T18:50:42.738313543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:50:42.738961 containerd[1508]: time="2025-03-17T18:50:42.738837578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:50:42.739206 containerd[1508]: time="2025-03-17T18:50:42.739118597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:50:42.756850 containerd[1508]: time="2025-03-17T18:50:42.755176068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:50:42.756850 containerd[1508]: time="2025-03-17T18:50:42.755268795Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:50:42.756850 containerd[1508]: time="2025-03-17T18:50:42.755285836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:50:42.756850 containerd[1508]: time="2025-03-17T18:50:42.755491450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:50:42.768263 systemd[1]: Started cri-containerd-1e9d07bcd593ffefb7f7b08c1b2fd3f1a60d4078f2cac626860bc95ce6bae07b.scope - libcontainer container 1e9d07bcd593ffefb7f7b08c1b2fd3f1a60d4078f2cac626860bc95ce6bae07b. Mar 17 18:50:42.794381 systemd[1]: Started cri-containerd-e1025936796cf388af751a6a0bedf8b6c7ee26d48f578b0d6f6f35f09a73c47c.scope - libcontainer container e1025936796cf388af751a6a0bedf8b6c7ee26d48f578b0d6f6f35f09a73c47c. Mar 17 18:50:42.855161 containerd[1508]: time="2025-03-17T18:50:42.855027373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-js8z5,Uid:f9d37212-48da-4dfd-a09c-f71d3e0721c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e9d07bcd593ffefb7f7b08c1b2fd3f1a60d4078f2cac626860bc95ce6bae07b\"" Mar 17 18:50:42.857862 containerd[1508]: time="2025-03-17T18:50:42.857806879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kzjkv,Uid:21bb9504-3da2-48f5-b8f5-92d2af0a3644,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1025936796cf388af751a6a0bedf8b6c7ee26d48f578b0d6f6f35f09a73c47c\"" Mar 17 18:50:42.869414 containerd[1508]: time="2025-03-17T18:50:42.869358170Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 18:50:42.881607 containerd[1508]: time="2025-03-17T18:50:42.881072792Z" level=info msg="CreateContainer within sandbox \"1e9d07bcd593ffefb7f7b08c1b2fd3f1a60d4078f2cac626860bc95ce6bae07b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 18:50:42.919693 systemd[1]: Created slice kubepods-besteffort-podad251a9a_5ee3_4488_a273_a2d788bdf63e.slice - libcontainer container kubepods-besteffort-podad251a9a_5ee3_4488_a273_a2d788bdf63e.slice. 
Mar 17 18:50:42.960497 containerd[1508]: time="2025-03-17T18:50:42.955367190Z" level=info msg="CreateContainer within sandbox \"1e9d07bcd593ffefb7f7b08c1b2fd3f1a60d4078f2cac626860bc95ce6bae07b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f5c43cfa40c193ef2b63c5a2bea020ebd0f6832b11a15485aa856b5ac2b49206\"" Mar 17 18:50:42.960732 kubelet[2795]: I0317 18:50:42.960263 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdsnv\" (UniqueName: \"kubernetes.io/projected/ad251a9a-5ee3-4488-a273-a2d788bdf63e-kube-api-access-wdsnv\") pod \"cilium-operator-5d85765b45-rjlx7\" (UID: \"ad251a9a-5ee3-4488-a273-a2d788bdf63e\") " pod="kube-system/cilium-operator-5d85765b45-rjlx7" Mar 17 18:50:42.960732 kubelet[2795]: I0317 18:50:42.960314 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad251a9a-5ee3-4488-a273-a2d788bdf63e-cilium-config-path\") pod \"cilium-operator-5d85765b45-rjlx7\" (UID: \"ad251a9a-5ee3-4488-a273-a2d788bdf63e\") " pod="kube-system/cilium-operator-5d85765b45-rjlx7" Mar 17 18:50:42.962797 containerd[1508]: time="2025-03-17T18:50:42.960754070Z" level=info msg="StartContainer for \"f5c43cfa40c193ef2b63c5a2bea020ebd0f6832b11a15485aa856b5ac2b49206\"" Mar 17 18:50:43.009294 systemd[1]: Started cri-containerd-f5c43cfa40c193ef2b63c5a2bea020ebd0f6832b11a15485aa856b5ac2b49206.scope - libcontainer container f5c43cfa40c193ef2b63c5a2bea020ebd0f6832b11a15485aa856b5ac2b49206. Mar 17 18:50:43.049171 containerd[1508]: time="2025-03-17T18:50:43.048467047Z" level=info msg="StartContainer for \"f5c43cfa40c193ef2b63c5a2bea020ebd0f6832b11a15485aa856b5ac2b49206\" returns successfully" Mar 17 18:50:43.231526 containerd[1508]: time="2025-03-17T18:50:43.230966671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-rjlx7,Uid:ad251a9a-5ee3-4488-a273-a2d788bdf63e,Namespace:kube-system,Attempt:0,}" Mar 17 18:50:43.274485 containerd[1508]: time="2025-03-17T18:50:43.273508868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:50:43.274485 containerd[1508]: time="2025-03-17T18:50:43.274266319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:50:43.274485 containerd[1508]: time="2025-03-17T18:50:43.274280160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:50:43.274485 containerd[1508]: time="2025-03-17T18:50:43.274372046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:50:43.303350 systemd[1]: Started cri-containerd-978f95b0de03dfa28312762b078b3db56837bb21a7f4aa35e966998358dd8357.scope - libcontainer container 978f95b0de03dfa28312762b078b3db56837bb21a7f4aa35e966998358dd8357. 
Mar 17 18:50:43.342777 containerd[1508]: time="2025-03-17T18:50:43.342569099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-rjlx7,Uid:ad251a9a-5ee3-4488-a273-a2d788bdf63e,Namespace:kube-system,Attempt:0,} returns sandbox id \"978f95b0de03dfa28312762b078b3db56837bb21a7f4aa35e966998358dd8357\"" Mar 17 18:50:44.062548 kubelet[2795]: I0317 18:50:44.061725 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-js8z5" podStartSLOduration=3.061689391 podStartE2EDuration="3.061689391s" podCreationTimestamp="2025-03-17 18:50:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:50:43.443427521 +0000 UTC m=+5.237900720" watchObservedRunningTime="2025-03-17 18:50:44.061689391 +0000 UTC m=+5.856162590" Mar 17 18:50:47.094189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1920508715.mount: Deactivated successfully. Mar 17 18:50:48.548699 containerd[1508]: time="2025-03-17T18:50:48.548642142Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:50:48.550660 containerd[1508]: time="2025-03-17T18:50:48.550609603Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Mar 17 18:50:48.551858 containerd[1508]: time="2025-03-17T18:50:48.551788567Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:50:48.555842 containerd[1508]: time="2025-03-17T18:50:48.555504914Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.6813807s" Mar 17 18:50:48.555842 containerd[1508]: time="2025-03-17T18:50:48.555571958Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 17 18:50:48.561390 containerd[1508]: time="2025-03-17T18:50:48.561064512Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 18:50:48.562056 containerd[1508]: time="2025-03-17T18:50:48.561909453Z" level=info msg="CreateContainer within sandbox \"e1025936796cf388af751a6a0bedf8b6c7ee26d48f578b0d6f6f35f09a73c47c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:50:48.582368 containerd[1508]: time="2025-03-17T18:50:48.582314636Z" level=info msg="CreateContainer within sandbox \"e1025936796cf388af751a6a0bedf8b6c7ee26d48f578b0d6f6f35f09a73c47c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"da6e4178d3242b54e38bd31589e91eea2d9c910bfa2ea4a9c6618a40383b3243\"" Mar 17 18:50:48.584728 containerd[1508]: time="2025-03-17T18:50:48.583166777Z" level=info msg="StartContainer for 
\"da6e4178d3242b54e38bd31589e91eea2d9c910bfa2ea4a9c6618a40383b3243\"" Mar 17 18:50:48.627409 systemd[1]: Started cri-containerd-da6e4178d3242b54e38bd31589e91eea2d9c910bfa2ea4a9c6618a40383b3243.scope - libcontainer container da6e4178d3242b54e38bd31589e91eea2d9c910bfa2ea4a9c6618a40383b3243. Mar 17 18:50:48.661036 containerd[1508]: time="2025-03-17T18:50:48.660234662Z" level=info msg="StartContainer for \"da6e4178d3242b54e38bd31589e91eea2d9c910bfa2ea4a9c6618a40383b3243\" returns successfully" Mar 17 18:50:48.677867 systemd[1]: cri-containerd-da6e4178d3242b54e38bd31589e91eea2d9c910bfa2ea4a9c6618a40383b3243.scope: Deactivated successfully. Mar 17 18:50:48.875183 containerd[1508]: time="2025-03-17T18:50:48.874735241Z" level=info msg="shim disconnected" id=da6e4178d3242b54e38bd31589e91eea2d9c910bfa2ea4a9c6618a40383b3243 namespace=k8s.io Mar 17 18:50:48.875183 containerd[1508]: time="2025-03-17T18:50:48.874795085Z" level=warning msg="cleaning up after shim disconnected" id=da6e4178d3242b54e38bd31589e91eea2d9c910bfa2ea4a9c6618a40383b3243 namespace=k8s.io Mar 17 18:50:48.875183 containerd[1508]: time="2025-03-17T18:50:48.874804566Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:50:49.459199 containerd[1508]: time="2025-03-17T18:50:49.459149795Z" level=info msg="CreateContainer within sandbox \"e1025936796cf388af751a6a0bedf8b6c7ee26d48f578b0d6f6f35f09a73c47c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:50:49.480401 containerd[1508]: time="2025-03-17T18:50:49.480243443Z" level=info msg="CreateContainer within sandbox \"e1025936796cf388af751a6a0bedf8b6c7ee26d48f578b0d6f6f35f09a73c47c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8147e11d40ffe745ae1430b95e1971b8a21363fecdccd3181faa3738f770f4b8\"" Mar 17 18:50:49.484064 containerd[1508]: time="2025-03-17T18:50:49.483954352Z" level=info msg="StartContainer for \"8147e11d40ffe745ae1430b95e1971b8a21363fecdccd3181faa3738f770f4b8\"" Mar 17 18:50:49.512867 systemd[1]: Started cri-containerd-8147e11d40ffe745ae1430b95e1971b8a21363fecdccd3181faa3738f770f4b8.scope - libcontainer container 8147e11d40ffe745ae1430b95e1971b8a21363fecdccd3181faa3738f770f4b8. Mar 17 18:50:49.548570 containerd[1508]: time="2025-03-17T18:50:49.548437942Z" level=info msg="StartContainer for \"8147e11d40ffe745ae1430b95e1971b8a21363fecdccd3181faa3738f770f4b8\" returns successfully" Mar 17 18:50:49.561178 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:50:49.562220 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 18:50:49.562472 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 17 18:50:49.569519 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 18:50:49.569742 systemd[1]: cri-containerd-8147e11d40ffe745ae1430b95e1971b8a21363fecdccd3181faa3738f770f4b8.scope: Deactivated successfully. Mar 17 18:50:49.577598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da6e4178d3242b54e38bd31589e91eea2d9c910bfa2ea4a9c6618a40383b3243-rootfs.mount: Deactivated successfully. Mar 17 18:50:49.577726 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 18:50:49.596621 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 18:50:49.605116 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8147e11d40ffe745ae1430b95e1971b8a21363fecdccd3181faa3738f770f4b8-rootfs.mount: Deactivated successfully. 
Mar 17 18:50:49.611554 containerd[1508]: time="2025-03-17T18:50:49.611474748Z" level=info msg="shim disconnected" id=8147e11d40ffe745ae1430b95e1971b8a21363fecdccd3181faa3738f770f4b8 namespace=k8s.io Mar 17 18:50:49.611554 containerd[1508]: time="2025-03-17T18:50:49.611540433Z" level=warning msg="cleaning up after shim disconnected" id=8147e11d40ffe745ae1430b95e1971b8a21363fecdccd3181faa3738f770f4b8 namespace=k8s.io Mar 17 18:50:49.611554 containerd[1508]: time="2025-03-17T18:50:49.611549674Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:50:50.428625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2495325048.mount: Deactivated successfully. Mar 17 18:50:50.463156 containerd[1508]: time="2025-03-17T18:50:50.463107203Z" level=info msg="CreateContainer within sandbox \"e1025936796cf388af751a6a0bedf8b6c7ee26d48f578b0d6f6f35f09a73c47c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:50:50.484600 containerd[1508]: time="2025-03-17T18:50:50.483703670Z" level=info msg="CreateContainer within sandbox \"e1025936796cf388af751a6a0bedf8b6c7ee26d48f578b0d6f6f35f09a73c47c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3d86e6e9583b3110784b5f5d50f6e95ec1a8e9cbe8c2a930d7d019d040c3b417\"" Mar 17 18:50:50.484912 containerd[1508]: time="2025-03-17T18:50:50.484868155Z" level=info msg="StartContainer for \"3d86e6e9583b3110784b5f5d50f6e95ec1a8e9cbe8c2a930d7d019d040c3b417\"" Mar 17 18:50:50.513278 systemd[1]: Started cri-containerd-3d86e6e9583b3110784b5f5d50f6e95ec1a8e9cbe8c2a930d7d019d040c3b417.scope - libcontainer container 3d86e6e9583b3110784b5f5d50f6e95ec1a8e9cbe8c2a930d7d019d040c3b417. Mar 17 18:50:50.560101 containerd[1508]: time="2025-03-17T18:50:50.559059742Z" level=info msg="StartContainer for \"3d86e6e9583b3110784b5f5d50f6e95ec1a8e9cbe8c2a930d7d019d040c3b417\" returns successfully" Mar 17 18:50:50.568166 systemd[1]: cri-containerd-3d86e6e9583b3110784b5f5d50f6e95ec1a8e9cbe8c2a930d7d019d040c3b417.scope: Deactivated successfully. Mar 17 18:50:50.600453 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d86e6e9583b3110784b5f5d50f6e95ec1a8e9cbe8c2a930d7d019d040c3b417-rootfs.mount: Deactivated successfully. 
Mar 17 18:50:50.611172 containerd[1508]: time="2025-03-17T18:50:50.611083787Z" level=info msg="shim disconnected" id=3d86e6e9583b3110784b5f5d50f6e95ec1a8e9cbe8c2a930d7d019d040c3b417 namespace=k8s.io Mar 17 18:50:50.611542 containerd[1508]: time="2025-03-17T18:50:50.611503098Z" level=warning msg="cleaning up after shim disconnected" id=3d86e6e9583b3110784b5f5d50f6e95ec1a8e9cbe8c2a930d7d019d040c3b417 namespace=k8s.io Mar 17 18:50:50.611693 containerd[1508]: time="2025-03-17T18:50:50.611676351Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:50:50.889825 containerd[1508]: time="2025-03-17T18:50:50.889756652Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Mar 17 18:50:50.889977 containerd[1508]: time="2025-03-17T18:50:50.889898182Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:50:50.892972 containerd[1508]: time="2025-03-17T18:50:50.892717068Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.331591871s" Mar 17 18:50:50.892972 containerd[1508]: time="2025-03-17T18:50:50.892799994Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 17 18:50:50.892972 containerd[1508]: time="2025-03-17T18:50:50.892955526Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:50:50.897370 containerd[1508]: time="2025-03-17T18:50:50.897034064Z" level=info msg="CreateContainer within sandbox \"978f95b0de03dfa28312762b078b3db56837bb21a7f4aa35e966998358dd8357\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 18:50:50.920090 containerd[1508]: time="2025-03-17T18:50:50.920033266Z" level=info msg="CreateContainer within sandbox \"978f95b0de03dfa28312762b078b3db56837bb21a7f4aa35e966998358dd8357\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"078c02e9b91abeabb5c593ba04a09274f218996488a7f37632d87acf0aaa8d74\"" Mar 17 18:50:50.920932 containerd[1508]: time="2025-03-17T18:50:50.920880928Z" level=info msg="StartContainer for \"078c02e9b91abeabb5c593ba04a09274f218996488a7f37632d87acf0aaa8d74\"" Mar 17 18:50:50.956186 systemd[1]: Started cri-containerd-078c02e9b91abeabb5c593ba04a09274f218996488a7f37632d87acf0aaa8d74.scope - libcontainer container 078c02e9b91abeabb5c593ba04a09274f218996488a7f37632d87acf0aaa8d74. 
Mar 17 18:50:51.005363 containerd[1508]: time="2025-03-17T18:50:51.005290266Z" level=info msg="StartContainer for \"078c02e9b91abeabb5c593ba04a09274f218996488a7f37632d87acf0aaa8d74\" returns successfully" Mar 17 18:50:51.470328 containerd[1508]: time="2025-03-17T18:50:51.470273839Z" level=info msg="CreateContainer within sandbox \"e1025936796cf388af751a6a0bedf8b6c7ee26d48f578b0d6f6f35f09a73c47c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 18:50:51.496967 containerd[1508]: time="2025-03-17T18:50:51.496900686Z" level=info msg="CreateContainer within sandbox \"e1025936796cf388af751a6a0bedf8b6c7ee26d48f578b0d6f6f35f09a73c47c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4001c2b36372d88338cd9cac071a6f31dc604ac1764a8d6c7137748f8320e34d\"" Mar 17 18:50:51.500021 containerd[1508]: time="2025-03-17T18:50:51.497561294Z" level=info msg="StartContainer for \"4001c2b36372d88338cd9cac071a6f31dc604ac1764a8d6c7137748f8320e34d\"" Mar 17 18:50:51.546542 systemd[1]: Started cri-containerd-4001c2b36372d88338cd9cac071a6f31dc604ac1764a8d6c7137748f8320e34d.scope - libcontainer container 4001c2b36372d88338cd9cac071a6f31dc604ac1764a8d6c7137748f8320e34d. Mar 17 18:50:51.624041 containerd[1508]: time="2025-03-17T18:50:51.623406867Z" level=info msg="StartContainer for \"4001c2b36372d88338cd9cac071a6f31dc604ac1764a8d6c7137748f8320e34d\" returns successfully" Mar 17 18:50:51.627473 systemd[1]: cri-containerd-4001c2b36372d88338cd9cac071a6f31dc604ac1764a8d6c7137748f8320e34d.scope: Deactivated successfully. Mar 17 18:50:51.668750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4001c2b36372d88338cd9cac071a6f31dc604ac1764a8d6c7137748f8320e34d-rootfs.mount: Deactivated successfully. Mar 17 18:50:51.713163 containerd[1508]: time="2025-03-17T18:50:51.713072248Z" level=info msg="shim disconnected" id=4001c2b36372d88338cd9cac071a6f31dc604ac1764a8d6c7137748f8320e34d namespace=k8s.io Mar 17 18:50:51.713163 containerd[1508]: time="2025-03-17T18:50:51.713147053Z" level=warning msg="cleaning up after shim disconnected" id=4001c2b36372d88338cd9cac071a6f31dc604ac1764a8d6c7137748f8320e34d namespace=k8s.io Mar 17 18:50:51.713163 containerd[1508]: time="2025-03-17T18:50:51.713156854Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:50:52.485771 containerd[1508]: time="2025-03-17T18:50:52.485378399Z" level=info msg="CreateContainer within sandbox \"e1025936796cf388af751a6a0bedf8b6c7ee26d48f578b0d6f6f35f09a73c47c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 18:50:52.515064 kubelet[2795]: I0317 18:50:52.514428 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-rjlx7" podStartSLOduration=2.965689639 podStartE2EDuration="10.514403041s" podCreationTimestamp="2025-03-17 18:50:42 +0000 UTC" firstStartedPulling="2025-03-17 18:50:43.345704151 +0000 UTC m=+5.140177310" lastFinishedPulling="2025-03-17 18:50:50.894417433 +0000 UTC m=+12.688890712" observedRunningTime="2025-03-17 18:50:51.685217631 +0000 UTC m=+13.479690830" watchObservedRunningTime="2025-03-17 18:50:52.514403041 +0000 UTC m=+14.308876240" Mar 17 18:50:52.520545 containerd[1508]: time="2025-03-17T18:50:52.520360245Z" level=info msg="CreateContainer within sandbox \"e1025936796cf388af751a6a0bedf8b6c7ee26d48f578b0d6f6f35f09a73c47c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"617a46b1ede1b57c6f305908644a23a174ff6737c79cd4ec3090a33d029190e7\"" Mar 17 
18:50:52.521701 containerd[1508]: time="2025-03-17T18:50:52.521646141Z" level=info msg="StartContainer for \"617a46b1ede1b57c6f305908644a23a174ff6737c79cd4ec3090a33d029190e7\"" Mar 17 18:50:52.574507 systemd[1]: Started cri-containerd-617a46b1ede1b57c6f305908644a23a174ff6737c79cd4ec3090a33d029190e7.scope - libcontainer container 617a46b1ede1b57c6f305908644a23a174ff6737c79cd4ec3090a33d029190e7. Mar 17 18:50:52.615459 containerd[1508]: time="2025-03-17T18:50:52.615403887Z" level=info msg="StartContainer for \"617a46b1ede1b57c6f305908644a23a174ff6737c79cd4ec3090a33d029190e7\" returns successfully" Mar 17 18:50:52.710459 kubelet[2795]: I0317 18:50:52.710427 2795 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Mar 17 18:50:52.763314 systemd[1]: Created slice kubepods-burstable-pod5aa01465_a043_4bec_9ec9_1a790f41a30b.slice - libcontainer container kubepods-burstable-pod5aa01465_a043_4bec_9ec9_1a790f41a30b.slice. Mar 17 18:50:52.770849 systemd[1]: Created slice kubepods-burstable-podf4dc78cb_f020_464b_a4e0_a2257e8773d1.slice - libcontainer container kubepods-burstable-podf4dc78cb_f020_464b_a4e0_a2257e8773d1.slice. Mar 17 18:50:52.834458 kubelet[2795]: I0317 18:50:52.834187 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpk54\" (UniqueName: \"kubernetes.io/projected/5aa01465-a043-4bec-9ec9-1a790f41a30b-kube-api-access-vpk54\") pod \"coredns-6f6b679f8f-vmnqw\" (UID: \"5aa01465-a043-4bec-9ec9-1a790f41a30b\") " pod="kube-system/coredns-6f6b679f8f-vmnqw" Mar 17 18:50:52.834458 kubelet[2795]: I0317 18:50:52.834295 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5aa01465-a043-4bec-9ec9-1a790f41a30b-config-volume\") pod \"coredns-6f6b679f8f-vmnqw\" (UID: \"5aa01465-a043-4bec-9ec9-1a790f41a30b\") " pod="kube-system/coredns-6f6b679f8f-vmnqw" Mar 17 18:50:52.834458 kubelet[2795]: I0317 18:50:52.834323 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4dc78cb-f020-464b-a4e0-a2257e8773d1-config-volume\") pod \"coredns-6f6b679f8f-mgkrh\" (UID: \"f4dc78cb-f020-464b-a4e0-a2257e8773d1\") " pod="kube-system/coredns-6f6b679f8f-mgkrh" Mar 17 18:50:52.834458 kubelet[2795]: I0317 18:50:52.834362 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7d8q\" (UniqueName: \"kubernetes.io/projected/f4dc78cb-f020-464b-a4e0-a2257e8773d1-kube-api-access-g7d8q\") pod \"coredns-6f6b679f8f-mgkrh\" (UID: \"f4dc78cb-f020-464b-a4e0-a2257e8773d1\") " pod="kube-system/coredns-6f6b679f8f-mgkrh" Mar 17 18:50:53.072284 containerd[1508]: time="2025-03-17T18:50:53.071010319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vmnqw,Uid:5aa01465-a043-4bec-9ec9-1a790f41a30b,Namespace:kube-system,Attempt:0,}" Mar 17 18:50:53.075488 containerd[1508]: time="2025-03-17T18:50:53.075436932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mgkrh,Uid:f4dc78cb-f020-464b-a4e0-a2257e8773d1,Namespace:kube-system,Attempt:0,}" Mar 17 18:50:53.515809 kubelet[2795]: I0317 18:50:53.515726 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kzjkv" podStartSLOduration=6.82471174 podStartE2EDuration="12.515705222s" podCreationTimestamp="2025-03-17 18:50:41 +0000 UTC" 
firstStartedPulling="2025-03-17 18:50:42.867503886 +0000 UTC m=+4.661977085" lastFinishedPulling="2025-03-17 18:50:48.558497408 +0000 UTC m=+10.352970567" observedRunningTime="2025-03-17 18:50:53.515167181 +0000 UTC m=+15.309640380" watchObservedRunningTime="2025-03-17 18:50:53.515705222 +0000 UTC m=+15.310178421" Mar 17 18:50:54.570186 systemd-networkd[1400]: cilium_host: Link UP Mar 17 18:50:54.571131 systemd-networkd[1400]: cilium_net: Link UP Mar 17 18:50:54.571555 systemd-networkd[1400]: cilium_net: Gained carrier Mar 17 18:50:54.574266 systemd-networkd[1400]: cilium_host: Gained carrier Mar 17 18:50:54.574602 systemd-networkd[1400]: cilium_host: Gained IPv6LL Mar 17 18:50:54.649782 systemd-networkd[1400]: cilium_net: Gained IPv6LL Mar 17 18:50:54.698565 systemd-networkd[1400]: cilium_vxlan: Link UP Mar 17 18:50:54.698766 systemd-networkd[1400]: cilium_vxlan: Gained carrier Mar 17 18:50:55.003010 kernel: NET: Registered PF_ALG protocol family Mar 17 18:50:55.817432 systemd-networkd[1400]: lxc_health: Link UP Mar 17 18:50:55.843361 systemd-networkd[1400]: lxc_health: Gained carrier Mar 17 18:50:56.107147 systemd-networkd[1400]: cilium_vxlan: Gained IPv6LL Mar 17 18:50:56.179535 kernel: eth0: renamed from tmpc4ad3 Mar 17 18:50:56.176848 systemd-networkd[1400]: lxc841c5065a1e2: Link UP Mar 17 18:50:56.177126 systemd-networkd[1400]: lxcb23a87d397ff: Link UP Mar 17 18:50:56.185439 systemd-networkd[1400]: lxcb23a87d397ff: Gained carrier Mar 17 18:50:56.189462 kernel: eth0: renamed from tmpaf356 Mar 17 18:50:56.194767 systemd-networkd[1400]: lxc841c5065a1e2: Gained carrier Mar 17 18:50:57.578170 systemd-networkd[1400]: lxc841c5065a1e2: Gained IPv6LL Mar 17 18:50:57.833199 systemd-networkd[1400]: lxc_health: Gained IPv6LL Mar 17 18:50:57.962699 systemd-networkd[1400]: lxcb23a87d397ff: Gained IPv6LL Mar 17 18:51:00.573086 containerd[1508]: time="2025-03-17T18:51:00.572223207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:51:00.574570 containerd[1508]: time="2025-03-17T18:51:00.573536591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:51:00.574570 containerd[1508]: time="2025-03-17T18:51:00.574279810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:51:00.574840 containerd[1508]: time="2025-03-17T18:51:00.574426661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:51:00.609931 systemd[1]: Started cri-containerd-c4ad3bdb305c4229375694bfc02a1b5f5be02c38f15a02fe4e3ee939ce95d391.scope - libcontainer container c4ad3bdb305c4229375694bfc02a1b5f5be02c38f15a02fe4e3ee939ce95d391. Mar 17 18:51:00.630125 containerd[1508]: time="2025-03-17T18:51:00.629432336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:51:00.630125 containerd[1508]: time="2025-03-17T18:51:00.629493141Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:51:00.630125 containerd[1508]: time="2025-03-17T18:51:00.629504262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:51:00.630125 containerd[1508]: time="2025-03-17T18:51:00.629584468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:51:00.669315 systemd[1]: Started cri-containerd-af35662f4acb69f14acef73b41ac054feddddeb8a1e897de48c6ec44f6f70a6e.scope - libcontainer container af35662f4acb69f14acef73b41ac054feddddeb8a1e897de48c6ec44f6f70a6e. Mar 17 18:51:00.708638 containerd[1508]: time="2025-03-17T18:51:00.708591563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mgkrh,Uid:f4dc78cb-f020-464b-a4e0-a2257e8773d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4ad3bdb305c4229375694bfc02a1b5f5be02c38f15a02fe4e3ee939ce95d391\"" Mar 17 18:51:00.717226 containerd[1508]: time="2025-03-17T18:51:00.717086316Z" level=info msg="CreateContainer within sandbox \"c4ad3bdb305c4229375694bfc02a1b5f5be02c38f15a02fe4e3ee939ce95d391\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 18:51:00.770532 containerd[1508]: time="2025-03-17T18:51:00.770431259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vmnqw,Uid:5aa01465-a043-4bec-9ec9-1a790f41a30b,Namespace:kube-system,Attempt:0,} returns sandbox id \"af35662f4acb69f14acef73b41ac054feddddeb8a1e897de48c6ec44f6f70a6e\"" Mar 17 18:51:00.772487 containerd[1508]: time="2025-03-17T18:51:00.770933739Z" level=info msg="CreateContainer within sandbox \"c4ad3bdb305c4229375694bfc02a1b5f5be02c38f15a02fe4e3ee939ce95d391\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"98a1a6d449b940e7c8e09b656b8565b402a606c75a0f834d8a13edd53c948da6\"" Mar 17 18:51:00.773450 containerd[1508]: time="2025-03-17T18:51:00.773375772Z" level=info msg="StartContainer for \"98a1a6d449b940e7c8e09b656b8565b402a606c75a0f834d8a13edd53c948da6\"" Mar 17 18:51:00.783098 containerd[1508]: time="2025-03-17T18:51:00.783042537Z" level=info msg="CreateContainer within sandbox \"af35662f4acb69f14acef73b41ac054feddddeb8a1e897de48c6ec44f6f70a6e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 18:51:00.809111 containerd[1508]: time="2025-03-17T18:51:00.809061237Z" level=info msg="CreateContainer within sandbox \"af35662f4acb69f14acef73b41ac054feddddeb8a1e897de48c6ec44f6f70a6e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"43a193f7ac80a503d1de2edc58e68f1ec797c3939d9b63a5ee7e1bf37ede1f29\"" Mar 17 18:51:00.812721 containerd[1508]: time="2025-03-17T18:51:00.812654202Z" level=info msg="StartContainer for \"43a193f7ac80a503d1de2edc58e68f1ec797c3939d9b63a5ee7e1bf37ede1f29\"" Mar 17 18:51:00.817352 systemd[1]: Started cri-containerd-98a1a6d449b940e7c8e09b656b8565b402a606c75a0f834d8a13edd53c948da6.scope - libcontainer container 98a1a6d449b940e7c8e09b656b8565b402a606c75a0f834d8a13edd53c948da6. Mar 17 18:51:00.856642 systemd[1]: Started cri-containerd-43a193f7ac80a503d1de2edc58e68f1ec797c3939d9b63a5ee7e1bf37ede1f29.scope - libcontainer container 43a193f7ac80a503d1de2edc58e68f1ec797c3939d9b63a5ee7e1bf37ede1f29. 
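The CreateContainer / StartContainer records above (for mount-bpf-fs, clean-cilium-state, cilium-agent, cilium-operator and the two coredns containers) are the CRI plugin driving containerd's usual container lifecycle: pull an image, create the container, spawn a shim-backed task, start it. A minimal sketch of that lifecycle against the same containerd instance is shown below, using the public containerd Go client rather than the CRI path the kubelet actually takes; the container ID and snapshot name are placeholders and the image reference is only illustrative.

// Minimal sketch (not the kubelet's CRI path): create and start a container
// with the public containerd Go client. The socket path and the "k8s.io"
// namespace match the records above; "demo-operator" and the snapshot name
// are placeholders.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed pods and containers live in the "k8s.io" namespace seen in the shim logs.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Roughly corresponds to the "PullImage ... returns image reference" records.
	image, err := client.Pull(ctx, "quay.io/cilium/operator-generic:v1.12.5", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer + StartContainer in the CRI logs map onto NewContainer,
	// NewTask and Start here; creating the task is what spawns the shim process.
	container, err := client.NewContainer(ctx, "demo-operator",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("demo-operator-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}

When a task exits, its shim shuts down as well, which is roughly what the recurring "shim disconnected" / "cleaning up dead shim" records above correspond to for the short-lived init containers.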
Mar 17 18:51:00.868029 containerd[1508]: time="2025-03-17T18:51:00.867588711Z" level=info msg="StartContainer for \"98a1a6d449b940e7c8e09b656b8565b402a606c75a0f834d8a13edd53c948da6\" returns successfully" Mar 17 18:51:00.901804 containerd[1508]: time="2025-03-17T18:51:00.901656568Z" level=info msg="StartContainer for \"43a193f7ac80a503d1de2edc58e68f1ec797c3939d9b63a5ee7e1bf37ede1f29\" returns successfully" Mar 17 18:51:01.559339 kubelet[2795]: I0317 18:51:01.559078 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-mgkrh" podStartSLOduration=19.559023771 podStartE2EDuration="19.559023771s" podCreationTimestamp="2025-03-17 18:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:51:01.554164344 +0000 UTC m=+23.348637543" watchObservedRunningTime="2025-03-17 18:51:01.559023771 +0000 UTC m=+23.353496970" Mar 17 18:51:01.585882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount546032603.mount: Deactivated successfully. Mar 17 18:54:28.929108 update_engine[1485]: I20250317 18:54:28.928339 1485 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 17 18:54:28.929108 update_engine[1485]: I20250317 18:54:28.928422 1485 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 17 18:54:28.929108 update_engine[1485]: I20250317 18:54:28.928767 1485 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 17 18:54:28.935207 update_engine[1485]: I20250317 18:54:28.931758 1485 omaha_request_params.cc:62] Current group set to beta Mar 17 18:54:28.935207 update_engine[1485]: I20250317 18:54:28.931949 1485 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 17 18:54:28.935207 update_engine[1485]: I20250317 18:54:28.931972 1485 update_attempter.cc:643] Scheduling an action processor start. Mar 17 18:54:28.935207 update_engine[1485]: I20250317 18:54:28.932190 1485 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 17 18:54:28.935207 update_engine[1485]: I20250317 18:54:28.932271 1485 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 17 18:54:28.935207 update_engine[1485]: I20250317 18:54:28.933218 1485 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 17 18:54:28.935207 update_engine[1485]: I20250317 18:54:28.933270 1485 omaha_request_action.cc:272] Request: Mar 17 18:54:28.935207 update_engine[1485]: Mar 17 18:54:28.935207 update_engine[1485]: Mar 17 18:54:28.935207 update_engine[1485]: Mar 17 18:54:28.935207 update_engine[1485]: Mar 17 18:54:28.935207 update_engine[1485]: Mar 17 18:54:28.935207 update_engine[1485]: Mar 17 18:54:28.935207 update_engine[1485]: Mar 17 18:54:28.935207 update_engine[1485]: Mar 17 18:54:28.935207 update_engine[1485]: I20250317 18:54:28.933283 1485 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 18:54:28.935618 locksmithd[1542]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 17 18:54:28.936553 update_engine[1485]: I20250317 18:54:28.936457 1485 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 18:54:28.937276 update_engine[1485]: I20250317 18:54:28.937192 1485 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
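The kubelet "Observed pod startup duration" records interleaved above carry two figures whose relationship can be checked directly from the logged values: podStartE2EDuration spans podCreationTimestamp to observedRunningTime, and podStartSLOduration appears to be that same span minus the image-pull window (lastFinishedPulling - firstStartedPulling). That would explain why the two values coincide for coredns-6f6b679f8f-mgkrh just above (no pull recorded) and differ for cilium-operator further up. A quick check against the cilium-operator entry, using its monotonic m=+ offsets; the formula here is inferred from these numbers, not taken from kubelet source.

// Back-of-the-envelope check of the cilium-operator startup-latency record,
// using the monotonic m=+ offsets from the log. The assumed relationship
// (SLO duration = end-to-end duration minus image-pull time) is inferred,
// not quoted from kubelet code.
package main

import "fmt"

func main() {
	const (
		e2e                 = 10.514403041 // podStartE2EDuration, seconds
		firstStartedPulling = 5.140177310  // m=+ offset, seconds
		lastFinishedPulling = 12.688890712 // m=+ offset, seconds
	)
	pull := lastFinishedPulling - firstStartedPulling
	// Prints slo=2.965689639s, matching podStartSLOduration in the record above.
	fmt.Printf("pull=%.9fs slo=%.9fs\n", pull, e2e-pull)
}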
Mar 17 18:54:28.938706 update_engine[1485]: E20250317 18:54:28.938446 1485 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 18:54:28.938706 update_engine[1485]: I20250317 18:54:28.938555 1485 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 17 18:54:38.852419 update_engine[1485]: I20250317 18:54:38.852293 1485 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 18:54:38.852923 update_engine[1485]: I20250317 18:54:38.852581 1485 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 18:54:38.852923 update_engine[1485]: I20250317 18:54:38.852881 1485 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 17 18:54:38.853353 update_engine[1485]: E20250317 18:54:38.853283 1485 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 18:54:38.853426 update_engine[1485]: I20250317 18:54:38.853392 1485 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 17 18:54:48.859604 update_engine[1485]: I20250317 18:54:48.859413 1485 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 18:54:48.860162 update_engine[1485]: I20250317 18:54:48.859750 1485 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 18:54:48.860162 update_engine[1485]: I20250317 18:54:48.860103 1485 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 17 18:54:48.860603 update_engine[1485]: E20250317 18:54:48.860529 1485 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 18:54:48.860656 update_engine[1485]: I20250317 18:54:48.860628 1485 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 17 18:54:58.859314 update_engine[1485]: I20250317 18:54:58.859166 1485 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 18:54:58.859848 update_engine[1485]: I20250317 18:54:58.859601 1485 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 18:54:58.860188 update_engine[1485]: I20250317 18:54:58.860119 1485 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 17 18:54:58.860581 update_engine[1485]: E20250317 18:54:58.860523 1485 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 18:54:58.861220 update_engine[1485]: I20250317 18:54:58.860858 1485 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 17 18:54:58.861438 update_engine[1485]: I20250317 18:54:58.861387 1485 omaha_request_action.cc:617] Omaha request response: Mar 17 18:54:58.861689 update_engine[1485]: E20250317 18:54:58.861653 1485 omaha_request_action.cc:636] Omaha request network transfer failed. Mar 17 18:54:58.861863 update_engine[1485]: I20250317 18:54:58.861829 1485 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Mar 17 18:54:58.862025 update_engine[1485]: I20250317 18:54:58.861968 1485 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 17 18:54:58.862161 update_engine[1485]: I20250317 18:54:58.862126 1485 update_attempter.cc:306] Processing Done. Mar 17 18:54:58.863021 update_engine[1485]: E20250317 18:54:58.862262 1485 update_attempter.cc:619] Update failed. 
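The update_engine entries above show its libcurl fetcher attempting the Omaha check roughly every ten seconds, logging "No HTTP response, retry N" for the first three failures, and only then surfacing the error to the request action. A minimal sketch of that fixed-interval retry pattern follows; it mirrors the observed behaviour only and is not the actual (C++) update_engine implementation. The URL is the literal "disabled" endpoint configured on this host, which is why every attempt fails with "Could not resolve host: disabled".

// Fixed-interval retry pattern as observed in the update_engine records:
// up to three retries ~10s apart, then the failure is reported upward.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func fetchWithRetries(url string, retries int, delay time.Duration) (*http.Response, error) {
	var lastErr error
	for attempt := 0; attempt <= retries; attempt++ {
		resp, err := http.Get(url)
		if err == nil {
			return resp, nil
		}
		lastErr = err
		if attempt < retries {
			fmt.Printf("No HTTP response, retry %d\n", attempt+1)
			time.Sleep(delay)
		}
	}
	return nil, fmt.Errorf("transfer failed after %d retries: %w", retries, lastErr)
}

func main() {
	resp, err := fetchWithRetries("http://disabled/", 3, 10*time.Second)
	if err != nil {
		fmt.Println("Omaha request network transfer failed:", err)
		return
	}
	resp.Body.Close()
}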
Mar 17 18:54:58.863021 update_engine[1485]: I20250317 18:54:58.862287 1485 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 17 18:54:58.863021 update_engine[1485]: I20250317 18:54:58.862299 1485 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 17 18:54:58.863021 update_engine[1485]: I20250317 18:54:58.862313 1485 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Mar 17 18:54:58.863021 update_engine[1485]: I20250317 18:54:58.862447 1485 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 17 18:54:58.863021 update_engine[1485]: I20250317 18:54:58.862491 1485 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 17 18:54:58.863021 update_engine[1485]: I20250317 18:54:58.862504 1485 omaha_request_action.cc:272] Request: Mar 17 18:54:58.863021 update_engine[1485]: Mar 17 18:54:58.863021 update_engine[1485]: Mar 17 18:54:58.863021 update_engine[1485]: Mar 17 18:54:58.863021 update_engine[1485]: Mar 17 18:54:58.863021 update_engine[1485]: Mar 17 18:54:58.863021 update_engine[1485]: Mar 17 18:54:58.863021 update_engine[1485]: I20250317 18:54:58.862517 1485 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 18:54:58.863870 locksmithd[1542]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Mar 17 18:54:58.864666 update_engine[1485]: I20250317 18:54:58.864318 1485 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 18:54:58.864666 update_engine[1485]: I20250317 18:54:58.864609 1485 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 17 18:54:58.865204 update_engine[1485]: E20250317 18:54:58.865020 1485 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 18:54:58.865204 update_engine[1485]: I20250317 18:54:58.865078 1485 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 17 18:54:58.865204 update_engine[1485]: I20250317 18:54:58.865085 1485 omaha_request_action.cc:617] Omaha request response: Mar 17 18:54:58.865204 update_engine[1485]: I20250317 18:54:58.865092 1485 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 17 18:54:58.865204 update_engine[1485]: I20250317 18:54:58.865096 1485 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 17 18:54:58.865204 update_engine[1485]: I20250317 18:54:58.865103 1485 update_attempter.cc:306] Processing Done. Mar 17 18:54:58.865204 update_engine[1485]: I20250317 18:54:58.865109 1485 update_attempter.cc:310] Error event sent. Mar 17 18:54:58.865204 update_engine[1485]: I20250317 18:54:58.865119 1485 update_check_scheduler.cc:74] Next update check in 47m9s Mar 17 18:54:58.865868 locksmithd[1542]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Mar 17 18:55:27.878151 systemd[1]: Started sshd@7-138.201.89.219:22-139.178.89.65:47644.service - OpenSSH per-connection server daemon (139.178.89.65:47644). Mar 17 18:55:28.867037 sshd[4216]: Accepted publickey for core from 139.178.89.65 port 47644 ssh2: RSA SHA256:v/asyzeddMvawcqTHyrMQabrN1x7tHOvH9FvogCn6lE Mar 17 18:55:28.869423 sshd-session[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:55:28.879725 systemd-logind[1484]: New session 8 of user core. 
Mar 17 18:55:28.886425 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 17 18:55:29.680830 sshd[4218]: Connection closed by 139.178.89.65 port 47644 Mar 17 18:55:29.681696 sshd-session[4216]: pam_unix(sshd:session): session closed for user core Mar 17 18:55:29.689068 systemd[1]: sshd@7-138.201.89.219:22-139.178.89.65:47644.service: Deactivated successfully. Mar 17 18:55:29.695961 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 18:55:29.700162 systemd-logind[1484]: Session 8 logged out. Waiting for processes to exit. Mar 17 18:55:29.703476 systemd-logind[1484]: Removed session 8. Mar 17 18:55:34.882104 systemd[1]: Started sshd@8-138.201.89.219:22-139.178.89.65:54314.service - OpenSSH per-connection server daemon (139.178.89.65:54314). Mar 17 18:55:35.910416 sshd[4231]: Accepted publickey for core from 139.178.89.65 port 54314 ssh2: RSA SHA256:v/asyzeddMvawcqTHyrMQabrN1x7tHOvH9FvogCn6lE Mar 17 18:55:35.914502 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:55:35.923737 systemd-logind[1484]: New session 9 of user core. Mar 17 18:55:35.931269 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 17 18:55:36.731019 sshd[4233]: Connection closed by 139.178.89.65 port 54314 Mar 17 18:55:36.730227 sshd-session[4231]: pam_unix(sshd:session): session closed for user core Mar 17 18:55:36.734873 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 18:55:36.737856 systemd[1]: sshd@8-138.201.89.219:22-139.178.89.65:54314.service: Deactivated successfully. Mar 17 18:55:36.738247 systemd-logind[1484]: Session 9 logged out. Waiting for processes to exit. Mar 17 18:55:36.742115 systemd-logind[1484]: Removed session 9. Mar 17 18:55:41.916434 systemd[1]: Started sshd@9-138.201.89.219:22-139.178.89.65:40246.service - OpenSSH per-connection server daemon (139.178.89.65:40246). Mar 17 18:55:42.934811 sshd[4248]: Accepted publickey for core from 139.178.89.65 port 40246 ssh2: RSA SHA256:v/asyzeddMvawcqTHyrMQabrN1x7tHOvH9FvogCn6lE Mar 17 18:55:42.940022 sshd-session[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:55:42.950937 systemd-logind[1484]: New session 10 of user core. Mar 17 18:55:42.956637 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 17 18:55:43.740031 sshd[4250]: Connection closed by 139.178.89.65 port 40246 Mar 17 18:55:43.740800 sshd-session[4248]: pam_unix(sshd:session): session closed for user core Mar 17 18:55:43.750165 systemd[1]: sshd@9-138.201.89.219:22-139.178.89.65:40246.service: Deactivated successfully. Mar 17 18:55:43.755621 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 18:55:43.760362 systemd-logind[1484]: Session 10 logged out. Waiting for processes to exit. Mar 17 18:55:43.762827 systemd-logind[1484]: Removed session 10. Mar 17 18:55:43.924570 systemd[1]: Started sshd@10-138.201.89.219:22-139.178.89.65:40248.service - OpenSSH per-connection server daemon (139.178.89.65:40248). Mar 17 18:55:44.916874 sshd[4265]: Accepted publickey for core from 139.178.89.65 port 40248 ssh2: RSA SHA256:v/asyzeddMvawcqTHyrMQabrN1x7tHOvH9FvogCn6lE Mar 17 18:55:44.919621 sshd-session[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:55:44.927372 systemd-logind[1484]: New session 11 of user core. Mar 17 18:55:44.936407 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 17 18:55:45.741272 sshd[4267]: Connection closed by 139.178.89.65 port 40248 Mar 17 18:55:45.742201 sshd-session[4265]: pam_unix(sshd:session): session closed for user core Mar 17 18:55:45.761847 systemd[1]: sshd@10-138.201.89.219:22-139.178.89.65:40248.service: Deactivated successfully. Mar 17 18:55:45.762173 systemd-logind[1484]: Session 11 logged out. Waiting for processes to exit. Mar 17 18:55:45.770736 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 18:55:45.773214 systemd-logind[1484]: Removed session 11. Mar 17 18:55:45.930431 systemd[1]: Started sshd@11-138.201.89.219:22-139.178.89.65:40252.service - OpenSSH per-connection server daemon (139.178.89.65:40252). Mar 17 18:55:46.934163 sshd[4277]: Accepted publickey for core from 139.178.89.65 port 40252 ssh2: RSA SHA256:v/asyzeddMvawcqTHyrMQabrN1x7tHOvH9FvogCn6lE Mar 17 18:55:46.937318 sshd-session[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:55:46.944617 systemd-logind[1484]: New session 12 of user core. Mar 17 18:55:46.951429 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 17 18:55:47.700399 sshd[4279]: Connection closed by 139.178.89.65 port 40252 Mar 17 18:55:47.701550 sshd-session[4277]: pam_unix(sshd:session): session closed for user core Mar 17 18:55:47.708095 systemd-logind[1484]: Session 12 logged out. Waiting for processes to exit. Mar 17 18:55:47.708476 systemd[1]: sshd@11-138.201.89.219:22-139.178.89.65:40252.service: Deactivated successfully. Mar 17 18:55:47.711212 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 18:55:47.713098 systemd-logind[1484]: Removed session 12. Mar 17 18:55:52.884349 systemd[1]: Started sshd@12-138.201.89.219:22-139.178.89.65:55536.service - OpenSSH per-connection server daemon (139.178.89.65:55536). Mar 17 18:55:53.882035 sshd[4291]: Accepted publickey for core from 139.178.89.65 port 55536 ssh2: RSA SHA256:v/asyzeddMvawcqTHyrMQabrN1x7tHOvH9FvogCn6lE Mar 17 18:55:53.884187 sshd-session[4291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:55:53.890372 systemd-logind[1484]: New session 13 of user core. Mar 17 18:55:53.897251 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 17 18:55:54.666583 sshd[4293]: Connection closed by 139.178.89.65 port 55536 Mar 17 18:55:54.667410 sshd-session[4291]: pam_unix(sshd:session): session closed for user core Mar 17 18:55:54.677094 systemd-logind[1484]: Session 13 logged out. Waiting for processes to exit. Mar 17 18:55:54.677503 systemd[1]: sshd@12-138.201.89.219:22-139.178.89.65:55536.service: Deactivated successfully. Mar 17 18:55:54.682495 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 18:55:54.684479 systemd-logind[1484]: Removed session 13. Mar 17 18:55:54.842392 systemd[1]: Started sshd@13-138.201.89.219:22-139.178.89.65:55544.service - OpenSSH per-connection server daemon (139.178.89.65:55544). Mar 17 18:55:55.844584 sshd[4305]: Accepted publickey for core from 139.178.89.65 port 55544 ssh2: RSA SHA256:v/asyzeddMvawcqTHyrMQabrN1x7tHOvH9FvogCn6lE Mar 17 18:55:55.846677 sshd-session[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:55:55.858257 systemd-logind[1484]: New session 14 of user core. Mar 17 18:55:55.863366 systemd[1]: Started session-14.scope - Session 14 of User core. 
Mar 17 18:55:56.683074 sshd[4307]: Connection closed by 139.178.89.65 port 55544 Mar 17 18:55:56.683721 sshd-session[4305]: pam_unix(sshd:session): session closed for user core Mar 17 18:55:56.692190 systemd-logind[1484]: Session 14 logged out. Waiting for processes to exit. Mar 17 18:55:56.692710 systemd[1]: sshd@13-138.201.89.219:22-139.178.89.65:55544.service: Deactivated successfully. Mar 17 18:55:56.695915 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 18:55:56.698363 systemd-logind[1484]: Removed session 14. Mar 17 18:55:56.864454 systemd[1]: Started sshd@14-138.201.89.219:22-139.178.89.65:55554.service - OpenSSH per-connection server daemon (139.178.89.65:55554). Mar 17 18:55:57.862046 sshd[4317]: Accepted publickey for core from 139.178.89.65 port 55554 ssh2: RSA SHA256:v/asyzeddMvawcqTHyrMQabrN1x7tHOvH9FvogCn6lE Mar 17 18:55:57.863814 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:55:57.871686 systemd-logind[1484]: New session 15 of user core. Mar 17 18:55:57.879352 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 17 18:56:00.551080 sshd[4319]: Connection closed by 139.178.89.65 port 55554 Mar 17 18:56:00.553729 sshd-session[4317]: pam_unix(sshd:session): session closed for user core Mar 17 18:56:00.559085 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 18:56:00.563659 systemd[1]: sshd@14-138.201.89.219:22-139.178.89.65:55554.service: Deactivated successfully. Mar 17 18:56:00.567542 systemd-logind[1484]: Session 15 logged out. Waiting for processes to exit. Mar 17 18:56:00.569462 systemd-logind[1484]: Removed session 15. Mar 17 18:56:00.728130 systemd[1]: Started sshd@15-138.201.89.219:22-139.178.89.65:55558.service - OpenSSH per-connection server daemon (139.178.89.65:55558). Mar 17 18:56:01.716554 sshd[4337]: Accepted publickey for core from 139.178.89.65 port 55558 ssh2: RSA SHA256:v/asyzeddMvawcqTHyrMQabrN1x7tHOvH9FvogCn6lE Mar 17 18:56:01.718113 sshd-session[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:56:01.726632 systemd-logind[1484]: New session 16 of user core. Mar 17 18:56:01.731191 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 17 18:56:02.608855 sshd[4339]: Connection closed by 139.178.89.65 port 55558 Mar 17 18:56:02.610435 sshd-session[4337]: pam_unix(sshd:session): session closed for user core Mar 17 18:56:02.618929 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 18:56:02.620463 systemd[1]: sshd@15-138.201.89.219:22-139.178.89.65:55558.service: Deactivated successfully. Mar 17 18:56:02.625548 systemd-logind[1484]: Session 16 logged out. Waiting for processes to exit. Mar 17 18:56:02.627534 systemd-logind[1484]: Removed session 16. Mar 17 18:56:02.802702 systemd[1]: Started sshd@16-138.201.89.219:22-139.178.89.65:49232.service - OpenSSH per-connection server daemon (139.178.89.65:49232). Mar 17 18:56:03.803699 sshd[4349]: Accepted publickey for core from 139.178.89.65 port 49232 ssh2: RSA SHA256:v/asyzeddMvawcqTHyrMQabrN1x7tHOvH9FvogCn6lE Mar 17 18:56:03.804958 sshd-session[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:56:03.815229 systemd-logind[1484]: New session 17 of user core. Mar 17 18:56:03.821287 systemd[1]: Started session-17.scope - Session 17 of User core. 
Mar 17 18:56:04.584571 sshd[4351]: Connection closed by 139.178.89.65 port 49232 Mar 17 18:56:04.587395 sshd-session[4349]: pam_unix(sshd:session): session closed for user core Mar 17 18:56:04.590815 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 18:56:04.594604 systemd[1]: sshd@16-138.201.89.219:22-139.178.89.65:49232.service: Deactivated successfully. Mar 17 18:56:04.607833 systemd-logind[1484]: Session 17 logged out. Waiting for processes to exit. Mar 17 18:56:04.610458 systemd-logind[1484]: Removed session 17. Mar 17 18:56:09.771546 systemd[1]: Started sshd@17-138.201.89.219:22-139.178.89.65:49238.service - OpenSSH per-connection server daemon (139.178.89.65:49238). Mar 17 18:56:10.779427 sshd[4366]: Accepted publickey for core from 139.178.89.65 port 49238 ssh2: RSA SHA256:v/asyzeddMvawcqTHyrMQabrN1x7tHOvH9FvogCn6lE Mar 17 18:56:10.783721 sshd-session[4366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:56:10.801672 systemd-logind[1484]: New session 18 of user core. Mar 17 18:56:10.805509 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 17 18:56:11.558190 sshd[4368]: Connection closed by 139.178.89.65 port 49238 Mar 17 18:56:11.559151 sshd-session[4366]: pam_unix(sshd:session): session closed for user core Mar 17 18:56:11.564302 systemd[1]: sshd@17-138.201.89.219:22-139.178.89.65:49238.service: Deactivated successfully. Mar 17 18:56:11.567765 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 18:56:11.571188 systemd-logind[1484]: Session 18 logged out. Waiting for processes to exit. Mar 17 18:56:11.573533 systemd-logind[1484]: Removed session 18. Mar 17 18:56:16.734358 systemd[1]: Started sshd@18-138.201.89.219:22-139.178.89.65:39030.service - OpenSSH per-connection server daemon (139.178.89.65:39030). Mar 17 18:56:17.725274 sshd[4384]: Accepted publickey for core from 139.178.89.65 port 39030 ssh2: RSA SHA256:v/asyzeddMvawcqTHyrMQabrN1x7tHOvH9FvogCn6lE Mar 17 18:56:17.728381 sshd-session[4384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:56:17.738314 systemd-logind[1484]: New session 19 of user core. Mar 17 18:56:17.745343 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 17 18:56:18.523086 sshd[4386]: Connection closed by 139.178.89.65 port 39030 Mar 17 18:56:18.523833 sshd-session[4384]: pam_unix(sshd:session): session closed for user core Mar 17 18:56:18.530297 systemd[1]: sshd@18-138.201.89.219:22-139.178.89.65:39030.service: Deactivated successfully. Mar 17 18:56:18.534744 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 18:56:18.538240 systemd-logind[1484]: Session 19 logged out. Waiting for processes to exit. Mar 17 18:56:18.541121 systemd-logind[1484]: Removed session 19. Mar 17 18:56:18.703810 systemd[1]: Started sshd@19-138.201.89.219:22-139.178.89.65:39044.service - OpenSSH per-connection server daemon (139.178.89.65:39044). Mar 17 18:56:19.711303 sshd[4398]: Accepted publickey for core from 139.178.89.65 port 39044 ssh2: RSA SHA256:v/asyzeddMvawcqTHyrMQabrN1x7tHOvH9FvogCn6lE Mar 17 18:56:19.714441 sshd-session[4398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:56:19.721875 systemd-logind[1484]: New session 20 of user core. Mar 17 18:56:19.730286 systemd[1]: Started session-20.scope - Session 20 of User core. 
Mar 17 18:56:22.101336 kubelet[2795]: I0317 18:56:22.099280 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-vmnqw" podStartSLOduration=340.099260767 podStartE2EDuration="5m40.099260767s" podCreationTimestamp="2025-03-17 18:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:51:01.604481713 +0000 UTC m=+23.398954912" watchObservedRunningTime="2025-03-17 18:56:22.099260767 +0000 UTC m=+343.893733926" Mar 17 18:56:22.131337 containerd[1508]: time="2025-03-17T18:56:22.131287704Z" level=info msg="StopContainer for \"078c02e9b91abeabb5c593ba04a09274f218996488a7f37632d87acf0aaa8d74\" with timeout 30 (s)" Mar 17 18:56:22.132504 systemd[1]: run-containerd-runc-k8s.io-617a46b1ede1b57c6f305908644a23a174ff6737c79cd4ec3090a33d029190e7-runc.q7FGod.mount: Deactivated successfully. Mar 17 18:56:22.139134 containerd[1508]: time="2025-03-17T18:56:22.137775231Z" level=info msg="Stop container \"078c02e9b91abeabb5c593ba04a09274f218996488a7f37632d87acf0aaa8d74\" with signal terminated" Mar 17 18:56:22.151121 containerd[1508]: time="2025-03-17T18:56:22.151068228Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:56:22.163004 containerd[1508]: time="2025-03-17T18:56:22.162878105Z" level=info msg="StopContainer for \"617a46b1ede1b57c6f305908644a23a174ff6737c79cd4ec3090a33d029190e7\" with timeout 2 (s)" Mar 17 18:56:22.163526 containerd[1508]: time="2025-03-17T18:56:22.163490872Z" level=info msg="Stop container \"617a46b1ede1b57c6f305908644a23a174ff6737c79cd4ec3090a33d029190e7\" with signal terminated" Mar 17 18:56:22.164716 systemd[1]: cri-containerd-078c02e9b91abeabb5c593ba04a09274f218996488a7f37632d87acf0aaa8d74.scope: Deactivated successfully. Mar 17 18:56:22.179736 systemd-networkd[1400]: lxc_health: Link DOWN Mar 17 18:56:22.179744 systemd-networkd[1400]: lxc_health: Lost carrier Mar 17 18:56:22.201213 systemd[1]: cri-containerd-617a46b1ede1b57c6f305908644a23a174ff6737c79cd4ec3090a33d029190e7.scope: Deactivated successfully. Mar 17 18:56:22.203180 systemd[1]: cri-containerd-617a46b1ede1b57c6f305908644a23a174ff6737c79cd4ec3090a33d029190e7.scope: Consumed 8.797s CPU time, 125.3M memory peak, 144K read from disk, 12.9M written to disk. Mar 17 18:56:22.213175 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-078c02e9b91abeabb5c593ba04a09274f218996488a7f37632d87acf0aaa8d74-rootfs.mount: Deactivated successfully. Mar 17 18:56:22.231617 containerd[1508]: time="2025-03-17T18:56:22.231540088Z" level=info msg="shim disconnected" id=078c02e9b91abeabb5c593ba04a09274f218996488a7f37632d87acf0aaa8d74 namespace=k8s.io Mar 17 18:56:22.231617 containerd[1508]: time="2025-03-17T18:56:22.231616764Z" level=warning msg="cleaning up after shim disconnected" id=078c02e9b91abeabb5c593ba04a09274f218996488a7f37632d87acf0aaa8d74 namespace=k8s.io Mar 17 18:56:22.231617 containerd[1508]: time="2025-03-17T18:56:22.231626043Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:56:22.244874 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-617a46b1ede1b57c6f305908644a23a174ff6737c79cd4ec3090a33d029190e7-rootfs.mount: Deactivated successfully. 
Mar 17 18:56:22.252634 containerd[1508]: time="2025-03-17T18:56:22.252562104Z" level=info msg="shim disconnected" id=617a46b1ede1b57c6f305908644a23a174ff6737c79cd4ec3090a33d029190e7 namespace=k8s.io Mar 17 18:56:22.252634 containerd[1508]: time="2025-03-17T18:56:22.252628420Z" level=warning msg="cleaning up after shim disconnected" id=617a46b1ede1b57c6f305908644a23a174ff6737c79cd4ec3090a33d029190e7 namespace=k8s.io Mar 17 18:56:22.252634 containerd[1508]: time="2025-03-17T18:56:22.252637100Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:56:22.258243 containerd[1508]: time="2025-03-17T18:56:22.258060765Z" level=info msg="StopContainer for \"078c02e9b91abeabb5c593ba04a09274f218996488a7f37632d87acf0aaa8d74\" returns successfully" Mar 17 18:56:22.259026 containerd[1508]: time="2025-03-17T18:56:22.258846802Z" level=info msg="StopPodSandbox for \"978f95b0de03dfa28312762b078b3db56837bb21a7f4aa35e966998358dd8357\"" Mar 17 18:56:22.259026 containerd[1508]: time="2025-03-17T18:56:22.258967435Z" level=info msg="Container to stop \"078c02e9b91abeabb5c593ba04a09274f218996488a7f37632d87acf0aaa8d74\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:56:22.261930 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-978f95b0de03dfa28312762b078b3db56837bb21a7f4aa35e966998358dd8357-shm.mount: Deactivated successfully. Mar 17 18:56:22.272049 systemd[1]: cri-containerd-978f95b0de03dfa28312762b078b3db56837bb21a7f4aa35e966998358dd8357.scope: Deactivated successfully. Mar 17 18:56:22.284597 containerd[1508]: time="2025-03-17T18:56:22.284543363Z" level=info msg="StopContainer for \"617a46b1ede1b57c6f305908644a23a174ff6737c79cd4ec3090a33d029190e7\" returns successfully" Mar 17 18:56:22.286006 containerd[1508]: time="2025-03-17T18:56:22.285840933Z" level=info msg="StopPodSandbox for \"e1025936796cf388af751a6a0bedf8b6c7ee26d48f578b0d6f6f35f09a73c47c\"" Mar 17 18:56:22.286006 containerd[1508]: time="2025-03-17T18:56:22.285927808Z" level=info msg="Container to stop \"da6e4178d3242b54e38bd31589e91eea2d9c910bfa2ea4a9c6618a40383b3243\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:56:22.286006 containerd[1508]: time="2025-03-17T18:56:22.285955447Z" level=info msg="Container to stop \"8147e11d40ffe745ae1430b95e1971b8a21363fecdccd3181faa3738f770f4b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:56:22.286006 containerd[1508]: time="2025-03-17T18:56:22.285965366Z" level=info msg="Container to stop \"3d86e6e9583b3110784b5f5d50f6e95ec1a8e9cbe8c2a930d7d019d040c3b417\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:56:22.286371 containerd[1508]: time="2025-03-17T18:56:22.285978605Z" level=info msg="Container to stop \"4001c2b36372d88338cd9cac071a6f31dc604ac1764a8d6c7137748f8320e34d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:56:22.286371 containerd[1508]: time="2025-03-17T18:56:22.286265150Z" level=info msg="Container to stop \"617a46b1ede1b57c6f305908644a23a174ff6737c79cd4ec3090a33d029190e7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:56:22.298128 systemd[1]: cri-containerd-e1025936796cf388af751a6a0bedf8b6c7ee26d48f578b0d6f6f35f09a73c47c.scope: Deactivated successfully. 
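The StopContainer records above spell out the stop flow: the runtime signals each container with SIGTERM under a per-container grace period (30 s for cilium-operator, 2 s for cilium-agent), waits for the task to exit, and only then stops the pod sandboxes. A rough equivalent with the public containerd Go client is sketched below, including the SIGKILL escalation a CRI implementation performs when the grace period expires; the container ID is copied from the operator records above purely for illustration, and the kubelet drives this through the CRI API rather than this client.

// Sketch of the stop-with-timeout behaviour behind the
// "StopContainer ... with timeout N (s)" records: SIGTERM first,
// SIGKILL if the task has not exited within the grace period.
package main

import (
	"context"
	"log"
	"syscall"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func stopWithTimeout(ctx context.Context, client *containerd.Client, id string, timeout time.Duration) error {
	container, err := client.LoadContainer(ctx, id)
	if err != nil {
		return err
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		return err
	}
	exitCh, err := task.Wait(ctx)
	if err != nil {
		return err
	}
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		return err
	}
	select {
	case <-exitCh:
		// Exited within the grace period, as the operator container did above.
	case <-time.After(timeout):
		// Grace period expired: escalate, as the CRI layer would.
		if err := task.Kill(ctx, syscall.SIGKILL); err != nil {
			return err
		}
		<-exitCh
	}
	_, err = task.Delete(ctx)
	return err
}

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	if err := stopWithTimeout(ctx, client, "078c02e9b91abeabb5c593ba04a09274f218996488a7f37632d87acf0aaa8d74", 30*time.Second); err != nil {
		log.Fatal(err)
	}
}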
Mar 17 18:56:22.320816 containerd[1508]: time="2025-03-17T18:56:22.320734034Z" level=info msg="shim disconnected" id=978f95b0de03dfa28312762b078b3db56837bb21a7f4aa35e966998358dd8357 namespace=k8s.io Mar 17 18:56:22.320816 containerd[1508]: time="2025-03-17T18:56:22.320803670Z" level=warning msg="cleaning up after shim disconnected" id=978f95b0de03dfa28312762b078b3db56837bb21a7f4aa35e966998358dd8357 namespace=k8s.io Mar 17 18:56:22.320816 containerd[1508]: time="2025-03-17T18:56:22.320816589Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:56:22.341339 containerd[1508]: time="2025-03-17T18:56:22.340974572Z" level=info msg="shim disconnected" id=e1025936796cf388af751a6a0bedf8b6c7ee26d48f578b0d6f6f35f09a73c47c namespace=k8s.io Mar 17 18:56:22.341339 containerd[1508]: time="2025-03-17T18:56:22.341268836Z" level=warning msg="cleaning up after shim disconnected" id=e1025936796cf388af751a6a0bedf8b6c7ee26d48f578b0d6f6f35f09a73c47c namespace=k8s.io Mar 17 18:56:22.341339 containerd[1508]: time="2025-03-17T18:56:22.341284755Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:56:22.344063 containerd[1508]: time="2025-03-17T18:56:22.343738542Z" level=info msg="TearDown network for sandbox \"978f95b0de03dfa28312762b078b3db56837bb21a7f4aa35e966998358dd8357\" successfully" Mar 17 18:56:22.344063 containerd[1508]: time="2025-03-17T18:56:22.343777500Z" level=info msg="StopPodSandbox for \"978f95b0de03dfa28312762b078b3db56837bb21a7f4aa35e966998358dd8357\" returns successfully" Mar 17 18:56:22.368104 containerd[1508]: time="2025-03-17T18:56:22.366424867Z" level=info msg="TearDown network for sandbox \"e1025936796cf388af751a6a0bedf8b6c7ee26d48f578b0d6f6f35f09a73c47c\" successfully" Mar 17 18:56:22.368104 containerd[1508]: time="2025-03-17T18:56:22.366470584Z" level=info msg="StopPodSandbox for \"e1025936796cf388af751a6a0bedf8b6c7ee26d48f578b0d6f6f35f09a73c47c\" returns successfully" Mar 17 18:56:22.446857 kubelet[2795]: I0317 18:56:22.446790 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-hostproc\") pod \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\" (UID: \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\") " Mar 17 18:56:22.447077 kubelet[2795]: I0317 18:56:22.446885 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/21bb9504-3da2-48f5-b8f5-92d2af0a3644-cilium-config-path\") pod \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\" (UID: \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\") " Mar 17 18:56:22.447077 kubelet[2795]: I0317 18:56:22.446921 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-etc-cni-netd\") pod \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\" (UID: \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\") " Mar 17 18:56:22.447077 kubelet[2795]: I0317 18:56:22.446958 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-lib-modules\") pod \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\" (UID: \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\") " Mar 17 18:56:22.447077 kubelet[2795]: I0317 18:56:22.447022 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-cilium-cgroup\") pod \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\" (UID: \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\") " Mar 17 18:56:22.447197 kubelet[2795]: I0317 18:56:22.447056 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-bpf-maps\") pod \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\" (UID: \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\") " Mar 17 18:56:22.447197 kubelet[2795]: I0317 18:56:22.447152 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-cni-path\") pod \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\" (UID: \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\") " Mar 17 18:56:22.447245 kubelet[2795]: I0317 18:56:22.447204 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-xtables-lock\") pod \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\" (UID: \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\") " Mar 17 18:56:22.447274 kubelet[2795]: I0317 18:56:22.447259 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad251a9a-5ee3-4488-a273-a2d788bdf63e-cilium-config-path\") pod \"ad251a9a-5ee3-4488-a273-a2d788bdf63e\" (UID: \"ad251a9a-5ee3-4488-a273-a2d788bdf63e\") " Mar 17 18:56:22.447332 kubelet[2795]: I0317 18:56:22.447306 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdsnv\" (UniqueName: \"kubernetes.io/projected/ad251a9a-5ee3-4488-a273-a2d788bdf63e-kube-api-access-wdsnv\") pod \"ad251a9a-5ee3-4488-a273-a2d788bdf63e\" (UID: \"ad251a9a-5ee3-4488-a273-a2d788bdf63e\") " Mar 17 18:56:22.447390 kubelet[2795]: I0317 18:56:22.447367 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/21bb9504-3da2-48f5-b8f5-92d2af0a3644-clustermesh-secrets\") pod \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\" (UID: \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\") " Mar 17 18:56:22.447434 kubelet[2795]: I0317 18:56:22.447407 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-cilium-run\") pod \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\" (UID: \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\") " Mar 17 18:56:22.447463 kubelet[2795]: I0317 18:56:22.447446 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkcls\" (UniqueName: \"kubernetes.io/projected/21bb9504-3da2-48f5-b8f5-92d2af0a3644-kube-api-access-xkcls\") pod \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\" (UID: \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\") " Mar 17 18:56:22.447510 kubelet[2795]: I0317 18:56:22.447487 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/21bb9504-3da2-48f5-b8f5-92d2af0a3644-hubble-tls\") pod \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\" (UID: \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\") " Mar 17 18:56:22.447544 kubelet[2795]: I0317 18:56:22.447530 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-host-proc-sys-net\") pod \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\" (UID: \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\") " Mar 17 18:56:22.447588 kubelet[2795]: I0317 18:56:22.447567 2795 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-host-proc-sys-kernel\") pod \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\" (UID: \"21bb9504-3da2-48f5-b8f5-92d2af0a3644\") " Mar 17 18:56:22.447813 kubelet[2795]: I0317 18:56:22.447758 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "21bb9504-3da2-48f5-b8f5-92d2af0a3644" (UID: "21bb9504-3da2-48f5-b8f5-92d2af0a3644"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:56:22.447882 kubelet[2795]: I0317 18:56:22.447846 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-hostproc" (OuterVolumeSpecName: "hostproc") pod "21bb9504-3da2-48f5-b8f5-92d2af0a3644" (UID: "21bb9504-3da2-48f5-b8f5-92d2af0a3644"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:56:22.452223 kubelet[2795]: I0317 18:56:22.452032 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad251a9a-5ee3-4488-a273-a2d788bdf63e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ad251a9a-5ee3-4488-a273-a2d788bdf63e" (UID: "ad251a9a-5ee3-4488-a273-a2d788bdf63e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:56:22.452223 kubelet[2795]: I0317 18:56:22.452142 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "21bb9504-3da2-48f5-b8f5-92d2af0a3644" (UID: "21bb9504-3da2-48f5-b8f5-92d2af0a3644"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:56:22.452223 kubelet[2795]: I0317 18:56:22.452167 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "21bb9504-3da2-48f5-b8f5-92d2af0a3644" (UID: "21bb9504-3da2-48f5-b8f5-92d2af0a3644"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:56:22.452223 kubelet[2795]: I0317 18:56:22.452188 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "21bb9504-3da2-48f5-b8f5-92d2af0a3644" (UID: "21bb9504-3da2-48f5-b8f5-92d2af0a3644"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:56:22.452223 kubelet[2795]: I0317 18:56:22.452206 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "21bb9504-3da2-48f5-b8f5-92d2af0a3644" (UID: "21bb9504-3da2-48f5-b8f5-92d2af0a3644"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:56:22.452584 kubelet[2795]: I0317 18:56:22.452223 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-cni-path" (OuterVolumeSpecName: "cni-path") pod "21bb9504-3da2-48f5-b8f5-92d2af0a3644" (UID: "21bb9504-3da2-48f5-b8f5-92d2af0a3644"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:56:22.452584 kubelet[2795]: I0317 18:56:22.452239 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "21bb9504-3da2-48f5-b8f5-92d2af0a3644" (UID: "21bb9504-3da2-48f5-b8f5-92d2af0a3644"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:56:22.456793 kubelet[2795]: I0317 18:56:22.456348 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "21bb9504-3da2-48f5-b8f5-92d2af0a3644" (UID: "21bb9504-3da2-48f5-b8f5-92d2af0a3644"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:56:22.457185 kubelet[2795]: I0317 18:56:22.457021 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21bb9504-3da2-48f5-b8f5-92d2af0a3644-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "21bb9504-3da2-48f5-b8f5-92d2af0a3644" (UID: "21bb9504-3da2-48f5-b8f5-92d2af0a3644"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:56:22.458081 kubelet[2795]: I0317 18:56:22.457560 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "21bb9504-3da2-48f5-b8f5-92d2af0a3644" (UID: "21bb9504-3da2-48f5-b8f5-92d2af0a3644"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:56:22.460294 kubelet[2795]: I0317 18:56:22.460223 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21bb9504-3da2-48f5-b8f5-92d2af0a3644-kube-api-access-xkcls" (OuterVolumeSpecName: "kube-api-access-xkcls") pod "21bb9504-3da2-48f5-b8f5-92d2af0a3644" (UID: "21bb9504-3da2-48f5-b8f5-92d2af0a3644"). InnerVolumeSpecName "kube-api-access-xkcls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:56:22.460470 kubelet[2795]: I0317 18:56:22.460446 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad251a9a-5ee3-4488-a273-a2d788bdf63e-kube-api-access-wdsnv" (OuterVolumeSpecName: "kube-api-access-wdsnv") pod "ad251a9a-5ee3-4488-a273-a2d788bdf63e" (UID: "ad251a9a-5ee3-4488-a273-a2d788bdf63e"). InnerVolumeSpecName "kube-api-access-wdsnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:56:22.461834 kubelet[2795]: I0317 18:56:22.461782 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21bb9504-3da2-48f5-b8f5-92d2af0a3644-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "21bb9504-3da2-48f5-b8f5-92d2af0a3644" (UID: "21bb9504-3da2-48f5-b8f5-92d2af0a3644"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:56:22.462691 kubelet[2795]: I0317 18:56:22.462622 2795 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21bb9504-3da2-48f5-b8f5-92d2af0a3644-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "21bb9504-3da2-48f5-b8f5-92d2af0a3644" (UID: "21bb9504-3da2-48f5-b8f5-92d2af0a3644"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:56:22.548160 kubelet[2795]: I0317 18:56:22.548093 2795 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-hostproc\") on node \"ci-4230-1-0-9-a87a0d0143\" DevicePath \"\"" Mar 17 18:56:22.548160 kubelet[2795]: I0317 18:56:22.548145 2795 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/21bb9504-3da2-48f5-b8f5-92d2af0a3644-cilium-config-path\") on node \"ci-4230-1-0-9-a87a0d0143\" DevicePath \"\"" Mar 17 18:56:22.548160 kubelet[2795]: I0317 18:56:22.548170 2795 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-etc-cni-netd\") on node \"ci-4230-1-0-9-a87a0d0143\" DevicePath \"\"" Mar 17 18:56:22.548379 kubelet[2795]: I0317 18:56:22.548187 2795 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-cni-path\") on node \"ci-4230-1-0-9-a87a0d0143\" DevicePath \"\"" Mar 17 18:56:22.548379 kubelet[2795]: I0317 18:56:22.548201 2795 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-lib-modules\") on node \"ci-4230-1-0-9-a87a0d0143\" DevicePath \"\"" Mar 17 18:56:22.548379 kubelet[2795]: I0317 18:56:22.548213 2795 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-cilium-cgroup\") on node \"ci-4230-1-0-9-a87a0d0143\" DevicePath \"\"" Mar 17 18:56:22.548379 kubelet[2795]: I0317 18:56:22.548225 2795 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-bpf-maps\") on node \"ci-4230-1-0-9-a87a0d0143\" DevicePath \"\"" Mar 17 18:56:22.548379 kubelet[2795]: I0317 18:56:22.548237 2795 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-xtables-lock\") on node \"ci-4230-1-0-9-a87a0d0143\" DevicePath \"\"" Mar 17 18:56:22.548379 kubelet[2795]: I0317 18:56:22.548250 2795 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad251a9a-5ee3-4488-a273-a2d788bdf63e-cilium-config-path\") on node \"ci-4230-1-0-9-a87a0d0143\" DevicePath \"\"" Mar 17 18:56:22.548379 kubelet[2795]: I0317 18:56:22.548263 2795 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wdsnv\" (UniqueName: \"kubernetes.io/projected/ad251a9a-5ee3-4488-a273-a2d788bdf63e-kube-api-access-wdsnv\") on node \"ci-4230-1-0-9-a87a0d0143\" DevicePath \"\"" Mar 17 18:56:22.548379 kubelet[2795]: I0317 18:56:22.548276 2795 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/21bb9504-3da2-48f5-b8f5-92d2af0a3644-clustermesh-secrets\") on node \"ci-4230-1-0-9-a87a0d0143\" DevicePath \"\"" Mar 17 18:56:22.548583 kubelet[2795]: I0317 18:56:22.548289 2795 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-cilium-run\") on node \"ci-4230-1-0-9-a87a0d0143\" DevicePath \"\"" Mar 17 18:56:22.548583 kubelet[2795]: I0317 18:56:22.548303 2795 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xkcls\" (UniqueName: \"kubernetes.io/projected/21bb9504-3da2-48f5-b8f5-92d2af0a3644-kube-api-access-xkcls\") on node \"ci-4230-1-0-9-a87a0d0143\" DevicePath \"\"" Mar 17 18:56:22.548583 kubelet[2795]: I0317 18:56:22.548315 2795 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/21bb9504-3da2-48f5-b8f5-92d2af0a3644-hubble-tls\") on node \"ci-4230-1-0-9-a87a0d0143\" DevicePath \"\"" Mar 17 18:56:22.548583 kubelet[2795]: I0317 18:56:22.548329 2795 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-host-proc-sys-net\") on node \"ci-4230-1-0-9-a87a0d0143\" DevicePath \"\"" Mar 17 18:56:22.548583 kubelet[2795]: I0317 18:56:22.548342 2795 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/21bb9504-3da2-48f5-b8f5-92d2af0a3644-host-proc-sys-kernel\") on node \"ci-4230-1-0-9-a87a0d0143\" DevicePath \"\"" Mar 17 18:56:22.562865 kubelet[2795]: I0317 18:56:22.562266 2795 scope.go:117] "RemoveContainer" containerID="078c02e9b91abeabb5c593ba04a09274f218996488a7f37632d87acf0aaa8d74" Mar 17 18:56:22.565106 containerd[1508]: time="2025-03-17T18:56:22.565063616Z" level=info msg="RemoveContainer for \"078c02e9b91abeabb5c593ba04a09274f218996488a7f37632d87acf0aaa8d74\"" Mar 17 18:56:22.577535 containerd[1508]: time="2025-03-17T18:56:22.577161998Z" level=info msg="RemoveContainer for \"078c02e9b91abeabb5c593ba04a09274f218996488a7f37632d87acf0aaa8d74\" returns successfully" Mar 17 18:56:22.579015 systemd[1]: Removed slice kubepods-besteffort-podad251a9a_5ee3_4488_a273_a2d788bdf63e.slice - libcontainer container kubepods-besteffort-podad251a9a_5ee3_4488_a273_a2d788bdf63e.slice. 
Mar 17 18:56:22.581285 kubelet[2795]: I0317 18:56:22.580835 2795 scope.go:117] "RemoveContainer" containerID="078c02e9b91abeabb5c593ba04a09274f218996488a7f37632d87acf0aaa8d74" Mar 17 18:56:22.581832 containerd[1508]: time="2025-03-17T18:56:22.581106823Z" level=error msg="ContainerStatus for \"078c02e9b91abeabb5c593ba04a09274f218996488a7f37632d87acf0aaa8d74\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"078c02e9b91abeabb5c593ba04a09274f218996488a7f37632d87acf0aaa8d74\": not found" Mar 17 18:56:22.581906 kubelet[2795]: E0317 18:56:22.581294 2795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"078c02e9b91abeabb5c593ba04a09274f218996488a7f37632d87acf0aaa8d74\": not found" containerID="078c02e9b91abeabb5c593ba04a09274f218996488a7f37632d87acf0aaa8d74" Mar 17 18:56:22.581906 kubelet[2795]: I0317 18:56:22.581329 2795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"078c02e9b91abeabb5c593ba04a09274f218996488a7f37632d87acf0aaa8d74"} err="failed to get container status \"078c02e9b91abeabb5c593ba04a09274f218996488a7f37632d87acf0aaa8d74\": rpc error: code = NotFound desc = an error occurred when try to find container \"078c02e9b91abeabb5c593ba04a09274f218996488a7f37632d87acf0aaa8d74\": not found" Mar 17 18:56:22.581906 kubelet[2795]: I0317 18:56:22.581442 2795 scope.go:117] "RemoveContainer" containerID="617a46b1ede1b57c6f305908644a23a174ff6737c79cd4ec3090a33d029190e7" Mar 17 18:56:22.584704 systemd[1]: Removed slice kubepods-burstable-pod21bb9504_3da2_48f5_b8f5_92d2af0a3644.slice - libcontainer container kubepods-burstable-pod21bb9504_3da2_48f5_b8f5_92d2af0a3644.slice. Mar 17 18:56:22.586336 containerd[1508]: time="2025-03-17T18:56:22.584539796Z" level=info msg="RemoveContainer for \"617a46b1ede1b57c6f305908644a23a174ff6737c79cd4ec3090a33d029190e7\"" Mar 17 18:56:22.584831 systemd[1]: kubepods-burstable-pod21bb9504_3da2_48f5_b8f5_92d2af0a3644.slice: Consumed 8.896s CPU time, 125.8M memory peak, 144K read from disk, 12.9M written to disk. 
Mar 17 18:56:22.593889 containerd[1508]: time="2025-03-17T18:56:22.593455551Z" level=info msg="RemoveContainer for \"617a46b1ede1b57c6f305908644a23a174ff6737c79cd4ec3090a33d029190e7\" returns successfully" Mar 17 18:56:22.594973 kubelet[2795]: I0317 18:56:22.594941 2795 scope.go:117] "RemoveContainer" containerID="4001c2b36372d88338cd9cac071a6f31dc604ac1764a8d6c7137748f8320e34d" Mar 17 18:56:22.597976 containerd[1508]: time="2025-03-17T18:56:22.597606045Z" level=info msg="RemoveContainer for \"4001c2b36372d88338cd9cac071a6f31dc604ac1764a8d6c7137748f8320e34d\"" Mar 17 18:56:22.602773 containerd[1508]: time="2025-03-17T18:56:22.602722006Z" level=info msg="RemoveContainer for \"4001c2b36372d88338cd9cac071a6f31dc604ac1764a8d6c7137748f8320e34d\" returns successfully" Mar 17 18:56:22.603212 kubelet[2795]: I0317 18:56:22.603051 2795 scope.go:117] "RemoveContainer" containerID="3d86e6e9583b3110784b5f5d50f6e95ec1a8e9cbe8c2a930d7d019d040c3b417" Mar 17 18:56:22.606637 containerd[1508]: time="2025-03-17T18:56:22.606600555Z" level=info msg="RemoveContainer for \"3d86e6e9583b3110784b5f5d50f6e95ec1a8e9cbe8c2a930d7d019d040c3b417\"" Mar 17 18:56:22.611630 containerd[1508]: time="2025-03-17T18:56:22.611572125Z" level=info msg="RemoveContainer for \"3d86e6e9583b3110784b5f5d50f6e95ec1a8e9cbe8c2a930d7d019d040c3b417\" returns successfully" Mar 17 18:56:22.612013 kubelet[2795]: I0317 18:56:22.611882 2795 scope.go:117] "RemoveContainer" containerID="8147e11d40ffe745ae1430b95e1971b8a21363fecdccd3181faa3738f770f4b8" Mar 17 18:56:22.615400 containerd[1508]: time="2025-03-17T18:56:22.615349239Z" level=info msg="RemoveContainer for \"8147e11d40ffe745ae1430b95e1971b8a21363fecdccd3181faa3738f770f4b8\"" Mar 17 18:56:22.622134 containerd[1508]: time="2025-03-17T18:56:22.621773690Z" level=info msg="RemoveContainer for \"8147e11d40ffe745ae1430b95e1971b8a21363fecdccd3181faa3738f770f4b8\" returns successfully" Mar 17 18:56:22.624219 kubelet[2795]: I0317 18:56:22.622103 2795 scope.go:117] "RemoveContainer" containerID="da6e4178d3242b54e38bd31589e91eea2d9c910bfa2ea4a9c6618a40383b3243" Mar 17 18:56:22.627518 containerd[1508]: time="2025-03-17T18:56:22.627343306Z" level=info msg="RemoveContainer for \"da6e4178d3242b54e38bd31589e91eea2d9c910bfa2ea4a9c6618a40383b3243\"" Mar 17 18:56:22.637749 containerd[1508]: time="2025-03-17T18:56:22.636279980Z" level=info msg="RemoveContainer for \"da6e4178d3242b54e38bd31589e91eea2d9c910bfa2ea4a9c6618a40383b3243\" returns successfully" Mar 17 18:56:22.640746 kubelet[2795]: I0317 18:56:22.640710 2795 scope.go:117] "RemoveContainer" containerID="617a46b1ede1b57c6f305908644a23a174ff6737c79cd4ec3090a33d029190e7" Mar 17 18:56:22.641319 containerd[1508]: time="2025-03-17T18:56:22.641270748Z" level=error msg="ContainerStatus for \"617a46b1ede1b57c6f305908644a23a174ff6737c79cd4ec3090a33d029190e7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"617a46b1ede1b57c6f305908644a23a174ff6737c79cd4ec3090a33d029190e7\": not found" Mar 17 18:56:22.641698 kubelet[2795]: E0317 18:56:22.641667 2795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"617a46b1ede1b57c6f305908644a23a174ff6737c79cd4ec3090a33d029190e7\": not found" containerID="617a46b1ede1b57c6f305908644a23a174ff6737c79cd4ec3090a33d029190e7" Mar 17 18:56:22.641829 kubelet[2795]: I0317 18:56:22.641802 2795 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"617a46b1ede1b57c6f305908644a23a174ff6737c79cd4ec3090a33d029190e7"} err="failed to get container status \"617a46b1ede1b57c6f305908644a23a174ff6737c79cd4ec3090a33d029190e7\": rpc error: code = NotFound desc = an error occurred when try to find container \"617a46b1ede1b57c6f305908644a23a174ff6737c79cd4ec3090a33d029190e7\": not found" Mar 17 18:56:22.641899 kubelet[2795]: I0317 18:56:22.641889 2795 scope.go:117] "RemoveContainer" containerID="4001c2b36372d88338cd9cac071a6f31dc604ac1764a8d6c7137748f8320e34d" Mar 17 18:56:22.642267 containerd[1508]: time="2025-03-17T18:56:22.642224337Z" level=error msg="ContainerStatus for \"4001c2b36372d88338cd9cac071a6f31dc604ac1764a8d6c7137748f8320e34d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4001c2b36372d88338cd9cac071a6f31dc604ac1764a8d6c7137748f8320e34d\": not found" Mar 17 18:56:22.642449 kubelet[2795]: E0317 18:56:22.642417 2795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4001c2b36372d88338cd9cac071a6f31dc604ac1764a8d6c7137748f8320e34d\": not found" containerID="4001c2b36372d88338cd9cac071a6f31dc604ac1764a8d6c7137748f8320e34d" Mar 17 18:56:22.642501 kubelet[2795]: I0317 18:56:22.642459 2795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4001c2b36372d88338cd9cac071a6f31dc604ac1764a8d6c7137748f8320e34d"} err="failed to get container status \"4001c2b36372d88338cd9cac071a6f31dc604ac1764a8d6c7137748f8320e34d\": rpc error: code = NotFound desc = an error occurred when try to find container \"4001c2b36372d88338cd9cac071a6f31dc604ac1764a8d6c7137748f8320e34d\": not found" Mar 17 18:56:22.642501 kubelet[2795]: I0317 18:56:22.642483 2795 scope.go:117] "RemoveContainer" containerID="3d86e6e9583b3110784b5f5d50f6e95ec1a8e9cbe8c2a930d7d019d040c3b417" Mar 17 18:56:22.642889 containerd[1508]: time="2025-03-17T18:56:22.642801025Z" level=error msg="ContainerStatus for \"3d86e6e9583b3110784b5f5d50f6e95ec1a8e9cbe8c2a930d7d019d040c3b417\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3d86e6e9583b3110784b5f5d50f6e95ec1a8e9cbe8c2a930d7d019d040c3b417\": not found" Mar 17 18:56:22.643228 kubelet[2795]: E0317 18:56:22.643095 2795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3d86e6e9583b3110784b5f5d50f6e95ec1a8e9cbe8c2a930d7d019d040c3b417\": not found" containerID="3d86e6e9583b3110784b5f5d50f6e95ec1a8e9cbe8c2a930d7d019d040c3b417" Mar 17 18:56:22.643228 kubelet[2795]: I0317 18:56:22.643125 2795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3d86e6e9583b3110784b5f5d50f6e95ec1a8e9cbe8c2a930d7d019d040c3b417"} err="failed to get container status \"3d86e6e9583b3110784b5f5d50f6e95ec1a8e9cbe8c2a930d7d019d040c3b417\": rpc error: code = NotFound desc = an error occurred when try to find container \"3d86e6e9583b3110784b5f5d50f6e95ec1a8e9cbe8c2a930d7d019d040c3b417\": not found" Mar 17 18:56:22.643228 kubelet[2795]: I0317 18:56:22.643143 2795 scope.go:117] "RemoveContainer" containerID="8147e11d40ffe745ae1430b95e1971b8a21363fecdccd3181faa3738f770f4b8" Mar 17 18:56:22.643820 containerd[1508]: time="2025-03-17T18:56:22.643734534Z" level=error msg="ContainerStatus for \"8147e11d40ffe745ae1430b95e1971b8a21363fecdccd3181faa3738f770f4b8\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"8147e11d40ffe745ae1430b95e1971b8a21363fecdccd3181faa3738f770f4b8\": not found" Mar 17 18:56:22.644159 kubelet[2795]: E0317 18:56:22.644022 2795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8147e11d40ffe745ae1430b95e1971b8a21363fecdccd3181faa3738f770f4b8\": not found" containerID="8147e11d40ffe745ae1430b95e1971b8a21363fecdccd3181faa3738f770f4b8" Mar 17 18:56:22.644159 kubelet[2795]: I0317 18:56:22.644051 2795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8147e11d40ffe745ae1430b95e1971b8a21363fecdccd3181faa3738f770f4b8"} err="failed to get container status \"8147e11d40ffe745ae1430b95e1971b8a21363fecdccd3181faa3738f770f4b8\": rpc error: code = NotFound desc = an error occurred when try to find container \"8147e11d40ffe745ae1430b95e1971b8a21363fecdccd3181faa3738f770f4b8\": not found" Mar 17 18:56:22.644159 kubelet[2795]: I0317 18:56:22.644081 2795 scope.go:117] "RemoveContainer" containerID="da6e4178d3242b54e38bd31589e91eea2d9c910bfa2ea4a9c6618a40383b3243" Mar 17 18:56:22.644567 containerd[1508]: time="2025-03-17T18:56:22.644495173Z" level=error msg="ContainerStatus for \"da6e4178d3242b54e38bd31589e91eea2d9c910bfa2ea4a9c6618a40383b3243\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da6e4178d3242b54e38bd31589e91eea2d9c910bfa2ea4a9c6618a40383b3243\": not found" Mar 17 18:56:22.644892 kubelet[2795]: E0317 18:56:22.644789 2795 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da6e4178d3242b54e38bd31589e91eea2d9c910bfa2ea4a9c6618a40383b3243\": not found" containerID="da6e4178d3242b54e38bd31589e91eea2d9c910bfa2ea4a9c6618a40383b3243" Mar 17 18:56:22.644892 kubelet[2795]: I0317 18:56:22.644819 2795 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da6e4178d3242b54e38bd31589e91eea2d9c910bfa2ea4a9c6618a40383b3243"} err="failed to get container status \"da6e4178d3242b54e38bd31589e91eea2d9c910bfa2ea4a9c6618a40383b3243\": rpc error: code = NotFound desc = an error occurred when try to find container \"da6e4178d3242b54e38bd31589e91eea2d9c910bfa2ea4a9c6618a40383b3243\": not found" Mar 17 18:56:23.122954 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-978f95b0de03dfa28312762b078b3db56837bb21a7f4aa35e966998358dd8357-rootfs.mount: Deactivated successfully. Mar 17 18:56:23.123437 systemd[1]: var-lib-kubelet-pods-ad251a9a\x2d5ee3\x2d4488\x2da273\x2da2d788bdf63e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwdsnv.mount: Deactivated successfully. Mar 17 18:56:23.124193 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1025936796cf388af751a6a0bedf8b6c7ee26d48f578b0d6f6f35f09a73c47c-rootfs.mount: Deactivated successfully. Mar 17 18:56:23.124329 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e1025936796cf388af751a6a0bedf8b6c7ee26d48f578b0d6f6f35f09a73c47c-shm.mount: Deactivated successfully. Mar 17 18:56:23.124459 systemd[1]: var-lib-kubelet-pods-21bb9504\x2d3da2\x2d48f5\x2db8f5\x2d92d2af0a3644-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxkcls.mount: Deactivated successfully. 
Mar 17 18:56:23.124599 systemd[1]: var-lib-kubelet-pods-21bb9504\x2d3da2\x2d48f5\x2db8f5\x2d92d2af0a3644-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 18:56:23.125321 systemd[1]: var-lib-kubelet-pods-21bb9504\x2d3da2\x2d48f5\x2db8f5\x2d92d2af0a3644-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:56:23.577369 kubelet[2795]: E0317 18:56:23.577131 2795 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:56:24.189427 sshd[4400]: Connection closed by 139.178.89.65 port 39044 Mar 17 18:56:24.189307 sshd-session[4398]: pam_unix(sshd:session): session closed for user core Mar 17 18:56:24.193785 systemd[1]: sshd@19-138.201.89.219:22-139.178.89.65:39044.service: Deactivated successfully. Mar 17 18:56:24.197916 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 18:56:24.198191 systemd[1]: session-20.scope: Consumed 1.211s CPU time, 23.7M memory peak. Mar 17 18:56:24.201204 systemd-logind[1484]: Session 20 logged out. Waiting for processes to exit. Mar 17 18:56:24.203341 systemd-logind[1484]: Removed session 20. Mar 17 18:56:24.209031 kubelet[2795]: I0317 18:56:24.208938 2795 setters.go:600] "Node became not ready" node="ci-4230-1-0-9-a87a0d0143" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T18:56:24Z","lastTransitionTime":"2025-03-17T18:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 17 18:56:24.352581 kubelet[2795]: I0317 18:56:24.351227 2795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21bb9504-3da2-48f5-b8f5-92d2af0a3644" path="/var/lib/kubelet/pods/21bb9504-3da2-48f5-b8f5-92d2af0a3644/volumes" Mar 17 18:56:24.352581 kubelet[2795]: I0317 18:56:24.352254 2795 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad251a9a-5ee3-4488-a273-a2d788bdf63e" path="/var/lib/kubelet/pods/ad251a9a-5ee3-4488-a273-a2d788bdf63e/volumes" Mar 17 18:56:24.364424 systemd[1]: Started sshd@20-138.201.89.219:22-139.178.89.65:37124.service - OpenSSH per-connection server daemon (139.178.89.65:37124). Mar 17 18:56:25.350313 sshd[4565]: Accepted publickey for core from 139.178.89.65 port 37124 ssh2: RSA SHA256:v/asyzeddMvawcqTHyrMQabrN1x7tHOvH9FvogCn6lE Mar 17 18:56:25.353251 sshd-session[4565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:56:25.361639 systemd-logind[1484]: New session 21 of user core. Mar 17 18:56:25.371271 systemd[1]: Started session-21.scope - Session 21 of User core. 
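The entries above record kubelet tearing down the old Cilium pods' volumes one plugin at a time (host-path, configmap, secret, projected) and then cleaning up the orphaned pod volume directories. As a minimal illustration only, assuming the official Python `kubernetes` client and a reachable kubeconfig (neither is shown in the log), the same volume plugin types can be read back from a pod spec:

```python
# Illustration only, not part of the log: list which volume plugin backs each
# volume a kube-system pod declares, mirroring the host-path / configmap /
# secret / projected volumes that kubelet unmounts in the entries above.
from kubernetes import client, config

config.load_kube_config()              # assumes a kubeconfig is available
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod("kube-system").items:
    for vol in pod.spec.volumes or []:
        # Each V1Volume sets exactly one source field; report which one is set.
        if vol.host_path:
            kind = f"host-path -> {vol.host_path.path}"
        elif vol.config_map:
            kind = f"configmap -> {vol.config_map.name}"
        elif vol.secret:
            kind = f"secret -> {vol.secret.secret_name}"
        elif vol.projected:
            kind = "projected (e.g. kube-api-access token)"
        else:
            kind = "other"
        print(f"{pod.metadata.name} / {vol.name}: {kind}")
```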
Mar 17 18:56:27.571860 kubelet[2795]: E0317 18:56:27.569905 2795 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="21bb9504-3da2-48f5-b8f5-92d2af0a3644" containerName="mount-bpf-fs" Mar 17 18:56:27.571860 kubelet[2795]: E0317 18:56:27.569944 2795 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="21bb9504-3da2-48f5-b8f5-92d2af0a3644" containerName="apply-sysctl-overwrites" Mar 17 18:56:27.571860 kubelet[2795]: E0317 18:56:27.569951 2795 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="21bb9504-3da2-48f5-b8f5-92d2af0a3644" containerName="mount-cgroup" Mar 17 18:56:27.571860 kubelet[2795]: E0317 18:56:27.569957 2795 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ad251a9a-5ee3-4488-a273-a2d788bdf63e" containerName="cilium-operator" Mar 17 18:56:27.571860 kubelet[2795]: E0317 18:56:27.569962 2795 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="21bb9504-3da2-48f5-b8f5-92d2af0a3644" containerName="clean-cilium-state" Mar 17 18:56:27.571860 kubelet[2795]: E0317 18:56:27.569968 2795 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="21bb9504-3da2-48f5-b8f5-92d2af0a3644" containerName="cilium-agent" Mar 17 18:56:27.571860 kubelet[2795]: I0317 18:56:27.570003 2795 memory_manager.go:354] "RemoveStaleState removing state" podUID="21bb9504-3da2-48f5-b8f5-92d2af0a3644" containerName="cilium-agent" Mar 17 18:56:27.571860 kubelet[2795]: I0317 18:56:27.570010 2795 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad251a9a-5ee3-4488-a273-a2d788bdf63e" containerName="cilium-operator" Mar 17 18:56:27.580496 systemd[1]: Created slice kubepods-burstable-podf79b90bd_6edc_427b_b79c_ec32d233262e.slice - libcontainer container kubepods-burstable-podf79b90bd_6edc_427b_b79c_ec32d233262e.slice. 
Mar 17 18:56:27.686505 kubelet[2795]: I0317 18:56:27.686448 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j55wd\" (UniqueName: \"kubernetes.io/projected/f79b90bd-6edc-427b-b79c-ec32d233262e-kube-api-access-j55wd\") pod \"cilium-ws4wp\" (UID: \"f79b90bd-6edc-427b-b79c-ec32d233262e\") " pod="kube-system/cilium-ws4wp" Mar 17 18:56:27.686833 kubelet[2795]: I0317 18:56:27.686801 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f79b90bd-6edc-427b-b79c-ec32d233262e-cilium-run\") pod \"cilium-ws4wp\" (UID: \"f79b90bd-6edc-427b-b79c-ec32d233262e\") " pod="kube-system/cilium-ws4wp" Mar 17 18:56:27.687233 kubelet[2795]: I0317 18:56:27.687202 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f79b90bd-6edc-427b-b79c-ec32d233262e-cilium-cgroup\") pod \"cilium-ws4wp\" (UID: \"f79b90bd-6edc-427b-b79c-ec32d233262e\") " pod="kube-system/cilium-ws4wp" Mar 17 18:56:27.687460 kubelet[2795]: I0317 18:56:27.687426 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f79b90bd-6edc-427b-b79c-ec32d233262e-lib-modules\") pod \"cilium-ws4wp\" (UID: \"f79b90bd-6edc-427b-b79c-ec32d233262e\") " pod="kube-system/cilium-ws4wp" Mar 17 18:56:27.687898 kubelet[2795]: I0317 18:56:27.687775 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f79b90bd-6edc-427b-b79c-ec32d233262e-clustermesh-secrets\") pod \"cilium-ws4wp\" (UID: \"f79b90bd-6edc-427b-b79c-ec32d233262e\") " pod="kube-system/cilium-ws4wp" Mar 17 18:56:27.688159 kubelet[2795]: I0317 18:56:27.688031 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f79b90bd-6edc-427b-b79c-ec32d233262e-host-proc-sys-kernel\") pod \"cilium-ws4wp\" (UID: \"f79b90bd-6edc-427b-b79c-ec32d233262e\") " pod="kube-system/cilium-ws4wp" Mar 17 18:56:27.688159 kubelet[2795]: I0317 18:56:27.688092 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f79b90bd-6edc-427b-b79c-ec32d233262e-etc-cni-netd\") pod \"cilium-ws4wp\" (UID: \"f79b90bd-6edc-427b-b79c-ec32d233262e\") " pod="kube-system/cilium-ws4wp" Mar 17 18:56:27.688417 kubelet[2795]: I0317 18:56:27.688221 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f79b90bd-6edc-427b-b79c-ec32d233262e-cilium-ipsec-secrets\") pod \"cilium-ws4wp\" (UID: \"f79b90bd-6edc-427b-b79c-ec32d233262e\") " pod="kube-system/cilium-ws4wp" Mar 17 18:56:27.688748 kubelet[2795]: I0317 18:56:27.688514 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f79b90bd-6edc-427b-b79c-ec32d233262e-bpf-maps\") pod \"cilium-ws4wp\" (UID: \"f79b90bd-6edc-427b-b79c-ec32d233262e\") " pod="kube-system/cilium-ws4wp" Mar 17 18:56:27.688748 kubelet[2795]: I0317 18:56:27.688606 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/f79b90bd-6edc-427b-b79c-ec32d233262e-hostproc\") pod \"cilium-ws4wp\" (UID: \"f79b90bd-6edc-427b-b79c-ec32d233262e\") " pod="kube-system/cilium-ws4wp" Mar 17 18:56:27.688748 kubelet[2795]: I0317 18:56:27.688641 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f79b90bd-6edc-427b-b79c-ec32d233262e-cni-path\") pod \"cilium-ws4wp\" (UID: \"f79b90bd-6edc-427b-b79c-ec32d233262e\") " pod="kube-system/cilium-ws4wp" Mar 17 18:56:27.688748 kubelet[2795]: I0317 18:56:27.688714 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f79b90bd-6edc-427b-b79c-ec32d233262e-xtables-lock\") pod \"cilium-ws4wp\" (UID: \"f79b90bd-6edc-427b-b79c-ec32d233262e\") " pod="kube-system/cilium-ws4wp" Mar 17 18:56:27.689496 kubelet[2795]: I0317 18:56:27.689221 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f79b90bd-6edc-427b-b79c-ec32d233262e-host-proc-sys-net\") pod \"cilium-ws4wp\" (UID: \"f79b90bd-6edc-427b-b79c-ec32d233262e\") " pod="kube-system/cilium-ws4wp" Mar 17 18:56:27.689496 kubelet[2795]: I0317 18:56:27.689390 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f79b90bd-6edc-427b-b79c-ec32d233262e-hubble-tls\") pod \"cilium-ws4wp\" (UID: \"f79b90bd-6edc-427b-b79c-ec32d233262e\") " pod="kube-system/cilium-ws4wp" Mar 17 18:56:27.689496 kubelet[2795]: I0317 18:56:27.689446 2795 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f79b90bd-6edc-427b-b79c-ec32d233262e-cilium-config-path\") pod \"cilium-ws4wp\" (UID: \"f79b90bd-6edc-427b-b79c-ec32d233262e\") " pod="kube-system/cilium-ws4wp" Mar 17 18:56:27.732060 sshd[4567]: Connection closed by 139.178.89.65 port 37124 Mar 17 18:56:27.733036 sshd-session[4565]: pam_unix(sshd:session): session closed for user core Mar 17 18:56:27.749925 systemd[1]: sshd@20-138.201.89.219:22-139.178.89.65:37124.service: Deactivated successfully. Mar 17 18:56:27.754579 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 18:56:27.756062 systemd[1]: session-21.scope: Consumed 1.581s CPU time, 25.8M memory peak. Mar 17 18:56:27.759742 systemd-logind[1484]: Session 21 logged out. Waiting for processes to exit. Mar 17 18:56:27.762690 systemd-logind[1484]: Removed session 21. Mar 17 18:56:27.886971 containerd[1508]: time="2025-03-17T18:56:27.886485153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ws4wp,Uid:f79b90bd-6edc-427b-b79c-ec32d233262e,Namespace:kube-system,Attempt:0,}" Mar 17 18:56:27.920346 systemd[1]: Started sshd@21-138.201.89.219:22-139.178.89.65:37140.service - OpenSSH per-connection server daemon (139.178.89.65:37140). Mar 17 18:56:27.936222 containerd[1508]: time="2025-03-17T18:56:27.936112510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:56:27.936369 containerd[1508]: time="2025-03-17T18:56:27.936259622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:56:27.936369 containerd[1508]: time="2025-03-17T18:56:27.936351618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:56:27.936929 containerd[1508]: time="2025-03-17T18:56:27.936850393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:56:27.961435 systemd[1]: Started cri-containerd-83441b477383937f8688ec79fbea905ea1b2c4f32dc8b6dca4e845773c4e24a3.scope - libcontainer container 83441b477383937f8688ec79fbea905ea1b2c4f32dc8b6dca4e845773c4e24a3. Mar 17 18:56:27.997039 containerd[1508]: time="2025-03-17T18:56:27.996955425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ws4wp,Uid:f79b90bd-6edc-427b-b79c-ec32d233262e,Namespace:kube-system,Attempt:0,} returns sandbox id \"83441b477383937f8688ec79fbea905ea1b2c4f32dc8b6dca4e845773c4e24a3\"" Mar 17 18:56:28.002234 containerd[1508]: time="2025-03-17T18:56:28.002192884Z" level=info msg="CreateContainer within sandbox \"83441b477383937f8688ec79fbea905ea1b2c4f32dc8b6dca4e845773c4e24a3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:56:28.015745 containerd[1508]: time="2025-03-17T18:56:28.015676621Z" level=info msg="CreateContainer within sandbox \"83441b477383937f8688ec79fbea905ea1b2c4f32dc8b6dca4e845773c4e24a3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ad727064eae38b3c0076a5ed491b8be733bda4c98ee330a5b9aa4839f67bdb90\"" Mar 17 18:56:28.016590 containerd[1508]: time="2025-03-17T18:56:28.016449863Z" level=info msg="StartContainer for \"ad727064eae38b3c0076a5ed491b8be733bda4c98ee330a5b9aa4839f67bdb90\"" Mar 17 18:56:28.054196 systemd[1]: Started cri-containerd-ad727064eae38b3c0076a5ed491b8be733bda4c98ee330a5b9aa4839f67bdb90.scope - libcontainer container ad727064eae38b3c0076a5ed491b8be733bda4c98ee330a5b9aa4839f67bdb90. Mar 17 18:56:28.080121 containerd[1508]: time="2025-03-17T18:56:28.079185857Z" level=info msg="StartContainer for \"ad727064eae38b3c0076a5ed491b8be733bda4c98ee330a5b9aa4839f67bdb90\" returns successfully" Mar 17 18:56:28.093261 systemd[1]: cri-containerd-ad727064eae38b3c0076a5ed491b8be733bda4c98ee330a5b9aa4839f67bdb90.scope: Deactivated successfully. 
Mar 17 18:56:28.138510 containerd[1508]: time="2025-03-17T18:56:28.138348587Z" level=info msg="shim disconnected" id=ad727064eae38b3c0076a5ed491b8be733bda4c98ee330a5b9aa4839f67bdb90 namespace=k8s.io Mar 17 18:56:28.139363 containerd[1508]: time="2025-03-17T18:56:28.138917239Z" level=warning msg="cleaning up after shim disconnected" id=ad727064eae38b3c0076a5ed491b8be733bda4c98ee330a5b9aa4839f67bdb90 namespace=k8s.io Mar 17 18:56:28.139363 containerd[1508]: time="2025-03-17T18:56:28.138948318Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:56:28.151746 containerd[1508]: time="2025-03-17T18:56:28.151564257Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:56:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 18:56:28.578188 kubelet[2795]: E0317 18:56:28.578123 2795 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:56:28.614026 containerd[1508]: time="2025-03-17T18:56:28.610643198Z" level=info msg="CreateContainer within sandbox \"83441b477383937f8688ec79fbea905ea1b2c4f32dc8b6dca4e845773c4e24a3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:56:28.637479 containerd[1508]: time="2025-03-17T18:56:28.637419761Z" level=info msg="CreateContainer within sandbox \"83441b477383937f8688ec79fbea905ea1b2c4f32dc8b6dca4e845773c4e24a3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f9f8ea47a7d1dd257a9c4a3aa956edd89efe881b9d9f7325dacaae7fa31ac583\"" Mar 17 18:56:28.638604 containerd[1508]: time="2025-03-17T18:56:28.638572504Z" level=info msg="StartContainer for \"f9f8ea47a7d1dd257a9c4a3aa956edd89efe881b9d9f7325dacaae7fa31ac583\"" Mar 17 18:56:28.667275 systemd[1]: Started cri-containerd-f9f8ea47a7d1dd257a9c4a3aa956edd89efe881b9d9f7325dacaae7fa31ac583.scope - libcontainer container f9f8ea47a7d1dd257a9c4a3aa956edd89efe881b9d9f7325dacaae7fa31ac583. Mar 17 18:56:28.704634 containerd[1508]: time="2025-03-17T18:56:28.704541260Z" level=info msg="StartContainer for \"f9f8ea47a7d1dd257a9c4a3aa956edd89efe881b9d9f7325dacaae7fa31ac583\" returns successfully" Mar 17 18:56:28.709560 systemd[1]: cri-containerd-f9f8ea47a7d1dd257a9c4a3aa956edd89efe881b9d9f7325dacaae7fa31ac583.scope: Deactivated successfully. Mar 17 18:56:28.737526 containerd[1508]: time="2025-03-17T18:56:28.737461920Z" level=info msg="shim disconnected" id=f9f8ea47a7d1dd257a9c4a3aa956edd89efe881b9d9f7325dacaae7fa31ac583 namespace=k8s.io Mar 17 18:56:28.737526 containerd[1508]: time="2025-03-17T18:56:28.737523717Z" level=warning msg="cleaning up after shim disconnected" id=f9f8ea47a7d1dd257a9c4a3aa956edd89efe881b9d9f7325dacaae7fa31ac583 namespace=k8s.io Mar 17 18:56:28.737526 containerd[1508]: time="2025-03-17T18:56:28.737531837Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:56:28.922791 sshd[4585]: Accepted publickey for core from 139.178.89.65 port 37140 ssh2: RSA SHA256:v/asyzeddMvawcqTHyrMQabrN1x7tHOvH9FvogCn6lE Mar 17 18:56:28.925602 sshd-session[4585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:56:28.934430 systemd-logind[1484]: New session 22 of user core. Mar 17 18:56:28.943397 systemd[1]: Started session-22.scope - Session 22 of User core. 
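By this point the replacement pod cilium-ws4wp has its sandbox (83441b47…), and the first two init steps, mount-cgroup and apply-sysctl-overwrites, have each run to completion and exited; the remaining steps follow below. As a minimal illustration only, again assuming the official Python `kubernetes` client and cluster access, the same init-container sequence can be read from the pod status:

```python
# Illustration only, not part of the log: print the state of each init
# container and main container of the cilium-ws4wp pod created above.
from kubernetes import client, config

config.load_kube_config()              # assumes a kubeconfig is available
v1 = client.CoreV1Api()

pod = v1.read_namespaced_pod("cilium-ws4wp", "kube-system")
for st in pod.status.init_container_statuses or []:
    if st.state.terminated:
        detail = f"terminated, exit={st.state.terminated.exit_code}"
    elif st.state.running:
        detail = "running"
    else:
        detail = "waiting"
    print(f"init {st.name}: {detail}")
for st in pod.status.container_statuses or []:
    print(f"main {st.name}: ready={st.ready}, restarts={st.restart_count}")
```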
Mar 17 18:56:29.612025 sshd[4747]: Connection closed by 139.178.89.65 port 37140 Mar 17 18:56:29.613557 containerd[1508]: time="2025-03-17T18:56:29.613352881Z" level=info msg="CreateContainer within sandbox \"83441b477383937f8688ec79fbea905ea1b2c4f32dc8b6dca4e845773c4e24a3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:56:29.614044 sshd-session[4585]: pam_unix(sshd:session): session closed for user core Mar 17 18:56:29.623976 systemd[1]: sshd@21-138.201.89.219:22-139.178.89.65:37140.service: Deactivated successfully. Mar 17 18:56:29.631171 containerd[1508]: time="2025-03-17T18:56:29.631120822Z" level=info msg="CreateContainer within sandbox \"83441b477383937f8688ec79fbea905ea1b2c4f32dc8b6dca4e845773c4e24a3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"87424a9d08a2fd1ea2321528e815e0193629ebba3c53f46f8bd00828fe945322\"" Mar 17 18:56:29.638253 containerd[1508]: time="2025-03-17T18:56:29.634576135Z" level=info msg="StartContainer for \"87424a9d08a2fd1ea2321528e815e0193629ebba3c53f46f8bd00828fe945322\"" Mar 17 18:56:29.637681 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 18:56:29.639401 systemd-logind[1484]: Session 22 logged out. Waiting for processes to exit. Mar 17 18:56:29.644170 systemd-logind[1484]: Removed session 22. Mar 17 18:56:29.691785 systemd[1]: run-containerd-runc-k8s.io-87424a9d08a2fd1ea2321528e815e0193629ebba3c53f46f8bd00828fe945322-runc.1viyEX.mount: Deactivated successfully. Mar 17 18:56:29.699274 systemd[1]: Started cri-containerd-87424a9d08a2fd1ea2321528e815e0193629ebba3c53f46f8bd00828fe945322.scope - libcontainer container 87424a9d08a2fd1ea2321528e815e0193629ebba3c53f46f8bd00828fe945322. Mar 17 18:56:29.731771 containerd[1508]: time="2025-03-17T18:56:29.731706360Z" level=info msg="StartContainer for \"87424a9d08a2fd1ea2321528e815e0193629ebba3c53f46f8bd00828fe945322\" returns successfully" Mar 17 18:56:29.737442 systemd[1]: cri-containerd-87424a9d08a2fd1ea2321528e815e0193629ebba3c53f46f8bd00828fe945322.scope: Deactivated successfully. Mar 17 18:56:29.768526 containerd[1508]: time="2025-03-17T18:56:29.768305471Z" level=info msg="shim disconnected" id=87424a9d08a2fd1ea2321528e815e0193629ebba3c53f46f8bd00828fe945322 namespace=k8s.io Mar 17 18:56:29.768526 containerd[1508]: time="2025-03-17T18:56:29.768366788Z" level=warning msg="cleaning up after shim disconnected" id=87424a9d08a2fd1ea2321528e815e0193629ebba3c53f46f8bd00828fe945322 namespace=k8s.io Mar 17 18:56:29.768526 containerd[1508]: time="2025-03-17T18:56:29.768374868Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:56:29.784334 systemd[1]: Started sshd@22-138.201.89.219:22-139.178.89.65:37142.service - OpenSSH per-connection server daemon (139.178.89.65:37142). Mar 17 18:56:29.800917 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87424a9d08a2fd1ea2321528e815e0193629ebba3c53f46f8bd00828fe945322-rootfs.mount: Deactivated successfully. Mar 17 18:56:30.619604 containerd[1508]: time="2025-03-17T18:56:30.619254019Z" level=info msg="CreateContainer within sandbox \"83441b477383937f8688ec79fbea905ea1b2c4f32dc8b6dca4e845773c4e24a3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 18:56:30.637658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3177597395.mount: Deactivated successfully. 
Mar 17 18:56:30.643091 containerd[1508]: time="2025-03-17T18:56:30.641297012Z" level=info msg="CreateContainer within sandbox \"83441b477383937f8688ec79fbea905ea1b2c4f32dc8b6dca4e845773c4e24a3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d3ec82bdc7829447037f721698293ab21465bb8abf5f6fe4d749010d922fff31\"" Mar 17 18:56:30.647822 containerd[1508]: time="2025-03-17T18:56:30.646828070Z" level=info msg="StartContainer for \"d3ec82bdc7829447037f721698293ab21465bb8abf5f6fe4d749010d922fff31\"" Mar 17 18:56:30.682286 systemd[1]: Started cri-containerd-d3ec82bdc7829447037f721698293ab21465bb8abf5f6fe4d749010d922fff31.scope - libcontainer container d3ec82bdc7829447037f721698293ab21465bb8abf5f6fe4d749010d922fff31. Mar 17 18:56:30.714514 systemd[1]: cri-containerd-d3ec82bdc7829447037f721698293ab21465bb8abf5f6fe4d749010d922fff31.scope: Deactivated successfully. Mar 17 18:56:30.719649 containerd[1508]: time="2025-03-17T18:56:30.719434661Z" level=info msg="StartContainer for \"d3ec82bdc7829447037f721698293ab21465bb8abf5f6fe4d749010d922fff31\" returns successfully" Mar 17 18:56:30.753169 containerd[1508]: time="2025-03-17T18:56:30.753062704Z" level=info msg="shim disconnected" id=d3ec82bdc7829447037f721698293ab21465bb8abf5f6fe4d749010d922fff31 namespace=k8s.io Mar 17 18:56:30.753813 containerd[1508]: time="2025-03-17T18:56:30.753508523Z" level=warning msg="cleaning up after shim disconnected" id=d3ec82bdc7829447037f721698293ab21465bb8abf5f6fe4d749010d922fff31 namespace=k8s.io Mar 17 18:56:30.753813 containerd[1508]: time="2025-03-17T18:56:30.753549361Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:56:30.787134 sshd[4806]: Accepted publickey for core from 139.178.89.65 port 37142 ssh2: RSA SHA256:v/asyzeddMvawcqTHyrMQabrN1x7tHOvH9FvogCn6lE Mar 17 18:56:30.789065 sshd-session[4806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:56:30.799809 systemd-logind[1484]: New session 23 of user core. Mar 17 18:56:30.802202 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 17 18:56:30.805795 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3ec82bdc7829447037f721698293ab21465bb8abf5f6fe4d749010d922fff31-rootfs.mount: Deactivated successfully. Mar 17 18:56:31.627480 containerd[1508]: time="2025-03-17T18:56:31.627212232Z" level=info msg="CreateContainer within sandbox \"83441b477383937f8688ec79fbea905ea1b2c4f32dc8b6dca4e845773c4e24a3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 18:56:31.648456 containerd[1508]: time="2025-03-17T18:56:31.648315287Z" level=info msg="CreateContainer within sandbox \"83441b477383937f8688ec79fbea905ea1b2c4f32dc8b6dca4e845773c4e24a3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ce08b3d23fd22c0326bb0c92d3a9d0ea9f830b68ec72912920c300727538f6ed\"" Mar 17 18:56:31.652517 containerd[1508]: time="2025-03-17T18:56:31.650936485Z" level=info msg="StartContainer for \"ce08b3d23fd22c0326bb0c92d3a9d0ea9f830b68ec72912920c300727538f6ed\"" Mar 17 18:56:31.694343 systemd[1]: Started cri-containerd-ce08b3d23fd22c0326bb0c92d3a9d0ea9f830b68ec72912920c300727538f6ed.scope - libcontainer container ce08b3d23fd22c0326bb0c92d3a9d0ea9f830b68ec72912920c300727538f6ed. 
Mar 17 18:56:31.734020 containerd[1508]: time="2025-03-17T18:56:31.733606028Z" level=info msg="StartContainer for \"ce08b3d23fd22c0326bb0c92d3a9d0ea9f830b68ec72912920c300727538f6ed\" returns successfully" Mar 17 18:56:31.804529 systemd[1]: run-containerd-runc-k8s.io-ce08b3d23fd22c0326bb0c92d3a9d0ea9f830b68ec72912920c300727538f6ed-runc.9WktdA.mount: Deactivated successfully. Mar 17 18:56:32.130828 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Mar 17 18:56:32.656633 kubelet[2795]: I0317 18:56:32.656436 2795 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ws4wp" podStartSLOduration=5.65638672 podStartE2EDuration="5.65638672s" podCreationTimestamp="2025-03-17 18:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:56:32.655298569 +0000 UTC m=+354.449771768" watchObservedRunningTime="2025-03-17 18:56:32.65638672 +0000 UTC m=+354.450859919" Mar 17 18:56:35.384543 systemd-networkd[1400]: lxc_health: Link UP Mar 17 18:56:35.396188 systemd-networkd[1400]: lxc_health: Gained carrier Mar 17 18:56:35.676516 systemd[1]: run-containerd-runc-k8s.io-ce08b3d23fd22c0326bb0c92d3a9d0ea9f830b68ec72912920c300727538f6ed-runc.hkqK6i.mount: Deactivated successfully. Mar 17 18:56:37.353328 systemd-networkd[1400]: lxc_health: Gained IPv6LL Mar 17 18:56:37.935529 systemd[1]: run-containerd-runc-k8s.io-ce08b3d23fd22c0326bb0c92d3a9d0ea9f830b68ec72912920c300727538f6ed-runc.fLQmC8.mount: Deactivated successfully. Mar 17 18:56:38.368762 containerd[1508]: time="2025-03-17T18:56:38.368525925Z" level=info msg="StopPodSandbox for \"978f95b0de03dfa28312762b078b3db56837bb21a7f4aa35e966998358dd8357\"" Mar 17 18:56:38.368762 containerd[1508]: time="2025-03-17T18:56:38.368634960Z" level=info msg="TearDown network for sandbox \"978f95b0de03dfa28312762b078b3db56837bb21a7f4aa35e966998358dd8357\" successfully" Mar 17 18:56:38.368762 containerd[1508]: time="2025-03-17T18:56:38.368646600Z" level=info msg="StopPodSandbox for \"978f95b0de03dfa28312762b078b3db56837bb21a7f4aa35e966998358dd8357\" returns successfully" Mar 17 18:56:38.370609 containerd[1508]: time="2025-03-17T18:56:38.370174097Z" level=info msg="RemovePodSandbox for \"978f95b0de03dfa28312762b078b3db56837bb21a7f4aa35e966998358dd8357\"" Mar 17 18:56:38.370609 containerd[1508]: time="2025-03-17T18:56:38.370225575Z" level=info msg="Forcibly stopping sandbox \"978f95b0de03dfa28312762b078b3db56837bb21a7f4aa35e966998358dd8357\"" Mar 17 18:56:38.370609 containerd[1508]: time="2025-03-17T18:56:38.370304212Z" level=info msg="TearDown network for sandbox \"978f95b0de03dfa28312762b078b3db56837bb21a7f4aa35e966998358dd8357\" successfully" Mar 17 18:56:38.376126 containerd[1508]: time="2025-03-17T18:56:38.375595035Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"978f95b0de03dfa28312762b078b3db56837bb21a7f4aa35e966998358dd8357\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 18:56:38.376126 containerd[1508]: time="2025-03-17T18:56:38.375694191Z" level=info msg="RemovePodSandbox \"978f95b0de03dfa28312762b078b3db56837bb21a7f4aa35e966998358dd8357\" returns successfully" Mar 17 18:56:38.376808 containerd[1508]: time="2025-03-17T18:56:38.376465240Z" level=info msg="StopPodSandbox for \"e1025936796cf388af751a6a0bedf8b6c7ee26d48f578b0d6f6f35f09a73c47c\"" Mar 17 18:56:38.376808 containerd[1508]: time="2025-03-17T18:56:38.376565875Z" level=info msg="TearDown network for sandbox \"e1025936796cf388af751a6a0bedf8b6c7ee26d48f578b0d6f6f35f09a73c47c\" successfully" Mar 17 18:56:38.376808 containerd[1508]: time="2025-03-17T18:56:38.376576075Z" level=info msg="StopPodSandbox for \"e1025936796cf388af751a6a0bedf8b6c7ee26d48f578b0d6f6f35f09a73c47c\" returns successfully" Mar 17 18:56:38.379116 containerd[1508]: time="2025-03-17T18:56:38.377699669Z" level=info msg="RemovePodSandbox for \"e1025936796cf388af751a6a0bedf8b6c7ee26d48f578b0d6f6f35f09a73c47c\"" Mar 17 18:56:38.379116 containerd[1508]: time="2025-03-17T18:56:38.377740467Z" level=info msg="Forcibly stopping sandbox \"e1025936796cf388af751a6a0bedf8b6c7ee26d48f578b0d6f6f35f09a73c47c\"" Mar 17 18:56:38.379116 containerd[1508]: time="2025-03-17T18:56:38.377837623Z" level=info msg="TearDown network for sandbox \"e1025936796cf388af751a6a0bedf8b6c7ee26d48f578b0d6f6f35f09a73c47c\" successfully" Mar 17 18:56:38.383087 containerd[1508]: time="2025-03-17T18:56:38.383038610Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e1025936796cf388af751a6a0bedf8b6c7ee26d48f578b0d6f6f35f09a73c47c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 18:56:38.383294 containerd[1508]: time="2025-03-17T18:56:38.383274601Z" level=info msg="RemovePodSandbox \"e1025936796cf388af751a6a0bedf8b6c7ee26d48f578b0d6f6f35f09a73c47c\" returns successfully" Mar 17 18:56:42.517590 sshd[4869]: Connection closed by 139.178.89.65 port 37142 Mar 17 18:56:42.517576 sshd-session[4806]: pam_unix(sshd:session): session closed for user core Mar 17 18:56:42.523216 systemd[1]: sshd@22-138.201.89.219:22-139.178.89.65:37142.service: Deactivated successfully. Mar 17 18:56:42.527585 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 18:56:42.530858 systemd-logind[1484]: Session 23 logged out. Waiting for processes to exit. Mar 17 18:56:42.533126 systemd-logind[1484]: Removed session 23.
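The final entries show the cilium-agent container starting, the lxc_health interface coming up, kubelet reporting an observed startup duration of about 5.66s for cilium-ws4wp, the old sandboxes (978f95b0…, e1025936…) being force-removed, and SSH session 23 closing. As a rough approximation only (kubelet computes podStartSLOduration internally and excludes image-pull time), a comparable figure can be derived from the pod's creation time and its Ready condition, again assuming the official Python `kubernetes` client and cluster access:

```python
# Illustration only, not part of the log: approximate the pod startup duration
# that kubelet's pod_startup_latency_tracker reports above.
from kubernetes import client, config

config.load_kube_config()              # assumes a kubeconfig is available
v1 = client.CoreV1Api()

pod = v1.read_namespaced_pod("cilium-ws4wp", "kube-system")
created = pod.metadata.creation_timestamp
ready_at = next(
    (c.last_transition_time for c in (pod.status.conditions or [])
     if c.type == "Ready" and c.status == "True"),
    None,
)
if ready_at is not None:
    print(f"approx startup duration: {(ready_at - created).total_seconds():.2f}s")
else:
    print("pod is not Ready yet")
```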