Mar 17 17:30:59.872524 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 17 17:30:59.872545 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Mar 17 16:05:23 -00 2025
Mar 17 17:30:59.872555 kernel: KASLR enabled
Mar 17 17:30:59.872560 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Mar 17 17:30:59.872566 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d98
Mar 17 17:30:59.872571 kernel: random: crng init done
Mar 17 17:30:59.872578 kernel: secureboot: Secure boot disabled
Mar 17 17:30:59.872584 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:30:59.872590 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Mar 17 17:30:59.872597 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Mar 17 17:30:59.872603 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:30:59.872609 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:30:59.872614 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:30:59.872621 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:30:59.872628 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:30:59.872635 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:30:59.872642 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:30:59.872648 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:30:59.872654 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:30:59.872660 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 17 17:30:59.872666 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Mar 17 17:30:59.872672 kernel: NUMA: Failed to initialise from firmware
Mar 17 17:30:59.872678 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Mar 17 17:30:59.872684 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff]
Mar 17 17:30:59.872690 kernel: Zone ranges:
Mar 17 17:30:59.872698 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 17 17:30:59.872704 kernel: DMA32 empty
Mar 17 17:30:59.872710 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Mar 17 17:30:59.872716 kernel: Movable zone start for each node
Mar 17 17:30:59.872722 kernel: Early memory node ranges
Mar 17 17:30:59.872728 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Mar 17 17:30:59.872734 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Mar 17 17:30:59.872740 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Mar 17 17:30:59.872746 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Mar 17 17:30:59.872752 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Mar 17 17:30:59.872758 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Mar 17 17:30:59.872764 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Mar 17 17:30:59.872772 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Mar 17 17:30:59.872778 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Mar 17 17:30:59.872784 kernel: psci: probing for conduit method from ACPI.
Mar 17 17:30:59.872793 kernel: psci: PSCIv1.1 detected in firmware.
Mar 17 17:30:59.872800 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 17:30:59.872807 kernel: psci: Trusted OS migration not required
Mar 17 17:30:59.872815 kernel: psci: SMC Calling Convention v1.1
Mar 17 17:30:59.872822 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 17 17:30:59.872828 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 17 17:30:59.872835 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 17 17:30:59.872841 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 17 17:30:59.872893 kernel: Detected PIPT I-cache on CPU0
Mar 17 17:30:59.872901 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 17:30:59.872908 kernel: CPU features: detected: Hardware dirty bit management
Mar 17 17:30:59.872915 kernel: CPU features: detected: Spectre-v4
Mar 17 17:30:59.872921 kernel: CPU features: detected: Spectre-BHB
Mar 17 17:30:59.872930 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 17 17:30:59.872937 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 17 17:30:59.872975 kernel: CPU features: detected: ARM erratum 1418040
Mar 17 17:30:59.872982 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 17 17:30:59.872989 kernel: alternatives: applying boot alternatives
Mar 17 17:30:59.872996 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=31b104f73129b84fa679201ebe02fbfd197d071bbf0576d6ccc5c5442bcbb405
Mar 17 17:30:59.873004 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:30:59.873010 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 17:30:59.873017 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:30:59.873023 kernel: Fallback order for Node 0: 0
Mar 17 17:30:59.873030 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Mar 17 17:30:59.873039 kernel: Policy zone: Normal
Mar 17 17:30:59.873046 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:30:59.873052 kernel: software IO TLB: area num 2.
Mar 17 17:30:59.873059 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Mar 17 17:30:59.873066 kernel: Memory: 3882612K/4096000K available (10240K kernel code, 2186K rwdata, 8100K rodata, 39744K init, 897K bss, 213388K reserved, 0K cma-reserved)
Mar 17 17:30:59.873072 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 17:30:59.873079 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:30:59.873086 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:30:59.873093 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 17:30:59.873100 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:30:59.873106 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:30:59.873113 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:30:59.873121 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 17:30:59.873127 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 17:30:59.873134 kernel: GICv3: 256 SPIs implemented
Mar 17 17:30:59.873140 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 17:30:59.873166 kernel: Root IRQ handler: gic_handle_irq
Mar 17 17:30:59.873173 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 17 17:30:59.873179 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 17 17:30:59.873186 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 17 17:30:59.873193 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Mar 17 17:30:59.873200 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Mar 17 17:30:59.873206 kernel: GICv3: using LPI property table @0x00000001000e0000
Mar 17 17:30:59.873216 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Mar 17 17:30:59.873223 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:30:59.873229 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:30:59.873236 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 17 17:30:59.873242 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 17 17:30:59.873249 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 17 17:30:59.873256 kernel: Console: colour dummy device 80x25
Mar 17 17:30:59.873263 kernel: ACPI: Core revision 20230628
Mar 17 17:30:59.873270 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 17 17:30:59.873277 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:30:59.873285 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:30:59.873292 kernel: landlock: Up and running.
Mar 17 17:30:59.873298 kernel: SELinux: Initializing.
Mar 17 17:30:59.873305 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:30:59.873312 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:30:59.873319 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:30:59.873326 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:30:59.873333 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:30:59.873339 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:30:59.873346 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 17 17:30:59.873354 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 17 17:30:59.873361 kernel: Remapping and enabling EFI services.
Mar 17 17:30:59.873368 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:30:59.873374 kernel: Detected PIPT I-cache on CPU1
Mar 17 17:30:59.873381 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 17 17:30:59.873388 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Mar 17 17:30:59.873395 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:30:59.873402 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 17 17:30:59.873409 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 17:30:59.873417 kernel: SMP: Total of 2 processors activated.
Mar 17 17:30:59.873424 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 17:30:59.873436 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 17 17:30:59.873444 kernel: CPU features: detected: Common not Private translations
Mar 17 17:30:59.873451 kernel: CPU features: detected: CRC32 instructions
Mar 17 17:30:59.873458 kernel: CPU features: detected: Enhanced Virtualization Traps
Mar 17 17:30:59.873465 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 17 17:30:59.873473 kernel: CPU features: detected: LSE atomic instructions
Mar 17 17:30:59.873480 kernel: CPU features: detected: Privileged Access Never
Mar 17 17:30:59.873489 kernel: CPU features: detected: RAS Extension Support
Mar 17 17:30:59.873496 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 17 17:30:59.873503 kernel: CPU: All CPU(s) started at EL1
Mar 17 17:30:59.873511 kernel: alternatives: applying system-wide alternatives
Mar 17 17:30:59.873518 kernel: devtmpfs: initialized
Mar 17 17:30:59.873525 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:30:59.873536 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 17:30:59.873545 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:30:59.873552 kernel: SMBIOS 3.0.0 present.
Mar 17 17:30:59.873559 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Mar 17 17:30:59.873569 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:30:59.873576 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 17:30:59.873585 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 17:30:59.873595 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 17:30:59.873604 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:30:59.873611 kernel: audit: type=2000 audit(0.014:1): state=initialized audit_enabled=0 res=1
Mar 17 17:30:59.873620 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:30:59.873627 kernel: cpuidle: using governor menu
Mar 17 17:30:59.873636 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 17:30:59.873644 kernel: ASID allocator initialised with 32768 entries
Mar 17 17:30:59.873654 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:30:59.873662 kernel: Serial: AMBA PL011 UART driver
Mar 17 17:30:59.873671 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 17 17:30:59.873678 kernel: Modules: 0 pages in range for non-PLT usage
Mar 17 17:30:59.873687 kernel: Modules: 508944 pages in range for PLT usage
Mar 17 17:30:59.873697 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:30:59.873704 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:30:59.873711 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 17:30:59.873718 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 17 17:30:59.873725 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:30:59.873733 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:30:59.873740 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 17:30:59.873747 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 17 17:30:59.873754 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:30:59.873761 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:30:59.873769 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:30:59.873777 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:30:59.873784 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:30:59.873791 kernel: ACPI: Interpreter enabled
Mar 17 17:30:59.873798 kernel: ACPI: Using GIC for interrupt routing
Mar 17 17:30:59.873805 kernel: ACPI: MCFG table detected, 1 entries
Mar 17 17:30:59.873813 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 17 17:30:59.873820 kernel: printk: console [ttyAMA0] enabled
Mar 17 17:30:59.873827 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 17:30:59.873983 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:30:59.874063 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 17 17:30:59.874128 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 17 17:30:59.874209 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 17 17:30:59.874271 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 17 17:30:59.874281 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 17 17:30:59.874288 kernel: PCI host bridge to bus 0000:00
Mar 17 17:30:59.874359 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 17 17:30:59.874418 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 17 17:30:59.874474 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 17 17:30:59.874530 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 17:30:59.874622 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 17 17:30:59.874708 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Mar 17 17:30:59.874778 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Mar 17 17:30:59.874843 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Mar 17 17:30:59.874913 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 17 17:30:59.875028 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Mar 17 17:30:59.875137 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 17 17:30:59.875220 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Mar 17 17:30:59.875297 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 17 17:30:59.875362 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Mar 17 17:30:59.875433 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 17 17:30:59.875498 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Mar 17 17:30:59.875569 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 17 17:30:59.875633 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Mar 17 17:30:59.875706 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 17 17:30:59.875771 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Mar 17 17:30:59.875840 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 17 17:30:59.875919 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Mar 17 17:30:59.876007 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 17 17:30:59.876075 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Mar 17 17:30:59.876504 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Mar 17 17:30:59.876623 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Mar 17 17:30:59.876700 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Mar 17 17:30:59.876765 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Mar 17 17:30:59.876840 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Mar 17 17:30:59.876906 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Mar 17 17:30:59.877017 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 17 17:30:59.877095 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 17 17:30:59.879279 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 17 17:30:59.879366 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Mar 17 17:30:59.879442 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Mar 17 17:30:59.879510 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Mar 17 17:30:59.879575 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Mar 17 17:30:59.879650 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Mar 17 17:30:59.879723 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Mar 17 17:30:59.879796 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 17 17:30:59.879862 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Mar 17 17:30:59.879927 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Mar 17 17:30:59.880019 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Mar 17 17:30:59.880088 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Mar 17 17:30:59.880180 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Mar 17 17:30:59.880263 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Mar 17 17:30:59.880344 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Mar 17 17:30:59.880413 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Mar 17 17:30:59.880495 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 17 17:30:59.880578 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Mar 17 17:30:59.880649 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Mar 17 17:30:59.880714 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Mar 17 17:30:59.880779 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Mar 17 17:30:59.880843 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Mar 17 17:30:59.880907 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Mar 17 17:30:59.880986 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Mar 17 17:30:59.881052 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Mar 17 17:30:59.881116 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Mar 17 17:30:59.883364 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Mar 17 17:30:59.883445 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Mar 17 17:30:59.883512 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Mar 17 17:30:59.883579 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Mar 17 17:30:59.883642 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Mar 17 17:30:59.883704 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Mar 17 17:30:59.883769 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Mar 17 17:30:59.883838 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Mar 17 17:30:59.883900 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Mar 17 17:30:59.883987 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 17 17:30:59.884055 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Mar 17 17:30:59.884119 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Mar 17 17:30:59.884197 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 17 17:30:59.884262 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Mar 17 17:30:59.884329 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Mar 17 17:30:59.884394 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 17 17:30:59.884458 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Mar 17 17:30:59.884521 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Mar 17 17:30:59.884585 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Mar 17 17:30:59.884648 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 17 17:30:59.884712 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Mar 17 17:30:59.884776 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 17 17:30:59.884843 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Mar 17 17:30:59.884906 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 17 17:30:59.884984 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Mar 17 17:30:59.885049 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 17 17:30:59.885113 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Mar 17 17:30:59.888247 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 17 17:30:59.888337 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Mar 17 17:30:59.888405 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 17 17:30:59.888474 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Mar 17 17:30:59.888539 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 17 17:30:59.888605 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Mar 17 17:30:59.888670 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 17 17:30:59.888736 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Mar 17 17:30:59.888804 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 17 17:30:59.888874 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Mar 17 17:30:59.888954 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Mar 17 17:30:59.889028 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Mar 17 17:30:59.889093 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Mar 17 17:30:59.889185 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Mar 17 17:30:59.889262 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Mar 17 17:30:59.889327 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Mar 17 17:30:59.889394 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Mar 17 17:30:59.889457 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Mar 17 17:30:59.889520 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Mar 17 17:30:59.889583 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Mar 17 17:30:59.889647 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Mar 17 17:30:59.889711 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Mar 17 17:30:59.889774 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Mar 17 17:30:59.889839 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Mar 17 17:30:59.889905 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Mar 17 17:30:59.889989 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Mar 17 17:30:59.890067 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Mar 17 17:30:59.890133 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Mar 17 17:30:59.890275 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Mar 17 17:30:59.890346 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Mar 17 17:30:59.890416 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Mar 17 17:30:59.890481 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 17 17:30:59.890550 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Mar 17 17:30:59.890613 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 17 17:30:59.890676 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Mar 17 17:30:59.890738 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Mar 17 17:30:59.890819 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 17 17:30:59.891239 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Mar 17 17:30:59.891324 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 17 17:30:59.891391 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Mar 17 17:30:59.891455 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Mar 17 17:30:59.891520 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 17 17:30:59.891594 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Mar 17 17:30:59.891663 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Mar 17 17:30:59.891731 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 17 17:30:59.891798 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Mar 17 17:30:59.891862 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Mar 17 17:30:59.891957 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 17 17:30:59.892041 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Mar 17 17:30:59.892111 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 17 17:30:59.892291 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Mar 17 17:30:59.892360 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Mar 17 17:30:59.892427 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 17 17:30:59.892500 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Mar 17 17:30:59.892570 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Mar 17 17:30:59.892648 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 17 17:30:59.892715 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Mar 17 17:30:59.892777 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Mar 17 17:30:59.892846 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 17 17:30:59.892924 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Mar 17 17:30:59.893009 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Mar 17 17:30:59.893075 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 17 17:30:59.893137 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Mar 17 17:30:59.893213 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Mar 17 17:30:59.893295 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 17 17:30:59.893369 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Mar 17 17:30:59.893435 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Mar 17 17:30:59.893500 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Mar 17 17:30:59.893568 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 17 17:30:59.893630 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Mar 17 17:30:59.893693 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Mar 17 17:30:59.893754 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 17 17:30:59.893818 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 17 17:30:59.893881 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Mar 17 17:30:59.893993 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Mar 17 17:30:59.894069 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 17 17:30:59.894139 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 17 17:30:59.894283 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Mar 17 17:30:59.894348 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Mar 17 17:30:59.894410 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 17 17:30:59.894473 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 17 17:30:59.894530 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 17 17:30:59.894588 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 17 17:30:59.894660 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Mar 17 17:30:59.894722 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Mar 17 17:30:59.894781 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 17 17:30:59.894846 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Mar 17 17:30:59.894904 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Mar 17 17:30:59.894982 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 17 17:30:59.895050 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Mar 17 17:30:59.895115 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Mar 17 17:30:59.895254 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 17 17:30:59.895328 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Mar 17 17:30:59.895387 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Mar 17 17:30:59.895444 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 17 17:30:59.895509 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Mar 17 17:30:59.895571 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Mar 17 17:30:59.895628 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 17 17:30:59.895696 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Mar 17 17:30:59.895757 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Mar 17 17:30:59.895817 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 17 17:30:59.895882 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Mar 17 17:30:59.895974 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Mar 17 17:30:59.896046 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 17 17:30:59.896118 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Mar 17 17:30:59.896189 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Mar 17 17:30:59.896249 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 17 17:30:59.896317 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Mar 17 17:30:59.896377 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Mar 17 17:30:59.896435 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 17 17:30:59.896445 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 17 17:30:59.896453 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 17 17:30:59.896461 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 17 17:30:59.896469 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 17 17:30:59.896477 kernel: iommu: Default domain type: Translated
Mar 17 17:30:59.896487 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 17:30:59.896495 kernel: efivars: Registered efivars operations
Mar 17 17:30:59.896502 kernel: vgaarb: loaded
Mar 17 17:30:59.896510 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 17:30:59.896518 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:30:59.896526 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:30:59.896534 kernel: pnp: PnP ACPI init
Mar 17 17:30:59.896602 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 17 17:30:59.896615 kernel: pnp: PnP ACPI: found 1 devices
Mar 17 17:30:59.896623 kernel: NET: Registered PF_INET protocol family
Mar 17 17:30:59.896640 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 17:30:59.896649 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 17:30:59.896657 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:30:59.896664 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:30:59.896672 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 17 17:30:59.896680 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 17:30:59.896688 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:30:59.896697 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:30:59.896705 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:30:59.896779 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Mar 17 17:30:59.896790 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:30:59.896797 kernel: kvm [1]: HYP mode not available
Mar 17 17:30:59.896805 kernel: Initialise system trusted keyrings
Mar 17 17:30:59.896813 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 17:30:59.896820 kernel: Key type asymmetric registered
Mar 17 17:30:59.896828 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:30:59.896837 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 17 17:30:59.896845 kernel: io scheduler mq-deadline registered
Mar 17 17:30:59.896852 kernel: io scheduler kyber registered
Mar 17 17:30:59.896860 kernel: io scheduler bfq registered
Mar 17 17:30:59.896868 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 17 17:30:59.896934 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Mar 17 17:30:59.897017 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Mar 17 17:30:59.897081 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 17 17:30:59.897158 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Mar 17 17:30:59.897225 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Mar 17 17:30:59.897301 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ Mar 17 17:30:59.897367 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Mar 17 17:30:59.897432 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Mar 17 17:30:59.897501 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:30:59.897571 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Mar 17 17:30:59.897637 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Mar 17 17:30:59.897701 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:30:59.897766 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Mar 17 17:30:59.897830 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Mar 17 17:30:59.897894 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:30:59.897975 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Mar 17 17:30:59.898048 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Mar 17 17:30:59.898115 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:30:59.898191 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Mar 17 17:30:59.898257 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Mar 17 17:30:59.898323 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:30:59.898394 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Mar 17 17:30:59.898458 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Mar 17 17:30:59.898524 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 
17:30:59.898534 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Mar 17 17:30:59.898597 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Mar 17 17:30:59.898661 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Mar 17 17:30:59.898730 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 17:30:59.898743 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Mar 17 17:30:59.898752 kernel: ACPI: button: Power Button [PWRB] Mar 17 17:30:59.898761 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 17 17:30:59.898834 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Mar 17 17:30:59.898906 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Mar 17 17:30:59.898917 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 17 17:30:59.898926 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Mar 17 17:30:59.899032 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Mar 17 17:30:59.899048 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Mar 17 17:30:59.899056 kernel: thunder_xcv, ver 1.0 Mar 17 17:30:59.899064 kernel: thunder_bgx, ver 1.0 Mar 17 17:30:59.899072 kernel: nicpf, ver 1.0 Mar 17 17:30:59.899079 kernel: nicvf, ver 1.0 Mar 17 17:30:59.899166 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 17 17:30:59.899231 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T17:30:59 UTC (1742232659) Mar 17 17:30:59.899241 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 17 17:30:59.899252 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Mar 17 17:30:59.899259 kernel: watchdog: Delayed init of the lockup detector failed: -19 Mar 17 17:30:59.899269 kernel: watchdog: Hard watchdog permanently disabled Mar 17 17:30:59.899276 kernel: NET: Registered PF_INET6 protocol family Mar 17 17:30:59.899284 kernel: Segment 
Routing with IPv6 Mar 17 17:30:59.899291 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 17:30:59.899299 kernel: NET: Registered PF_PACKET protocol family Mar 17 17:30:59.899306 kernel: Key type dns_resolver registered Mar 17 17:30:59.899314 kernel: registered taskstats version 1 Mar 17 17:30:59.899323 kernel: Loading compiled-in X.509 certificates Mar 17 17:30:59.899331 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 74c9b4f5dfad711856d7363c976664fc02c1e24c' Mar 17 17:30:59.899338 kernel: Key type .fscrypt registered Mar 17 17:30:59.899346 kernel: Key type fscrypt-provisioning registered Mar 17 17:30:59.899353 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 17 17:30:59.899361 kernel: ima: Allocated hash algorithm: sha1 Mar 17 17:30:59.899368 kernel: ima: No architecture policies found Mar 17 17:30:59.899376 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 17 17:30:59.899385 kernel: clk: Disabling unused clocks Mar 17 17:30:59.899393 kernel: Freeing unused kernel memory: 39744K Mar 17 17:30:59.899400 kernel: Run /init as init process Mar 17 17:30:59.899408 kernel: with arguments: Mar 17 17:30:59.899416 kernel: /init Mar 17 17:30:59.899423 kernel: with environment: Mar 17 17:30:59.899430 kernel: HOME=/ Mar 17 17:30:59.899438 kernel: TERM=linux Mar 17 17:30:59.899446 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 17:30:59.899456 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 17 17:30:59.899467 systemd[1]: Detected virtualization kvm. Mar 17 17:30:59.899475 systemd[1]: Detected architecture arm64. Mar 17 17:30:59.899488 systemd[1]: Running in initrd. 
Mar 17 17:30:59.899497 systemd[1]: No hostname configured, using default hostname.
Mar 17 17:30:59.899505 systemd[1]: Hostname set to .
Mar 17 17:30:59.899513 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:30:59.899523 systemd[1]: Queued start job for default target initrd.target.
Mar 17 17:30:59.899531 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:30:59.899539 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:30:59.899547 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 17:30:59.899555 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:30:59.899564 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 17:30:59.899572 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 17:30:59.899581 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 17:30:59.899591 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 17:30:59.899599 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:30:59.899607 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:30:59.899615 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:30:59.899623 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:30:59.899631 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:30:59.899639 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:30:59.899647 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:30:59.899657 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:30:59.899665 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:30:59.899674 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 17 17:30:59.899682 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:30:59.899690 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:30:59.899698 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:30:59.899706 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:30:59.899714 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 17:30:59.899724 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:30:59.899732 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 17:30:59.899740 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 17:30:59.899748 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:30:59.899756 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:30:59.899764 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:30:59.899772 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 17:30:59.899780 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:30:59.899809 systemd-journald[237]: Collecting audit messages is disabled.
Mar 17 17:30:59.899831 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 17:30:59.899841 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:30:59.899850 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 17:30:59.899859 kernel: Bridge firewalling registered
Mar 17 17:30:59.899867 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:30:59.899875 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:30:59.899884 systemd-journald[237]: Journal started
Mar 17 17:30:59.899904 systemd-journald[237]: Runtime Journal (/run/log/journal/af00ba37a01e47b3947237f3d3abafca) is 8.0M, max 76.6M, 68.6M free.
Mar 17 17:30:59.873925 systemd-modules-load[238]: Inserted module 'overlay'
Mar 17 17:30:59.901365 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:30:59.892467 systemd-modules-load[238]: Inserted module 'br_netfilter'
Mar 17 17:30:59.909398 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:30:59.911803 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:30:59.915381 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:30:59.922485 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:30:59.926188 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:30:59.937209 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:30:59.939230 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:30:59.940576 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:30:59.943796 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:30:59.950406 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 17:30:59.956365 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:30:59.964984 dracut-cmdline[272]: dracut-dracut-053
Mar 17 17:30:59.969451 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=31b104f73129b84fa679201ebe02fbfd197d071bbf0576d6ccc5c5442bcbb405
Mar 17 17:30:59.994735 systemd-resolved[274]: Positive Trust Anchors:
Mar 17 17:30:59.994805 systemd-resolved[274]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:30:59.994836 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:31:00.004330 systemd-resolved[274]: Defaulting to hostname 'linux'.
Mar 17 17:31:00.006290 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:31:00.006923 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:31:00.037197 kernel: SCSI subsystem initialized
Mar 17 17:31:00.042172 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 17:31:00.049190 kernel: iscsi: registered transport (tcp)
Mar 17 17:31:00.062203 kernel: iscsi: registered transport (qla4xxx)
Mar 17 17:31:00.062311 kernel: QLogic iSCSI HBA Driver
Mar 17 17:31:00.110575 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:31:00.116312 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 17:31:00.137374 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 17:31:00.137521 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:31:00.137553 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:31:00.186211 kernel: raid6: neonx8 gen() 15700 MB/s
Mar 17 17:31:00.203195 kernel: raid6: neonx4 gen() 15553 MB/s
Mar 17 17:31:00.220201 kernel: raid6: neonx2 gen() 13155 MB/s
Mar 17 17:31:00.237190 kernel: raid6: neonx1 gen() 10445 MB/s
Mar 17 17:31:00.254180 kernel: raid6: int64x8 gen() 6928 MB/s
Mar 17 17:31:00.271197 kernel: raid6: int64x4 gen() 7290 MB/s
Mar 17 17:31:00.288216 kernel: raid6: int64x2 gen() 6099 MB/s
Mar 17 17:31:00.305201 kernel: raid6: int64x1 gen() 5031 MB/s
Mar 17 17:31:00.305284 kernel: raid6: using algorithm neonx8 gen() 15700 MB/s
Mar 17 17:31:00.322188 kernel: raid6: .... xor() 11838 MB/s, rmw enabled
Mar 17 17:31:00.322242 kernel: raid6: using neon recovery algorithm
Mar 17 17:31:00.327296 kernel: xor: measuring software checksum speed
Mar 17 17:31:00.327340 kernel: 8regs : 19783 MB/sec
Mar 17 17:31:00.327363 kernel: 32regs : 17300 MB/sec
Mar 17 17:31:00.328184 kernel: arm64_neon : 27079 MB/sec
Mar 17 17:31:00.328217 kernel: xor: using function: arm64_neon (27079 MB/sec)
Mar 17 17:31:00.378198 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:31:00.392471 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:31:00.400317 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:31:00.413667 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Mar 17 17:31:00.416931 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:31:00.425369 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:31:00.440163 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
Mar 17 17:31:00.483534 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:31:00.488342 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:31:00.537542 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:31:00.545333 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:31:00.562803 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:31:00.565249 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:31:00.565863 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:31:00.568140 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:31:00.574349 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:31:00.600751 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:31:00.640720 kernel: scsi host0: Virtio SCSI HBA
Mar 17 17:31:00.650995 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 17 17:31:00.651801 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Mar 17 17:31:00.662555 kernel: ACPI: bus type USB registered
Mar 17 17:31:00.662732 kernel: usbcore: registered new interface driver usbfs
Mar 17 17:31:00.662998 kernel: usbcore: registered new interface driver hub
Mar 17 17:31:00.663014 kernel: usbcore: registered new device driver usb
Mar 17 17:31:00.670824 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:31:00.670979 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:31:00.676201 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:31:00.676768 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:31:00.676926 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:31:00.678795 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:31:00.687425 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:31:00.697524 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Mar 17 17:31:00.708404 kernel: sr 0:0:0:0: Power-on or device reset occurred
Mar 17 17:31:00.708564 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Mar 17 17:31:00.709671 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Mar 17 17:31:00.709775 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Mar 17 17:31:00.709872 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 17 17:31:00.709882 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Mar 17 17:31:00.709984 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Mar 17 17:31:00.710070 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Mar 17 17:31:00.710405 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Mar 17 17:31:00.710534 kernel: hub 1-0:1.0: USB hub found
Mar 17 17:31:00.710638 kernel: hub 1-0:1.0: 4 ports detected
Mar 17 17:31:00.710738 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Mar 17 17:31:00.710842 kernel: hub 2-0:1.0: USB hub found
Mar 17 17:31:00.710984 kernel: hub 2-0:1.0: 4 ports detected
Mar 17 17:31:00.711789 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:31:00.719321 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:31:00.725672 kernel: sd 0:0:0:1: Power-on or device reset occurred
Mar 17 17:31:00.736965 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Mar 17 17:31:00.737092 kernel: sd 0:0:0:1: [sda] Write Protect is off
Mar 17 17:31:00.737222 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Mar 17 17:31:00.737305 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 17 17:31:00.737383 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 17:31:00.737400 kernel: GPT:17805311 != 80003071
Mar 17 17:31:00.737410 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 17:31:00.737419 kernel: GPT:17805311 != 80003071
Mar 17 17:31:00.737428 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 17:31:00.737436 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 17:31:00.737446 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Mar 17 17:31:00.744238 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:31:00.776653 kernel: BTRFS: device fsid c0c482e3-6885-4a4e-b31c-6bc8f8c403e7 devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (509)
Mar 17 17:31:00.781183 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (519)
Mar 17 17:31:00.794317 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Mar 17 17:31:00.799061 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Mar 17 17:31:00.804601 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 17 17:31:00.810839 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Mar 17 17:31:00.811565 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Mar 17 17:31:00.817323 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 17:31:00.824203 disk-uuid[577]: Primary Header is updated.
Mar 17 17:31:00.824203 disk-uuid[577]: Secondary Entries is updated.
Mar 17 17:31:00.824203 disk-uuid[577]: Secondary Header is updated.
Mar 17 17:31:00.830259 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 17:31:00.954238 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Mar 17 17:31:01.196228 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Mar 17 17:31:01.331254 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Mar 17 17:31:01.331332 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Mar 17 17:31:01.332633 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Mar 17 17:31:01.387207 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Mar 17 17:31:01.387648 kernel: usbcore: registered new interface driver usbhid
Mar 17 17:31:01.389555 kernel: usbhid: USB HID core driver
Mar 17 17:31:01.844163 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 17:31:01.845644 disk-uuid[578]: The operation has completed successfully.
Mar 17 17:31:01.905218 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:31:01.905333 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:31:01.912328 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:31:01.916580 sh[593]: Success
Mar 17 17:31:01.929374 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 17 17:31:01.998919 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:31:02.002621 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:31:02.006311 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 17:31:02.022826 kernel: BTRFS info (device dm-0): first mount of filesystem c0c482e3-6885-4a4e-b31c-6bc8f8c403e7
Mar 17 17:31:02.022894 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:31:02.022916 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:31:02.022956 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:31:02.024191 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:31:02.031188 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 17 17:31:02.033188 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:31:02.033856 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:31:02.042450 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:31:02.047408 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:31:02.060397 kernel: BTRFS info (device sda6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:31:02.060476 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:31:02.060502 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:31:02.066214 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 17 17:31:02.066275 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:31:02.077606 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 17:31:02.078752 kernel: BTRFS info (device sda6): last unmount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:31:02.084164 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:31:02.090508 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 17:31:02.182406 ignition[681]: Ignition 2.20.0
Mar 17 17:31:02.182416 ignition[681]: Stage: fetch-offline
Mar 17 17:31:02.182455 ignition[681]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:31:02.182464 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:31:02.182628 ignition[681]: parsed url from cmdline: ""
Mar 17 17:31:02.182632 ignition[681]: no config URL provided
Mar 17 17:31:02.182637 ignition[681]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:31:02.185962 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:31:02.182643 ignition[681]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:31:02.182648 ignition[681]: failed to fetch config: resource requires networking
Mar 17 17:31:02.182911 ignition[681]: Ignition finished successfully
Mar 17 17:31:02.191201 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:31:02.196352 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:31:02.218229 systemd-networkd[779]: lo: Link UP
Mar 17 17:31:02.218246 systemd-networkd[779]: lo: Gained carrier
Mar 17 17:31:02.219906 systemd-networkd[779]: Enumeration completed
Mar 17 17:31:02.220319 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:31:02.221065 systemd[1]: Reached target network.target - Network.
Mar 17 17:31:02.221394 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:31:02.221397 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:31:02.222433 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:31:02.222436 systemd-networkd[779]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:31:02.222988 systemd-networkd[779]: eth0: Link UP
Mar 17 17:31:02.222991 systemd-networkd[779]: eth0: Gained carrier
Mar 17 17:31:02.222998 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:31:02.228717 systemd-networkd[779]: eth1: Link UP
Mar 17 17:31:02.228720 systemd-networkd[779]: eth1: Gained carrier
Mar 17 17:31:02.228729 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:31:02.229089 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 17 17:31:02.241624 ignition[782]: Ignition 2.20.0
Mar 17 17:31:02.241645 ignition[782]: Stage: fetch
Mar 17 17:31:02.241814 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:31:02.241824 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:31:02.241914 ignition[782]: parsed url from cmdline: ""
Mar 17 17:31:02.241954 ignition[782]: no config URL provided
Mar 17 17:31:02.241965 ignition[782]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:31:02.241975 ignition[782]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:31:02.242068 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Mar 17 17:31:02.243054 ignition[782]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 17 17:31:02.258291 systemd-networkd[779]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 17:31:02.286240 systemd-networkd[779]: eth0: DHCPv4 address 138.199.148.212/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 17 17:31:02.443732 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Mar 17 17:31:02.449546 ignition[782]: GET result: OK
Mar 17 17:31:02.449716 ignition[782]: parsing config with SHA512: 7bd77f13a5e7dc34a5c835e4e73290af93176928f625d1127220baf02504397867c127469b549eb4a20da70dbb4d9202da2c94e0610e4a143f8465616c36f253
Mar 17 17:31:02.458416 unknown[782]: fetched base config from "system"
Mar 17 17:31:02.458429 unknown[782]: fetched base config from "system"
Mar 17 17:31:02.459044 ignition[782]: fetch: fetch complete
Mar 17 17:31:02.458436 unknown[782]: fetched user config from "hetzner"
Mar 17 17:31:02.459051 ignition[782]: fetch: fetch passed
Mar 17 17:31:02.461023 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 17 17:31:02.459104 ignition[782]: Ignition finished successfully
Mar 17 17:31:02.468415 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 17:31:02.484504 ignition[790]: Ignition 2.20.0
Mar 17 17:31:02.484515 ignition[790]: Stage: kargs
Mar 17 17:31:02.484679 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:31:02.484688 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:31:02.488580 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 17:31:02.485625 ignition[790]: kargs: kargs passed
Mar 17 17:31:02.485674 ignition[790]: Ignition finished successfully
Mar 17 17:31:02.498344 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 17:31:02.509521 ignition[796]: Ignition 2.20.0
Mar 17 17:31:02.509532 ignition[796]: Stage: disks
Mar 17 17:31:02.509694 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:31:02.509704 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:31:02.511847 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:31:02.510612 ignition[796]: disks: disks passed
Mar 17 17:31:02.513602 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:31:02.510657 ignition[796]: Ignition finished successfully
Mar 17 17:31:02.515343 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:31:02.516208 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:31:02.516901 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:31:02.517730 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:31:02.523414 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:31:02.541353 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 17 17:31:02.544687 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:31:02.550266 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:31:02.592489 kernel: EXT4-fs (sda9): mounted filesystem 6b579bf2-7716-4d59-98eb-b92ea668693e r/w with ordered data mode. Quota mode: none.
Mar 17 17:31:02.593323 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:31:02.594532 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:31:02.611356 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:31:02.615016 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:31:02.619292 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 17 17:31:02.623584 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:31:02.624072 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:31:02.631773 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (812)
Mar 17 17:31:02.631813 kernel: BTRFS info (device sda6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:31:02.631827 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:31:02.632204 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:31:02.638221 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 17 17:31:02.638270 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:31:02.639774 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:31:02.651697 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:31:02.656296 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:31:02.692123 coreos-metadata[814]: Mar 17 17:31:02.691 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Mar 17 17:31:02.694217 coreos-metadata[814]: Mar 17 17:31:02.694 INFO Fetch successful
Mar 17 17:31:02.695534 coreos-metadata[814]: Mar 17 17:31:02.695 INFO wrote hostname ci-4152-2-2-0-5dd1d5cf3a to /sysroot/etc/hostname
Mar 17 17:31:02.699237 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 17 17:31:02.701515 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:31:02.707427 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:31:02.713104 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:31:02.717888 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:31:02.810885 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:31:02.817270 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:31:02.819380 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:31:02.827175 kernel: BTRFS info (device sda6): last unmount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:31:02.849527 ignition[929]: INFO : Ignition 2.20.0
Mar 17 17:31:02.849945 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:31:02.852517 ignition[929]: INFO : Stage: mount
Mar 17 17:31:02.852517 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:31:02.852517 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:31:02.854127 ignition[929]: INFO : mount: mount passed
Mar 17 17:31:02.854616 ignition[929]: INFO : Ignition finished successfully
Mar 17 17:31:02.855374 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:31:02.861281 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:31:03.022721 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:31:03.028365 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:31:03.039647 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (940)
Mar 17 17:31:03.039711 kernel: BTRFS info (device sda6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:31:03.039735 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:31:03.040470 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:31:03.044162 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 17 17:31:03.044212 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:31:03.045857 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:31:03.075315 ignition[957]: INFO : Ignition 2.20.0
Mar 17 17:31:03.075315 ignition[957]: INFO : Stage: files
Mar 17 17:31:03.076406 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:31:03.076406 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:31:03.076406 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:31:03.079327 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:31:03.079327 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:31:03.081829 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:31:03.083614 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:31:03.083614 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:31:03.082343 unknown[957]: wrote ssh authorized keys file for user: core
Mar 17 17:31:03.086701 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 17:31:03.086701 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Mar 17 17:31:03.131898 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 17:31:03.340102 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 17:31:03.340102 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:31:03.340102 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 17 17:31:03.538441 systemd-networkd[779]: eth1: Gained IPv6LL
Mar 17 17:31:04.001462 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 17:31:04.177702 systemd-networkd[779]: eth0: Gained IPv6LL
Mar 17 17:31:04.611821 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:31:04.613622 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:31:04.613622 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:31:04.613622 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:31:04.613622 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:31:04.613622 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:31:04.613622 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:31:04.613622 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:31:04.613622 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:31:04.613622 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:31:04.613622 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:31:04.613622 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:31:04.613622 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:31:04.613622 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:31:04.613622 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Mar 17 17:31:04.977407 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 17 17:31:05.440289 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:31:05.440289 ignition[957]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 17 17:31:05.443929 ignition[957]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:31:05.443929 ignition[957]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:31:05.443929 ignition[957]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 17 17:31:05.443929 ignition[957]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 17 17:31:05.443929 ignition[957]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 17 17:31:05.443929 ignition[957]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 17 17:31:05.443929 ignition[957]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 17 17:31:05.443929 ignition[957]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 17:31:05.443929 ignition[957]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 17:31:05.443929 ignition[957]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:31:05.443929 ignition[957]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:31:05.443929 ignition[957]: INFO : files: files passed
Mar 17 17:31:05.443929 ignition[957]: INFO : Ignition finished successfully
Mar 17 17:31:05.444791 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:31:05.451424 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:31:05.456463 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:31:05.459421 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:31:05.459521 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:31:05.478271 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:31:05.478271 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:31:05.480612 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:31:05.483094 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:31:05.484039 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:31:05.493378 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:31:05.522800 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:31:05.522986 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:31:05.524684 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:31:05.525693 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:31:05.526868 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:31:05.533439 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 17:31:05.549486 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:31:05.556356 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 17:31:05.567331 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:31:05.568798 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:31:05.569686 systemd[1]: Stopped target timers.target - Timer Units.
Mar 17 17:31:05.570773 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 17:31:05.570932 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:31:05.572915 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 17 17:31:05.573630 systemd[1]: Stopped target basic.target - Basic System.
Mar 17 17:31:05.574619 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 17 17:31:05.575641 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:31:05.576789 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 17 17:31:05.577850 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 17 17:31:05.578921 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:31:05.580041 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 17 17:31:05.581121 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 17 17:31:05.582087 systemd[1]: Stopped target swap.target - Swaps.
Mar 17 17:31:05.582978 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 17:31:05.583104 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:31:05.584345 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:31:05.584996 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:31:05.586006 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 17 17:31:05.590230 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:31:05.590943 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 17:31:05.591066 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:31:05.594128 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 17:31:05.594383 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:31:05.596817 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 17:31:05.597061 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 17 17:31:05.598686 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 17 17:31:05.598798 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 17 17:31:05.608729 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 17 17:31:05.610082 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 17:31:05.611057 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:31:05.613554 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 17 17:31:05.614586 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 17:31:05.615324 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:31:05.616056 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 17:31:05.617483 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:31:05.623412 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 17:31:05.625178 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 17 17:31:05.630203 ignition[1009]: INFO : Ignition 2.20.0
Mar 17 17:31:05.630203 ignition[1009]: INFO : Stage: umount
Mar 17 17:31:05.630203 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:31:05.630203 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 17 17:31:05.634136 ignition[1009]: INFO : umount: umount passed
Mar 17 17:31:05.634136 ignition[1009]: INFO : Ignition finished successfully
Mar 17 17:31:05.634874 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 17:31:05.635486 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 17 17:31:05.637004 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 17:31:05.637099 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 17 17:31:05.639126 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 17:31:05.639300 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 17 17:31:05.640912 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 17:31:05.641000 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 17 17:31:05.643103 systemd[1]: Stopped target network.target - Network.
Mar 17 17:31:05.643758 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 17:31:05.643820 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:31:05.644666 systemd[1]: Stopped target paths.target - Path Units.
Mar 17 17:31:05.645555 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 17:31:05.649240 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:31:05.650282 systemd[1]: Stopped target slices.target - Slice Units.
Mar 17 17:31:05.651196 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 17 17:31:05.652025 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 17:31:05.652070 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:31:05.653182 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 17:31:05.653233 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:31:05.654222 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 17:31:05.654275 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 17 17:31:05.655190 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 17 17:31:05.655229 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 17 17:31:05.656191 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 17 17:31:05.657460 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 17 17:31:05.661184 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 17:31:05.661718 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 17:31:05.661802 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 17 17:31:05.663369 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 17:31:05.663449 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 17 17:31:05.663670 systemd-networkd[779]: eth1: DHCPv6 lease lost
Mar 17 17:31:05.668221 systemd-networkd[779]: eth0: DHCPv6 lease lost
Mar 17 17:31:05.669994 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 17:31:05.670718 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 17 17:31:05.672644 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 17:31:05.672793 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 17 17:31:05.675207 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 17:31:05.675257 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:31:05.680258 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 17 17:31:05.680731 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 17:31:05.680786 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:31:05.681479 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:31:05.681522 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:31:05.682138 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 17:31:05.682761 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:31:05.684008 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 17 17:31:05.684085 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:31:05.685604 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:31:05.697630 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 17:31:05.697735 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 17 17:31:05.706741 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 17:31:05.708227 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:31:05.709622 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 17:31:05.709699 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:31:05.711531 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 17:31:05.711577 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:31:05.712839 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 17:31:05.712885 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:31:05.715282 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 17:31:05.715327 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:31:05.716690 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:31:05.716733 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:31:05.723357 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 17 17:31:05.724483 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 17:31:05.724576 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:31:05.728374 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:31:05.728421 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:31:05.729943 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 17:31:05.730046 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 17 17:31:05.731030 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 17 17:31:05.736288 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 17 17:31:05.744252 systemd[1]: Switching root.
Mar 17 17:31:05.776653 systemd-journald[237]: Journal stopped
Mar 17 17:31:06.632296 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Mar 17 17:31:06.632357 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 17:31:06.632369 kernel: SELinux: policy capability open_perms=1
Mar 17 17:31:06.632378 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 17:31:06.632388 kernel: SELinux: policy capability always_check_network=0
Mar 17 17:31:06.632397 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 17:31:06.632407 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 17:31:06.632419 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 17:31:06.632428 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 17:31:06.632438 kernel: audit: type=1403 audit(1742232665.936:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 17:31:06.632448 systemd[1]: Successfully loaded SELinux policy in 34.028ms.
Mar 17 17:31:06.632470 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.676ms.
Mar 17 17:31:06.632482 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 17 17:31:06.632492 systemd[1]: Detected virtualization kvm.
Mar 17 17:31:06.632503 systemd[1]: Detected architecture arm64.
Mar 17 17:31:06.632515 systemd[1]: Detected first boot.
Mar 17 17:31:06.632525 systemd[1]: Hostname set to .
Mar 17 17:31:06.632535 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:31:06.632545 zram_generator::config[1052]: No configuration found.
Mar 17 17:31:06.632556 systemd[1]: Populated /etc with preset unit settings.
Mar 17 17:31:06.632566 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 17:31:06.632576 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 17 17:31:06.632586 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:31:06.632602 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 17 17:31:06.632612 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 17 17:31:06.632622 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 17 17:31:06.632632 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 17 17:31:06.632642 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 17 17:31:06.632653 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 17 17:31:06.632663 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 17 17:31:06.632673 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 17 17:31:06.632683 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:31:06.632695 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:31:06.632706 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 17 17:31:06.632715 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 17 17:31:06.632726 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 17 17:31:06.632736 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:31:06.632746 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 17 17:31:06.632756 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:31:06.632766 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 17 17:31:06.632781 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 17 17:31:06.632792 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:31:06.632802 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 17 17:31:06.632812 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:31:06.632826 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:31:06.632837 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:31:06.632850 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:31:06.632863 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 17 17:31:06.632873 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 17 17:31:06.632916 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:31:06.632929 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:31:06.632939 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:31:06.632949 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 17 17:31:06.632960 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 17 17:31:06.632969 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 17 17:31:06.632979 systemd[1]: Mounting media.mount - External Media Directory...
Mar 17 17:31:06.632992 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 17 17:31:06.633003 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 17 17:31:06.633013 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 17 17:31:06.633027 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 17:31:06.633039 systemd[1]: Reached target machines.target - Containers.
Mar 17 17:31:06.633049 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 17 17:31:06.633061 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:31:06.633072 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:31:06.633082 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 17 17:31:06.633092 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:31:06.633103 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:31:06.633113 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:31:06.633124 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 17 17:31:06.633134 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:31:06.639230 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 17:31:06.639264 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 17 17:31:06.639275 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 17 17:31:06.639288 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 17 17:31:06.639302 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 17 17:31:06.639313 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:31:06.639324 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:31:06.639335 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 17 17:31:06.639345 kernel: loop: module loaded
Mar 17 17:31:06.639364 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 17 17:31:06.639375 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:31:06.639385 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 17 17:31:06.639395 systemd[1]: Stopped verity-setup.service.
Mar 17 17:31:06.639406 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 17 17:31:06.639416 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 17 17:31:06.639426 systemd[1]: Mounted media.mount - External Media Directory.
Mar 17 17:31:06.639437 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 17 17:31:06.639449 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 17 17:31:06.639459 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 17 17:31:06.639470 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:31:06.639480 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 17:31:06.639490 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 17 17:31:06.639500 kernel: ACPI: bus type drm_connector registered
Mar 17 17:31:06.639511 kernel: fuse: init (API version 7.39)
Mar 17 17:31:06.639557 systemd-journald[1122]: Collecting audit messages is disabled.
Mar 17 17:31:06.639590 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:31:06.639603 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:31:06.639614 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:31:06.639624 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:31:06.639634 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:31:06.639644 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:31:06.639659 systemd-journald[1122]: Journal started
Mar 17 17:31:06.639681 systemd-journald[1122]: Runtime Journal (/run/log/journal/af00ba37a01e47b3947237f3d3abafca) is 8.0M, max 76.6M, 68.6M free.
Mar 17 17:31:06.397589 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 17:31:06.420611 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 17 17:31:06.421348 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 17 17:31:06.647842 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:31:06.643018 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 17:31:06.643182 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 17 17:31:06.645424 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:31:06.645987 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:31:06.646997 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 17 17:31:06.648509 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:31:06.649545 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 17 17:31:06.650688 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 17 17:31:06.663712 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 17 17:31:06.670381 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 17 17:31:06.673260 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 17 17:31:06.676377 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 17:31:06.676411 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:31:06.678055 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 17 17:31:06.687373 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 17 17:31:06.690680 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 17 17:31:06.693832 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:31:06.696073 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 17 17:31:06.698552 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 17 17:31:06.700237 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:31:06.701282 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 17 17:31:06.701867 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:31:06.705296 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:31:06.707320 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 17 17:31:06.711109 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 17 17:31:06.714602 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 17 17:31:06.716057 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 17 17:31:06.718328 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 17 17:31:06.734684 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:31:06.738995 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 17 17:31:06.744622 systemd-journald[1122]: Time spent on flushing to /var/log/journal/af00ba37a01e47b3947237f3d3abafca is 69.336ms for 1132 entries.
Mar 17 17:31:06.744622 systemd-journald[1122]: System Journal (/var/log/journal/af00ba37a01e47b3947237f3d3abafca) is 8.0M, max 584.8M, 576.8M free.
Mar 17 17:31:06.829831 kernel: loop0: detected capacity change from 0 to 8
Mar 17 17:31:06.830102 systemd-journald[1122]: Received client request to flush runtime journal.
Mar 17 17:31:06.830552 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 17:31:06.830609 kernel: loop1: detected capacity change from 0 to 113536
Mar 17 17:31:06.764512 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 17 17:31:06.765334 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 17 17:31:06.773443 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 17 17:31:06.796949 udevadm[1173]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 17 17:31:06.801845 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:31:06.825092 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 17:31:06.827402 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 17 17:31:06.836067 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 17 17:31:06.840804 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 17 17:31:06.850908 kernel: loop2: detected capacity change from 0 to 116808
Mar 17 17:31:06.851486 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:31:06.875120 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Mar 17 17:31:06.875462 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Mar 17 17:31:06.881179 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:31:06.884189 kernel: loop3: detected capacity change from 0 to 194096
Mar 17 17:31:06.926414 kernel: loop4: detected capacity change from 0 to 8
Mar 17 17:31:06.928176 kernel: loop5: detected capacity change from 0 to 113536
Mar 17 17:31:06.942267 kernel: loop6: detected capacity change from 0 to 116808
Mar 17 17:31:06.956505 kernel: loop7: detected capacity change from 0 to 194096
Mar 17 17:31:06.980853 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Mar 17 17:31:06.981364 (sd-merge)[1193]: Merged extensions into '/usr'.
Mar 17 17:31:06.988118 systemd[1]: Reloading requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 17 17:31:06.988137 systemd[1]: Reloading...
Mar 17 17:31:07.084303 zram_generator::config[1219]: No configuration found.
Mar 17 17:31:07.219349 ldconfig[1161]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 17:31:07.247891 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:31:07.293063 systemd[1]: Reloading finished in 304 ms.
Mar 17 17:31:07.319381 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 17 17:31:07.322933 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 17 17:31:07.332455 systemd[1]: Starting ensure-sysext.service...
Mar 17 17:31:07.337322 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:31:07.346237 systemd[1]: Reloading requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)...
Mar 17 17:31:07.346252 systemd[1]: Reloading...
Mar 17 17:31:07.373549 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 17:31:07.373796 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 17 17:31:07.374474 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 17:31:07.374672 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Mar 17 17:31:07.374715 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Mar 17 17:31:07.379112 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:31:07.379270 systemd-tmpfiles[1257]: Skipping /boot
Mar 17 17:31:07.388200 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:31:07.388303 systemd-tmpfiles[1257]: Skipping /boot
Mar 17 17:31:07.418250 zram_generator::config[1284]: No configuration found.
Mar 17 17:31:07.511763 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:31:07.556689 systemd[1]: Reloading finished in 210 ms.
Mar 17 17:31:07.573569 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 17 17:31:07.574595 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:31:07.594766 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:31:07.600504 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 17 17:31:07.602523 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 17 17:31:07.607944 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:31:07.611276 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:31:07.614386 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 17 17:31:07.619471 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:31:07.622527 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:31:07.625274 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:31:07.627434 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:31:07.628300 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:31:07.630013 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:31:07.630261 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:31:07.632311 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:31:07.634361 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:31:07.635784 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:31:07.645322 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 17 17:31:07.649044 systemd[1]: Finished ensure-sysext.service.
Mar 17 17:31:07.653320 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 17 17:31:07.662941 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 17 17:31:07.685230 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 17 17:31:07.690323 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 17 17:31:07.691940 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:31:07.693376 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:31:07.698824 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:31:07.701814 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:31:07.707116 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:31:07.707274 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:31:07.708951 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:31:07.713312 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:31:07.713436 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:31:07.715120 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:31:07.725684 systemd-udevd[1331]: Using default interface naming scheme 'v255'.
Mar 17 17:31:07.733496 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 17 17:31:07.734503 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 17:31:07.738255 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 17 17:31:07.741424 augenrules[1364]: No rules
Mar 17 17:31:07.743078 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 17 17:31:07.745408 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:31:07.745647 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:31:07.769065 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:31:07.777327 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:31:07.817113 systemd-resolved[1327]: Positive Trust Anchors:
Mar 17 17:31:07.817472 systemd-resolved[1327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:31:07.817546 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:31:07.822697 systemd-resolved[1327]: Using system hostname 'ci-4152-2-2-0-5dd1d5cf3a'.
Mar 17 17:31:07.825200 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 17 17:31:07.826345 systemd[1]: Reached target time-set.target - System Time Set.
Mar 17 17:31:07.827487 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:31:07.828965 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:31:07.867860 systemd-networkd[1376]: lo: Link UP
Mar 17 17:31:07.868203 systemd-networkd[1376]: lo: Gained carrier
Mar 17 17:31:07.869243 systemd-networkd[1376]: Enumeration completed
Mar 17 17:31:07.869648 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:31:07.870441 systemd[1]: Reached target network.target - Network.
Mar 17 17:31:07.882552 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 17 17:31:07.895360 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 17 17:31:07.943588 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:31:07.943717 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:31:07.945138 systemd-networkd[1376]: eth0: Link UP
Mar 17 17:31:07.945270 systemd-networkd[1376]: eth0: Gained carrier
Mar 17 17:31:07.945328 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:31:07.950170 kernel: mousedev: PS/2 mouse device common for all mice
Mar 17 17:31:07.967037 systemd-networkd[1376]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:31:07.967047 systemd-networkd[1376]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:31:07.968661 systemd-networkd[1376]: eth1: Link UP
Mar 17 17:31:07.968958 systemd-networkd[1376]: eth1: Gained carrier
Mar 17 17:31:07.968978 systemd-networkd[1376]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:31:07.990327 systemd-networkd[1376]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 17:31:07.990928 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection.
Mar 17 17:31:08.005307 systemd-networkd[1376]: eth0: DHCPv4 address 138.199.148.212/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 17 17:31:08.006373 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection.
Mar 17 17:31:08.007160 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection.
Mar 17 17:31:08.016866 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Mar 17 17:31:08.017034 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:31:08.022522 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:31:08.028139 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:31:08.029160 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1392)
Mar 17 17:31:08.037353 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:31:08.040616 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:31:08.040656 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 17:31:08.040999 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:31:08.042222 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:31:08.050775 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:31:08.050951 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:31:08.055730 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:31:08.060973 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:31:08.061113 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:31:08.063858 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:31:08.093166 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Mar 17 17:31:08.093248 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Mar 17 17:31:08.093263 kernel: [drm] features: -context_init
Mar 17 17:31:08.094205 kernel: [drm] number of scanouts: 1
Mar 17 17:31:08.094257 kernel: [drm] number of cap sets: 0
Mar 17 17:31:08.097169 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Mar 17 17:31:08.103313 kernel: Console: switching to colour frame buffer device 160x50
Mar 17 17:31:08.103234 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:31:08.110201 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Mar 17 17:31:08.115218 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 17 17:31:08.120186 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 17 17:31:08.125553 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:31:08.126895 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:31:08.135432 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:31:08.155223 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 17 17:31:08.205387 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:31:08.254606 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 17 17:31:08.265388 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 17 17:31:08.280265 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:31:08.311128 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 17 17:31:08.314499 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:31:08.316091 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:31:08.317334 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 17 17:31:08.318213 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 17 17:31:08.319365 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 17 17:31:08.320079 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 17 17:31:08.320822 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 17 17:31:08.321543 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 17:31:08.321587 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:31:08.322098 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:31:08.324117 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 17 17:31:08.326183 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 17 17:31:08.332435 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 17 17:31:08.334512 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 17 17:31:08.335676 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 17 17:31:08.336396 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:31:08.336989 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:31:08.337622 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:31:08.337652 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:31:08.340278 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 17 17:31:08.344307 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:31:08.344338 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 17 17:31:08.349334 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 17 17:31:08.361709 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 17 17:31:08.365448 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 17 17:31:08.367743 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 17 17:31:08.369754 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 17 17:31:08.372916 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 17 17:31:08.375339 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Mar 17 17:31:08.379918 jq[1448]: false
Mar 17 17:31:08.379335 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 17 17:31:08.383365 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 17 17:31:08.396395 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 17 17:31:08.397752 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 17:31:08.404645 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 17:31:08.407717 systemd[1]: Starting update-engine.service - Update Engine...
Mar 17 17:31:08.412655 coreos-metadata[1446]: Mar 17 17:31:08.411 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Mar 17 17:31:08.413244 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 17 17:31:08.418994 coreos-metadata[1446]: Mar 17 17:31:08.413 INFO Fetch successful
Mar 17 17:31:08.418994 coreos-metadata[1446]: Mar 17 17:31:08.415 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Mar 17 17:31:08.418994 coreos-metadata[1446]: Mar 17 17:31:08.415 INFO Fetch successful
Mar 17 17:31:08.415029 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 17 17:31:08.421190 dbus-daemon[1447]: [system] SELinux support is enabled
Mar 17 17:31:08.434505 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 17 17:31:08.449506 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 17:31:08.458233 jq[1459]: true
Mar 17 17:31:08.453224 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 17 17:31:08.465037 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 17:31:08.465093 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 17 17:31:08.466286 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 17:31:08.466305 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 17 17:31:08.473293 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 17:31:08.473501 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 17 17:31:08.486039 extend-filesystems[1449]: Found loop4 Mar 17 17:31:08.492197 extend-filesystems[1449]: Found loop5 Mar 17 17:31:08.492197 extend-filesystems[1449]: Found loop6 Mar 17 17:31:08.492197 extend-filesystems[1449]: Found loop7 Mar 17 17:31:08.492197 extend-filesystems[1449]: Found sda Mar 17 17:31:08.492197 extend-filesystems[1449]: Found sda1 Mar 17 17:31:08.492197 extend-filesystems[1449]: Found sda2 Mar 17 17:31:08.492197 extend-filesystems[1449]: Found sda3 Mar 17 17:31:08.492197 extend-filesystems[1449]: Found usr Mar 17 17:31:08.492197 extend-filesystems[1449]: Found sda4 Mar 17 17:31:08.492197 extend-filesystems[1449]: Found sda6 Mar 17 17:31:08.492197 extend-filesystems[1449]: Found sda7 Mar 17 17:31:08.492197 extend-filesystems[1449]: Found sda9 Mar 17 17:31:08.492197 extend-filesystems[1449]: Checking size of /dev/sda9 Mar 17 17:31:08.498281 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 17:31:08.524356 update_engine[1458]: I20250317 17:31:08.521806 1458 main.cc:92] Flatcar Update Engine starting Mar 17 17:31:08.524880 jq[1481]: true Mar 17 17:31:08.499223 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 17 17:31:08.516255 (ntainerd)[1477]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 17:31:08.529462 extend-filesystems[1449]: Resized partition /dev/sda9 Mar 17 17:31:08.536858 extend-filesystems[1496]: resize2fs 1.47.1 (20-May-2024) Mar 17 17:31:08.538242 systemd[1]: Started update-engine.service - Update Engine. 
Mar 17 17:31:08.542229 update_engine[1458]: I20250317 17:31:08.539096 1458 update_check_scheduler.cc:74] Next update check in 11m30s Mar 17 17:31:08.548820 tar[1471]: linux-arm64/helm Mar 17 17:31:08.554029 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Mar 17 17:31:08.553991 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 17 17:31:08.554693 systemd-logind[1457]: New seat seat0. Mar 17 17:31:08.562405 systemd-logind[1457]: Watching system buttons on /dev/input/event0 (Power Button) Mar 17 17:31:08.562427 systemd-logind[1457]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Mar 17 17:31:08.562764 systemd[1]: Started systemd-logind.service - User Login Management. Mar 17 17:31:08.590292 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 17 17:31:08.591540 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 17:31:08.645856 bash[1516]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:31:08.647547 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 17:31:08.677434 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1392) Mar 17 17:31:08.672937 systemd[1]: Starting sshkeys.service... Mar 17 17:31:08.704359 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 17 17:31:08.708190 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Mar 17 17:31:08.720564 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Mar 17 17:31:08.723676 extend-filesystems[1496]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Mar 17 17:31:08.723676 extend-filesystems[1496]: old_desc_blocks = 1, new_desc_blocks = 5
Mar 17 17:31:08.723676 extend-filesystems[1496]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Mar 17 17:31:08.736019 extend-filesystems[1449]: Resized filesystem in /dev/sda9
Mar 17 17:31:08.736019 extend-filesystems[1449]: Found sr0
Mar 17 17:31:08.726461 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 17:31:08.726650 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 17 17:31:08.795263 coreos-metadata[1525]: Mar 17 17:31:08.795 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Mar 17 17:31:08.796940 coreos-metadata[1525]: Mar 17 17:31:08.796 INFO Fetch successful
Mar 17 17:31:08.801923 unknown[1525]: wrote ssh authorized keys file for user: core
Mar 17 17:31:08.808876 containerd[1477]: time="2025-03-17T17:31:08.806333440Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Mar 17 17:31:08.827802 locksmithd[1499]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 17:31:08.831526 update-ssh-keys[1536]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:31:08.833159 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 17 17:31:08.836586 systemd[1]: Finished sshkeys.service.
Mar 17 17:31:08.888184 containerd[1477]: time="2025-03-17T17:31:08.887260120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:31:08.889366 containerd[1477]: time="2025-03-17T17:31:08.889334000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:31:08.889432 containerd[1477]: time="2025-03-17T17:31:08.889418960Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 17:31:08.891338 containerd[1477]: time="2025-03-17T17:31:08.890184040Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 17:31:08.891338 containerd[1477]: time="2025-03-17T17:31:08.890357440Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 17 17:31:08.891338 containerd[1477]: time="2025-03-17T17:31:08.890381160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 17 17:31:08.891338 containerd[1477]: time="2025-03-17T17:31:08.890443360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:31:08.891338 containerd[1477]: time="2025-03-17T17:31:08.890455360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:31:08.891338 containerd[1477]: time="2025-03-17T17:31:08.890617920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:31:08.891338 containerd[1477]: time="2025-03-17T17:31:08.890632640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 17:31:08.891338 containerd[1477]: time="2025-03-17T17:31:08.890645200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:31:08.891338 containerd[1477]: time="2025-03-17T17:31:08.890653600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 17:31:08.891338 containerd[1477]: time="2025-03-17T17:31:08.890723840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:31:08.891338 containerd[1477]: time="2025-03-17T17:31:08.890943600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:31:08.891562 containerd[1477]: time="2025-03-17T17:31:08.891041680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:31:08.891562 containerd[1477]: time="2025-03-17T17:31:08.891054640Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 17:31:08.891562 containerd[1477]: time="2025-03-17T17:31:08.891136920Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 17:31:08.891562 containerd[1477]: time="2025-03-17T17:31:08.891198880Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 17:31:08.902156 containerd[1477]: time="2025-03-17T17:31:08.900268600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 17:31:08.902156 containerd[1477]: time="2025-03-17T17:31:08.900335040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 17:31:08.902156 containerd[1477]: time="2025-03-17T17:31:08.900351360Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 17 17:31:08.902156 containerd[1477]: time="2025-03-17T17:31:08.900366520Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 17 17:31:08.902156 containerd[1477]: time="2025-03-17T17:31:08.900380040Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 17:31:08.902156 containerd[1477]: time="2025-03-17T17:31:08.900526600Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 17:31:08.902156 containerd[1477]: time="2025-03-17T17:31:08.900825360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 17:31:08.902156 containerd[1477]: time="2025-03-17T17:31:08.900966280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 17 17:31:08.902156 containerd[1477]: time="2025-03-17T17:31:08.900984800Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 17 17:31:08.902156 containerd[1477]: time="2025-03-17T17:31:08.900999360Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 17 17:31:08.902156 containerd[1477]: time="2025-03-17T17:31:08.901012520Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 17:31:08.902156 containerd[1477]: time="2025-03-17T17:31:08.901024160Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 17:31:08.902156 containerd[1477]: time="2025-03-17T17:31:08.901036280Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 17:31:08.902156 containerd[1477]: time="2025-03-17T17:31:08.901049560Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 17:31:08.902438 containerd[1477]: time="2025-03-17T17:31:08.901063160Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 17:31:08.902438 containerd[1477]: time="2025-03-17T17:31:08.901076760Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 17:31:08.902438 containerd[1477]: time="2025-03-17T17:31:08.901097600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 17:31:08.902438 containerd[1477]: time="2025-03-17T17:31:08.901108880Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 17:31:08.902438 containerd[1477]: time="2025-03-17T17:31:08.901131520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 17:31:08.903203 containerd[1477]: time="2025-03-17T17:31:08.903183680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 17:31:08.903287 containerd[1477]: time="2025-03-17T17:31:08.903273200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 17:31:08.903359 containerd[1477]: time="2025-03-17T17:31:08.903346800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 17:31:08.903423 containerd[1477]: time="2025-03-17T17:31:08.903401600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 17:31:08.903480 containerd[1477]: time="2025-03-17T17:31:08.903468920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 17:31:08.903541 containerd[1477]: time="2025-03-17T17:31:08.903529840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 17:31:08.903615 containerd[1477]: time="2025-03-17T17:31:08.903603400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 17:31:08.903680 containerd[1477]: time="2025-03-17T17:31:08.903660120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 17 17:31:08.903732 containerd[1477]: time="2025-03-17T17:31:08.903721720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 17 17:31:08.903791 containerd[1477]: time="2025-03-17T17:31:08.903780600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 17:31:08.903900 containerd[1477]: time="2025-03-17T17:31:08.903850000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 17 17:31:08.903973 containerd[1477]: time="2025-03-17T17:31:08.903958560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 17:31:08.904049 containerd[1477]: time="2025-03-17T17:31:08.904036840Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 17 17:31:08.905294 containerd[1477]: time="2025-03-17T17:31:08.905169200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 17 17:31:08.905294 containerd[1477]: time="2025-03-17T17:31:08.905192240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 17:31:08.905294 containerd[1477]: time="2025-03-17T17:31:08.905203680Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 17:31:08.905500 containerd[1477]: time="2025-03-17T17:31:08.905484840Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 17:31:08.905664 containerd[1477]: time="2025-03-17T17:31:08.905547280Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 17 17:31:08.905664 containerd[1477]: time="2025-03-17T17:31:08.905561680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 17:31:08.905664 containerd[1477]: time="2025-03-17T17:31:08.905573360Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 17 17:31:08.905664 containerd[1477]: time="2025-03-17T17:31:08.905581760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 17:31:08.905664 containerd[1477]: time="2025-03-17T17:31:08.905594080Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 17 17:31:08.905664 containerd[1477]: time="2025-03-17T17:31:08.905604280Z" level=info msg="NRI interface is disabled by configuration."
Mar 17 17:31:08.905664 containerd[1477]: time="2025-03-17T17:31:08.905613560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 17 17:31:08.908953 containerd[1477]: time="2025-03-17T17:31:08.908192000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 17 17:31:08.908953 containerd[1477]: time="2025-03-17T17:31:08.908249840Z" level=info msg="Connect containerd service"
Mar 17 17:31:08.908953 containerd[1477]: time="2025-03-17T17:31:08.908296200Z" level=info msg="using legacy CRI server"
Mar 17 17:31:08.908953 containerd[1477]: time="2025-03-17T17:31:08.908303160Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 17 17:31:08.908953 containerd[1477]: time="2025-03-17T17:31:08.908564680Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 17 17:31:08.909599 containerd[1477]: time="2025-03-17T17:31:08.909573600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 17:31:08.910792 containerd[1477]: time="2025-03-17T17:31:08.910736040Z" level=info msg="Start subscribing containerd event"
Mar 17 17:31:08.910830 containerd[1477]: time="2025-03-17T17:31:08.910795840Z" level=info msg="Start recovering state"
Mar 17 17:31:08.910895 containerd[1477]: time="2025-03-17T17:31:08.910877120Z" level=info msg="Start event monitor"
Mar 17 17:31:08.910895 containerd[1477]: time="2025-03-17T17:31:08.910889240Z" level=info msg="Start snapshots syncer"
Mar 17 17:31:08.910937 containerd[1477]: time="2025-03-17T17:31:08.910901280Z" level=info msg="Start cni network conf syncer for default"
Mar 17 17:31:08.910937 containerd[1477]: time="2025-03-17T17:31:08.910908560Z" level=info msg="Start streaming server"
Mar 17 17:31:08.911376 containerd[1477]: time="2025-03-17T17:31:08.911347880Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 17 17:31:08.911410 containerd[1477]: time="2025-03-17T17:31:08.911401000Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 17 17:31:08.912394 containerd[1477]: time="2025-03-17T17:31:08.911448120Z" level=info msg="containerd successfully booted in 0.108976s"
Mar 17 17:31:08.911541 systemd[1]: Started containerd.service - containerd container runtime.
Mar 17 17:31:09.150824 tar[1471]: linux-arm64/LICENSE
Mar 17 17:31:09.150941 tar[1471]: linux-arm64/README.md
Mar 17 17:31:09.170251 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 17 17:31:09.246685 sshd_keygen[1483]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 17 17:31:09.267812 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 17 17:31:09.276597 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 17 17:31:09.283930 systemd[1]: issuegen.service: Deactivated successfully.
Mar 17 17:31:09.284189 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 17 17:31:09.295572 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 17 17:31:09.297411 systemd-networkd[1376]: eth0: Gained IPv6LL
Mar 17 17:31:09.298172 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection.
Mar 17 17:31:09.302516 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 17 17:31:09.304916 systemd[1]: Reached target network-online.target - Network is Online.
Mar 17 17:31:09.312445 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:31:09.315524 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 17 17:31:09.319194 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 17 17:31:09.327553 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 17 17:31:09.339492 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Mar 17 17:31:09.340611 systemd[1]: Reached target getty.target - Login Prompts.
Mar 17 17:31:09.356037 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 17 17:31:09.873336 systemd-networkd[1376]: eth1: Gained IPv6LL
Mar 17 17:31:09.875260 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection.
Mar 17 17:31:09.965135 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:31:09.967833 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 17 17:31:09.970921 (kubelet)[1578]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:31:09.975713 systemd[1]: Startup finished in 760ms (kernel) + 6.244s (initrd) + 4.072s (userspace) = 11.077s.
Mar 17 17:31:10.512045 kubelet[1578]: E0317 17:31:10.511994 1578 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:31:10.516390 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:31:10.516534 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:31:20.767308 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:31:20.776516 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:31:20.878496 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:31:20.890630 (kubelet)[1598]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:31:20.951694 kubelet[1598]: E0317 17:31:20.951622 1598 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:31:20.955386 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:31:20.955620 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:31:31.206480 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 17 17:31:31.217528 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:31:31.330523 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:31:31.330925 (kubelet)[1614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:31:31.378397 kubelet[1614]: E0317 17:31:31.378343 1614 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:31:31.380597 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:31:31.380857 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:31:40.258250 systemd-timesyncd[1341]: Contacted time server 78.47.170.34:123 (2.flatcar.pool.ntp.org).
Mar 17 17:31:40.258329 systemd-timesyncd[1341]: Initial clock synchronization to Mon 2025-03-17 17:31:40.121264 UTC.
Mar 17 17:31:41.631246 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 17 17:31:41.637411 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:31:41.737313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:31:41.741541 (kubelet)[1630]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:31:41.787131 kubelet[1630]: E0317 17:31:41.787073 1630 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:31:41.791524 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:31:41.791760 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:31:51.903670 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 17 17:31:51.911488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:31:52.026494 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:31:52.028110 (kubelet)[1646]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:31:52.076537 kubelet[1646]: E0317 17:31:52.076493 1646 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:31:52.079943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:31:52.080137 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:31:53.911241 update_engine[1458]: I20250317 17:31:53.910990 1458 update_attempter.cc:509] Updating boot flags...
Mar 17 17:31:53.958179 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1663)
Mar 17 17:31:54.016627 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1665)
Mar 17 17:32:02.153575 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 17 17:32:02.162558 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:32:02.272287 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:32:02.286680 (kubelet)[1680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:32:02.331252 kubelet[1680]: E0317 17:32:02.331198 1680 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:32:02.333866 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:32:02.334160 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:32:12.403515 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Mar 17 17:32:12.414523 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:32:12.525654 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:32:12.526621 (kubelet)[1696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:32:12.575614 kubelet[1696]: E0317 17:32:12.575539 1696 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:32:12.579734 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:32:12.579882 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:32:22.653858 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Mar 17 17:32:22.663655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 17 17:32:22.774488 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:32:22.788929 (kubelet)[1712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:32:22.837929 kubelet[1712]: E0317 17:32:22.837818 1712 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:32:22.840239 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:32:22.840384 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:32:32.903752 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Mar 17 17:32:32.913482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:32:33.015502 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:32:33.025680 (kubelet)[1727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:32:33.072824 kubelet[1727]: E0317 17:32:33.072764 1727 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:32:33.075415 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:32:33.075576 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:32:43.153891 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. 
Mar 17 17:32:43.160492 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:32:43.267904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:32:43.273001 (kubelet)[1743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:32:43.317031 kubelet[1743]: E0317 17:32:43.316960 1743 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:32:43.320246 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:32:43.320507 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:32:53.403403 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Mar 17 17:32:53.416538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:32:53.531507 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:32:53.531560 (kubelet)[1759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:32:53.577519 kubelet[1759]: E0317 17:32:53.577477 1759 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:32:53.581182 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:32:53.581492 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:32:58.851100 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 17 17:32:58.861740 systemd[1]: Started sshd@0-138.199.148.212:22-139.178.89.65:52938.service - OpenSSH per-connection server daemon (139.178.89.65:52938).
Mar 17 17:32:59.867773 sshd[1768]: Accepted publickey for core from 139.178.89.65 port 52938 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:32:59.870054 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:32:59.882302 systemd-logind[1457]: New session 1 of user core.
Mar 17 17:32:59.884686 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 17 17:32:59.897686 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 17 17:32:59.911728 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 17 17:32:59.920822 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 17 17:32:59.925246 (systemd)[1772]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 17 17:33:00.038310 systemd[1772]: Queued start job for default target default.target.
Mar 17 17:33:00.051232 systemd[1772]: Created slice app.slice - User Application Slice.
Mar 17 17:33:00.051290 systemd[1772]: Reached target paths.target - Paths.
Mar 17 17:33:00.051555 systemd[1772]: Reached target timers.target - Timers.
Mar 17 17:33:00.054006 systemd[1772]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 17 17:33:00.081615 systemd[1772]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 17 17:33:00.081744 systemd[1772]: Reached target sockets.target - Sockets.
Mar 17 17:33:00.081758 systemd[1772]: Reached target basic.target - Basic System.
Mar 17 17:33:00.081805 systemd[1772]: Reached target default.target - Main User Target.
Mar 17 17:33:00.081833 systemd[1772]: Startup finished in 150ms.
Mar 17 17:33:00.081977 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 17 17:33:00.089522 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 17 17:33:00.792611 systemd[1]: Started sshd@1-138.199.148.212:22-139.178.89.65:52942.service - OpenSSH per-connection server daemon (139.178.89.65:52942).
Mar 17 17:33:01.769982 sshd[1783]: Accepted publickey for core from 139.178.89.65 port 52942 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:33:01.772177 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:33:01.778801 systemd-logind[1457]: New session 2 of user core.
Mar 17 17:33:01.784521 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 17 17:33:02.445548 sshd[1785]: Connection closed by 139.178.89.65 port 52942
Mar 17 17:33:02.446431 sshd-session[1783]: pam_unix(sshd:session): session closed for user core
Mar 17 17:33:02.451554 systemd[1]: sshd@1-138.199.148.212:22-139.178.89.65:52942.service: Deactivated successfully.
Mar 17 17:33:02.453289 systemd[1]: session-2.scope: Deactivated successfully.
Mar 17 17:33:02.454184 systemd-logind[1457]: Session 2 logged out. Waiting for processes to exit.
Mar 17 17:33:02.455493 systemd-logind[1457]: Removed session 2.
Mar 17 17:33:02.623711 systemd[1]: Started sshd@2-138.199.148.212:22-139.178.89.65:39938.service - OpenSSH per-connection server daemon (139.178.89.65:39938).
Mar 17 17:33:03.618732 sshd[1790]: Accepted publickey for core from 139.178.89.65 port 39938 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:33:03.620654 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:33:03.621574 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Mar 17 17:33:03.627414 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:33:03.634972 systemd-logind[1457]: New session 3 of user core.
Mar 17 17:33:03.635460 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 17 17:33:03.746446 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:33:03.746739 (kubelet)[1801]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:33:03.799519 kubelet[1801]: E0317 17:33:03.799477 1801 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:33:03.802180 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:33:03.802331 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:33:04.298340 sshd[1795]: Connection closed by 139.178.89.65 port 39938
Mar 17 17:33:04.299218 sshd-session[1790]: pam_unix(sshd:session): session closed for user core
Mar 17 17:33:04.303855 systemd[1]: sshd@2-138.199.148.212:22-139.178.89.65:39938.service: Deactivated successfully.
Mar 17 17:33:04.307110 systemd[1]: session-3.scope: Deactivated successfully.
Mar 17 17:33:04.309043 systemd-logind[1457]: Session 3 logged out. Waiting for processes to exit.
Mar 17 17:33:04.310904 systemd-logind[1457]: Removed session 3.
Mar 17 17:33:04.478622 systemd[1]: Started sshd@3-138.199.148.212:22-139.178.89.65:39948.service - OpenSSH per-connection server daemon (139.178.89.65:39948).
Mar 17 17:33:05.456791 sshd[1813]: Accepted publickey for core from 139.178.89.65 port 39948 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:33:05.458825 sshd-session[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:33:05.463491 systemd-logind[1457]: New session 4 of user core.
Mar 17 17:33:05.478524 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 17 17:33:06.138570 sshd[1815]: Connection closed by 139.178.89.65 port 39948
Mar 17 17:33:06.137453 sshd-session[1813]: pam_unix(sshd:session): session closed for user core
Mar 17 17:33:06.142758 systemd-logind[1457]: Session 4 logged out. Waiting for processes to exit.
Mar 17 17:33:06.143555 systemd[1]: sshd@3-138.199.148.212:22-139.178.89.65:39948.service: Deactivated successfully.
Mar 17 17:33:06.146567 systemd[1]: session-4.scope: Deactivated successfully.
Mar 17 17:33:06.150316 systemd-logind[1457]: Removed session 4.
Mar 17 17:33:06.319598 systemd[1]: Started sshd@4-138.199.148.212:22-139.178.89.65:39954.service - OpenSSH per-connection server daemon (139.178.89.65:39954).
Mar 17 17:33:07.313854 sshd[1820]: Accepted publickey for core from 139.178.89.65 port 39954 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:33:07.315774 sshd-session[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:33:07.320502 systemd-logind[1457]: New session 5 of user core.
Mar 17 17:33:07.331484 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 17 17:33:07.846650 sudo[1823]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 17 17:33:07.846935 sudo[1823]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:33:07.863129 sudo[1823]: pam_unix(sudo:session): session closed for user root
Mar 17 17:33:08.026087 sshd[1822]: Connection closed by 139.178.89.65 port 39954
Mar 17 17:33:08.024709 sshd-session[1820]: pam_unix(sshd:session): session closed for user core
Mar 17 17:33:08.029523 systemd[1]: sshd@4-138.199.148.212:22-139.178.89.65:39954.service: Deactivated successfully.
Mar 17 17:33:08.032276 systemd[1]: session-5.scope: Deactivated successfully.
Mar 17 17:33:08.033638 systemd-logind[1457]: Session 5 logged out. Waiting for processes to exit.
Mar 17 17:33:08.035059 systemd-logind[1457]: Removed session 5.
Mar 17 17:33:08.198632 systemd[1]: Started sshd@5-138.199.148.212:22-139.178.89.65:39958.service - OpenSSH per-connection server daemon (139.178.89.65:39958).
Mar 17 17:33:09.178519 sshd[1828]: Accepted publickey for core from 139.178.89.65 port 39958 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:33:09.180979 sshd-session[1828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:33:09.186072 systemd-logind[1457]: New session 6 of user core.
Mar 17 17:33:09.193466 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 17 17:33:09.703025 sudo[1832]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 17 17:33:09.703667 sudo[1832]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:33:09.708849 sudo[1832]: pam_unix(sudo:session): session closed for user root
Mar 17 17:33:09.714254 sudo[1831]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 17 17:33:09.714549 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:33:09.733739 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:33:09.761621 augenrules[1854]: No rules
Mar 17 17:33:09.762902 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:33:09.763253 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:33:09.764903 sudo[1831]: pam_unix(sudo:session): session closed for user root
Mar 17 17:33:09.923167 sshd[1830]: Connection closed by 139.178.89.65 port 39958
Mar 17 17:33:09.924129 sshd-session[1828]: pam_unix(sshd:session): session closed for user core
Mar 17 17:33:09.929473 systemd-logind[1457]: Session 6 logged out. Waiting for processes to exit.
Mar 17 17:33:09.931119 systemd[1]: sshd@5-138.199.148.212:22-139.178.89.65:39958.service: Deactivated successfully.
Mar 17 17:33:09.933084 systemd[1]: session-6.scope: Deactivated successfully.
Mar 17 17:33:09.934354 systemd-logind[1457]: Removed session 6.
Mar 17 17:33:10.100636 systemd[1]: Started sshd@6-138.199.148.212:22-139.178.89.65:39966.service - OpenSSH per-connection server daemon (139.178.89.65:39966).
Mar 17 17:33:11.079701 sshd[1862]: Accepted publickey for core from 139.178.89.65 port 39966 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug
Mar 17 17:33:11.082196 sshd-session[1862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:33:11.088176 systemd-logind[1457]: New session 7 of user core.
Mar 17 17:33:11.095974 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 17 17:33:11.597649 sudo[1865]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 17 17:33:11.597914 sudo[1865]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:33:11.922648 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 17 17:33:11.922790 (dockerd)[1883]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 17 17:33:12.150784 dockerd[1883]: time="2025-03-17T17:33:12.150715890Z" level=info msg="Starting up"
Mar 17 17:33:12.245133 dockerd[1883]: time="2025-03-17T17:33:12.244642578Z" level=info msg="Loading containers: start."
Mar 17 17:33:12.407183 kernel: Initializing XFRM netlink socket
Mar 17 17:33:12.504918 systemd-networkd[1376]: docker0: Link UP
Mar 17 17:33:12.543687 dockerd[1883]: time="2025-03-17T17:33:12.543640971Z" level=info msg="Loading containers: done."
Mar 17 17:33:12.558663 dockerd[1883]: time="2025-03-17T17:33:12.558591969Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 17 17:33:12.558879 dockerd[1883]: time="2025-03-17T17:33:12.558696250Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Mar 17 17:33:12.558879 dockerd[1883]: time="2025-03-17T17:33:12.558805810Z" level=info msg="Daemon has completed initialization"
Mar 17 17:33:12.615989 dockerd[1883]: time="2025-03-17T17:33:12.615912627Z" level=info msg="API listen on /run/docker.sock"
Mar 17 17:33:12.616226 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 17 17:33:13.706933 containerd[1477]: time="2025-03-17T17:33:13.706667980Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\""
Mar 17 17:33:13.903183 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Mar 17 17:33:13.908439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:33:14.031468 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:33:14.031824 (kubelet)[2082]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:33:14.086581 kubelet[2082]: E0317 17:33:14.086522 2082 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:33:14.088882 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:33:14.089024 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:33:14.320756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1388995031.mount: Deactivated successfully.
Mar 17 17:33:15.248242 containerd[1477]: time="2025-03-17T17:33:15.247268242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:33:15.250029 containerd[1477]: time="2025-03-17T17:33:15.249934096Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=29793616"
Mar 17 17:33:15.250763 containerd[1477]: time="2025-03-17T17:33:15.250701659Z" level=info msg="ImageCreate event name:\"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:33:15.254856 containerd[1477]: time="2025-03-17T17:33:15.254792280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:33:15.256555 containerd[1477]: time="2025-03-17T17:33:15.256385728Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"29790324\" in 1.549673548s"
Mar 17 17:33:15.256555 containerd[1477]: time="2025-03-17T17:33:15.256421208Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\""
Mar 17 17:33:15.287722 containerd[1477]: time="2025-03-17T17:33:15.287689487Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\""
Mar 17 17:33:16.544104 containerd[1477]: time="2025-03-17T17:33:16.544036260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:33:16.545208 containerd[1477]: time="2025-03-17T17:33:16.545126625Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=26861187"
Mar 17 17:33:16.546513 containerd[1477]: time="2025-03-17T17:33:16.546458312Z" level=info msg="ImageCreate event name:\"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:33:16.550021 containerd[1477]: time="2025-03-17T17:33:16.549963929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:33:16.554160 containerd[1477]: time="2025-03-17T17:33:16.553303866Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"28301963\" in 1.265427179s"
Mar 17 17:33:16.554160 containerd[1477]: time="2025-03-17T17:33:16.553351346Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\""
Mar 17 17:33:16.577970 containerd[1477]: time="2025-03-17T17:33:16.577932790Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\""
Mar 17 17:33:17.510138 containerd[1477]: time="2025-03-17T17:33:17.510036456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:33:17.511347 containerd[1477]: time="2025-03-17T17:33:17.511296425Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=16264656"
Mar 17 17:33:17.512121 containerd[1477]: time="2025-03-17T17:33:17.512073936Z" level=info msg="ImageCreate event name:\"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:33:17.516408 containerd[1477]: time="2025-03-17T17:33:17.516361943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:33:17.518170 containerd[1477]: time="2025-03-17T17:33:17.517647554Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"17705450\" in 939.521202ms"
Mar 17 17:33:17.518170 containerd[1477]: time="2025-03-17T17:33:17.517684715Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\""
Mar 17 17:33:17.540096 containerd[1477]: time="2025-03-17T17:33:17.540042589Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\""
Mar 17 17:33:18.511360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1149913742.mount: Deactivated successfully.
Mar 17 17:33:18.852132 containerd[1477]: time="2025-03-17T17:33:18.852010261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:33:18.853600 containerd[1477]: time="2025-03-17T17:33:18.853555440Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=25771874"
Mar 17 17:33:18.856203 containerd[1477]: time="2025-03-17T17:33:18.854346070Z" level=info msg="ImageCreate event name:\"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:33:18.858420 containerd[1477]: time="2025-03-17T17:33:18.858379344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:33:18.859626 containerd[1477]: time="2025-03-17T17:33:18.859589990Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"25770867\" in 1.319502599s"
Mar 17 17:33:18.860020 containerd[1477]: time="2025-03-17T17:33:18.860001446Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\""
Mar 17 17:33:18.884240 containerd[1477]: time="2025-03-17T17:33:18.884186008Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 17 17:33:19.442955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2729931157.mount: Deactivated successfully.
Mar 17 17:33:20.065858 containerd[1477]: time="2025-03-17T17:33:20.065751542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:33:20.067895 containerd[1477]: time="2025-03-17T17:33:20.067509926Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461"
Mar 17 17:33:20.070275 containerd[1477]: time="2025-03-17T17:33:20.068890016Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:33:20.074354 containerd[1477]: time="2025-03-17T17:33:20.073845636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:33:20.075548 containerd[1477]: time="2025-03-17T17:33:20.075496256Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.191259605s"
Mar 17 17:33:20.075548 containerd[1477]: time="2025-03-17T17:33:20.075546137Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Mar 17 17:33:20.097240 containerd[1477]: time="2025-03-17T17:33:20.097192603Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Mar 17 17:33:20.619235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount888645686.mount: Deactivated successfully.
Mar 17 17:33:20.624878 containerd[1477]: time="2025-03-17T17:33:20.624092767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:33:20.625044 containerd[1477]: time="2025-03-17T17:33:20.625014440Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841"
Mar 17 17:33:20.625984 containerd[1477]: time="2025-03-17T17:33:20.625958874Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:33:20.628219 containerd[1477]: time="2025-03-17T17:33:20.628195316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:33:20.628964 containerd[1477]: time="2025-03-17T17:33:20.628923942Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 531.428688ms"
Mar 17 17:33:20.628964 containerd[1477]: time="2025-03-17T17:33:20.628954863Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Mar 17 17:33:20.649802 containerd[1477]: time="2025-03-17T17:33:20.649767619Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Mar 17 17:33:21.212587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1343978710.mount: Deactivated successfully.
Mar 17 17:33:22.654826 containerd[1477]: time="2025-03-17T17:33:22.653602506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:33:22.654826 containerd[1477]: time="2025-03-17T17:33:22.654782987Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552"
Mar 17 17:33:22.655377 containerd[1477]: time="2025-03-17T17:33:22.655351047Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:33:22.659871 containerd[1477]: time="2025-03-17T17:33:22.659839962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:33:22.661486 containerd[1477]: time="2025-03-17T17:33:22.661457538Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.011626836s"
Mar 17 17:33:22.661592 containerd[1477]: time="2025-03-17T17:33:22.661575702Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Mar 17 17:33:24.153708 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Mar 17 17:33:24.163386 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:33:24.273400 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:33:24.273965 (kubelet)[2343]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:33:24.319740 kubelet[2343]: E0317 17:33:24.319691 2343 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:33:24.322054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:33:24.322330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:33:28.506286 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:33:28.525728 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:33:28.559068 systemd[1]: Reloading requested from client PID 2357 ('systemctl') (unit session-7.scope)...
Mar 17 17:33:28.559090 systemd[1]: Reloading...
Mar 17 17:33:28.654184 zram_generator::config[2393]: No configuration found.
Mar 17 17:33:28.764492 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:33:28.831005 systemd[1]: Reloading finished in 271 ms.
Mar 17 17:33:28.874786 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 17 17:33:28.874938 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 17 17:33:28.875469 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:33:28.882439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:33:28.982343 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:33:28.988937 (kubelet)[2444]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 17:33:29.036598 kubelet[2444]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:33:29.036598 kubelet[2444]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 17:33:29.036598 kubelet[2444]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:33:29.038081 kubelet[2444]: I0317 17:33:29.037997 2444 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 17:33:29.791937 kubelet[2444]: I0317 17:33:29.791881 2444 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 17 17:33:29.791937 kubelet[2444]: I0317 17:33:29.791925 2444 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 17:33:29.792320 kubelet[2444]: I0317 17:33:29.792303 2444 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 17 17:33:29.812227 kubelet[2444]: E0317 17:33:29.812193 2444 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://138.199.148.212:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 138.199.148.212:6443: connect: connection refused
Mar 17 17:33:29.812702 kubelet[2444]: I0317 17:33:29.812466 2444 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 17:33:29.824035 kubelet[2444]: I0317 17:33:29.823984 2444 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 17:33:29.825602 kubelet[2444]: I0317 17:33:29.825528 2444 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 17:33:29.825788 kubelet[2444]: I0317 17:33:29.825585 2444 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-2-0-5dd1d5cf3a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy"
:"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 17:33:29.825907 kubelet[2444]: I0317 17:33:29.825849 2444 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:33:29.825907 kubelet[2444]: I0317 17:33:29.825862 2444 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 17:33:29.826158 kubelet[2444]: I0317 17:33:29.826120 2444 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:33:29.828187 kubelet[2444]: I0317 17:33:29.827734 2444 kubelet.go:400] "Attempting to sync node with API server" Mar 17 17:33:29.828187 kubelet[2444]: I0317 17:33:29.827759 2444 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:33:29.828187 kubelet[2444]: I0317 17:33:29.827918 2444 kubelet.go:312] "Adding apiserver pod source" Mar 17 17:33:29.828187 kubelet[2444]: W0317 17:33:29.827896 2444 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.148.212:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-2-0-5dd1d5cf3a&limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused Mar 17 17:33:29.828187 kubelet[2444]: E0317 17:33:29.827988 2444 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://138.199.148.212:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-2-0-5dd1d5cf3a&limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused Mar 17 17:33:29.828187 kubelet[2444]: I0317 17:33:29.828003 2444 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:33:29.829427 kubelet[2444]: I0317 17:33:29.829396 2444 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 
17:33:29.829812 kubelet[2444]: I0317 17:33:29.829785 2444 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:33:29.829910 kubelet[2444]: W0317 17:33:29.829892 2444 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 17:33:29.830904 kubelet[2444]: I0317 17:33:29.830868 2444 server.go:1264] "Started kubelet" Mar 17 17:33:29.831020 kubelet[2444]: W0317 17:33:29.830979 2444 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.199.148.212:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused Mar 17 17:33:29.831020 kubelet[2444]: E0317 17:33:29.831022 2444 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://138.199.148.212:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused Mar 17 17:33:29.836176 kubelet[2444]: I0317 17:33:29.836130 2444 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:33:29.839872 kubelet[2444]: E0317 17:33:29.839065 2444 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://138.199.148.212:6443/api/v1/namespaces/default/events\": dial tcp 138.199.148.212:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-2-0-5dd1d5cf3a.182da78281ff8fb2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-2-0-5dd1d5cf3a,UID:ci-4152-2-2-0-5dd1d5cf3a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-2-0-5dd1d5cf3a,},FirstTimestamp:2025-03-17 17:33:29.830846386 +0000 UTC m=+0.838498838,LastTimestamp:2025-03-17 17:33:29.830846386 +0000 UTC 
m=+0.838498838,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-2-0-5dd1d5cf3a,}" Mar 17 17:33:29.843477 kubelet[2444]: I0317 17:33:29.843198 2444 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:33:29.845965 kubelet[2444]: I0317 17:33:29.844432 2444 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 17:33:29.845965 kubelet[2444]: I0317 17:33:29.845430 2444 server.go:455] "Adding debug handlers to kubelet server" Mar 17 17:33:29.846473 kubelet[2444]: I0317 17:33:29.846425 2444 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:33:29.846790 kubelet[2444]: I0317 17:33:29.846771 2444 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:33:29.846969 kubelet[2444]: I0317 17:33:29.846936 2444 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:33:29.847017 kubelet[2444]: I0317 17:33:29.847009 2444 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:33:29.847313 kubelet[2444]: E0317 17:33:29.847286 2444 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.148.212:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-2-0-5dd1d5cf3a?timeout=10s\": dial tcp 138.199.148.212:6443: connect: connection refused" interval="200ms" Mar 17 17:33:29.847630 kubelet[2444]: I0317 17:33:29.847609 2444 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:33:29.847777 kubelet[2444]: I0317 17:33:29.847759 2444 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:33:29.849473 kubelet[2444]: W0317 17:33:29.849433 2444 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.199.148.212:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused Mar 17 17:33:29.849654 kubelet[2444]: E0317 17:33:29.849635 2444 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://138.199.148.212:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused Mar 17 17:33:29.849929 kubelet[2444]: I0317 17:33:29.849908 2444 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:33:29.859015 kubelet[2444]: I0317 17:33:29.858940 2444 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:33:29.860081 kubelet[2444]: I0317 17:33:29.860040 2444 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:33:29.860243 kubelet[2444]: I0317 17:33:29.860217 2444 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:33:29.860243 kubelet[2444]: I0317 17:33:29.860242 2444 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 17:33:29.860325 kubelet[2444]: E0317 17:33:29.860285 2444 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:33:29.868510 kubelet[2444]: W0317 17:33:29.868410 2444 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.148.212:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused Mar 17 17:33:29.868510 kubelet[2444]: E0317 17:33:29.868488 2444 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://138.199.148.212:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused Mar 17 17:33:29.868630 kubelet[2444]: E0317 17:33:29.868557 2444 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:33:29.878381 kubelet[2444]: I0317 17:33:29.878350 2444 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:33:29.878524 kubelet[2444]: I0317 17:33:29.878367 2444 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:33:29.878524 kubelet[2444]: I0317 17:33:29.878458 2444 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:33:29.880360 kubelet[2444]: I0317 17:33:29.880339 2444 policy_none.go:49] "None policy: Start" Mar 17 17:33:29.881096 kubelet[2444]: I0317 17:33:29.881006 2444 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:33:29.881191 kubelet[2444]: I0317 17:33:29.881104 2444 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:33:29.889005 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 17 17:33:29.904842 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 17 17:33:29.911125 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 17 17:33:29.920071 kubelet[2444]: I0317 17:33:29.920023 2444 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:33:29.920646 kubelet[2444]: I0317 17:33:29.920408 2444 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:33:29.920646 kubelet[2444]: I0317 17:33:29.920566 2444 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:33:29.923544 kubelet[2444]: E0317 17:33:29.923489 2444 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-2-0-5dd1d5cf3a\" not found" Mar 17 17:33:29.947340 kubelet[2444]: I0317 17:33:29.947289 2444 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-2-0-5dd1d5cf3a" Mar 17 17:33:29.947863 kubelet[2444]: E0317 17:33:29.947818 2444 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.148.212:6443/api/v1/nodes\": dial tcp 138.199.148.212:6443: connect: connection refused" node="ci-4152-2-2-0-5dd1d5cf3a" Mar 17 17:33:29.961611 kubelet[2444]: I0317 17:33:29.961381 2444 topology_manager.go:215] "Topology Admit Handler" podUID="54aa9d03f1f655313a81e05671d6ae93" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-2-0-5dd1d5cf3a" Mar 17 17:33:29.964364 kubelet[2444]: I0317 17:33:29.963979 2444 topology_manager.go:215] "Topology Admit Handler" podUID="dd5283fcf1bea7eb1b3e3a107600d58b" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-2-0-5dd1d5cf3a" Mar 17 17:33:29.968216 kubelet[2444]: I0317 17:33:29.968083 2444 topology_manager.go:215] "Topology Admit Handler" podUID="21457cf4fdd4a10480acb3902f4e166e" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-2-0-5dd1d5cf3a" Mar 17 17:33:29.976871 systemd[1]: Created slice kubepods-burstable-pod54aa9d03f1f655313a81e05671d6ae93.slice - libcontainer container 
kubepods-burstable-pod54aa9d03f1f655313a81e05671d6ae93.slice. Mar 17 17:33:30.001498 systemd[1]: Created slice kubepods-burstable-poddd5283fcf1bea7eb1b3e3a107600d58b.slice - libcontainer container kubepods-burstable-poddd5283fcf1bea7eb1b3e3a107600d58b.slice. Mar 17 17:33:30.006714 systemd[1]: Created slice kubepods-burstable-pod21457cf4fdd4a10480acb3902f4e166e.slice - libcontainer container kubepods-burstable-pod21457cf4fdd4a10480acb3902f4e166e.slice. Mar 17 17:33:30.048898 kubelet[2444]: E0317 17:33:30.047917 2444 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.148.212:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-2-0-5dd1d5cf3a?timeout=10s\": dial tcp 138.199.148.212:6443: connect: connection refused" interval="400ms" Mar 17 17:33:30.048898 kubelet[2444]: I0317 17:33:30.048111 2444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21457cf4fdd4a10480acb3902f4e166e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-2-0-5dd1d5cf3a\" (UID: \"21457cf4fdd4a10480acb3902f4e166e\") " pod="kube-system/kube-apiserver-ci-4152-2-2-0-5dd1d5cf3a" Mar 17 17:33:30.048898 kubelet[2444]: I0317 17:33:30.048169 2444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/54aa9d03f1f655313a81e05671d6ae93-ca-certs\") pod \"kube-controller-manager-ci-4152-2-2-0-5dd1d5cf3a\" (UID: \"54aa9d03f1f655313a81e05671d6ae93\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-0-5dd1d5cf3a" Mar 17 17:33:30.048898 kubelet[2444]: I0317 17:33:30.048222 2444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/54aa9d03f1f655313a81e05671d6ae93-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-2-0-5dd1d5cf3a\" 
(UID: \"54aa9d03f1f655313a81e05671d6ae93\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-0-5dd1d5cf3a" Mar 17 17:33:30.048898 kubelet[2444]: I0317 17:33:30.048303 2444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/54aa9d03f1f655313a81e05671d6ae93-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-2-0-5dd1d5cf3a\" (UID: \"54aa9d03f1f655313a81e05671d6ae93\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-0-5dd1d5cf3a" Mar 17 17:33:30.049631 kubelet[2444]: I0317 17:33:30.048365 2444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/54aa9d03f1f655313a81e05671d6ae93-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-2-0-5dd1d5cf3a\" (UID: \"54aa9d03f1f655313a81e05671d6ae93\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-0-5dd1d5cf3a" Mar 17 17:33:30.049631 kubelet[2444]: I0317 17:33:30.048398 2444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21457cf4fdd4a10480acb3902f4e166e-ca-certs\") pod \"kube-apiserver-ci-4152-2-2-0-5dd1d5cf3a\" (UID: \"21457cf4fdd4a10480acb3902f4e166e\") " pod="kube-system/kube-apiserver-ci-4152-2-2-0-5dd1d5cf3a" Mar 17 17:33:30.049631 kubelet[2444]: I0317 17:33:30.048427 2444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21457cf4fdd4a10480acb3902f4e166e-k8s-certs\") pod \"kube-apiserver-ci-4152-2-2-0-5dd1d5cf3a\" (UID: \"21457cf4fdd4a10480acb3902f4e166e\") " pod="kube-system/kube-apiserver-ci-4152-2-2-0-5dd1d5cf3a" Mar 17 17:33:30.049631 kubelet[2444]: I0317 17:33:30.048459 2444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/dd5283fcf1bea7eb1b3e3a107600d58b-kubeconfig\") pod \"kube-scheduler-ci-4152-2-2-0-5dd1d5cf3a\" (UID: \"dd5283fcf1bea7eb1b3e3a107600d58b\") " pod="kube-system/kube-scheduler-ci-4152-2-2-0-5dd1d5cf3a" Mar 17 17:33:30.049631 kubelet[2444]: I0317 17:33:30.048491 2444 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54aa9d03f1f655313a81e05671d6ae93-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-2-0-5dd1d5cf3a\" (UID: \"54aa9d03f1f655313a81e05671d6ae93\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-0-5dd1d5cf3a" Mar 17 17:33:30.151381 kubelet[2444]: I0317 17:33:30.151110 2444 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-2-0-5dd1d5cf3a" Mar 17 17:33:30.151735 kubelet[2444]: E0317 17:33:30.151648 2444 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.148.212:6443/api/v1/nodes\": dial tcp 138.199.148.212:6443: connect: connection refused" node="ci-4152-2-2-0-5dd1d5cf3a" Mar 17 17:33:30.298399 containerd[1477]: time="2025-03-17T17:33:30.298339862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-2-0-5dd1d5cf3a,Uid:54aa9d03f1f655313a81e05671d6ae93,Namespace:kube-system,Attempt:0,}" Mar 17 17:33:30.305355 containerd[1477]: time="2025-03-17T17:33:30.304913490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-2-0-5dd1d5cf3a,Uid:dd5283fcf1bea7eb1b3e3a107600d58b,Namespace:kube-system,Attempt:0,}" Mar 17 17:33:30.310176 containerd[1477]: time="2025-03-17T17:33:30.310094319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-2-0-5dd1d5cf3a,Uid:21457cf4fdd4a10480acb3902f4e166e,Namespace:kube-system,Attempt:0,}" Mar 17 17:33:30.450528 kubelet[2444]: E0317 17:33:30.450464 2444 controller.go:145] "Failed to ensure lease exists, 
will retry" err="Get \"https://138.199.148.212:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-2-0-5dd1d5cf3a?timeout=10s\": dial tcp 138.199.148.212:6443: connect: connection refused" interval="800ms" Mar 17 17:33:30.555480 kubelet[2444]: I0317 17:33:30.555031 2444 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-2-0-5dd1d5cf3a" Mar 17 17:33:30.555623 kubelet[2444]: E0317 17:33:30.555489 2444 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.148.212:6443/api/v1/nodes\": dial tcp 138.199.148.212:6443: connect: connection refused" node="ci-4152-2-2-0-5dd1d5cf3a" Mar 17 17:33:30.805191 kubelet[2444]: W0317 17:33:30.804984 2444 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.199.148.212:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused Mar 17 17:33:30.805191 kubelet[2444]: E0317 17:33:30.805098 2444 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://138.199.148.212:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused Mar 17 17:33:30.842065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3435802948.mount: Deactivated successfully. 
Mar 17 17:33:30.847687 containerd[1477]: time="2025-03-17T17:33:30.847640330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:33:30.850248 containerd[1477]: time="2025-03-17T17:33:30.850194324Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Mar 17 17:33:30.851768 containerd[1477]: time="2025-03-17T17:33:30.851735688Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:33:30.853069 containerd[1477]: time="2025-03-17T17:33:30.853035325Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:33:30.853854 containerd[1477]: time="2025-03-17T17:33:30.853827788Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:33:30.854607 containerd[1477]: time="2025-03-17T17:33:30.854455926Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:33:30.855426 containerd[1477]: time="2025-03-17T17:33:30.855379432Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:33:30.857315 containerd[1477]: time="2025-03-17T17:33:30.856727991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:33:30.859482 
containerd[1477]: time="2025-03-17T17:33:30.859450589Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 549.270548ms" Mar 17 17:33:30.861318 containerd[1477]: time="2025-03-17T17:33:30.861275121Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 556.260748ms" Mar 17 17:33:30.861880 containerd[1477]: time="2025-03-17T17:33:30.861837698Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 563.418393ms" Mar 17 17:33:30.963604 containerd[1477]: time="2025-03-17T17:33:30.963494292Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:33:30.963876 containerd[1477]: time="2025-03-17T17:33:30.963747379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:33:30.963876 containerd[1477]: time="2025-03-17T17:33:30.963774180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:33:30.964159 containerd[1477]: time="2025-03-17T17:33:30.964066748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:33:30.967070 containerd[1477]: time="2025-03-17T17:33:30.966943631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:33:30.967251 containerd[1477]: time="2025-03-17T17:33:30.967055674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:33:30.967386 containerd[1477]: time="2025-03-17T17:33:30.967233879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:33:30.968076 containerd[1477]: time="2025-03-17T17:33:30.968000941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:33:30.969798 containerd[1477]: time="2025-03-17T17:33:30.969349380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:33:30.969798 containerd[1477]: time="2025-03-17T17:33:30.968048863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:33:30.970580 containerd[1477]: time="2025-03-17T17:33:30.969769752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:33:30.970580 containerd[1477]: time="2025-03-17T17:33:30.969870875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:33:30.993319 systemd[1]: Started cri-containerd-2a0328a3cab34f478b846bb1c8bde04abb7c7f680e2a1e6ce86134708b7affe2.scope - libcontainer container 2a0328a3cab34f478b846bb1c8bde04abb7c7f680e2a1e6ce86134708b7affe2. 
Mar 17 17:33:30.999214 systemd[1]: Started cri-containerd-4342219d82960077af97f05ec1b3a3f418c939ed8288c9486aa2251a325b23cf.scope - libcontainer container 4342219d82960077af97f05ec1b3a3f418c939ed8288c9486aa2251a325b23cf. Mar 17 17:33:31.001350 systemd[1]: Started cri-containerd-c7796abdd348d04e2bd5135eee5dbd156b29d8099ef3eba712cf7bcca0ed111d.scope - libcontainer container c7796abdd348d04e2bd5135eee5dbd156b29d8099ef3eba712cf7bcca0ed111d. Mar 17 17:33:31.005120 kubelet[2444]: W0317 17:33:31.004796 2444 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.199.148.212:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused Mar 17 17:33:31.005380 kubelet[2444]: E0317 17:33:31.005357 2444 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://138.199.148.212:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused Mar 17 17:33:31.034517 kubelet[2444]: W0317 17:33:31.034457 2444 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.148.212:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-2-0-5dd1d5cf3a&limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused Mar 17 17:33:31.035038 kubelet[2444]: E0317 17:33:31.034897 2444 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://138.199.148.212:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-2-0-5dd1d5cf3a&limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused Mar 17 17:33:31.046704 containerd[1477]: time="2025-03-17T17:33:31.046660367Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-2-0-5dd1d5cf3a,Uid:21457cf4fdd4a10480acb3902f4e166e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7796abdd348d04e2bd5135eee5dbd156b29d8099ef3eba712cf7bcca0ed111d\""
Mar 17 17:33:31.051312 containerd[1477]: time="2025-03-17T17:33:31.051195414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-2-0-5dd1d5cf3a,Uid:54aa9d03f1f655313a81e05671d6ae93,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a0328a3cab34f478b846bb1c8bde04abb7c7f680e2a1e6ce86134708b7affe2\""
Mar 17 17:33:31.057197 containerd[1477]: time="2025-03-17T17:33:31.056306077Z" level=info msg="CreateContainer within sandbox \"c7796abdd348d04e2bd5135eee5dbd156b29d8099ef3eba712cf7bcca0ed111d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 17 17:33:31.059059 containerd[1477]: time="2025-03-17T17:33:31.058973312Z" level=info msg="CreateContainer within sandbox \"2a0328a3cab34f478b846bb1c8bde04abb7c7f680e2a1e6ce86134708b7affe2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 17 17:33:31.072295 containerd[1477]: time="2025-03-17T17:33:31.072132521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-2-0-5dd1d5cf3a,Uid:dd5283fcf1bea7eb1b3e3a107600d58b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4342219d82960077af97f05ec1b3a3f418c939ed8288c9486aa2251a325b23cf\""
Mar 17 17:33:31.076656 containerd[1477]: time="2025-03-17T17:33:31.076601006Z" level=info msg="CreateContainer within sandbox \"4342219d82960077af97f05ec1b3a3f418c939ed8288c9486aa2251a325b23cf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 17 17:33:31.083808 containerd[1477]: time="2025-03-17T17:33:31.083752847Z" level=info msg="CreateContainer within sandbox \"2a0328a3cab34f478b846bb1c8bde04abb7c7f680e2a1e6ce86134708b7affe2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d661fdb42fdc1ccbd40767f73ee2eb2bb8498922232319426b763f3b430376d9\""
Mar 17 17:33:31.084405 containerd[1477]: time="2025-03-17T17:33:31.084378184Z" level=info msg="StartContainer for \"d661fdb42fdc1ccbd40767f73ee2eb2bb8498922232319426b763f3b430376d9\""
Mar 17 17:33:31.087649 containerd[1477]: time="2025-03-17T17:33:31.087588554Z" level=info msg="CreateContainer within sandbox \"c7796abdd348d04e2bd5135eee5dbd156b29d8099ef3eba712cf7bcca0ed111d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9a61b2c2e2cc84f0b4e29e7997d4c047972fa0801516cb176facc0398548abc3\""
Mar 17 17:33:31.088252 containerd[1477]: time="2025-03-17T17:33:31.088216172Z" level=info msg="StartContainer for \"9a61b2c2e2cc84f0b4e29e7997d4c047972fa0801516cb176facc0398548abc3\""
Mar 17 17:33:31.096532 containerd[1477]: time="2025-03-17T17:33:31.095983069Z" level=info msg="CreateContainer within sandbox \"4342219d82960077af97f05ec1b3a3f418c939ed8288c9486aa2251a325b23cf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9e97d77b59e64e9cce0a99b8e10ed0c6cdfb0288249bdccdc7d5fea70b1dfa38\""
Mar 17 17:33:31.097588 containerd[1477]: time="2025-03-17T17:33:31.097112181Z" level=info msg="StartContainer for \"9e97d77b59e64e9cce0a99b8e10ed0c6cdfb0288249bdccdc7d5fea70b1dfa38\""
Mar 17 17:33:31.110641 kubelet[2444]: W0317 17:33:31.110552 2444 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.148.212:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused
Mar 17 17:33:31.110641 kubelet[2444]: E0317 17:33:31.110638 2444 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://138.199.148.212:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.148.212:6443: connect: connection refused
Mar 17 17:33:31.114396 systemd[1]: Started cri-containerd-d661fdb42fdc1ccbd40767f73ee2eb2bb8498922232319426b763f3b430376d9.scope - libcontainer container d661fdb42fdc1ccbd40767f73ee2eb2bb8498922232319426b763f3b430376d9.
Mar 17 17:33:31.132313 systemd[1]: Started cri-containerd-9a61b2c2e2cc84f0b4e29e7997d4c047972fa0801516cb176facc0398548abc3.scope - libcontainer container 9a61b2c2e2cc84f0b4e29e7997d4c047972fa0801516cb176facc0398548abc3.
Mar 17 17:33:31.145429 systemd[1]: Started cri-containerd-9e97d77b59e64e9cce0a99b8e10ed0c6cdfb0288249bdccdc7d5fea70b1dfa38.scope - libcontainer container 9e97d77b59e64e9cce0a99b8e10ed0c6cdfb0288249bdccdc7d5fea70b1dfa38.
Mar 17 17:33:31.187890 containerd[1477]: time="2025-03-17T17:33:31.187672040Z" level=info msg="StartContainer for \"d661fdb42fdc1ccbd40767f73ee2eb2bb8498922232319426b763f3b430376d9\" returns successfully"
Mar 17 17:33:31.199325 containerd[1477]: time="2025-03-17T17:33:31.199242404Z" level=info msg="StartContainer for \"9a61b2c2e2cc84f0b4e29e7997d4c047972fa0801516cb176facc0398548abc3\" returns successfully"
Mar 17 17:33:31.208671 containerd[1477]: time="2025-03-17T17:33:31.208587626Z" level=info msg="StartContainer for \"9e97d77b59e64e9cce0a99b8e10ed0c6cdfb0288249bdccdc7d5fea70b1dfa38\" returns successfully"
Mar 17 17:33:31.252172 kubelet[2444]: E0317 17:33:31.251794 2444 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.148.212:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-2-0-5dd1d5cf3a?timeout=10s\": dial tcp 138.199.148.212:6443: connect: connection refused" interval="1.6s"
Mar 17 17:33:31.358441 kubelet[2444]: I0317 17:33:31.358338 2444 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-2-0-5dd1d5cf3a"
Mar 17 17:33:33.387167 kubelet[2444]: E0317 17:33:33.386987 2444 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152-2-2-0-5dd1d5cf3a\" not found" node="ci-4152-2-2-0-5dd1d5cf3a"
Mar 17 17:33:33.436233 kubelet[2444]: I0317 17:33:33.436193 2444 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-2-0-5dd1d5cf3a"
Mar 17 17:33:33.464272 kubelet[2444]: E0317 17:33:33.464234 2444 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-2-0-5dd1d5cf3a\" not found"
Mar 17 17:33:33.564551 kubelet[2444]: E0317 17:33:33.564352 2444 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-2-0-5dd1d5cf3a\" not found"
Mar 17 17:33:33.664843 kubelet[2444]: E0317 17:33:33.664796 2444 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-2-0-5dd1d5cf3a\" not found"
Mar 17 17:33:33.831878 kubelet[2444]: I0317 17:33:33.831565 2444 apiserver.go:52] "Watching apiserver"
Mar 17 17:33:33.847226 kubelet[2444]: I0317 17:33:33.847182 2444 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 17 17:33:35.431842 systemd[1]: Reloading requested from client PID 2718 ('systemctl') (unit session-7.scope)...
Mar 17 17:33:35.432287 systemd[1]: Reloading...
Mar 17 17:33:35.509192 zram_generator::config[2758]: No configuration found.
Mar 17 17:33:35.610824 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:33:35.692387 systemd[1]: Reloading finished in 259 ms.
Mar 17 17:33:35.731554 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:33:35.732233 kubelet[2444]: I0317 17:33:35.731757 2444 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 17:33:35.744799 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 17:33:35.745139 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:33:35.745250 systemd[1]: kubelet.service: Consumed 1.203s CPU time, 111.2M memory peak, 0B memory swap peak.
Mar 17 17:33:35.753761 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:33:35.856392 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:33:35.860929 (kubelet)[2803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 17:33:35.905689 kubelet[2803]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:33:35.906033 kubelet[2803]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 17:33:35.906073 kubelet[2803]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:33:35.906215 kubelet[2803]: I0317 17:33:35.906185 2803 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 17:33:35.910408 kubelet[2803]: I0317 17:33:35.910381 2803 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 17 17:33:35.910526 kubelet[2803]: I0317 17:33:35.910514 2803 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 17:33:35.911197 kubelet[2803]: I0317 17:33:35.910909 2803 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 17 17:33:35.913872 kubelet[2803]: I0317 17:33:35.913842 2803 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 17 17:33:35.915538 kubelet[2803]: I0317 17:33:35.915518 2803 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 17:33:35.924403 kubelet[2803]: I0317 17:33:35.924378 2803 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 17:33:35.924734 kubelet[2803]: I0317 17:33:35.924694 2803 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 17:33:35.924945 kubelet[2803]: I0317 17:33:35.924789 2803 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-2-0-5dd1d5cf3a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 17 17:33:35.925077 kubelet[2803]: I0317 17:33:35.925064 2803 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 17:33:35.925132 kubelet[2803]: I0317 17:33:35.925123 2803 container_manager_linux.go:301] "Creating device plugin manager"
Mar 17 17:33:35.925240 kubelet[2803]: I0317 17:33:35.925230 2803 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:33:35.925405 kubelet[2803]: I0317 17:33:35.925395 2803 kubelet.go:400] "Attempting to sync node with API server"
Mar 17 17:33:35.925921 kubelet[2803]: I0317 17:33:35.925904 2803 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 17:33:35.927179 kubelet[2803]: I0317 17:33:35.926028 2803 kubelet.go:312] "Adding apiserver pod source"
Mar 17 17:33:35.927179 kubelet[2803]: I0317 17:33:35.926057 2803 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 17:33:35.930600 kubelet[2803]: I0317 17:33:35.930575 2803 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 17 17:33:35.930911 kubelet[2803]: I0317 17:33:35.930890 2803 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 17:33:35.931358 kubelet[2803]: I0317 17:33:35.931339 2803 server.go:1264] "Started kubelet"
Mar 17 17:33:35.931797 kubelet[2803]: I0317 17:33:35.931767 2803 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 17:33:35.932764 kubelet[2803]: I0317 17:33:35.932732 2803 server.go:455] "Adding debug handlers to kubelet server"
Mar 17 17:33:35.933694 kubelet[2803]: I0317 17:33:35.933644 2803 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 17:33:35.933984 kubelet[2803]: I0317 17:33:35.933964 2803 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 17:33:35.934830 kubelet[2803]: I0317 17:33:35.934790 2803 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 17:33:35.944069 kubelet[2803]: I0317 17:33:35.943119 2803 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 17 17:33:35.946112 kubelet[2803]: I0317 17:33:35.946080 2803 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 17:33:35.946264 kubelet[2803]: I0317 17:33:35.946248 2803 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 17:33:35.949679 kubelet[2803]: I0317 17:33:35.949629 2803 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 17:33:35.951958 kubelet[2803]: I0317 17:33:35.951927 2803 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 17:33:35.952023 kubelet[2803]: I0317 17:33:35.951967 2803 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 17:33:35.952023 kubelet[2803]: I0317 17:33:35.951990 2803 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 17 17:33:35.952065 kubelet[2803]: E0317 17:33:35.952026 2803 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 17:33:35.966509 kubelet[2803]: I0317 17:33:35.966481 2803 factory.go:221] Registration of the systemd container factory successfully
Mar 17 17:33:35.966774 kubelet[2803]: I0317 17:33:35.966568 2803 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 17:33:35.974523 kubelet[2803]: I0317 17:33:35.974496 2803 factory.go:221] Registration of the containerd container factory successfully
Mar 17 17:33:36.034334 kubelet[2803]: I0317 17:33:36.033924 2803 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 17:33:36.034334 kubelet[2803]: I0317 17:33:36.033941 2803 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 17:33:36.034334 kubelet[2803]: I0317 17:33:36.033961 2803 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:33:36.034334 kubelet[2803]: I0317 17:33:36.034260 2803 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 17 17:33:36.034334 kubelet[2803]: I0317 17:33:36.034272 2803 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 17 17:33:36.034334 kubelet[2803]: I0317 17:33:36.034290 2803 policy_none.go:49] "None policy: Start"
Mar 17 17:33:36.035624 kubelet[2803]: I0317 17:33:36.035605 2803 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 17:33:36.035875 kubelet[2803]: I0317 17:33:36.035863 2803 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 17:33:36.036239 kubelet[2803]: I0317 17:33:36.036132 2803 state_mem.go:75] "Updated machine memory state"
Mar 17 17:33:36.041080 kubelet[2803]: I0317 17:33:36.041037 2803 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 17:33:36.041461 kubelet[2803]: I0317 17:33:36.041207 2803 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 17:33:36.041461 kubelet[2803]: I0317 17:33:36.041299 2803 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 17:33:36.050467 kubelet[2803]: I0317 17:33:36.050371 2803 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-2-0-5dd1d5cf3a"
Mar 17 17:33:36.053001 kubelet[2803]: I0317 17:33:36.052426 2803 topology_manager.go:215] "Topology Admit Handler" podUID="dd5283fcf1bea7eb1b3e3a107600d58b" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-2-0-5dd1d5cf3a"
Mar 17 17:33:36.053001 kubelet[2803]: I0317 17:33:36.052521 2803 topology_manager.go:215] "Topology Admit Handler" podUID="21457cf4fdd4a10480acb3902f4e166e" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-2-0-5dd1d5cf3a"
Mar 17 17:33:36.053001 kubelet[2803]: I0317 17:33:36.052554 2803 topology_manager.go:215] "Topology Admit Handler" podUID="54aa9d03f1f655313a81e05671d6ae93" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-2-0-5dd1d5cf3a"
Mar 17 17:33:36.065096 kubelet[2803]: I0317 17:33:36.065064 2803 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152-2-2-0-5dd1d5cf3a"
Mar 17 17:33:36.066022 kubelet[2803]: I0317 17:33:36.065990 2803 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-2-0-5dd1d5cf3a"
Mar 17 17:33:36.147445 kubelet[2803]: I0317 17:33:36.147377 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21457cf4fdd4a10480acb3902f4e166e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-2-0-5dd1d5cf3a\" (UID: \"21457cf4fdd4a10480acb3902f4e166e\") " pod="kube-system/kube-apiserver-ci-4152-2-2-0-5dd1d5cf3a"
Mar 17 17:33:36.147875 kubelet[2803]: I0317 17:33:36.147452 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/54aa9d03f1f655313a81e05671d6ae93-ca-certs\") pod \"kube-controller-manager-ci-4152-2-2-0-5dd1d5cf3a\" (UID: \"54aa9d03f1f655313a81e05671d6ae93\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-0-5dd1d5cf3a"
Mar 17 17:33:36.147875 kubelet[2803]: I0317 17:33:36.147494 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/54aa9d03f1f655313a81e05671d6ae93-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-2-0-5dd1d5cf3a\" (UID: \"54aa9d03f1f655313a81e05671d6ae93\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-0-5dd1d5cf3a"
Mar 17 17:33:36.147875 kubelet[2803]: I0317 17:33:36.147553 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/54aa9d03f1f655313a81e05671d6ae93-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-2-0-5dd1d5cf3a\" (UID: \"54aa9d03f1f655313a81e05671d6ae93\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-0-5dd1d5cf3a"
Mar 17 17:33:36.147875 kubelet[2803]: I0317 17:33:36.147588 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/54aa9d03f1f655313a81e05671d6ae93-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-2-0-5dd1d5cf3a\" (UID: \"54aa9d03f1f655313a81e05671d6ae93\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-0-5dd1d5cf3a"
Mar 17 17:33:36.147875 kubelet[2803]: I0317 17:33:36.147629 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd5283fcf1bea7eb1b3e3a107600d58b-kubeconfig\") pod \"kube-scheduler-ci-4152-2-2-0-5dd1d5cf3a\" (UID: \"dd5283fcf1bea7eb1b3e3a107600d58b\") " pod="kube-system/kube-scheduler-ci-4152-2-2-0-5dd1d5cf3a"
Mar 17 17:33:36.148321 kubelet[2803]: I0317 17:33:36.147659 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21457cf4fdd4a10480acb3902f4e166e-ca-certs\") pod \"kube-apiserver-ci-4152-2-2-0-5dd1d5cf3a\" (UID: \"21457cf4fdd4a10480acb3902f4e166e\") " pod="kube-system/kube-apiserver-ci-4152-2-2-0-5dd1d5cf3a"
Mar 17 17:33:36.148321 kubelet[2803]: I0317 17:33:36.147686 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21457cf4fdd4a10480acb3902f4e166e-k8s-certs\") pod \"kube-apiserver-ci-4152-2-2-0-5dd1d5cf3a\" (UID: \"21457cf4fdd4a10480acb3902f4e166e\") " pod="kube-system/kube-apiserver-ci-4152-2-2-0-5dd1d5cf3a"
Mar 17 17:33:36.249865 kubelet[2803]: I0317 17:33:36.248869 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54aa9d03f1f655313a81e05671d6ae93-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-2-0-5dd1d5cf3a\" (UID: \"54aa9d03f1f655313a81e05671d6ae93\") " pod="kube-system/kube-controller-manager-ci-4152-2-2-0-5dd1d5cf3a"
Mar 17 17:33:36.429823 sudo[2836]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 17 17:33:36.430327 sudo[2836]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 17 17:33:36.863119 sudo[2836]: pam_unix(sudo:session): session closed for user root
Mar 17 17:33:36.927702 kubelet[2803]: I0317 17:33:36.927666 2803 apiserver.go:52] "Watching apiserver"
Mar 17 17:33:36.946623 kubelet[2803]: I0317 17:33:36.946553 2803 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 17 17:33:37.026880 kubelet[2803]: E0317 17:33:37.026624 2803 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-2-0-5dd1d5cf3a\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-2-0-5dd1d5cf3a"
Mar 17 17:33:37.058404 kubelet[2803]: I0317 17:33:37.058326 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152-2-2-0-5dd1d5cf3a" podStartSLOduration=1.058305003 podStartE2EDuration="1.058305003s" podCreationTimestamp="2025-03-17 17:33:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:33:37.044590106 +0000 UTC m=+1.180205353" watchObservedRunningTime="2025-03-17 17:33:37.058305003 +0000 UTC m=+1.193920250"
Mar 17 17:33:37.070889 kubelet[2803]: I0317 17:33:37.070825 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152-2-2-0-5dd1d5cf3a" podStartSLOduration=1.070805271 podStartE2EDuration="1.070805271s" podCreationTimestamp="2025-03-17 17:33:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:33:37.058918058 +0000 UTC m=+1.194533305" watchObservedRunningTime="2025-03-17 17:33:37.070805271 +0000 UTC m=+1.206420518"
Mar 17 17:33:39.177926 sudo[1865]: pam_unix(sudo:session): session closed for user root
Mar 17 17:33:39.335711 sshd[1864]: Connection closed by 139.178.89.65 port 39966
Mar 17 17:33:39.336411 sshd-session[1862]: pam_unix(sshd:session): session closed for user core
Mar 17 17:33:39.340066 systemd[1]: sshd@6-138.199.148.212:22-139.178.89.65:39966.service: Deactivated successfully.
Mar 17 17:33:39.342319 systemd[1]: session-7.scope: Deactivated successfully.
Mar 17 17:33:39.342654 systemd[1]: session-7.scope: Consumed 8.434s CPU time, 191.6M memory peak, 0B memory swap peak.
Mar 17 17:33:39.345356 systemd-logind[1457]: Session 7 logged out. Waiting for processes to exit.
Mar 17 17:33:39.347823 systemd-logind[1457]: Removed session 7.
Mar 17 17:33:41.701060 kubelet[2803]: I0317 17:33:41.700992 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-2-0-5dd1d5cf3a" podStartSLOduration=5.700956271 podStartE2EDuration="5.700956271s" podCreationTimestamp="2025-03-17 17:33:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:33:37.071705293 +0000 UTC m=+1.207320540" watchObservedRunningTime="2025-03-17 17:33:41.700956271 +0000 UTC m=+5.836571518"
Mar 17 17:33:51.400630 kubelet[2803]: I0317 17:33:51.400528 2803 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 17 17:33:51.402523 containerd[1477]: time="2025-03-17T17:33:51.401505925Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 17 17:33:51.402756 kubelet[2803]: I0317 17:33:51.401706 2803 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 17 17:33:52.314568 kubelet[2803]: I0317 17:33:52.314508 2803 topology_manager.go:215] "Topology Admit Handler" podUID="35dd1e01-5668-49d9-afd7-7b2f4849d7b0" podNamespace="kube-system" podName="kube-proxy-nbkk6"
Mar 17 17:33:52.326539 systemd[1]: Created slice kubepods-besteffort-pod35dd1e01_5668_49d9_afd7_7b2f4849d7b0.slice - libcontainer container kubepods-besteffort-pod35dd1e01_5668_49d9_afd7_7b2f4849d7b0.slice.
Mar 17 17:33:52.328691 kubelet[2803]: I0317 17:33:52.328585 2803 topology_manager.go:215] "Topology Admit Handler" podUID="69be127a-3bf0-4e81-87b1-ecb88934a4bc" podNamespace="kube-system" podName="cilium-mx2qf"
Mar 17 17:33:52.345388 systemd[1]: Created slice kubepods-burstable-pod69be127a_3bf0_4e81_87b1_ecb88934a4bc.slice - libcontainer container kubepods-burstable-pod69be127a_3bf0_4e81_87b1_ecb88934a4bc.slice.
Mar 17 17:33:52.351535 kubelet[2803]: I0317 17:33:52.351072 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-hostproc\") pod \"cilium-mx2qf\" (UID: \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\") " pod="kube-system/cilium-mx2qf"
Mar 17 17:33:52.351535 kubelet[2803]: I0317 17:33:52.351110 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-etc-cni-netd\") pod \"cilium-mx2qf\" (UID: \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\") " pod="kube-system/cilium-mx2qf"
Mar 17 17:33:52.351535 kubelet[2803]: I0317 17:33:52.351129 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95xmk\" (UniqueName: \"kubernetes.io/projected/69be127a-3bf0-4e81-87b1-ecb88934a4bc-kube-api-access-95xmk\") pod \"cilium-mx2qf\" (UID: \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\") " pod="kube-system/cilium-mx2qf"
Mar 17 17:33:52.351535 kubelet[2803]: I0317 17:33:52.351161 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35dd1e01-5668-49d9-afd7-7b2f4849d7b0-xtables-lock\") pod \"kube-proxy-nbkk6\" (UID: \"35dd1e01-5668-49d9-afd7-7b2f4849d7b0\") " pod="kube-system/kube-proxy-nbkk6"
Mar 17 17:33:52.351535 kubelet[2803]: I0317 17:33:52.351178 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-cilium-cgroup\") pod \"cilium-mx2qf\" (UID: \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\") " pod="kube-system/cilium-mx2qf"
Mar 17 17:33:52.351535 kubelet[2803]: I0317 17:33:52.351194 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-cni-path\") pod \"cilium-mx2qf\" (UID: \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\") " pod="kube-system/cilium-mx2qf"
Mar 17 17:33:52.351760 kubelet[2803]: I0317 17:33:52.351208 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-host-proc-sys-kernel\") pod \"cilium-mx2qf\" (UID: \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\") " pod="kube-system/cilium-mx2qf"
Mar 17 17:33:52.351760 kubelet[2803]: I0317 17:33:52.351226 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7ltp\" (UniqueName: \"kubernetes.io/projected/35dd1e01-5668-49d9-afd7-7b2f4849d7b0-kube-api-access-w7ltp\") pod \"kube-proxy-nbkk6\" (UID: \"35dd1e01-5668-49d9-afd7-7b2f4849d7b0\") " pod="kube-system/kube-proxy-nbkk6"
Mar 17 17:33:52.351760 kubelet[2803]: I0317 17:33:52.351241 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-lib-modules\") pod \"cilium-mx2qf\" (UID: \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\") " pod="kube-system/cilium-mx2qf"
Mar 17 17:33:52.351760 kubelet[2803]: I0317 17:33:52.351259 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-bpf-maps\") pod \"cilium-mx2qf\" (UID: \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\") " pod="kube-system/cilium-mx2qf"
Mar 17 17:33:52.351760 kubelet[2803]: I0317 17:33:52.351275 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69be127a-3bf0-4e81-87b1-ecb88934a4bc-clustermesh-secrets\") pod \"cilium-mx2qf\" (UID: \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\") " pod="kube-system/cilium-mx2qf"
Mar 17 17:33:52.351878 kubelet[2803]: I0317 17:33:52.351289 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/35dd1e01-5668-49d9-afd7-7b2f4849d7b0-kube-proxy\") pod \"kube-proxy-nbkk6\" (UID: \"35dd1e01-5668-49d9-afd7-7b2f4849d7b0\") " pod="kube-system/kube-proxy-nbkk6"
Mar 17 17:33:52.351878 kubelet[2803]: I0317 17:33:52.351306 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69be127a-3bf0-4e81-87b1-ecb88934a4bc-cilium-config-path\") pod \"cilium-mx2qf\" (UID: \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\") " pod="kube-system/cilium-mx2qf"
Mar 17 17:33:52.351878 kubelet[2803]: I0317 17:33:52.351322 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-host-proc-sys-net\") pod \"cilium-mx2qf\" (UID: \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\") " pod="kube-system/cilium-mx2qf"
Mar 17 17:33:52.351878 kubelet[2803]: I0317 17:33:52.351337 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-cilium-run\") pod \"cilium-mx2qf\" (UID: \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\") " pod="kube-system/cilium-mx2qf"
Mar 17 17:33:52.351878 kubelet[2803]: I0317 17:33:52.351350 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-xtables-lock\") pod \"cilium-mx2qf\" (UID: \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\") " pod="kube-system/cilium-mx2qf"
Mar 17 17:33:52.351878 kubelet[2803]: I0317 17:33:52.351365 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35dd1e01-5668-49d9-afd7-7b2f4849d7b0-lib-modules\") pod \"kube-proxy-nbkk6\" (UID: \"35dd1e01-5668-49d9-afd7-7b2f4849d7b0\") " pod="kube-system/kube-proxy-nbkk6"
Mar 17 17:33:52.352021 kubelet[2803]: I0317 17:33:52.351383 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/69be127a-3bf0-4e81-87b1-ecb88934a4bc-hubble-tls\") pod \"cilium-mx2qf\" (UID: \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\") " pod="kube-system/cilium-mx2qf"
Mar 17 17:33:52.369635 kubelet[2803]: I0317 17:33:52.369186 2803 topology_manager.go:215] "Topology Admit Handler" podUID="6cc949b3-f508-4d68-a13b-27189884c607" podNamespace="kube-system" podName="cilium-operator-599987898-xfrlb"
Mar 17 17:33:52.377021 systemd[1]: Created slice kubepods-besteffort-pod6cc949b3_f508_4d68_a13b_27189884c607.slice - libcontainer container kubepods-besteffort-pod6cc949b3_f508_4d68_a13b_27189884c607.slice.
Mar 17 17:33:52.456192 kubelet[2803]: I0317 17:33:52.455325 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6cc949b3-f508-4d68-a13b-27189884c607-cilium-config-path\") pod \"cilium-operator-599987898-xfrlb\" (UID: \"6cc949b3-f508-4d68-a13b-27189884c607\") " pod="kube-system/cilium-operator-599987898-xfrlb"
Mar 17 17:33:52.456192 kubelet[2803]: I0317 17:33:52.455370 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv6x4\" (UniqueName: \"kubernetes.io/projected/6cc949b3-f508-4d68-a13b-27189884c607-kube-api-access-lv6x4\") pod \"cilium-operator-599987898-xfrlb\" (UID: \"6cc949b3-f508-4d68-a13b-27189884c607\") " pod="kube-system/cilium-operator-599987898-xfrlb"
Mar 17 17:33:52.638846 containerd[1477]: time="2025-03-17T17:33:52.638684104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nbkk6,Uid:35dd1e01-5668-49d9-afd7-7b2f4849d7b0,Namespace:kube-system,Attempt:0,}"
Mar 17 17:33:52.653186 containerd[1477]: time="2025-03-17T17:33:52.653059728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mx2qf,Uid:69be127a-3bf0-4e81-87b1-ecb88934a4bc,Namespace:kube-system,Attempt:0,}"
Mar 17 17:33:52.665520 containerd[1477]: time="2025-03-17T17:33:52.665391235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:33:52.665520 containerd[1477]: time="2025-03-17T17:33:52.665446916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:33:52.665520 containerd[1477]: time="2025-03-17T17:33:52.665463237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:33:52.666483 containerd[1477]: time="2025-03-17T17:33:52.665622960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:33:52.679976 containerd[1477]: time="2025-03-17T17:33:52.679741580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:33:52.681224 containerd[1477]: time="2025-03-17T17:33:52.680973402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:33:52.681224 containerd[1477]: time="2025-03-17T17:33:52.681157406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:33:52.682538 containerd[1477]: time="2025-03-17T17:33:52.682431269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:33:52.685053 containerd[1477]: time="2025-03-17T17:33:52.684938755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-xfrlb,Uid:6cc949b3-f508-4d68-a13b-27189884c607,Namespace:kube-system,Attempt:0,}"
Mar 17 17:33:52.687459 systemd[1]: Started cri-containerd-21a949d1a4ff653b7949b05b364ad4fcff15634e57b3e67828a15609e7511f2c.scope - libcontainer container 21a949d1a4ff653b7949b05b364ad4fcff15634e57b3e67828a15609e7511f2c.
Mar 17 17:33:52.704338 systemd[1]: Started cri-containerd-9634b2280513f647d9cceacf18f01ebe6be75683716e46276fb5df5b7405c92b.scope - libcontainer container 9634b2280513f647d9cceacf18f01ebe6be75683716e46276fb5df5b7405c92b.
Mar 17 17:33:52.734405 containerd[1477]: time="2025-03-17T17:33:52.734136061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nbkk6,Uid:35dd1e01-5668-49d9-afd7-7b2f4849d7b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"21a949d1a4ff653b7949b05b364ad4fcff15634e57b3e67828a15609e7511f2c\""
Mar 17 17:33:52.737863 containerd[1477]: time="2025-03-17T17:33:52.737732007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:33:52.737863 containerd[1477]: time="2025-03-17T17:33:52.737796489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:33:52.737863 containerd[1477]: time="2025-03-17T17:33:52.737817129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:33:52.738689 containerd[1477]: time="2025-03-17T17:33:52.737898970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:33:52.741325 containerd[1477]: time="2025-03-17T17:33:52.741209591Z" level=info msg="CreateContainer within sandbox \"21a949d1a4ff653b7949b05b364ad4fcff15634e57b3e67828a15609e7511f2c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 17 17:33:52.750862 containerd[1477]: time="2025-03-17T17:33:52.750813648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mx2qf,Uid:69be127a-3bf0-4e81-87b1-ecb88934a4bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"9634b2280513f647d9cceacf18f01ebe6be75683716e46276fb5df5b7405c92b\""
Mar 17 17:33:52.756352 containerd[1477]: time="2025-03-17T17:33:52.756107706Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 17 17:33:52.772725 containerd[1477]: time="2025-03-17T17:33:52.772246003Z" level=info msg="CreateContainer within sandbox \"21a949d1a4ff653b7949b05b364ad4fcff15634e57b3e67828a15609e7511f2c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f8792bb39e1b1a8d9762e5e09f6721c07e7dd4f8b7a62441cda20c3af938b57b\""
Mar 17 17:33:52.773650 containerd[1477]: time="2025-03-17T17:33:52.773624268Z" level=info msg="StartContainer for \"f8792bb39e1b1a8d9762e5e09f6721c07e7dd4f8b7a62441cda20c3af938b57b\""
Mar 17 17:33:52.776954 systemd[1]: Started cri-containerd-1daedfdcc3423ab44f23d6b75f80855c8f89a8c4efe13e9332d69f3caeca7ae6.scope - libcontainer container 1daedfdcc3423ab44f23d6b75f80855c8f89a8c4efe13e9332d69f3caeca7ae6.
Mar 17 17:33:52.812362 systemd[1]: Started cri-containerd-f8792bb39e1b1a8d9762e5e09f6721c07e7dd4f8b7a62441cda20c3af938b57b.scope - libcontainer container f8792bb39e1b1a8d9762e5e09f6721c07e7dd4f8b7a62441cda20c3af938b57b.
Mar 17 17:33:52.827879 containerd[1477]: time="2025-03-17T17:33:52.827508781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-xfrlb,Uid:6cc949b3-f508-4d68-a13b-27189884c607,Namespace:kube-system,Attempt:0,} returns sandbox id \"1daedfdcc3423ab44f23d6b75f80855c8f89a8c4efe13e9332d69f3caeca7ae6\""
Mar 17 17:33:52.859309 containerd[1477]: time="2025-03-17T17:33:52.859235125Z" level=info msg="StartContainer for \"f8792bb39e1b1a8d9762e5e09f6721c07e7dd4f8b7a62441cda20c3af938b57b\" returns successfully"
Mar 17 17:33:57.616560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount786352851.mount: Deactivated successfully.
Mar 17 17:33:59.065215 containerd[1477]: time="2025-03-17T17:33:59.064620823Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:33:59.066229 containerd[1477]: time="2025-03-17T17:33:59.066163888Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Mar 17 17:33:59.068470 containerd[1477]: time="2025-03-17T17:33:59.068393124Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:33:59.072747 containerd[1477]: time="2025-03-17T17:33:59.072700515Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.316500088s"
Mar 17 17:33:59.073041 containerd[1477]: time="2025-03-17T17:33:59.072845277Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Mar 17 17:33:59.075440 containerd[1477]: time="2025-03-17T17:33:59.075385159Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 17 17:33:59.078492 containerd[1477]: time="2025-03-17T17:33:59.078457929Z" level=info msg="CreateContainer within sandbox \"9634b2280513f647d9cceacf18f01ebe6be75683716e46276fb5df5b7405c92b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 17:33:59.091131 containerd[1477]: time="2025-03-17T17:33:59.091080496Z" level=info msg="CreateContainer within sandbox \"9634b2280513f647d9cceacf18f01ebe6be75683716e46276fb5df5b7405c92b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"04e9441958db962cd90c967769c854d0a04c9750517932dba9b700a8b22ddad7\""
Mar 17 17:33:59.091861 containerd[1477]: time="2025-03-17T17:33:59.091693626Z" level=info msg="StartContainer for \"04e9441958db962cd90c967769c854d0a04c9750517932dba9b700a8b22ddad7\""
Mar 17 17:33:59.121439 systemd[1]: Started cri-containerd-04e9441958db962cd90c967769c854d0a04c9750517932dba9b700a8b22ddad7.scope - libcontainer container 04e9441958db962cd90c967769c854d0a04c9750517932dba9b700a8b22ddad7.
Mar 17 17:33:59.151920 containerd[1477]: time="2025-03-17T17:33:59.151339324Z" level=info msg="StartContainer for \"04e9441958db962cd90c967769c854d0a04c9750517932dba9b700a8b22ddad7\" returns successfully"
Mar 17 17:33:59.167547 systemd[1]: cri-containerd-04e9441958db962cd90c967769c854d0a04c9750517932dba9b700a8b22ddad7.scope: Deactivated successfully.
Mar 17 17:33:59.291226 containerd[1477]: time="2025-03-17T17:33:59.290893292Z" level=info msg="shim disconnected" id=04e9441958db962cd90c967769c854d0a04c9750517932dba9b700a8b22ddad7 namespace=k8s.io
Mar 17 17:33:59.291226 containerd[1477]: time="2025-03-17T17:33:59.291068815Z" level=warning msg="cleaning up after shim disconnected" id=04e9441958db962cd90c967769c854d0a04c9750517932dba9b700a8b22ddad7 namespace=k8s.io
Mar 17 17:33:59.291226 containerd[1477]: time="2025-03-17T17:33:59.291079015Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:34:00.083502 containerd[1477]: time="2025-03-17T17:34:00.083360261Z" level=info msg="CreateContainer within sandbox \"9634b2280513f647d9cceacf18f01ebe6be75683716e46276fb5df5b7405c92b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 17:34:00.087395 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04e9441958db962cd90c967769c854d0a04c9750517932dba9b700a8b22ddad7-rootfs.mount: Deactivated successfully.
Mar 17 17:34:00.104843 containerd[1477]: time="2025-03-17T17:34:00.104802087Z" level=info msg="CreateContainer within sandbox \"9634b2280513f647d9cceacf18f01ebe6be75683716e46276fb5df5b7405c92b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"71409b3a62d7901b63df93358b6350e3681a4c4a063587a8ab441100f1c7fa98\""
Mar 17 17:34:00.107260 kubelet[2803]: I0317 17:34:00.105662 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nbkk6" podStartSLOduration=8.105644021 podStartE2EDuration="8.105644021s" podCreationTimestamp="2025-03-17 17:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:33:53.072066422 +0000 UTC m=+17.207681669" watchObservedRunningTime="2025-03-17 17:34:00.105644021 +0000 UTC m=+24.241259228"
Mar 17 17:34:00.107658 containerd[1477]: time="2025-03-17T17:34:00.105723422Z" level=info msg="StartContainer for \"71409b3a62d7901b63df93358b6350e3681a4c4a063587a8ab441100f1c7fa98\""
Mar 17 17:34:00.137312 systemd[1]: Started cri-containerd-71409b3a62d7901b63df93358b6350e3681a4c4a063587a8ab441100f1c7fa98.scope - libcontainer container 71409b3a62d7901b63df93358b6350e3681a4c4a063587a8ab441100f1c7fa98.
Mar 17 17:34:00.170929 containerd[1477]: time="2025-03-17T17:34:00.170881873Z" level=info msg="StartContainer for \"71409b3a62d7901b63df93358b6350e3681a4c4a063587a8ab441100f1c7fa98\" returns successfully"
Mar 17 17:34:00.184775 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:34:00.185022 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:34:00.185091 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:34:00.194580 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:34:00.194792 systemd[1]: cri-containerd-71409b3a62d7901b63df93358b6350e3681a4c4a063587a8ab441100f1c7fa98.scope: Deactivated successfully.
Mar 17 17:34:00.211834 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:34:00.224717 containerd[1477]: time="2025-03-17T17:34:00.224634821Z" level=info msg="shim disconnected" id=71409b3a62d7901b63df93358b6350e3681a4c4a063587a8ab441100f1c7fa98 namespace=k8s.io
Mar 17 17:34:00.225093 containerd[1477]: time="2025-03-17T17:34:00.224909025Z" level=warning msg="cleaning up after shim disconnected" id=71409b3a62d7901b63df93358b6350e3681a4c4a063587a8ab441100f1c7fa98 namespace=k8s.io
Mar 17 17:34:00.225093 containerd[1477]: time="2025-03-17T17:34:00.224924266Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:34:01.087007 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71409b3a62d7901b63df93358b6350e3681a4c4a063587a8ab441100f1c7fa98-rootfs.mount: Deactivated successfully.
Mar 17 17:34:01.092544 containerd[1477]: time="2025-03-17T17:34:01.092499884Z" level=info msg="CreateContainer within sandbox \"9634b2280513f647d9cceacf18f01ebe6be75683716e46276fb5df5b7405c92b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 17:34:01.117298 containerd[1477]: time="2025-03-17T17:34:01.116804870Z" level=info msg="CreateContainer within sandbox \"9634b2280513f647d9cceacf18f01ebe6be75683716e46276fb5df5b7405c92b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ca2a094446402186b898d6a7e7bc9434aaf707cb398cce7fb5cb1bd199b28702\""
Mar 17 17:34:01.118693 containerd[1477]: time="2025-03-17T17:34:01.118653380Z" level=info msg="StartContainer for \"ca2a094446402186b898d6a7e7bc9434aaf707cb398cce7fb5cb1bd199b28702\""
Mar 17 17:34:01.146436 systemd[1]: Started cri-containerd-ca2a094446402186b898d6a7e7bc9434aaf707cb398cce7fb5cb1bd199b28702.scope - libcontainer container ca2a094446402186b898d6a7e7bc9434aaf707cb398cce7fb5cb1bd199b28702.
Mar 17 17:34:01.181522 containerd[1477]: time="2025-03-17T17:34:01.181472258Z" level=info msg="StartContainer for \"ca2a094446402186b898d6a7e7bc9434aaf707cb398cce7fb5cb1bd199b28702\" returns successfully"
Mar 17 17:34:01.187456 systemd[1]: cri-containerd-ca2a094446402186b898d6a7e7bc9434aaf707cb398cce7fb5cb1bd199b28702.scope: Deactivated successfully.
Mar 17 17:34:01.213737 containerd[1477]: time="2025-03-17T17:34:01.213365365Z" level=info msg="shim disconnected" id=ca2a094446402186b898d6a7e7bc9434aaf707cb398cce7fb5cb1bd199b28702 namespace=k8s.io
Mar 17 17:34:01.213737 containerd[1477]: time="2025-03-17T17:34:01.213538088Z" level=warning msg="cleaning up after shim disconnected" id=ca2a094446402186b898d6a7e7bc9434aaf707cb398cce7fb5cb1bd199b28702 namespace=k8s.io
Mar 17 17:34:01.213737 containerd[1477]: time="2025-03-17T17:34:01.213564688Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:34:02.088031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca2a094446402186b898d6a7e7bc9434aaf707cb398cce7fb5cb1bd199b28702-rootfs.mount: Deactivated successfully.
Mar 17 17:34:02.095632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount503056641.mount: Deactivated successfully.
Mar 17 17:34:02.103425 containerd[1477]: time="2025-03-17T17:34:02.103345164Z" level=info msg="CreateContainer within sandbox \"9634b2280513f647d9cceacf18f01ebe6be75683716e46276fb5df5b7405c92b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 17:34:02.123299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount763851836.mount: Deactivated successfully.
Mar 17 17:34:02.128248 containerd[1477]: time="2025-03-17T17:34:02.128113712Z" level=info msg="CreateContainer within sandbox \"9634b2280513f647d9cceacf18f01ebe6be75683716e46276fb5df5b7405c92b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"684711e0fdda1955105a873db470315ec5bb925da703cac06736b42712347624\""
Mar 17 17:34:02.133220 containerd[1477]: time="2025-03-17T17:34:02.133115070Z" level=info msg="StartContainer for \"684711e0fdda1955105a873db470315ec5bb925da703cac06736b42712347624\""
Mar 17 17:34:02.171468 systemd[1]: Started cri-containerd-684711e0fdda1955105a873db470315ec5bb925da703cac06736b42712347624.scope - libcontainer container 684711e0fdda1955105a873db470315ec5bb925da703cac06736b42712347624.
Mar 17 17:34:02.199800 systemd[1]: cri-containerd-684711e0fdda1955105a873db470315ec5bb925da703cac06736b42712347624.scope: Deactivated successfully.
Mar 17 17:34:02.203662 containerd[1477]: time="2025-03-17T17:34:02.203264768Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69be127a_3bf0_4e81_87b1_ecb88934a4bc.slice/cri-containerd-684711e0fdda1955105a873db470315ec5bb925da703cac06736b42712347624.scope/memory.events\": no such file or directory"
Mar 17 17:34:02.206191 containerd[1477]: time="2025-03-17T17:34:02.206039012Z" level=info msg="StartContainer for \"684711e0fdda1955105a873db470315ec5bb925da703cac06736b42712347624\" returns successfully"
Mar 17 17:34:02.233087 containerd[1477]: time="2025-03-17T17:34:02.232790311Z" level=info msg="shim disconnected" id=684711e0fdda1955105a873db470315ec5bb925da703cac06736b42712347624 namespace=k8s.io
Mar 17 17:34:02.233087 containerd[1477]: time="2025-03-17T17:34:02.232861272Z" level=warning msg="cleaning up after shim disconnected" id=684711e0fdda1955105a873db470315ec5bb925da703cac06736b42712347624 namespace=k8s.io
Mar 17 17:34:02.233087 containerd[1477]: time="2025-03-17T17:34:02.232876312Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:34:03.090571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-684711e0fdda1955105a873db470315ec5bb925da703cac06736b42712347624-rootfs.mount: Deactivated successfully.
Mar 17 17:34:03.111863 containerd[1477]: time="2025-03-17T17:34:03.111809645Z" level=info msg="CreateContainer within sandbox \"9634b2280513f647d9cceacf18f01ebe6be75683716e46276fb5df5b7405c92b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 17:34:03.133732 containerd[1477]: time="2025-03-17T17:34:03.133663822Z" level=info msg="CreateContainer within sandbox \"9634b2280513f647d9cceacf18f01ebe6be75683716e46276fb5df5b7405c92b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c1dbd6bcac4cdc75c4deac449b9ca003fbfc9af3a33c375e451ce8eaf357032f\""
Mar 17 17:34:03.135425 containerd[1477]: time="2025-03-17T17:34:03.135335488Z" level=info msg="StartContainer for \"c1dbd6bcac4cdc75c4deac449b9ca003fbfc9af3a33c375e451ce8eaf357032f\""
Mar 17 17:34:03.172553 systemd[1]: Started cri-containerd-c1dbd6bcac4cdc75c4deac449b9ca003fbfc9af3a33c375e451ce8eaf357032f.scope - libcontainer container c1dbd6bcac4cdc75c4deac449b9ca003fbfc9af3a33c375e451ce8eaf357032f.
Mar 17 17:34:03.205608 containerd[1477]: time="2025-03-17T17:34:03.205515451Z" level=info msg="StartContainer for \"c1dbd6bcac4cdc75c4deac449b9ca003fbfc9af3a33c375e451ce8eaf357032f\" returns successfully"
Mar 17 17:34:03.300429 kubelet[2803]: I0317 17:34:03.300094 2803 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Mar 17 17:34:03.339627 kubelet[2803]: I0317 17:34:03.339566 2803 topology_manager.go:215] "Topology Admit Handler" podUID="708a7941-6c20-4e03-91ae-2fc1a2f1b02c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-d97mm"
Mar 17 17:34:03.344656 kubelet[2803]: I0317 17:34:03.343907 2803 topology_manager.go:215] "Topology Admit Handler" podUID="890d614f-1aa1-483d-90ae-d3c4f0c87f2a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vrvcr"
Mar 17 17:34:03.357230 systemd[1]: Created slice kubepods-burstable-pod708a7941_6c20_4e03_91ae_2fc1a2f1b02c.slice - libcontainer container kubepods-burstable-pod708a7941_6c20_4e03_91ae_2fc1a2f1b02c.slice.
Mar 17 17:34:03.365392 systemd[1]: Created slice kubepods-burstable-pod890d614f_1aa1_483d_90ae_d3c4f0c87f2a.slice - libcontainer container kubepods-burstable-pod890d614f_1aa1_483d_90ae_d3c4f0c87f2a.slice.
Mar 17 17:34:03.432734 kubelet[2803]: I0317 17:34:03.432443 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbs24\" (UniqueName: \"kubernetes.io/projected/890d614f-1aa1-483d-90ae-d3c4f0c87f2a-kube-api-access-rbs24\") pod \"coredns-7db6d8ff4d-vrvcr\" (UID: \"890d614f-1aa1-483d-90ae-d3c4f0c87f2a\") " pod="kube-system/coredns-7db6d8ff4d-vrvcr"
Mar 17 17:34:03.432734 kubelet[2803]: I0317 17:34:03.432519 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/890d614f-1aa1-483d-90ae-d3c4f0c87f2a-config-volume\") pod \"coredns-7db6d8ff4d-vrvcr\" (UID: \"890d614f-1aa1-483d-90ae-d3c4f0c87f2a\") " pod="kube-system/coredns-7db6d8ff4d-vrvcr"
Mar 17 17:34:03.432734 kubelet[2803]: I0317 17:34:03.432564 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/708a7941-6c20-4e03-91ae-2fc1a2f1b02c-config-volume\") pod \"coredns-7db6d8ff4d-d97mm\" (UID: \"708a7941-6c20-4e03-91ae-2fc1a2f1b02c\") " pod="kube-system/coredns-7db6d8ff4d-d97mm"
Mar 17 17:34:03.432734 kubelet[2803]: I0317 17:34:03.432600 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcsfd\" (UniqueName: \"kubernetes.io/projected/708a7941-6c20-4e03-91ae-2fc1a2f1b02c-kube-api-access-qcsfd\") pod \"coredns-7db6d8ff4d-d97mm\" (UID: \"708a7941-6c20-4e03-91ae-2fc1a2f1b02c\") " pod="kube-system/coredns-7db6d8ff4d-d97mm"
Mar 17 17:34:03.671177 containerd[1477]: time="2025-03-17T17:34:03.671102391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-d97mm,Uid:708a7941-6c20-4e03-91ae-2fc1a2f1b02c,Namespace:kube-system,Attempt:0,}"
Mar 17 17:34:03.672113 containerd[1477]: time="2025-03-17T17:34:03.672083286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vrvcr,Uid:890d614f-1aa1-483d-90ae-d3c4f0c87f2a,Namespace:kube-system,Attempt:0,}"
Mar 17 17:34:04.141482 kubelet[2803]: I0317 17:34:04.138953 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mx2qf" podStartSLOduration=5.818883066 podStartE2EDuration="12.138936536s" podCreationTimestamp="2025-03-17 17:33:52 +0000 UTC" firstStartedPulling="2025-03-17 17:33:52.754107949 +0000 UTC m=+16.889723196" lastFinishedPulling="2025-03-17 17:33:59.074161419 +0000 UTC m=+23.209776666" observedRunningTime="2025-03-17 17:34:04.138660412 +0000 UTC m=+28.274275659" watchObservedRunningTime="2025-03-17 17:34:04.138936536 +0000 UTC m=+28.274551743"
Mar 17 17:34:04.270800 containerd[1477]: time="2025-03-17T17:34:04.270737939Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:34:04.271879 containerd[1477]: time="2025-03-17T17:34:04.271374949Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Mar 17 17:34:04.273664 containerd[1477]: time="2025-03-17T17:34:04.273636503Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:34:04.275309 containerd[1477]: time="2025-03-17T17:34:04.274848562Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 5.199419441s"
Mar 17 17:34:04.275309 containerd[1477]: time="2025-03-17T17:34:04.274883482Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Mar 17 17:34:04.280372 containerd[1477]: time="2025-03-17T17:34:04.280322125Z" level=info msg="CreateContainer within sandbox \"1daedfdcc3423ab44f23d6b75f80855c8f89a8c4efe13e9332d69f3caeca7ae6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 17 17:34:04.295443 containerd[1477]: time="2025-03-17T17:34:04.295314633Z" level=info msg="CreateContainer within sandbox \"1daedfdcc3423ab44f23d6b75f80855c8f89a8c4efe13e9332d69f3caeca7ae6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fed901f589d61448bda6a9e4956c65b40c66814acddc373e89275d9a93db5d1c\""
Mar 17 17:34:04.296251 containerd[1477]: time="2025-03-17T17:34:04.296208286Z" level=info msg="StartContainer for \"fed901f589d61448bda6a9e4956c65b40c66814acddc373e89275d9a93db5d1c\""
Mar 17 17:34:04.329450 systemd[1]: Started cri-containerd-fed901f589d61448bda6a9e4956c65b40c66814acddc373e89275d9a93db5d1c.scope - libcontainer container fed901f589d61448bda6a9e4956c65b40c66814acddc373e89275d9a93db5d1c.
Mar 17 17:34:04.359213 containerd[1477]: time="2025-03-17T17:34:04.359116962Z" level=info msg="StartContainer for \"fed901f589d61448bda6a9e4956c65b40c66814acddc373e89275d9a93db5d1c\" returns successfully"
Mar 17 17:34:08.397869 systemd-networkd[1376]: cilium_host: Link UP
Mar 17 17:34:08.397995 systemd-networkd[1376]: cilium_net: Link UP
Mar 17 17:34:08.400432 systemd-networkd[1376]: cilium_net: Gained carrier
Mar 17 17:34:08.401645 systemd-networkd[1376]: cilium_host: Gained carrier
Mar 17 17:34:08.515400 systemd-networkd[1376]: cilium_vxlan: Link UP
Mar 17 17:34:08.515407 systemd-networkd[1376]: cilium_vxlan: Gained carrier
Mar 17 17:34:08.769578 systemd-networkd[1376]: cilium_host: Gained IPv6LL
Mar 17 17:34:08.795488 kernel: NET: Registered PF_ALG protocol family
Mar 17 17:34:09.010131 systemd-networkd[1376]: cilium_net: Gained IPv6LL
Mar 17 17:34:09.509350 systemd-networkd[1376]: lxc_health: Link UP
Mar 17 17:34:09.518356 systemd-networkd[1376]: lxc_health: Gained carrier
Mar 17 17:34:09.738162 systemd-networkd[1376]: lxccc4c4204ce28: Link UP
Mar 17 17:34:09.743192 kernel: eth0: renamed from tmpde2d8
Mar 17 17:34:09.749919 systemd-networkd[1376]: lxccc4c4204ce28: Gained carrier
Mar 17 17:34:09.760512 systemd-networkd[1376]: lxcca4f1d59d7fe: Link UP
Mar 17 17:34:09.763244 kernel: eth0: renamed from tmp70ff9
Mar 17 17:34:09.771294 systemd-networkd[1376]: lxcca4f1d59d7fe: Gained carrier
Mar 17 17:34:09.905343 systemd-networkd[1376]: cilium_vxlan: Gained IPv6LL
Mar 17 17:34:10.676217 kubelet[2803]: I0317 17:34:10.675894 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-xfrlb" podStartSLOduration=7.227181569 podStartE2EDuration="18.675878405s" podCreationTimestamp="2025-03-17 17:33:52 +0000 UTC" firstStartedPulling="2025-03-17 17:33:52.829218852 +0000 UTC m=+16.964834099" lastFinishedPulling="2025-03-17 17:34:04.277915688 +0000 UTC m=+28.413530935" observedRunningTime="2025-03-17 17:34:05.129120678 +0000 UTC m=+29.264735965" watchObservedRunningTime="2025-03-17 17:34:10.675878405 +0000 UTC m=+34.811493652"
Mar 17 17:34:10.801466 systemd-networkd[1376]: lxcca4f1d59d7fe: Gained IPv6LL
Mar 17 17:34:11.185324 systemd-networkd[1376]: lxc_health: Gained IPv6LL
Mar 17 17:34:11.762237 systemd-networkd[1376]: lxccc4c4204ce28: Gained IPv6LL
Mar 17 17:34:13.566318 containerd[1477]: time="2025-03-17T17:34:13.565964152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:34:13.566318 containerd[1477]: time="2025-03-17T17:34:13.566254115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:34:13.566318 containerd[1477]: time="2025-03-17T17:34:13.566267116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:34:13.568435 containerd[1477]: time="2025-03-17T17:34:13.568359904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:34:13.604489 systemd[1]: Started cri-containerd-70ff9493b4103f7ff6187df77f49486795914bb61970cfe0314387df331e1d83.scope - libcontainer container 70ff9493b4103f7ff6187df77f49486795914bb61970cfe0314387df331e1d83.
Mar 17 17:34:13.608168 containerd[1477]: time="2025-03-17T17:34:13.608024078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:34:13.608510 containerd[1477]: time="2025-03-17T17:34:13.608351923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:34:13.608510 containerd[1477]: time="2025-03-17T17:34:13.608404723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:34:13.608718 containerd[1477]: time="2025-03-17T17:34:13.608638607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:34:13.644311 systemd[1]: Started cri-containerd-de2d8c59f0fada7ff56f9742ffa71389ed66ee73cf38f582454c93efafba5fb8.scope - libcontainer container de2d8c59f0fada7ff56f9742ffa71389ed66ee73cf38f582454c93efafba5fb8.
Mar 17 17:34:13.662630 containerd[1477]: time="2025-03-17T17:34:13.662515253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-d97mm,Uid:708a7941-6c20-4e03-91ae-2fc1a2f1b02c,Namespace:kube-system,Attempt:0,} returns sandbox id \"70ff9493b4103f7ff6187df77f49486795914bb61970cfe0314387df331e1d83\""
Mar 17 17:34:13.668865 containerd[1477]: time="2025-03-17T17:34:13.668715736Z" level=info msg="CreateContainer within sandbox \"70ff9493b4103f7ff6187df77f49486795914bb61970cfe0314387df331e1d83\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 17:34:13.690516 containerd[1477]: time="2025-03-17T17:34:13.689998943Z" level=info msg="CreateContainer within sandbox \"70ff9493b4103f7ff6187df77f49486795914bb61970cfe0314387df331e1d83\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3ef0f712314094c8bb3ee18cd506fda3d3e35501df6eda94b73a1dd4c7db1049\""
Mar 17 17:34:13.693213 containerd[1477]: time="2025-03-17T17:34:13.690944956Z" level=info msg="StartContainer for \"3ef0f712314094c8bb3ee18cd506fda3d3e35501df6eda94b73a1dd4c7db1049\""
Mar 17 17:34:13.723985 containerd[1477]: time="2025-03-17T17:34:13.723852759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vrvcr,Uid:890d614f-1aa1-483d-90ae-d3c4f0c87f2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"de2d8c59f0fada7ff56f9742ffa71389ed66ee73cf38f582454c93efafba5fb8\""
Mar 17 17:34:13.732240 containerd[1477]: time="2025-03-17T17:34:13.732190512Z" level=info msg="CreateContainer within sandbox \"de2d8c59f0fada7ff56f9742ffa71389ed66ee73cf38f582454c93efafba5fb8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 17:34:13.747314 systemd[1]: Started cri-containerd-3ef0f712314094c8bb3ee18cd506fda3d3e35501df6eda94b73a1dd4c7db1049.scope - libcontainer container 3ef0f712314094c8bb3ee18cd506fda3d3e35501df6eda94b73a1dd4c7db1049.
Mar 17 17:34:13.755610 containerd[1477]: time="2025-03-17T17:34:13.755574027Z" level=info msg="CreateContainer within sandbox \"de2d8c59f0fada7ff56f9742ffa71389ed66ee73cf38f582454c93efafba5fb8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"37afc097cd6815d01fe9be11745eb913a5a58966e9bd7b2e4686f030c08bfa10\""
Mar 17 17:34:13.758189 containerd[1477]: time="2025-03-17T17:34:13.757386051Z" level=info msg="StartContainer for \"37afc097cd6815d01fe9be11745eb913a5a58966e9bd7b2e4686f030c08bfa10\""
Mar 17 17:34:13.787463 systemd[1]: Started cri-containerd-37afc097cd6815d01fe9be11745eb913a5a58966e9bd7b2e4686f030c08bfa10.scope - libcontainer container 37afc097cd6815d01fe9be11745eb913a5a58966e9bd7b2e4686f030c08bfa10.
Mar 17 17:34:13.793332 containerd[1477]: time="2025-03-17T17:34:13.793293895Z" level=info msg="StartContainer for \"3ef0f712314094c8bb3ee18cd506fda3d3e35501df6eda94b73a1dd4c7db1049\" returns successfully"
Mar 17 17:34:13.818998 containerd[1477]: time="2025-03-17T17:34:13.818894480Z" level=info msg="StartContainer for \"37afc097cd6815d01fe9be11745eb913a5a58966e9bd7b2e4686f030c08bfa10\" returns successfully"
Mar 17 17:34:14.155711 kubelet[2803]: I0317 17:34:14.155448 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-vrvcr" podStartSLOduration=22.15543071 podStartE2EDuration="22.15543071s" podCreationTimestamp="2025-03-17 17:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:34:14.15469978 +0000 UTC m=+38.290315027" watchObservedRunningTime="2025-03-17 17:34:14.15543071 +0000 UTC m=+38.291045917"
Mar 17 17:34:14.170466 kubelet[2803]: I0317 17:34:14.170404 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-d97mm" podStartSLOduration=22.170377589 podStartE2EDuration="22.170377589s" podCreationTimestamp="2025-03-17 17:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:34:14.169608579 +0000 UTC m=+38.305223866" watchObservedRunningTime="2025-03-17 17:34:14.170377589 +0000 UTC m=+38.305992796"
Mar 17 17:34:14.577562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1596811150.mount: Deactivated successfully.
Mar 17 17:34:22.430620 kubelet[2803]: I0317 17:34:22.430383 2803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 17 17:38:25.096565 systemd[1]: Started sshd@7-138.199.148.212:22-139.178.89.65:34400.service - OpenSSH per-connection server daemon (139.178.89.65:34400).
Mar 17 17:38:26.086678 sshd[4218]: Accepted publickey for core from 139.178.89.65 port 34400 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug Mar 17 17:38:26.089371 sshd-session[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:26.097192 systemd-logind[1457]: New session 8 of user core. Mar 17 17:38:26.106539 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 17 17:38:26.861227 sshd[4220]: Connection closed by 139.178.89.65 port 34400 Mar 17 17:38:26.862022 sshd-session[4218]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:26.865944 systemd[1]: sshd@7-138.199.148.212:22-139.178.89.65:34400.service: Deactivated successfully. Mar 17 17:38:26.870398 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 17:38:26.872246 systemd-logind[1457]: Session 8 logged out. Waiting for processes to exit. Mar 17 17:38:26.873892 systemd-logind[1457]: Removed session 8. Mar 17 17:38:32.037519 systemd[1]: Started sshd@8-138.199.148.212:22-139.178.89.65:37426.service - OpenSSH per-connection server daemon (139.178.89.65:37426). Mar 17 17:38:33.046669 sshd[4232]: Accepted publickey for core from 139.178.89.65 port 37426 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug Mar 17 17:38:33.049563 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:33.055038 systemd-logind[1457]: New session 9 of user core. Mar 17 17:38:33.059772 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 17 17:38:33.812083 sshd[4234]: Connection closed by 139.178.89.65 port 37426 Mar 17 17:38:33.812823 sshd-session[4232]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:33.818089 systemd[1]: sshd@8-138.199.148.212:22-139.178.89.65:37426.service: Deactivated successfully. Mar 17 17:38:33.819886 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 17:38:33.821307 systemd-logind[1457]: Session 9 logged out. 
Waiting for processes to exit. Mar 17 17:38:33.822459 systemd-logind[1457]: Removed session 9. Mar 17 17:38:38.991635 systemd[1]: Started sshd@9-138.199.148.212:22-139.178.89.65:37432.service - OpenSSH per-connection server daemon (139.178.89.65:37432). Mar 17 17:38:39.978636 sshd[4248]: Accepted publickey for core from 139.178.89.65 port 37432 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug Mar 17 17:38:39.980766 sshd-session[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:39.987937 systemd-logind[1457]: New session 10 of user core. Mar 17 17:38:39.997398 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 17 17:38:40.741502 sshd[4250]: Connection closed by 139.178.89.65 port 37432 Mar 17 17:38:40.742898 sshd-session[4248]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:40.747770 systemd[1]: sshd@9-138.199.148.212:22-139.178.89.65:37432.service: Deactivated successfully. Mar 17 17:38:40.750991 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 17:38:40.751942 systemd-logind[1457]: Session 10 logged out. Waiting for processes to exit. Mar 17 17:38:40.753044 systemd-logind[1457]: Removed session 10. Mar 17 17:38:40.919418 systemd[1]: Started sshd@10-138.199.148.212:22-139.178.89.65:37442.service - OpenSSH per-connection server daemon (139.178.89.65:37442). Mar 17 17:38:41.907067 sshd[4262]: Accepted publickey for core from 139.178.89.65 port 37442 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug Mar 17 17:38:41.909686 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:41.917435 systemd-logind[1457]: New session 11 of user core. Mar 17 17:38:41.924425 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 17 17:38:42.708281 sshd[4264]: Connection closed by 139.178.89.65 port 37442 Mar 17 17:38:42.710502 sshd-session[4262]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:42.714378 systemd[1]: sshd@10-138.199.148.212:22-139.178.89.65:37442.service: Deactivated successfully. Mar 17 17:38:42.716370 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 17:38:42.718758 systemd-logind[1457]: Session 11 logged out. Waiting for processes to exit. Mar 17 17:38:42.720249 systemd-logind[1457]: Removed session 11. Mar 17 17:38:42.882640 systemd[1]: Started sshd@11-138.199.148.212:22-139.178.89.65:34910.service - OpenSSH per-connection server daemon (139.178.89.65:34910). Mar 17 17:38:43.878251 sshd[4273]: Accepted publickey for core from 139.178.89.65 port 34910 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug Mar 17 17:38:43.880785 sshd-session[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:43.887049 systemd-logind[1457]: New session 12 of user core. Mar 17 17:38:43.894671 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 17 17:38:44.633644 sshd[4275]: Connection closed by 139.178.89.65 port 34910 Mar 17 17:38:44.634660 sshd-session[4273]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:44.639043 systemd[1]: sshd@11-138.199.148.212:22-139.178.89.65:34910.service: Deactivated successfully. Mar 17 17:38:44.642677 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 17:38:44.645583 systemd-logind[1457]: Session 12 logged out. Waiting for processes to exit. Mar 17 17:38:44.646763 systemd-logind[1457]: Removed session 12. Mar 17 17:38:49.808609 systemd[1]: Started sshd@12-138.199.148.212:22-139.178.89.65:34918.service - OpenSSH per-connection server daemon (139.178.89.65:34918). 
Mar 17 17:38:50.798181 sshd[4286]: Accepted publickey for core from 139.178.89.65 port 34918 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug Mar 17 17:38:50.799990 sshd-session[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:50.804860 systemd-logind[1457]: New session 13 of user core. Mar 17 17:38:50.813856 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 17 17:38:51.548933 sshd[4288]: Connection closed by 139.178.89.65 port 34918 Mar 17 17:38:51.550899 sshd-session[4286]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:51.556959 systemd[1]: sshd@12-138.199.148.212:22-139.178.89.65:34918.service: Deactivated successfully. Mar 17 17:38:51.560812 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 17:38:51.562292 systemd-logind[1457]: Session 13 logged out. Waiting for processes to exit. Mar 17 17:38:51.563744 systemd-logind[1457]: Removed session 13. Mar 17 17:38:51.724542 systemd[1]: Started sshd@13-138.199.148.212:22-139.178.89.65:50646.service - OpenSSH per-connection server daemon (139.178.89.65:50646). Mar 17 17:38:52.710050 sshd[4299]: Accepted publickey for core from 139.178.89.65 port 50646 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug Mar 17 17:38:52.712109 sshd-session[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:52.717825 systemd-logind[1457]: New session 14 of user core. Mar 17 17:38:52.723483 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 17 17:38:53.506222 sshd[4302]: Connection closed by 139.178.89.65 port 50646 Mar 17 17:38:53.507205 sshd-session[4299]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:53.512238 systemd[1]: sshd@13-138.199.148.212:22-139.178.89.65:50646.service: Deactivated successfully. Mar 17 17:38:53.516888 systemd[1]: session-14.scope: Deactivated successfully. 
Mar 17 17:38:53.517589 systemd-logind[1457]: Session 14 logged out. Waiting for processes to exit. Mar 17 17:38:53.518598 systemd-logind[1457]: Removed session 14. Mar 17 17:38:53.687569 systemd[1]: Started sshd@14-138.199.148.212:22-139.178.89.65:50660.service - OpenSSH per-connection server daemon (139.178.89.65:50660). Mar 17 17:38:54.683874 sshd[4314]: Accepted publickey for core from 139.178.89.65 port 50660 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug Mar 17 17:38:54.686544 sshd-session[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:54.691487 systemd-logind[1457]: New session 15 of user core. Mar 17 17:38:54.701489 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 17 17:38:56.987378 sshd[4316]: Connection closed by 139.178.89.65 port 50660 Mar 17 17:38:56.986718 sshd-session[4314]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:56.990704 systemd[1]: sshd@14-138.199.148.212:22-139.178.89.65:50660.service: Deactivated successfully. Mar 17 17:38:56.993233 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 17:38:56.994853 systemd-logind[1457]: Session 15 logged out. Waiting for processes to exit. Mar 17 17:38:56.996605 systemd-logind[1457]: Removed session 15. Mar 17 17:38:57.158566 systemd[1]: Started sshd@15-138.199.148.212:22-139.178.89.65:50674.service - OpenSSH per-connection server daemon (139.178.89.65:50674). Mar 17 17:38:58.135969 sshd[4333]: Accepted publickey for core from 139.178.89.65 port 50674 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug Mar 17 17:38:58.138539 sshd-session[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:58.146322 systemd-logind[1457]: New session 16 of user core. Mar 17 17:38:58.151527 systemd[1]: Started session-16.scope - Session 16 of User core. 
Mar 17 17:38:59.011243 sshd[4335]: Connection closed by 139.178.89.65 port 50674 Mar 17 17:38:59.011958 sshd-session[4333]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:59.017729 systemd-logind[1457]: Session 16 logged out. Waiting for processes to exit. Mar 17 17:38:59.018110 systemd[1]: sshd@15-138.199.148.212:22-139.178.89.65:50674.service: Deactivated successfully. Mar 17 17:38:59.020998 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 17:38:59.022076 systemd-logind[1457]: Removed session 16. Mar 17 17:38:59.185480 systemd[1]: Started sshd@16-138.199.148.212:22-139.178.89.65:50684.service - OpenSSH per-connection server daemon (139.178.89.65:50684). Mar 17 17:39:00.175625 sshd[4344]: Accepted publickey for core from 139.178.89.65 port 50684 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug Mar 17 17:39:00.178072 sshd-session[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:39:00.183517 systemd-logind[1457]: New session 17 of user core. Mar 17 17:39:00.188373 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 17 17:39:00.929617 sshd[4346]: Connection closed by 139.178.89.65 port 50684 Mar 17 17:39:00.931338 sshd-session[4344]: pam_unix(sshd:session): session closed for user core Mar 17 17:39:00.935508 systemd[1]: sshd@16-138.199.148.212:22-139.178.89.65:50684.service: Deactivated successfully. Mar 17 17:39:00.937864 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 17:39:00.940061 systemd-logind[1457]: Session 17 logged out. Waiting for processes to exit. Mar 17 17:39:00.941738 systemd-logind[1457]: Removed session 17. Mar 17 17:39:06.109544 systemd[1]: Started sshd@17-138.199.148.212:22-139.178.89.65:33282.service - OpenSSH per-connection server daemon (139.178.89.65:33282). 
Mar 17 17:39:07.094200 sshd[4360]: Accepted publickey for core from 139.178.89.65 port 33282 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug Mar 17 17:39:07.096442 sshd-session[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:39:07.102943 systemd-logind[1457]: New session 18 of user core. Mar 17 17:39:07.114515 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 17 17:39:07.835365 sshd[4362]: Connection closed by 139.178.89.65 port 33282 Mar 17 17:39:07.836514 sshd-session[4360]: pam_unix(sshd:session): session closed for user core Mar 17 17:39:07.842597 systemd[1]: sshd@17-138.199.148.212:22-139.178.89.65:33282.service: Deactivated successfully. Mar 17 17:39:07.845773 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 17:39:07.846834 systemd-logind[1457]: Session 18 logged out. Waiting for processes to exit. Mar 17 17:39:07.848065 systemd-logind[1457]: Removed session 18. Mar 17 17:39:13.011586 systemd[1]: Started sshd@18-138.199.148.212:22-139.178.89.65:43822.service - OpenSSH per-connection server daemon (139.178.89.65:43822). Mar 17 17:39:13.990869 sshd[4373]: Accepted publickey for core from 139.178.89.65 port 43822 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug Mar 17 17:39:13.993441 sshd-session[4373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:39:13.998670 systemd-logind[1457]: New session 19 of user core. Mar 17 17:39:14.006454 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 17 17:39:14.745221 sshd[4375]: Connection closed by 139.178.89.65 port 43822 Mar 17 17:39:14.746200 sshd-session[4373]: pam_unix(sshd:session): session closed for user core Mar 17 17:39:14.751929 systemd[1]: sshd@18-138.199.148.212:22-139.178.89.65:43822.service: Deactivated successfully. Mar 17 17:39:14.755362 systemd[1]: session-19.scope: Deactivated successfully. 
Mar 17 17:39:14.756294 systemd-logind[1457]: Session 19 logged out. Waiting for processes to exit. Mar 17 17:39:14.757458 systemd-logind[1457]: Removed session 19. Mar 17 17:39:14.930441 systemd[1]: Started sshd@19-138.199.148.212:22-139.178.89.65:43826.service - OpenSSH per-connection server daemon (139.178.89.65:43826). Mar 17 17:39:15.923994 sshd[4385]: Accepted publickey for core from 139.178.89.65 port 43826 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug Mar 17 17:39:15.925776 sshd-session[4385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:39:15.933684 systemd-logind[1457]: New session 20 of user core. Mar 17 17:39:15.936397 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 17 17:39:18.113727 containerd[1477]: time="2025-03-17T17:39:18.113608377Z" level=info msg="StopContainer for \"fed901f589d61448bda6a9e4956c65b40c66814acddc373e89275d9a93db5d1c\" with timeout 30 (s)" Mar 17 17:39:18.114789 containerd[1477]: time="2025-03-17T17:39:18.114697306Z" level=info msg="Stop container \"fed901f589d61448bda6a9e4956c65b40c66814acddc373e89275d9a93db5d1c\" with signal terminated" Mar 17 17:39:18.120352 containerd[1477]: time="2025-03-17T17:39:18.120314875Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:39:18.127364 systemd[1]: cri-containerd-fed901f589d61448bda6a9e4956c65b40c66814acddc373e89275d9a93db5d1c.scope: Deactivated successfully. 
Mar 17 17:39:18.130076 containerd[1477]: time="2025-03-17T17:39:18.129817317Z" level=info msg="StopContainer for \"c1dbd6bcac4cdc75c4deac449b9ca003fbfc9af3a33c375e451ce8eaf357032f\" with timeout 2 (s)" Mar 17 17:39:18.130830 containerd[1477]: time="2025-03-17T17:39:18.130744885Z" level=info msg="Stop container \"c1dbd6bcac4cdc75c4deac449b9ca003fbfc9af3a33c375e451ce8eaf357032f\" with signal terminated" Mar 17 17:39:18.143605 systemd-networkd[1376]: lxc_health: Link DOWN Mar 17 17:39:18.143612 systemd-networkd[1376]: lxc_health: Lost carrier Mar 17 17:39:18.161539 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fed901f589d61448bda6a9e4956c65b40c66814acddc373e89275d9a93db5d1c-rootfs.mount: Deactivated successfully. Mar 17 17:39:18.166578 systemd[1]: cri-containerd-c1dbd6bcac4cdc75c4deac449b9ca003fbfc9af3a33c375e451ce8eaf357032f.scope: Deactivated successfully. Mar 17 17:39:18.167681 systemd[1]: cri-containerd-c1dbd6bcac4cdc75c4deac449b9ca003fbfc9af3a33c375e451ce8eaf357032f.scope: Consumed 7.586s CPU time. Mar 17 17:39:18.174125 containerd[1477]: time="2025-03-17T17:39:18.173903497Z" level=info msg="shim disconnected" id=fed901f589d61448bda6a9e4956c65b40c66814acddc373e89275d9a93db5d1c namespace=k8s.io Mar 17 17:39:18.174125 containerd[1477]: time="2025-03-17T17:39:18.174123499Z" level=warning msg="cleaning up after shim disconnected" id=fed901f589d61448bda6a9e4956c65b40c66814acddc373e89275d9a93db5d1c namespace=k8s.io Mar 17 17:39:18.174125 containerd[1477]: time="2025-03-17T17:39:18.174133379Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:39:18.194826 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1dbd6bcac4cdc75c4deac449b9ca003fbfc9af3a33c375e451ce8eaf357032f-rootfs.mount: Deactivated successfully. 
Mar 17 17:39:18.199768 containerd[1477]: time="2025-03-17T17:39:18.199730200Z" level=info msg="StopContainer for \"fed901f589d61448bda6a9e4956c65b40c66814acddc373e89275d9a93db5d1c\" returns successfully" Mar 17 17:39:18.200561 containerd[1477]: time="2025-03-17T17:39:18.200432086Z" level=info msg="StopPodSandbox for \"1daedfdcc3423ab44f23d6b75f80855c8f89a8c4efe13e9332d69f3caeca7ae6\"" Mar 17 17:39:18.200715 containerd[1477]: time="2025-03-17T17:39:18.200572767Z" level=info msg="Container to stop \"fed901f589d61448bda6a9e4956c65b40c66814acddc373e89275d9a93db5d1c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:39:18.200715 containerd[1477]: time="2025-03-17T17:39:18.200536247Z" level=info msg="shim disconnected" id=c1dbd6bcac4cdc75c4deac449b9ca003fbfc9af3a33c375e451ce8eaf357032f namespace=k8s.io Mar 17 17:39:18.200715 containerd[1477]: time="2025-03-17T17:39:18.200682648Z" level=warning msg="cleaning up after shim disconnected" id=c1dbd6bcac4cdc75c4deac449b9ca003fbfc9af3a33c375e451ce8eaf357032f namespace=k8s.io Mar 17 17:39:18.200715 containerd[1477]: time="2025-03-17T17:39:18.200690288Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:39:18.202737 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1daedfdcc3423ab44f23d6b75f80855c8f89a8c4efe13e9332d69f3caeca7ae6-shm.mount: Deactivated successfully. Mar 17 17:39:18.211605 systemd[1]: cri-containerd-1daedfdcc3423ab44f23d6b75f80855c8f89a8c4efe13e9332d69f3caeca7ae6.scope: Deactivated successfully. 
Mar 17 17:39:18.223744 containerd[1477]: time="2025-03-17T17:39:18.223703367Z" level=info msg="StopContainer for \"c1dbd6bcac4cdc75c4deac449b9ca003fbfc9af3a33c375e451ce8eaf357032f\" returns successfully" Mar 17 17:39:18.224713 containerd[1477]: time="2025-03-17T17:39:18.224532214Z" level=info msg="StopPodSandbox for \"9634b2280513f647d9cceacf18f01ebe6be75683716e46276fb5df5b7405c92b\"" Mar 17 17:39:18.224713 containerd[1477]: time="2025-03-17T17:39:18.224573014Z" level=info msg="Container to stop \"ca2a094446402186b898d6a7e7bc9434aaf707cb398cce7fb5cb1bd199b28702\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:39:18.224713 containerd[1477]: time="2025-03-17T17:39:18.224584535Z" level=info msg="Container to stop \"04e9441958db962cd90c967769c854d0a04c9750517932dba9b700a8b22ddad7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:39:18.224713 containerd[1477]: time="2025-03-17T17:39:18.224594655Z" level=info msg="Container to stop \"684711e0fdda1955105a873db470315ec5bb925da703cac06736b42712347624\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:39:18.224713 containerd[1477]: time="2025-03-17T17:39:18.224603975Z" level=info msg="Container to stop \"c1dbd6bcac4cdc75c4deac449b9ca003fbfc9af3a33c375e451ce8eaf357032f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:39:18.224713 containerd[1477]: time="2025-03-17T17:39:18.224613255Z" level=info msg="Container to stop \"71409b3a62d7901b63df93358b6350e3681a4c4a063587a8ab441100f1c7fa98\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:39:18.226164 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9634b2280513f647d9cceacf18f01ebe6be75683716e46276fb5df5b7405c92b-shm.mount: Deactivated successfully. Mar 17 17:39:18.237329 systemd[1]: cri-containerd-9634b2280513f647d9cceacf18f01ebe6be75683716e46276fb5df5b7405c92b.scope: Deactivated successfully. 
Mar 17 17:39:18.249360 containerd[1477]: time="2025-03-17T17:39:18.249106506Z" level=info msg="shim disconnected" id=1daedfdcc3423ab44f23d6b75f80855c8f89a8c4efe13e9332d69f3caeca7ae6 namespace=k8s.io Mar 17 17:39:18.249360 containerd[1477]: time="2025-03-17T17:39:18.249285188Z" level=warning msg="cleaning up after shim disconnected" id=1daedfdcc3423ab44f23d6b75f80855c8f89a8c4efe13e9332d69f3caeca7ae6 namespace=k8s.io Mar 17 17:39:18.249360 containerd[1477]: time="2025-03-17T17:39:18.249296188Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:39:18.268537 containerd[1477]: time="2025-03-17T17:39:18.268458753Z" level=info msg="shim disconnected" id=9634b2280513f647d9cceacf18f01ebe6be75683716e46276fb5df5b7405c92b namespace=k8s.io Mar 17 17:39:18.268537 containerd[1477]: time="2025-03-17T17:39:18.268524994Z" level=warning msg="cleaning up after shim disconnected" id=9634b2280513f647d9cceacf18f01ebe6be75683716e46276fb5df5b7405c92b namespace=k8s.io Mar 17 17:39:18.268537 containerd[1477]: time="2025-03-17T17:39:18.268534594Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:39:18.269028 containerd[1477]: time="2025-03-17T17:39:18.268993598Z" level=info msg="TearDown network for sandbox \"1daedfdcc3423ab44f23d6b75f80855c8f89a8c4efe13e9332d69f3caeca7ae6\" successfully" Mar 17 17:39:18.269028 containerd[1477]: time="2025-03-17T17:39:18.269020038Z" level=info msg="StopPodSandbox for \"1daedfdcc3423ab44f23d6b75f80855c8f89a8c4efe13e9332d69f3caeca7ae6\" returns successfully" Mar 17 17:39:18.284208 containerd[1477]: time="2025-03-17T17:39:18.283571124Z" level=info msg="TearDown network for sandbox \"9634b2280513f647d9cceacf18f01ebe6be75683716e46276fb5df5b7405c92b\" successfully" Mar 17 17:39:18.284208 containerd[1477]: time="2025-03-17T17:39:18.283607124Z" level=info msg="StopPodSandbox for \"9634b2280513f647d9cceacf18f01ebe6be75683716e46276fb5df5b7405c92b\" returns successfully" Mar 17 17:39:18.356200 kubelet[2803]: I0317 17:39:18.354137 2803 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/69be127a-3bf0-4e81-87b1-ecb88934a4bc-hubble-tls\") pod \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\" (UID: \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\") " Mar 17 17:39:18.356200 kubelet[2803]: I0317 17:39:18.354377 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-bpf-maps\") pod \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\" (UID: \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\") " Mar 17 17:39:18.356200 kubelet[2803]: I0317 17:39:18.354422 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-cilium-cgroup\") pod \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\" (UID: \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\") " Mar 17 17:39:18.356200 kubelet[2803]: I0317 17:39:18.354531 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-cilium-run\") pod \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\" (UID: \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\") " Mar 17 17:39:18.356200 kubelet[2803]: I0317 17:39:18.354582 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-lib-modules\") pod \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\" (UID: \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\") " Mar 17 17:39:18.356200 kubelet[2803]: I0317 17:39:18.354623 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-xtables-lock\") pod \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\" (UID: \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\") 
" Mar 17 17:39:18.357085 kubelet[2803]: I0317 17:39:18.354684 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-host-proc-sys-net\") pod \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\" (UID: \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\") " Mar 17 17:39:18.357085 kubelet[2803]: I0317 17:39:18.354736 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-cni-path\") pod \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\" (UID: \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\") " Mar 17 17:39:18.357085 kubelet[2803]: I0317 17:39:18.354778 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95xmk\" (UniqueName: \"kubernetes.io/projected/69be127a-3bf0-4e81-87b1-ecb88934a4bc-kube-api-access-95xmk\") pod \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\" (UID: \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\") " Mar 17 17:39:18.357085 kubelet[2803]: I0317 17:39:18.354821 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69be127a-3bf0-4e81-87b1-ecb88934a4bc-clustermesh-secrets\") pod \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\" (UID: \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\") " Mar 17 17:39:18.357085 kubelet[2803]: I0317 17:39:18.354855 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-etc-cni-netd\") pod \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\" (UID: \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\") " Mar 17 17:39:18.357085 kubelet[2803]: I0317 17:39:18.354953 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-host-proc-sys-kernel\") pod \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\" (UID: \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\") " Mar 17 17:39:18.357575 kubelet[2803]: I0317 17:39:18.354996 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv6x4\" (UniqueName: \"kubernetes.io/projected/6cc949b3-f508-4d68-a13b-27189884c607-kube-api-access-lv6x4\") pod \"6cc949b3-f508-4d68-a13b-27189884c607\" (UID: \"6cc949b3-f508-4d68-a13b-27189884c607\") " Mar 17 17:39:18.357575 kubelet[2803]: I0317 17:39:18.355030 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-hostproc\") pod \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\" (UID: \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\") " Mar 17 17:39:18.357575 kubelet[2803]: I0317 17:39:18.355070 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69be127a-3bf0-4e81-87b1-ecb88934a4bc-cilium-config-path\") pod \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\" (UID: \"69be127a-3bf0-4e81-87b1-ecb88934a4bc\") " Mar 17 17:39:18.357575 kubelet[2803]: I0317 17:39:18.355109 2803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6cc949b3-f508-4d68-a13b-27189884c607-cilium-config-path\") pod \"6cc949b3-f508-4d68-a13b-27189884c607\" (UID: \"6cc949b3-f508-4d68-a13b-27189884c607\") " Mar 17 17:39:18.357575 kubelet[2803]: I0317 17:39:18.356450 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "69be127a-3bf0-4e81-87b1-ecb88934a4bc" (UID: "69be127a-3bf0-4e81-87b1-ecb88934a4bc"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:39:18.357851 kubelet[2803]: I0317 17:39:18.356533 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "69be127a-3bf0-4e81-87b1-ecb88934a4bc" (UID: "69be127a-3bf0-4e81-87b1-ecb88934a4bc"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:39:18.357851 kubelet[2803]: I0317 17:39:18.356567 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "69be127a-3bf0-4e81-87b1-ecb88934a4bc" (UID: "69be127a-3bf0-4e81-87b1-ecb88934a4bc"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:39:18.357851 kubelet[2803]: I0317 17:39:18.356595 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "69be127a-3bf0-4e81-87b1-ecb88934a4bc" (UID: "69be127a-3bf0-4e81-87b1-ecb88934a4bc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:39:18.357851 kubelet[2803]: I0317 17:39:18.356620 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "69be127a-3bf0-4e81-87b1-ecb88934a4bc" (UID: "69be127a-3bf0-4e81-87b1-ecb88934a4bc"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:39:18.357851 kubelet[2803]: I0317 17:39:18.356645 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "69be127a-3bf0-4e81-87b1-ecb88934a4bc" (UID: "69be127a-3bf0-4e81-87b1-ecb88934a4bc"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:39:18.358239 kubelet[2803]: I0317 17:39:18.356671 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-cni-path" (OuterVolumeSpecName: "cni-path") pod "69be127a-3bf0-4e81-87b1-ecb88934a4bc" (UID: "69be127a-3bf0-4e81-87b1-ecb88934a4bc"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:39:18.358239 kubelet[2803]: I0317 17:39:18.357061 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "69be127a-3bf0-4e81-87b1-ecb88934a4bc" (UID: "69be127a-3bf0-4e81-87b1-ecb88934a4bc"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:39:18.359755 kubelet[2803]: I0317 17:39:18.359701 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "69be127a-3bf0-4e81-87b1-ecb88934a4bc" (UID: "69be127a-3bf0-4e81-87b1-ecb88934a4bc"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:39:18.360107 kubelet[2803]: I0317 17:39:18.360072 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-hostproc" (OuterVolumeSpecName: "hostproc") pod "69be127a-3bf0-4e81-87b1-ecb88934a4bc" (UID: "69be127a-3bf0-4e81-87b1-ecb88934a4bc"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:39:18.361794 kubelet[2803]: I0317 17:39:18.361747 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc949b3-f508-4d68-a13b-27189884c607-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6cc949b3-f508-4d68-a13b-27189884c607" (UID: "6cc949b3-f508-4d68-a13b-27189884c607"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:39:18.361927 kubelet[2803]: I0317 17:39:18.361874 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69be127a-3bf0-4e81-87b1-ecb88934a4bc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "69be127a-3bf0-4e81-87b1-ecb88934a4bc" (UID: "69be127a-3bf0-4e81-87b1-ecb88934a4bc"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:39:18.366185 kubelet[2803]: I0317 17:39:18.365645 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69be127a-3bf0-4e81-87b1-ecb88934a4bc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "69be127a-3bf0-4e81-87b1-ecb88934a4bc" (UID: "69be127a-3bf0-4e81-87b1-ecb88934a4bc"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:39:18.366185 kubelet[2803]: I0317 17:39:18.365670 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cc949b3-f508-4d68-a13b-27189884c607-kube-api-access-lv6x4" (OuterVolumeSpecName: "kube-api-access-lv6x4") pod "6cc949b3-f508-4d68-a13b-27189884c607" (UID: "6cc949b3-f508-4d68-a13b-27189884c607"). InnerVolumeSpecName "kube-api-access-lv6x4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:39:18.366185 kubelet[2803]: I0317 17:39:18.365745 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69be127a-3bf0-4e81-87b1-ecb88934a4bc-kube-api-access-95xmk" (OuterVolumeSpecName: "kube-api-access-95xmk") pod "69be127a-3bf0-4e81-87b1-ecb88934a4bc" (UID: "69be127a-3bf0-4e81-87b1-ecb88934a4bc"). InnerVolumeSpecName "kube-api-access-95xmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:39:18.366881 kubelet[2803]: I0317 17:39:18.366413 2803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69be127a-3bf0-4e81-87b1-ecb88934a4bc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "69be127a-3bf0-4e81-87b1-ecb88934a4bc" (UID: "69be127a-3bf0-4e81-87b1-ecb88934a4bc"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 17:39:18.455808 kubelet[2803]: I0317 17:39:18.455745 2803 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-host-proc-sys-kernel\") on node \"ci-4152-2-2-0-5dd1d5cf3a\" DevicePath \"\"" Mar 17 17:39:18.455808 kubelet[2803]: I0317 17:39:18.455784 2803 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-lv6x4\" (UniqueName: \"kubernetes.io/projected/6cc949b3-f508-4d68-a13b-27189884c607-kube-api-access-lv6x4\") on node \"ci-4152-2-2-0-5dd1d5cf3a\" DevicePath \"\"" Mar 17 17:39:18.455808 kubelet[2803]: I0317 17:39:18.455797 2803 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-etc-cni-netd\") on node \"ci-4152-2-2-0-5dd1d5cf3a\" DevicePath \"\"" Mar 17 17:39:18.455808 kubelet[2803]: I0317 17:39:18.455809 2803 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69be127a-3bf0-4e81-87b1-ecb88934a4bc-cilium-config-path\") on node \"ci-4152-2-2-0-5dd1d5cf3a\" DevicePath \"\"" Mar 17 17:39:18.455808 kubelet[2803]: I0317 17:39:18.455821 2803 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-hostproc\") on node \"ci-4152-2-2-0-5dd1d5cf3a\" DevicePath \"\"" Mar 17 17:39:18.456236 kubelet[2803]: I0317 17:39:18.455833 2803 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6cc949b3-f508-4d68-a13b-27189884c607-cilium-config-path\") on node \"ci-4152-2-2-0-5dd1d5cf3a\" DevicePath \"\"" Mar 17 17:39:18.456236 kubelet[2803]: I0317 17:39:18.455846 2803 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/69be127a-3bf0-4e81-87b1-ecb88934a4bc-hubble-tls\") on node \"ci-4152-2-2-0-5dd1d5cf3a\" DevicePath \"\"" Mar 17 17:39:18.456236 kubelet[2803]: I0317 17:39:18.455856 2803 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-bpf-maps\") on node \"ci-4152-2-2-0-5dd1d5cf3a\" DevicePath \"\"" Mar 17 17:39:18.456236 kubelet[2803]: I0317 17:39:18.455866 2803 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-cilium-run\") on node \"ci-4152-2-2-0-5dd1d5cf3a\" DevicePath \"\"" Mar 17 17:39:18.456236 kubelet[2803]: I0317 17:39:18.455876 2803 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-lib-modules\") on node \"ci-4152-2-2-0-5dd1d5cf3a\" DevicePath \"\"" Mar 17 17:39:18.456236 kubelet[2803]: I0317 17:39:18.455887 2803 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-cilium-cgroup\") on node \"ci-4152-2-2-0-5dd1d5cf3a\" DevicePath \"\"" Mar 17 17:39:18.456236 kubelet[2803]: I0317 17:39:18.455897 2803 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-xtables-lock\") on node \"ci-4152-2-2-0-5dd1d5cf3a\" DevicePath \"\"" Mar 17 17:39:18.456236 kubelet[2803]: I0317 17:39:18.455907 2803 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-host-proc-sys-net\") on node \"ci-4152-2-2-0-5dd1d5cf3a\" DevicePath \"\"" Mar 17 17:39:18.456629 kubelet[2803]: I0317 17:39:18.455917 2803 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/69be127a-3bf0-4e81-87b1-ecb88934a4bc-cni-path\") on node \"ci-4152-2-2-0-5dd1d5cf3a\" DevicePath \"\"" Mar 17 17:39:18.456629 kubelet[2803]: I0317 17:39:18.455928 2803 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-95xmk\" (UniqueName: \"kubernetes.io/projected/69be127a-3bf0-4e81-87b1-ecb88934a4bc-kube-api-access-95xmk\") on node \"ci-4152-2-2-0-5dd1d5cf3a\" DevicePath \"\"" Mar 17 17:39:18.456629 kubelet[2803]: I0317 17:39:18.455939 2803 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69be127a-3bf0-4e81-87b1-ecb88934a4bc-clustermesh-secrets\") on node \"ci-4152-2-2-0-5dd1d5cf3a\" DevicePath \"\"" Mar 17 17:39:18.901319 kubelet[2803]: I0317 17:39:18.901260 2803 scope.go:117] "RemoveContainer" containerID="fed901f589d61448bda6a9e4956c65b40c66814acddc373e89275d9a93db5d1c" Mar 17 17:39:18.909341 containerd[1477]: time="2025-03-17T17:39:18.906857981Z" level=info msg="RemoveContainer for \"fed901f589d61448bda6a9e4956c65b40c66814acddc373e89275d9a93db5d1c\"" Mar 17 17:39:18.913402 systemd[1]: Removed slice kubepods-besteffort-pod6cc949b3_f508_4d68_a13b_27189884c607.slice - libcontainer container kubepods-besteffort-pod6cc949b3_f508_4d68_a13b_27189884c607.slice. 
Mar 17 17:39:18.917187 containerd[1477]: time="2025-03-17T17:39:18.917021509Z" level=info msg="RemoveContainer for \"fed901f589d61448bda6a9e4956c65b40c66814acddc373e89275d9a93db5d1c\" returns successfully" Mar 17 17:39:18.917514 kubelet[2803]: I0317 17:39:18.917482 2803 scope.go:117] "RemoveContainer" containerID="fed901f589d61448bda6a9e4956c65b40c66814acddc373e89275d9a93db5d1c" Mar 17 17:39:18.918048 containerd[1477]: time="2025-03-17T17:39:18.917999797Z" level=error msg="ContainerStatus for \"fed901f589d61448bda6a9e4956c65b40c66814acddc373e89275d9a93db5d1c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fed901f589d61448bda6a9e4956c65b40c66814acddc373e89275d9a93db5d1c\": not found" Mar 17 17:39:18.918338 kubelet[2803]: E0317 17:39:18.918312 2803 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fed901f589d61448bda6a9e4956c65b40c66814acddc373e89275d9a93db5d1c\": not found" containerID="fed901f589d61448bda6a9e4956c65b40c66814acddc373e89275d9a93db5d1c" Mar 17 17:39:18.918547 kubelet[2803]: I0317 17:39:18.918416 2803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fed901f589d61448bda6a9e4956c65b40c66814acddc373e89275d9a93db5d1c"} err="failed to get container status \"fed901f589d61448bda6a9e4956c65b40c66814acddc373e89275d9a93db5d1c\": rpc error: code = NotFound desc = an error occurred when try to find container \"fed901f589d61448bda6a9e4956c65b40c66814acddc373e89275d9a93db5d1c\": not found" Mar 17 17:39:18.920011 kubelet[2803]: I0317 17:39:18.919985 2803 scope.go:117] "RemoveContainer" containerID="c1dbd6bcac4cdc75c4deac449b9ca003fbfc9af3a33c375e451ce8eaf357032f" Mar 17 17:39:18.922943 containerd[1477]: time="2025-03-17T17:39:18.922821839Z" level=info msg="RemoveContainer for \"c1dbd6bcac4cdc75c4deac449b9ca003fbfc9af3a33c375e451ce8eaf357032f\"" Mar 17 
17:39:18.928680 containerd[1477]: time="2025-03-17T17:39:18.928629929Z" level=info msg="RemoveContainer for \"c1dbd6bcac4cdc75c4deac449b9ca003fbfc9af3a33c375e451ce8eaf357032f\" returns successfully" Mar 17 17:39:18.929309 kubelet[2803]: I0317 17:39:18.928876 2803 scope.go:117] "RemoveContainer" containerID="684711e0fdda1955105a873db470315ec5bb925da703cac06736b42712347624" Mar 17 17:39:18.931041 systemd[1]: Removed slice kubepods-burstable-pod69be127a_3bf0_4e81_87b1_ecb88934a4bc.slice - libcontainer container kubepods-burstable-pod69be127a_3bf0_4e81_87b1_ecb88934a4bc.slice. Mar 17 17:39:18.931319 systemd[1]: kubepods-burstable-pod69be127a_3bf0_4e81_87b1_ecb88934a4bc.slice: Consumed 7.674s CPU time. Mar 17 17:39:18.935004 containerd[1477]: time="2025-03-17T17:39:18.934689061Z" level=info msg="RemoveContainer for \"684711e0fdda1955105a873db470315ec5bb925da703cac06736b42712347624\"" Mar 17 17:39:18.940186 containerd[1477]: time="2025-03-17T17:39:18.940135988Z" level=info msg="RemoveContainer for \"684711e0fdda1955105a873db470315ec5bb925da703cac06736b42712347624\" returns successfully" Mar 17 17:39:18.940747 kubelet[2803]: I0317 17:39:18.940612 2803 scope.go:117] "RemoveContainer" containerID="ca2a094446402186b898d6a7e7bc9434aaf707cb398cce7fb5cb1bd199b28702" Mar 17 17:39:18.944692 containerd[1477]: time="2025-03-17T17:39:18.943877701Z" level=info msg="RemoveContainer for \"ca2a094446402186b898d6a7e7bc9434aaf707cb398cce7fb5cb1bd199b28702\"" Mar 17 17:39:18.947788 containerd[1477]: time="2025-03-17T17:39:18.947751734Z" level=info msg="RemoveContainer for \"ca2a094446402186b898d6a7e7bc9434aaf707cb398cce7fb5cb1bd199b28702\" returns successfully" Mar 17 17:39:18.948184 kubelet[2803]: I0317 17:39:18.948134 2803 scope.go:117] "RemoveContainer" containerID="71409b3a62d7901b63df93358b6350e3681a4c4a063587a8ab441100f1c7fa98" Mar 17 17:39:18.952089 containerd[1477]: time="2025-03-17T17:39:18.951995891Z" level=info msg="RemoveContainer for 
\"71409b3a62d7901b63df93358b6350e3681a4c4a063587a8ab441100f1c7fa98\"" Mar 17 17:39:18.957389 containerd[1477]: time="2025-03-17T17:39:18.957341337Z" level=info msg="RemoveContainer for \"71409b3a62d7901b63df93358b6350e3681a4c4a063587a8ab441100f1c7fa98\" returns successfully" Mar 17 17:39:18.957678 kubelet[2803]: I0317 17:39:18.957649 2803 scope.go:117] "RemoveContainer" containerID="04e9441958db962cd90c967769c854d0a04c9750517932dba9b700a8b22ddad7" Mar 17 17:39:18.959470 containerd[1477]: time="2025-03-17T17:39:18.959440195Z" level=info msg="RemoveContainer for \"04e9441958db962cd90c967769c854d0a04c9750517932dba9b700a8b22ddad7\"" Mar 17 17:39:18.962874 containerd[1477]: time="2025-03-17T17:39:18.962845824Z" level=info msg="RemoveContainer for \"04e9441958db962cd90c967769c854d0a04c9750517932dba9b700a8b22ddad7\" returns successfully" Mar 17 17:39:18.963087 kubelet[2803]: I0317 17:39:18.963054 2803 scope.go:117] "RemoveContainer" containerID="c1dbd6bcac4cdc75c4deac449b9ca003fbfc9af3a33c375e451ce8eaf357032f" Mar 17 17:39:18.963389 containerd[1477]: time="2025-03-17T17:39:18.963359229Z" level=error msg="ContainerStatus for \"c1dbd6bcac4cdc75c4deac449b9ca003fbfc9af3a33c375e451ce8eaf357032f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c1dbd6bcac4cdc75c4deac449b9ca003fbfc9af3a33c375e451ce8eaf357032f\": not found" Mar 17 17:39:18.963560 kubelet[2803]: E0317 17:39:18.963485 2803 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c1dbd6bcac4cdc75c4deac449b9ca003fbfc9af3a33c375e451ce8eaf357032f\": not found" containerID="c1dbd6bcac4cdc75c4deac449b9ca003fbfc9af3a33c375e451ce8eaf357032f" Mar 17 17:39:18.963560 kubelet[2803]: I0317 17:39:18.963517 2803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c1dbd6bcac4cdc75c4deac449b9ca003fbfc9af3a33c375e451ce8eaf357032f"} 
err="failed to get container status \"c1dbd6bcac4cdc75c4deac449b9ca003fbfc9af3a33c375e451ce8eaf357032f\": rpc error: code = NotFound desc = an error occurred when try to find container \"c1dbd6bcac4cdc75c4deac449b9ca003fbfc9af3a33c375e451ce8eaf357032f\": not found" Mar 17 17:39:18.963560 kubelet[2803]: I0317 17:39:18.963537 2803 scope.go:117] "RemoveContainer" containerID="684711e0fdda1955105a873db470315ec5bb925da703cac06736b42712347624" Mar 17 17:39:18.964044 containerd[1477]: time="2025-03-17T17:39:18.963802753Z" level=error msg="ContainerStatus for \"684711e0fdda1955105a873db470315ec5bb925da703cac06736b42712347624\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"684711e0fdda1955105a873db470315ec5bb925da703cac06736b42712347624\": not found" Mar 17 17:39:18.964106 kubelet[2803]: E0317 17:39:18.963936 2803 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"684711e0fdda1955105a873db470315ec5bb925da703cac06736b42712347624\": not found" containerID="684711e0fdda1955105a873db470315ec5bb925da703cac06736b42712347624" Mar 17 17:39:18.964106 kubelet[2803]: I0317 17:39:18.963958 2803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"684711e0fdda1955105a873db470315ec5bb925da703cac06736b42712347624"} err="failed to get container status \"684711e0fdda1955105a873db470315ec5bb925da703cac06736b42712347624\": rpc error: code = NotFound desc = an error occurred when try to find container \"684711e0fdda1955105a873db470315ec5bb925da703cac06736b42712347624\": not found" Mar 17 17:39:18.964106 kubelet[2803]: I0317 17:39:18.963974 2803 scope.go:117] "RemoveContainer" containerID="ca2a094446402186b898d6a7e7bc9434aaf707cb398cce7fb5cb1bd199b28702" Mar 17 17:39:18.964343 containerd[1477]: time="2025-03-17T17:39:18.964275437Z" level=error msg="ContainerStatus for 
\"ca2a094446402186b898d6a7e7bc9434aaf707cb398cce7fb5cb1bd199b28702\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca2a094446402186b898d6a7e7bc9434aaf707cb398cce7fb5cb1bd199b28702\": not found" Mar 17 17:39:18.964424 kubelet[2803]: E0317 17:39:18.964396 2803 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca2a094446402186b898d6a7e7bc9434aaf707cb398cce7fb5cb1bd199b28702\": not found" containerID="ca2a094446402186b898d6a7e7bc9434aaf707cb398cce7fb5cb1bd199b28702" Mar 17 17:39:18.964424 kubelet[2803]: I0317 17:39:18.964418 2803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca2a094446402186b898d6a7e7bc9434aaf707cb398cce7fb5cb1bd199b28702"} err="failed to get container status \"ca2a094446402186b898d6a7e7bc9434aaf707cb398cce7fb5cb1bd199b28702\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca2a094446402186b898d6a7e7bc9434aaf707cb398cce7fb5cb1bd199b28702\": not found" Mar 17 17:39:18.964634 kubelet[2803]: I0317 17:39:18.964435 2803 scope.go:117] "RemoveContainer" containerID="71409b3a62d7901b63df93358b6350e3681a4c4a063587a8ab441100f1c7fa98" Mar 17 17:39:18.964828 containerd[1477]: time="2025-03-17T17:39:18.964749081Z" level=error msg="ContainerStatus for \"71409b3a62d7901b63df93358b6350e3681a4c4a063587a8ab441100f1c7fa98\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"71409b3a62d7901b63df93358b6350e3681a4c4a063587a8ab441100f1c7fa98\": not found" Mar 17 17:39:18.965056 kubelet[2803]: E0317 17:39:18.964952 2803 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"71409b3a62d7901b63df93358b6350e3681a4c4a063587a8ab441100f1c7fa98\": not found" 
containerID="71409b3a62d7901b63df93358b6350e3681a4c4a063587a8ab441100f1c7fa98" Mar 17 17:39:18.965056 kubelet[2803]: I0317 17:39:18.964976 2803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"71409b3a62d7901b63df93358b6350e3681a4c4a063587a8ab441100f1c7fa98"} err="failed to get container status \"71409b3a62d7901b63df93358b6350e3681a4c4a063587a8ab441100f1c7fa98\": rpc error: code = NotFound desc = an error occurred when try to find container \"71409b3a62d7901b63df93358b6350e3681a4c4a063587a8ab441100f1c7fa98\": not found" Mar 17 17:39:18.965056 kubelet[2803]: I0317 17:39:18.964992 2803 scope.go:117] "RemoveContainer" containerID="04e9441958db962cd90c967769c854d0a04c9750517932dba9b700a8b22ddad7" Mar 17 17:39:18.965333 containerd[1477]: time="2025-03-17T17:39:18.965123804Z" level=error msg="ContainerStatus for \"04e9441958db962cd90c967769c854d0a04c9750517932dba9b700a8b22ddad7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"04e9441958db962cd90c967769c854d0a04c9750517932dba9b700a8b22ddad7\": not found" Mar 17 17:39:18.965548 kubelet[2803]: E0317 17:39:18.965425 2803 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"04e9441958db962cd90c967769c854d0a04c9750517932dba9b700a8b22ddad7\": not found" containerID="04e9441958db962cd90c967769c854d0a04c9750517932dba9b700a8b22ddad7" Mar 17 17:39:18.965548 kubelet[2803]: I0317 17:39:18.965515 2803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"04e9441958db962cd90c967769c854d0a04c9750517932dba9b700a8b22ddad7"} err="failed to get container status \"04e9441958db962cd90c967769c854d0a04c9750517932dba9b700a8b22ddad7\": rpc error: code = NotFound desc = an error occurred when try to find container \"04e9441958db962cd90c967769c854d0a04c9750517932dba9b700a8b22ddad7\": not found" Mar 17 
17:39:19.104831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1daedfdcc3423ab44f23d6b75f80855c8f89a8c4efe13e9332d69f3caeca7ae6-rootfs.mount: Deactivated successfully. Mar 17 17:39:19.105621 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9634b2280513f647d9cceacf18f01ebe6be75683716e46276fb5df5b7405c92b-rootfs.mount: Deactivated successfully. Mar 17 17:39:19.105694 systemd[1]: var-lib-kubelet-pods-6cc949b3\x2df508\x2d4d68\x2da13b\x2d27189884c607-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlv6x4.mount: Deactivated successfully. Mar 17 17:39:19.105758 systemd[1]: var-lib-kubelet-pods-69be127a\x2d3bf0\x2d4e81\x2d87b1\x2decb88934a4bc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d95xmk.mount: Deactivated successfully. Mar 17 17:39:19.105815 systemd[1]: var-lib-kubelet-pods-69be127a\x2d3bf0\x2d4e81\x2d87b1\x2decb88934a4bc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 17:39:19.105870 systemd[1]: var-lib-kubelet-pods-69be127a\x2d3bf0\x2d4e81\x2d87b1\x2decb88934a4bc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 17:39:19.958298 kubelet[2803]: I0317 17:39:19.957476 2803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69be127a-3bf0-4e81-87b1-ecb88934a4bc" path="/var/lib/kubelet/pods/69be127a-3bf0-4e81-87b1-ecb88934a4bc/volumes" Mar 17 17:39:19.958298 kubelet[2803]: I0317 17:39:19.958001 2803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cc949b3-f508-4d68-a13b-27189884c607" path="/var/lib/kubelet/pods/6cc949b3-f508-4d68-a13b-27189884c607/volumes" Mar 17 17:39:20.206657 sshd[4387]: Connection closed by 139.178.89.65 port 43826 Mar 17 17:39:20.207957 sshd-session[4385]: pam_unix(sshd:session): session closed for user core Mar 17 17:39:20.212499 systemd[1]: sshd@19-138.199.148.212:22-139.178.89.65:43826.service: Deactivated successfully. 
Mar 17 17:39:20.214692 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 17:39:20.214863 systemd[1]: session-20.scope: Consumed 1.032s CPU time. Mar 17 17:39:20.216861 systemd-logind[1457]: Session 20 logged out. Waiting for processes to exit. Mar 17 17:39:20.218425 systemd-logind[1457]: Removed session 20. Mar 17 17:39:20.387680 systemd[1]: Started sshd@20-138.199.148.212:22-139.178.89.65:43830.service - OpenSSH per-connection server daemon (139.178.89.65:43830). Mar 17 17:39:21.145421 kubelet[2803]: E0317 17:39:21.145361 2803 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 17:39:21.375562 sshd[4550]: Accepted publickey for core from 139.178.89.65 port 43830 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug Mar 17 17:39:21.377651 sshd-session[4550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:39:21.383740 systemd-logind[1457]: New session 21 of user core. Mar 17 17:39:21.388347 systemd[1]: Started session-21.scope - Session 21 of User core. 
Mar 17 17:39:23.151364 kubelet[2803]: I0317 17:39:23.150935 2803 setters.go:580] "Node became not ready" node="ci-4152-2-2-0-5dd1d5cf3a" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T17:39:23Z","lastTransitionTime":"2025-03-17T17:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 17 17:39:23.749440 kubelet[2803]: I0317 17:39:23.747723 2803 topology_manager.go:215] "Topology Admit Handler" podUID="63429518-5e2a-4129-bf44-65632dc437aa" podNamespace="kube-system" podName="cilium-s8ct4" Mar 17 17:39:23.749440 kubelet[2803]: E0317 17:39:23.747796 2803 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69be127a-3bf0-4e81-87b1-ecb88934a4bc" containerName="cilium-agent" Mar 17 17:39:23.749440 kubelet[2803]: E0317 17:39:23.747807 2803 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69be127a-3bf0-4e81-87b1-ecb88934a4bc" containerName="mount-cgroup" Mar 17 17:39:23.749440 kubelet[2803]: E0317 17:39:23.747814 2803 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69be127a-3bf0-4e81-87b1-ecb88934a4bc" containerName="apply-sysctl-overwrites" Mar 17 17:39:23.749440 kubelet[2803]: E0317 17:39:23.747819 2803 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69be127a-3bf0-4e81-87b1-ecb88934a4bc" containerName="mount-bpf-fs" Mar 17 17:39:23.749440 kubelet[2803]: E0317 17:39:23.747825 2803 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6cc949b3-f508-4d68-a13b-27189884c607" containerName="cilium-operator" Mar 17 17:39:23.749440 kubelet[2803]: E0317 17:39:23.747833 2803 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69be127a-3bf0-4e81-87b1-ecb88934a4bc" containerName="clean-cilium-state" Mar 17 17:39:23.749440 kubelet[2803]: I0317 17:39:23.747853 2803 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="69be127a-3bf0-4e81-87b1-ecb88934a4bc" containerName="cilium-agent" Mar 17 17:39:23.749440 kubelet[2803]: I0317 17:39:23.747859 2803 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cc949b3-f508-4d68-a13b-27189884c607" containerName="cilium-operator" Mar 17 17:39:23.757970 systemd[1]: Created slice kubepods-burstable-pod63429518_5e2a_4129_bf44_65632dc437aa.slice - libcontainer container kubepods-burstable-pod63429518_5e2a_4129_bf44_65632dc437aa.slice. Mar 17 17:39:23.765803 kubelet[2803]: W0317 17:39:23.765285 2803 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4152-2-2-0-5dd1d5cf3a" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-2-0-5dd1d5cf3a' and this object Mar 17 17:39:23.765803 kubelet[2803]: E0317 17:39:23.765341 2803 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4152-2-2-0-5dd1d5cf3a" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-2-0-5dd1d5cf3a' and this object Mar 17 17:39:23.765803 kubelet[2803]: W0317 17:39:23.765434 2803 reflector.go:547] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4152-2-2-0-5dd1d5cf3a" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-2-0-5dd1d5cf3a' and this object Mar 17 17:39:23.765803 kubelet[2803]: E0317 17:39:23.765446 2803 reflector.go:150] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4152-2-2-0-5dd1d5cf3a" cannot list resource 
"secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-2-0-5dd1d5cf3a' and this object Mar 17 17:39:23.765803 kubelet[2803]: W0317 17:39:23.765479 2803 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4152-2-2-0-5dd1d5cf3a" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-2-0-5dd1d5cf3a' and this object Mar 17 17:39:23.766032 kubelet[2803]: E0317 17:39:23.765490 2803 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4152-2-2-0-5dd1d5cf3a" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-2-0-5dd1d5cf3a' and this object Mar 17 17:39:23.766032 kubelet[2803]: W0317 17:39:23.765518 2803 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4152-2-2-0-5dd1d5cf3a" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-2-0-5dd1d5cf3a' and this object Mar 17 17:39:23.766032 kubelet[2803]: E0317 17:39:23.765527 2803 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4152-2-2-0-5dd1d5cf3a" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-2-0-5dd1d5cf3a' and this object Mar 17 17:39:23.789592 kubelet[2803]: I0317 17:39:23.789519 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/63429518-5e2a-4129-bf44-65632dc437aa-cilium-cgroup\") pod \"cilium-s8ct4\" (UID: \"63429518-5e2a-4129-bf44-65632dc437aa\") " pod="kube-system/cilium-s8ct4" Mar 17 17:39:23.789592 kubelet[2803]: I0317 17:39:23.789567 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/63429518-5e2a-4129-bf44-65632dc437aa-cilium-config-path\") pod \"cilium-s8ct4\" (UID: \"63429518-5e2a-4129-bf44-65632dc437aa\") " pod="kube-system/cilium-s8ct4" Mar 17 17:39:23.789592 kubelet[2803]: I0317 17:39:23.789587 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/63429518-5e2a-4129-bf44-65632dc437aa-host-proc-sys-net\") pod \"cilium-s8ct4\" (UID: \"63429518-5e2a-4129-bf44-65632dc437aa\") " pod="kube-system/cilium-s8ct4" Mar 17 17:39:23.789592 kubelet[2803]: I0317 17:39:23.789602 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/63429518-5e2a-4129-bf44-65632dc437aa-bpf-maps\") pod \"cilium-s8ct4\" (UID: \"63429518-5e2a-4129-bf44-65632dc437aa\") " pod="kube-system/cilium-s8ct4" Mar 17 17:39:23.789592 kubelet[2803]: I0317 17:39:23.789618 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/63429518-5e2a-4129-bf44-65632dc437aa-cni-path\") pod \"cilium-s8ct4\" (UID: \"63429518-5e2a-4129-bf44-65632dc437aa\") " pod="kube-system/cilium-s8ct4" Mar 17 17:39:23.789592 kubelet[2803]: I0317 17:39:23.789634 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/63429518-5e2a-4129-bf44-65632dc437aa-xtables-lock\") pod \"cilium-s8ct4\" (UID: 
\"63429518-5e2a-4129-bf44-65632dc437aa\") " pod="kube-system/cilium-s8ct4" Mar 17 17:39:23.790430 kubelet[2803]: I0317 17:39:23.789649 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/63429518-5e2a-4129-bf44-65632dc437aa-hostproc\") pod \"cilium-s8ct4\" (UID: \"63429518-5e2a-4129-bf44-65632dc437aa\") " pod="kube-system/cilium-s8ct4" Mar 17 17:39:23.790430 kubelet[2803]: I0317 17:39:23.789667 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63429518-5e2a-4129-bf44-65632dc437aa-lib-modules\") pod \"cilium-s8ct4\" (UID: \"63429518-5e2a-4129-bf44-65632dc437aa\") " pod="kube-system/cilium-s8ct4" Mar 17 17:39:23.790430 kubelet[2803]: I0317 17:39:23.789700 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/63429518-5e2a-4129-bf44-65632dc437aa-clustermesh-secrets\") pod \"cilium-s8ct4\" (UID: \"63429518-5e2a-4129-bf44-65632dc437aa\") " pod="kube-system/cilium-s8ct4" Mar 17 17:39:23.790430 kubelet[2803]: I0317 17:39:23.789803 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/63429518-5e2a-4129-bf44-65632dc437aa-cilium-run\") pod \"cilium-s8ct4\" (UID: \"63429518-5e2a-4129-bf44-65632dc437aa\") " pod="kube-system/cilium-s8ct4" Mar 17 17:39:23.790430 kubelet[2803]: I0317 17:39:23.789845 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/63429518-5e2a-4129-bf44-65632dc437aa-etc-cni-netd\") pod \"cilium-s8ct4\" (UID: \"63429518-5e2a-4129-bf44-65632dc437aa\") " pod="kube-system/cilium-s8ct4" Mar 17 17:39:23.790430 kubelet[2803]: I0317 17:39:23.789876 2803 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/63429518-5e2a-4129-bf44-65632dc437aa-hubble-tls\") pod \"cilium-s8ct4\" (UID: \"63429518-5e2a-4129-bf44-65632dc437aa\") " pod="kube-system/cilium-s8ct4" Mar 17 17:39:23.790657 kubelet[2803]: I0317 17:39:23.789901 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwdtt\" (UniqueName: \"kubernetes.io/projected/63429518-5e2a-4129-bf44-65632dc437aa-kube-api-access-bwdtt\") pod \"cilium-s8ct4\" (UID: \"63429518-5e2a-4129-bf44-65632dc437aa\") " pod="kube-system/cilium-s8ct4" Mar 17 17:39:23.790657 kubelet[2803]: I0317 17:39:23.789928 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/63429518-5e2a-4129-bf44-65632dc437aa-cilium-ipsec-secrets\") pod \"cilium-s8ct4\" (UID: \"63429518-5e2a-4129-bf44-65632dc437aa\") " pod="kube-system/cilium-s8ct4" Mar 17 17:39:23.790657 kubelet[2803]: I0317 17:39:23.789951 2803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/63429518-5e2a-4129-bf44-65632dc437aa-host-proc-sys-kernel\") pod \"cilium-s8ct4\" (UID: \"63429518-5e2a-4129-bf44-65632dc437aa\") " pod="kube-system/cilium-s8ct4" Mar 17 17:39:23.933015 sshd[4552]: Connection closed by 139.178.89.65 port 43830 Mar 17 17:39:23.934318 sshd-session[4550]: pam_unix(sshd:session): session closed for user core Mar 17 17:39:23.940239 systemd[1]: sshd@20-138.199.148.212:22-139.178.89.65:43830.service: Deactivated successfully. Mar 17 17:39:23.943123 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 17:39:23.943779 systemd[1]: session-21.scope: Consumed 1.752s CPU time. Mar 17 17:39:23.948230 systemd-logind[1457]: Session 21 logged out. 
Waiting for processes to exit. Mar 17 17:39:23.950879 systemd-logind[1457]: Removed session 21. Mar 17 17:39:24.116795 systemd[1]: Started sshd@21-138.199.148.212:22-139.178.89.65:58658.service - OpenSSH per-connection server daemon (139.178.89.65:58658). Mar 17 17:39:24.894297 kubelet[2803]: E0317 17:39:24.892767 2803 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Mar 17 17:39:24.894297 kubelet[2803]: E0317 17:39:24.892866 2803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63429518-5e2a-4129-bf44-65632dc437aa-clustermesh-secrets podName:63429518-5e2a-4129-bf44-65632dc437aa nodeName:}" failed. No retries permitted until 2025-03-17 17:39:25.392841129 +0000 UTC m=+349.528456376 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/63429518-5e2a-4129-bf44-65632dc437aa-clustermesh-secrets") pod "cilium-s8ct4" (UID: "63429518-5e2a-4129-bf44-65632dc437aa") : failed to sync secret cache: timed out waiting for the condition Mar 17 17:39:24.894297 kubelet[2803]: E0317 17:39:24.893797 2803 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Mar 17 17:39:24.894297 kubelet[2803]: E0317 17:39:24.893870 2803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/63429518-5e2a-4129-bf44-65632dc437aa-cilium-config-path podName:63429518-5e2a-4129-bf44-65632dc437aa nodeName:}" failed. No retries permitted until 2025-03-17 17:39:25.393855018 +0000 UTC m=+349.529470265 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/63429518-5e2a-4129-bf44-65632dc437aa-cilium-config-path") pod "cilium-s8ct4" (UID: "63429518-5e2a-4129-bf44-65632dc437aa") : failed to sync configmap cache: timed out waiting for the condition Mar 17 17:39:25.111186 sshd[4565]: Accepted publickey for core from 139.178.89.65 port 58658 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug Mar 17 17:39:25.113873 sshd-session[4565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:39:25.120506 systemd-logind[1457]: New session 22 of user core. Mar 17 17:39:25.125472 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 17 17:39:25.561916 containerd[1477]: time="2025-03-17T17:39:25.561816377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s8ct4,Uid:63429518-5e2a-4129-bf44-65632dc437aa,Namespace:kube-system,Attempt:0,}" Mar 17 17:39:25.586814 containerd[1477]: time="2025-03-17T17:39:25.586283668Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:39:25.586814 containerd[1477]: time="2025-03-17T17:39:25.586342468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:39:25.586814 containerd[1477]: time="2025-03-17T17:39:25.586359428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:25.586814 containerd[1477]: time="2025-03-17T17:39:25.586460149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:39:25.609422 systemd[1]: Started cri-containerd-fb87faad21fcf08b8068cd03a74ac4ccd58bba4322c78b4a54750b5678cac5b9.scope - libcontainer container fb87faad21fcf08b8068cd03a74ac4ccd58bba4322c78b4a54750b5678cac5b9. Mar 17 17:39:25.643867 containerd[1477]: time="2025-03-17T17:39:25.643825004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s8ct4,Uid:63429518-5e2a-4129-bf44-65632dc437aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb87faad21fcf08b8068cd03a74ac4ccd58bba4322c78b4a54750b5678cac5b9\"" Mar 17 17:39:25.648630 containerd[1477]: time="2025-03-17T17:39:25.648513044Z" level=info msg="CreateContainer within sandbox \"fb87faad21fcf08b8068cd03a74ac4ccd58bba4322c78b4a54750b5678cac5b9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 17:39:25.661855 containerd[1477]: time="2025-03-17T17:39:25.661803759Z" level=info msg="CreateContainer within sandbox \"fb87faad21fcf08b8068cd03a74ac4ccd58bba4322c78b4a54750b5678cac5b9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ee600ad2c4a7f1f9732dd83a09bb96ba93ce5fb0e76ca021598a8c5430fe479d\"" Mar 17 17:39:25.664188 containerd[1477]: time="2025-03-17T17:39:25.662747607Z" level=info msg="StartContainer for \"ee600ad2c4a7f1f9732dd83a09bb96ba93ce5fb0e76ca021598a8c5430fe479d\"" Mar 17 17:39:25.687504 systemd[1]: Started cri-containerd-ee600ad2c4a7f1f9732dd83a09bb96ba93ce5fb0e76ca021598a8c5430fe479d.scope - libcontainer container ee600ad2c4a7f1f9732dd83a09bb96ba93ce5fb0e76ca021598a8c5430fe479d. Mar 17 17:39:25.718972 containerd[1477]: time="2025-03-17T17:39:25.718909811Z" level=info msg="StartContainer for \"ee600ad2c4a7f1f9732dd83a09bb96ba93ce5fb0e76ca021598a8c5430fe479d\" returns successfully" Mar 17 17:39:25.730563 systemd[1]: cri-containerd-ee600ad2c4a7f1f9732dd83a09bb96ba93ce5fb0e76ca021598a8c5430fe479d.scope: Deactivated successfully. 
Mar 17 17:39:25.765750 containerd[1477]: time="2025-03-17T17:39:25.765666334Z" level=info msg="shim disconnected" id=ee600ad2c4a7f1f9732dd83a09bb96ba93ce5fb0e76ca021598a8c5430fe479d namespace=k8s.io Mar 17 17:39:25.765750 containerd[1477]: time="2025-03-17T17:39:25.765736695Z" level=warning msg="cleaning up after shim disconnected" id=ee600ad2c4a7f1f9732dd83a09bb96ba93ce5fb0e76ca021598a8c5430fe479d namespace=k8s.io Mar 17 17:39:25.765750 containerd[1477]: time="2025-03-17T17:39:25.765750535Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:39:25.798759 sshd[4569]: Connection closed by 139.178.89.65 port 58658 Mar 17 17:39:25.801166 sshd-session[4565]: pam_unix(sshd:session): session closed for user core Mar 17 17:39:25.804933 systemd[1]: sshd@21-138.199.148.212:22-139.178.89.65:58658.service: Deactivated successfully. Mar 17 17:39:25.808710 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 17:39:25.809547 systemd-logind[1457]: Session 22 logged out. Waiting for processes to exit. Mar 17 17:39:25.812235 systemd-logind[1457]: Removed session 22. 
Mar 17 17:39:25.947093 containerd[1477]: time="2025-03-17T17:39:25.946694535Z" level=info msg="CreateContainer within sandbox \"fb87faad21fcf08b8068cd03a74ac4ccd58bba4322c78b4a54750b5678cac5b9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 17:39:25.960934 containerd[1477]: time="2025-03-17T17:39:25.960837057Z" level=info msg="CreateContainer within sandbox \"fb87faad21fcf08b8068cd03a74ac4ccd58bba4322c78b4a54750b5678cac5b9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8b4acbdf622cc64c3e7f961ddfd2ac08b6ba22427b066b96215e3c914db7cb58\"" Mar 17 17:39:25.962213 containerd[1477]: time="2025-03-17T17:39:25.961621584Z" level=info msg="StartContainer for \"8b4acbdf622cc64c3e7f961ddfd2ac08b6ba22427b066b96215e3c914db7cb58\"" Mar 17 17:39:25.981578 systemd[1]: Started sshd@22-138.199.148.212:22-139.178.89.65:58674.service - OpenSSH per-connection server daemon (139.178.89.65:58674). Mar 17 17:39:25.993374 systemd[1]: Started cri-containerd-8b4acbdf622cc64c3e7f961ddfd2ac08b6ba22427b066b96215e3c914db7cb58.scope - libcontainer container 8b4acbdf622cc64c3e7f961ddfd2ac08b6ba22427b066b96215e3c914db7cb58. Mar 17 17:39:26.022593 containerd[1477]: time="2025-03-17T17:39:26.022544949Z" level=info msg="StartContainer for \"8b4acbdf622cc64c3e7f961ddfd2ac08b6ba22427b066b96215e3c914db7cb58\" returns successfully" Mar 17 17:39:26.029582 systemd[1]: cri-containerd-8b4acbdf622cc64c3e7f961ddfd2ac08b6ba22427b066b96215e3c914db7cb58.scope: Deactivated successfully. 
Mar 17 17:39:26.050808 containerd[1477]: time="2025-03-17T17:39:26.050740392Z" level=info msg="shim disconnected" id=8b4acbdf622cc64c3e7f961ddfd2ac08b6ba22427b066b96215e3c914db7cb58 namespace=k8s.io Mar 17 17:39:26.050808 containerd[1477]: time="2025-03-17T17:39:26.050806832Z" level=warning msg="cleaning up after shim disconnected" id=8b4acbdf622cc64c3e7f961ddfd2ac08b6ba22427b066b96215e3c914db7cb58 namespace=k8s.io Mar 17 17:39:26.051173 containerd[1477]: time="2025-03-17T17:39:26.050820913Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:39:26.147834 kubelet[2803]: E0317 17:39:26.147706 2803 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 17:39:26.951995 containerd[1477]: time="2025-03-17T17:39:26.951306155Z" level=info msg="CreateContainer within sandbox \"fb87faad21fcf08b8068cd03a74ac4ccd58bba4322c78b4a54750b5678cac5b9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 17:39:26.972971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2342132275.mount: Deactivated successfully. 
Mar 17 17:39:26.976171 containerd[1477]: time="2025-03-17T17:39:26.975973008Z" level=info msg="CreateContainer within sandbox \"fb87faad21fcf08b8068cd03a74ac4ccd58bba4322c78b4a54750b5678cac5b9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3075e5f153f07e22b489c2613cf537b9aa292f4e3de7866ea198ac36602558f0\"" Mar 17 17:39:26.976729 containerd[1477]: time="2025-03-17T17:39:26.976679934Z" level=info msg="StartContainer for \"3075e5f153f07e22b489c2613cf537b9aa292f4e3de7866ea198ac36602558f0\"" Mar 17 17:39:26.982645 sshd[4684]: Accepted publickey for core from 139.178.89.65 port 58674 ssh2: RSA SHA256:yEfyMaTgIJmxetknx1adjW4XZ6N3FKubniO5Q8/A/ug Mar 17 17:39:26.986822 sshd-session[4684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:39:26.996452 systemd-logind[1457]: New session 23 of user core. Mar 17 17:39:27.004485 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 17 17:39:27.026351 systemd[1]: Started cri-containerd-3075e5f153f07e22b489c2613cf537b9aa292f4e3de7866ea198ac36602558f0.scope - libcontainer container 3075e5f153f07e22b489c2613cf537b9aa292f4e3de7866ea198ac36602558f0. Mar 17 17:39:27.067738 containerd[1477]: time="2025-03-17T17:39:27.067686999Z" level=info msg="StartContainer for \"3075e5f153f07e22b489c2613cf537b9aa292f4e3de7866ea198ac36602558f0\" returns successfully" Mar 17 17:39:27.068537 systemd[1]: cri-containerd-3075e5f153f07e22b489c2613cf537b9aa292f4e3de7866ea198ac36602558f0.scope: Deactivated successfully. 
Mar 17 17:39:27.095879 containerd[1477]: time="2025-03-17T17:39:27.095780601Z" level=info msg="shim disconnected" id=3075e5f153f07e22b489c2613cf537b9aa292f4e3de7866ea198ac36602558f0 namespace=k8s.io Mar 17 17:39:27.095879 containerd[1477]: time="2025-03-17T17:39:27.095849801Z" level=warning msg="cleaning up after shim disconnected" id=3075e5f153f07e22b489c2613cf537b9aa292f4e3de7866ea198ac36602558f0 namespace=k8s.io Mar 17 17:39:27.095879 containerd[1477]: time="2025-03-17T17:39:27.095858081Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:39:27.411062 systemd[1]: run-containerd-runc-k8s.io-3075e5f153f07e22b489c2613cf537b9aa292f4e3de7866ea198ac36602558f0-runc.JdV10g.mount: Deactivated successfully. Mar 17 17:39:27.411317 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3075e5f153f07e22b489c2613cf537b9aa292f4e3de7866ea198ac36602558f0-rootfs.mount: Deactivated successfully. Mar 17 17:39:27.953080 kubelet[2803]: E0317 17:39:27.952385 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-d97mm" podUID="708a7941-6c20-4e03-91ae-2fc1a2f1b02c" Mar 17 17:39:27.960363 containerd[1477]: time="2025-03-17T17:39:27.960117331Z" level=info msg="CreateContainer within sandbox \"fb87faad21fcf08b8068cd03a74ac4ccd58bba4322c78b4a54750b5678cac5b9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 17:39:27.974816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2031972895.mount: Deactivated successfully. 
Mar 17 17:39:27.978309 containerd[1477]: time="2025-03-17T17:39:27.977918805Z" level=info msg="CreateContainer within sandbox \"fb87faad21fcf08b8068cd03a74ac4ccd58bba4322c78b4a54750b5678cac5b9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ca5e270787600aad252b330742099a8c4dd08201be8c9cf7913d58a5de70f025\"" Mar 17 17:39:27.979272 containerd[1477]: time="2025-03-17T17:39:27.979117175Z" level=info msg="StartContainer for \"ca5e270787600aad252b330742099a8c4dd08201be8c9cf7913d58a5de70f025\"" Mar 17 17:39:28.011440 systemd[1]: Started cri-containerd-ca5e270787600aad252b330742099a8c4dd08201be8c9cf7913d58a5de70f025.scope - libcontainer container ca5e270787600aad252b330742099a8c4dd08201be8c9cf7913d58a5de70f025. Mar 17 17:39:28.033500 systemd[1]: cri-containerd-ca5e270787600aad252b330742099a8c4dd08201be8c9cf7913d58a5de70f025.scope: Deactivated successfully. Mar 17 17:39:28.039949 containerd[1477]: time="2025-03-17T17:39:28.039734577Z" level=info msg="StartContainer for \"ca5e270787600aad252b330742099a8c4dd08201be8c9cf7913d58a5de70f025\" returns successfully" Mar 17 17:39:28.063754 containerd[1477]: time="2025-03-17T17:39:28.063671904Z" level=info msg="shim disconnected" id=ca5e270787600aad252b330742099a8c4dd08201be8c9cf7913d58a5de70f025 namespace=k8s.io Mar 17 17:39:28.063978 containerd[1477]: time="2025-03-17T17:39:28.063756025Z" level=warning msg="cleaning up after shim disconnected" id=ca5e270787600aad252b330742099a8c4dd08201be8c9cf7913d58a5de70f025 namespace=k8s.io Mar 17 17:39:28.063978 containerd[1477]: time="2025-03-17T17:39:28.063774345Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:39:28.410944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca5e270787600aad252b330742099a8c4dd08201be8c9cf7913d58a5de70f025-rootfs.mount: Deactivated successfully. 
Mar 17 17:39:28.963185 containerd[1477]: time="2025-03-17T17:39:28.962413930Z" level=info msg="CreateContainer within sandbox \"fb87faad21fcf08b8068cd03a74ac4ccd58bba4322c78b4a54750b5678cac5b9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 17:39:28.979955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3340858948.mount: Deactivated successfully. Mar 17 17:39:28.985102 containerd[1477]: time="2025-03-17T17:39:28.984960324Z" level=info msg="CreateContainer within sandbox \"fb87faad21fcf08b8068cd03a74ac4ccd58bba4322c78b4a54750b5678cac5b9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"899fab113709d0820129f90dbd4f5d64feb2fb3f37e5aa4513b91b09abca38bc\"" Mar 17 17:39:28.987113 containerd[1477]: time="2025-03-17T17:39:28.985686811Z" level=info msg="StartContainer for \"899fab113709d0820129f90dbd4f5d64feb2fb3f37e5aa4513b91b09abca38bc\"" Mar 17 17:39:29.014376 systemd[1]: Started cri-containerd-899fab113709d0820129f90dbd4f5d64feb2fb3f37e5aa4513b91b09abca38bc.scope - libcontainer container 899fab113709d0820129f90dbd4f5d64feb2fb3f37e5aa4513b91b09abca38bc. 
Mar 17 17:39:29.047752 containerd[1477]: time="2025-03-17T17:39:29.047680705Z" level=info msg="StartContainer for \"899fab113709d0820129f90dbd4f5d64feb2fb3f37e5aa4513b91b09abca38bc\" returns successfully" Mar 17 17:39:29.344170 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Mar 17 17:39:29.953820 kubelet[2803]: E0317 17:39:29.953533 2803 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-d97mm" podUID="708a7941-6c20-4e03-91ae-2fc1a2f1b02c" Mar 17 17:39:32.310467 systemd-networkd[1376]: lxc_health: Link UP Mar 17 17:39:32.314714 systemd-networkd[1376]: lxc_health: Gained carrier Mar 17 17:39:33.589398 kubelet[2803]: I0317 17:39:33.589016 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-s8ct4" podStartSLOduration=10.588997276 podStartE2EDuration="10.588997276s" podCreationTimestamp="2025-03-17 17:39:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:39:29.987883128 +0000 UTC m=+354.123498375" watchObservedRunningTime="2025-03-17 17:39:33.588997276 +0000 UTC m=+357.724612563" Mar 17 17:39:33.874259 systemd-networkd[1376]: lxc_health: Gained IPv6LL Mar 17 17:39:33.897449 systemd[1]: run-containerd-runc-k8s.io-899fab113709d0820129f90dbd4f5d64feb2fb3f37e5aa4513b91b09abca38bc-runc.FuPxWL.mount: Deactivated successfully. 
Mar 17 17:39:35.961751 containerd[1477]: time="2025-03-17T17:39:35.961638395Z" level=info msg="StopPodSandbox for \"9634b2280513f647d9cceacf18f01ebe6be75683716e46276fb5df5b7405c92b\"" Mar 17 17:39:35.961751 containerd[1477]: time="2025-03-17T17:39:35.961728395Z" level=info msg="TearDown network for sandbox \"9634b2280513f647d9cceacf18f01ebe6be75683716e46276fb5df5b7405c92b\" successfully" Mar 17 17:39:35.961751 containerd[1477]: time="2025-03-17T17:39:35.961738315Z" level=info msg="StopPodSandbox for \"9634b2280513f647d9cceacf18f01ebe6be75683716e46276fb5df5b7405c92b\" returns successfully" Mar 17 17:39:35.962916 containerd[1477]: time="2025-03-17T17:39:35.962390161Z" level=info msg="RemovePodSandbox for \"9634b2280513f647d9cceacf18f01ebe6be75683716e46276fb5df5b7405c92b\"" Mar 17 17:39:35.962916 containerd[1477]: time="2025-03-17T17:39:35.962436521Z" level=info msg="Forcibly stopping sandbox \"9634b2280513f647d9cceacf18f01ebe6be75683716e46276fb5df5b7405c92b\"" Mar 17 17:39:35.962916 containerd[1477]: time="2025-03-17T17:39:35.962490002Z" level=info msg="TearDown network for sandbox \"9634b2280513f647d9cceacf18f01ebe6be75683716e46276fb5df5b7405c92b\" successfully" Mar 17 17:39:35.967599 containerd[1477]: time="2025-03-17T17:39:35.966548717Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9634b2280513f647d9cceacf18f01ebe6be75683716e46276fb5df5b7405c92b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:35.967599 containerd[1477]: time="2025-03-17T17:39:35.966618077Z" level=info msg="RemovePodSandbox \"9634b2280513f647d9cceacf18f01ebe6be75683716e46276fb5df5b7405c92b\" returns successfully" Mar 17 17:39:35.968113 containerd[1477]: time="2025-03-17T17:39:35.967933929Z" level=info msg="StopPodSandbox for \"1daedfdcc3423ab44f23d6b75f80855c8f89a8c4efe13e9332d69f3caeca7ae6\"" Mar 17 17:39:35.968113 containerd[1477]: time="2025-03-17T17:39:35.968017249Z" level=info msg="TearDown network for sandbox \"1daedfdcc3423ab44f23d6b75f80855c8f89a8c4efe13e9332d69f3caeca7ae6\" successfully" Mar 17 17:39:35.968113 containerd[1477]: time="2025-03-17T17:39:35.968026530Z" level=info msg="StopPodSandbox for \"1daedfdcc3423ab44f23d6b75f80855c8f89a8c4efe13e9332d69f3caeca7ae6\" returns successfully" Mar 17 17:39:35.968553 containerd[1477]: time="2025-03-17T17:39:35.968532494Z" level=info msg="RemovePodSandbox for \"1daedfdcc3423ab44f23d6b75f80855c8f89a8c4efe13e9332d69f3caeca7ae6\"" Mar 17 17:39:35.968711 containerd[1477]: time="2025-03-17T17:39:35.968646135Z" level=info msg="Forcibly stopping sandbox \"1daedfdcc3423ab44f23d6b75f80855c8f89a8c4efe13e9332d69f3caeca7ae6\"" Mar 17 17:39:35.968774 containerd[1477]: time="2025-03-17T17:39:35.968760296Z" level=info msg="TearDown network for sandbox \"1daedfdcc3423ab44f23d6b75f80855c8f89a8c4efe13e9332d69f3caeca7ae6\" successfully" Mar 17 17:39:35.972022 containerd[1477]: time="2025-03-17T17:39:35.971970764Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1daedfdcc3423ab44f23d6b75f80855c8f89a8c4efe13e9332d69f3caeca7ae6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:39:35.972559 containerd[1477]: time="2025-03-17T17:39:35.972365247Z" level=info msg="RemovePodSandbox \"1daedfdcc3423ab44f23d6b75f80855c8f89a8c4efe13e9332d69f3caeca7ae6\" returns successfully" Mar 17 17:39:38.326425 systemd[1]: run-containerd-runc-k8s.io-899fab113709d0820129f90dbd4f5d64feb2fb3f37e5aa4513b91b09abca38bc-runc.yQHGFK.mount: Deactivated successfully. Mar 17 17:39:38.547623 sshd[4761]: Connection closed by 139.178.89.65 port 58674 Mar 17 17:39:38.548781 sshd-session[4684]: pam_unix(sshd:session): session closed for user core Mar 17 17:39:38.552987 systemd[1]: sshd@22-138.199.148.212:22-139.178.89.65:58674.service: Deactivated successfully. Mar 17 17:39:38.556713 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 17:39:38.560877 systemd-logind[1457]: Session 23 logged out. Waiting for processes to exit. Mar 17 17:39:38.561835 systemd-logind[1457]: Removed session 23. Mar 17 17:39:53.107728 systemd[1]: cri-containerd-d661fdb42fdc1ccbd40767f73ee2eb2bb8498922232319426b763f3b430376d9.scope: Deactivated successfully. Mar 17 17:39:53.108030 systemd[1]: cri-containerd-d661fdb42fdc1ccbd40767f73ee2eb2bb8498922232319426b763f3b430376d9.scope: Consumed 6.151s CPU time, 21.8M memory peak, 0B memory swap peak. Mar 17 17:39:53.132154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d661fdb42fdc1ccbd40767f73ee2eb2bb8498922232319426b763f3b430376d9-rootfs.mount: Deactivated successfully. 
Mar 17 17:39:53.143089 containerd[1477]: time="2025-03-17T17:39:53.143011787Z" level=info msg="shim disconnected" id=d661fdb42fdc1ccbd40767f73ee2eb2bb8498922232319426b763f3b430376d9 namespace=k8s.io Mar 17 17:39:53.143089 containerd[1477]: time="2025-03-17T17:39:53.143068747Z" level=warning msg="cleaning up after shim disconnected" id=d661fdb42fdc1ccbd40767f73ee2eb2bb8498922232319426b763f3b430376d9 namespace=k8s.io Mar 17 17:39:53.143089 containerd[1477]: time="2025-03-17T17:39:53.143079948Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:39:53.276776 kubelet[2803]: E0317 17:39:53.276713 2803 request.go:1116] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body) Mar 17 17:39:53.277196 kubelet[2803]: E0317 17:39:53.276783 2803 controller.go:195] "Failed to update lease" err="unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)" Mar 17 17:39:53.459496 kubelet[2803]: E0317 17:39:53.459435 2803 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:48348->10.0.0.2:2379: read: connection timed out" Mar 17 17:39:54.034995 kubelet[2803]: I0317 17:39:54.034945 2803 scope.go:117] "RemoveContainer" containerID="d661fdb42fdc1ccbd40767f73ee2eb2bb8498922232319426b763f3b430376d9" Mar 17 17:39:54.037579 containerd[1477]: time="2025-03-17T17:39:54.037546155Z" level=info msg="CreateContainer within sandbox \"2a0328a3cab34f478b846bb1c8bde04abb7c7f680e2a1e6ce86134708b7affe2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 17 17:39:54.051799 containerd[1477]: time="2025-03-17T17:39:54.051722380Z" level=info msg="CreateContainer within sandbox \"2a0328a3cab34f478b846bb1c8bde04abb7c7f680e2a1e6ce86134708b7affe2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} 
returns container id \"34c6bdda243ff5b48e1cd4205d479b16c377f73146a294b0b177ff3a39c907d3\"" Mar 17 17:39:54.052770 containerd[1477]: time="2025-03-17T17:39:54.052333265Z" level=info msg="StartContainer for \"34c6bdda243ff5b48e1cd4205d479b16c377f73146a294b0b177ff3a39c907d3\"" Mar 17 17:39:54.084403 systemd[1]: Started cri-containerd-34c6bdda243ff5b48e1cd4205d479b16c377f73146a294b0b177ff3a39c907d3.scope - libcontainer container 34c6bdda243ff5b48e1cd4205d479b16c377f73146a294b0b177ff3a39c907d3. Mar 17 17:39:54.123917 containerd[1477]: time="2025-03-17T17:39:54.123871677Z" level=info msg="StartContainer for \"34c6bdda243ff5b48e1cd4205d479b16c377f73146a294b0b177ff3a39c907d3\" returns successfully" Mar 17 17:39:55.762529 kubelet[2803]: E0317 17:39:55.762353 2803 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:48152->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4152-2-2-0-5dd1d5cf3a.182da7d9f01f9b0a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4152-2-2-0-5dd1d5cf3a,UID:21457cf4fdd4a10480acb3902f4e166e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4152-2-2-0-5dd1d5cf3a,},FirstTimestamp:2025-03-17 17:39:45.340594954 +0000 UTC m=+369.476210201,LastTimestamp:2025-03-17 17:39:45.340594954 +0000 UTC m=+369.476210201,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-2-0-5dd1d5cf3a,}" Mar 17 17:39:59.546404 systemd[1]: cri-containerd-9e97d77b59e64e9cce0a99b8e10ed0c6cdfb0288249bdccdc7d5fea70b1dfa38.scope: Deactivated successfully. 
Mar 17 17:39:59.546670 systemd[1]: cri-containerd-9e97d77b59e64e9cce0a99b8e10ed0c6cdfb0288249bdccdc7d5fea70b1dfa38.scope: Consumed 3.366s CPU time, 18.3M memory peak, 0B memory swap peak. Mar 17 17:39:59.570106 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e97d77b59e64e9cce0a99b8e10ed0c6cdfb0288249bdccdc7d5fea70b1dfa38-rootfs.mount: Deactivated successfully. Mar 17 17:39:59.575903 containerd[1477]: time="2025-03-17T17:39:59.575818131Z" level=info msg="shim disconnected" id=9e97d77b59e64e9cce0a99b8e10ed0c6cdfb0288249bdccdc7d5fea70b1dfa38 namespace=k8s.io Mar 17 17:39:59.575903 containerd[1477]: time="2025-03-17T17:39:59.575899972Z" level=warning msg="cleaning up after shim disconnected" id=9e97d77b59e64e9cce0a99b8e10ed0c6cdfb0288249bdccdc7d5fea70b1dfa38 namespace=k8s.io Mar 17 17:39:59.576517 containerd[1477]: time="2025-03-17T17:39:59.575911532Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:40:00.054483 kubelet[2803]: I0317 17:40:00.054436 2803 scope.go:117] "RemoveContainer" containerID="9e97d77b59e64e9cce0a99b8e10ed0c6cdfb0288249bdccdc7d5fea70b1dfa38" Mar 17 17:40:00.057889 containerd[1477]: time="2025-03-17T17:40:00.057836416Z" level=info msg="CreateContainer within sandbox \"4342219d82960077af97f05ec1b3a3f418c939ed8288c9486aa2251a325b23cf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Mar 17 17:40:00.071525 containerd[1477]: time="2025-03-17T17:40:00.071453357Z" level=info msg="CreateContainer within sandbox \"4342219d82960077af97f05ec1b3a3f418c939ed8288c9486aa2251a325b23cf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"001a8a8ba66195657a1fdecb9939f25c96140fa0f7ab2b0b03ab5f3641974887\"" Mar 17 17:40:00.073415 containerd[1477]: time="2025-03-17T17:40:00.072232483Z" level=info msg="StartContainer for \"001a8a8ba66195657a1fdecb9939f25c96140fa0f7ab2b0b03ab5f3641974887\"" Mar 17 17:40:00.106390 systemd[1]: Started 
cri-containerd-001a8a8ba66195657a1fdecb9939f25c96140fa0f7ab2b0b03ab5f3641974887.scope - libcontainer container 001a8a8ba66195657a1fdecb9939f25c96140fa0f7ab2b0b03ab5f3641974887. Mar 17 17:40:00.148027 containerd[1477]: time="2025-03-17T17:40:00.147979050Z" level=info msg="StartContainer for \"001a8a8ba66195657a1fdecb9939f25c96140fa0f7ab2b0b03ab5f3641974887\" returns successfully"