Mar 7 00:53:06.892574 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 7 00:53:06.892596 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Mar 6 22:59:59 -00 2026
Mar 7 00:53:06.892607 kernel: KASLR enabled
Mar 7 00:53:06.892613 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Mar 7 00:53:06.892619 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Mar 7 00:53:06.892624 kernel: random: crng init done
Mar 7 00:53:06.892632 kernel: ACPI: Early table checksum verification disabled
Mar 7 00:53:06.892638 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Mar 7 00:53:06.892644 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Mar 7 00:53:06.892652 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 00:53:06.892658 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 00:53:06.892665 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 00:53:06.892671 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 00:53:06.892677 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 00:53:06.892685 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 00:53:06.892693 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 00:53:06.892699 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 00:53:06.892706 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 00:53:06.892712 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 7 00:53:06.892719 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Mar 7 00:53:06.892725 kernel: NUMA: Failed to initialise from firmware
Mar 7 00:53:06.892732 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Mar 7 00:53:06.892738 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Mar 7 00:53:06.892745 kernel: Zone ranges:
Mar 7 00:53:06.892751 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Mar 7 00:53:06.892759 kernel:   DMA32    empty
Mar 7 00:53:06.892765 kernel:   Normal   [mem 0x0000000100000000-0x0000000139ffffff]
Mar 7 00:53:06.892772 kernel: Movable zone start for each node
Mar 7 00:53:06.892778 kernel: Early memory node ranges
Mar 7 00:53:06.892785 kernel:   node   0: [mem 0x0000000040000000-0x000000013676ffff]
Mar 7 00:53:06.892791 kernel:   node   0: [mem 0x0000000136770000-0x0000000136b3ffff]
Mar 7 00:53:06.892798 kernel:   node   0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Mar 7 00:53:06.892811 kernel:   node   0: [mem 0x0000000139e20000-0x0000000139eaffff]
Mar 7 00:53:06.892818 kernel:   node   0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Mar 7 00:53:06.892824 kernel:   node   0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Mar 7 00:53:06.892831 kernel:   node   0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Mar 7 00:53:06.892837 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Mar 7 00:53:06.892846 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Mar 7 00:53:06.892852 kernel: psci: probing for conduit method from ACPI.
Mar 7 00:53:06.892859 kernel: psci: PSCIv1.1 detected in firmware.
Mar 7 00:53:06.892868 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 7 00:53:06.892875 kernel: psci: Trusted OS migration not required
Mar 7 00:53:06.892883 kernel: psci: SMC Calling Convention v1.1
Mar 7 00:53:06.892891 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 7 00:53:06.892898 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 7 00:53:06.892905 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 7 00:53:06.892912 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 7 00:53:06.892918 kernel: Detected PIPT I-cache on CPU0
Mar 7 00:53:06.892925 kernel: CPU features: detected: GIC system register CPU interface
Mar 7 00:53:06.892932 kernel: CPU features: detected: Hardware dirty bit management
Mar 7 00:53:06.894998 kernel: CPU features: detected: Spectre-v4
Mar 7 00:53:06.895014 kernel: CPU features: detected: Spectre-BHB
Mar 7 00:53:06.895022 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 7 00:53:06.895034 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 7 00:53:06.895041 kernel: CPU features: detected: ARM erratum 1418040
Mar 7 00:53:06.895048 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 7 00:53:06.895055 kernel: alternatives: applying boot alternatives
Mar 7 00:53:06.895064 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9d22c40559a0d209dc0fcc2dfdd5ddf9671e6da0cc59463f610ba522f01325a6
Mar 7 00:53:06.895072 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 00:53:06.895079 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 00:53:06.895086 kernel: Fallback order for Node 0: 0
Mar 7 00:53:06.895092 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Mar 7 00:53:06.895099 kernel: Policy zone: Normal
Mar 7 00:53:06.895106 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 00:53:06.895115 kernel: software IO TLB: area num 2.
Mar 7 00:53:06.895122 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Mar 7 00:53:06.895129 kernel: Memory: 3882816K/4096000K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 213184K reserved, 0K cma-reserved)
Mar 7 00:53:06.895136 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 7 00:53:06.895143 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 00:53:06.895151 kernel: rcu: RCU event tracing is enabled.
Mar 7 00:53:06.895158 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 7 00:53:06.895165 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 00:53:06.895172 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 00:53:06.895179 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 00:53:06.895186 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 7 00:53:06.895193 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 7 00:53:06.895202 kernel: GICv3: 256 SPIs implemented
Mar 7 00:53:06.895209 kernel: GICv3: 0 Extended SPIs implemented
Mar 7 00:53:06.895216 kernel: Root IRQ handler: gic_handle_irq
Mar 7 00:53:06.895224 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 7 00:53:06.895231 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 7 00:53:06.895238 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 7 00:53:06.895245 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Mar 7 00:53:06.895252 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Mar 7 00:53:06.895260 kernel: GICv3: using LPI property table @0x00000001000e0000
Mar 7 00:53:06.895267 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Mar 7 00:53:06.895274 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 00:53:06.895283 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 7 00:53:06.895290 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 7 00:53:06.895297 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 7 00:53:06.895304 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 7 00:53:06.895312 kernel: Console: colour dummy device 80x25
Mar 7 00:53:06.895319 kernel: ACPI: Core revision 20230628
Mar 7 00:53:06.895327 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 7 00:53:06.895335 kernel: pid_max: default: 32768 minimum: 301
Mar 7 00:53:06.895342 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 00:53:06.895350 kernel: landlock: Up and running.
Mar 7 00:53:06.895358 kernel: SELinux:  Initializing.
Mar 7 00:53:06.895366 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 00:53:06.895373 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 00:53:06.895380 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 00:53:06.895388 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 00:53:06.895395 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 00:53:06.895403 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 00:53:06.895410 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 7 00:53:06.895417 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 7 00:53:06.895426 kernel: Remapping and enabling EFI services.
Mar 7 00:53:06.895433 kernel: smp: Bringing up secondary CPUs ...
Mar 7 00:53:06.895441 kernel: Detected PIPT I-cache on CPU1
Mar 7 00:53:06.895448 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 7 00:53:06.895466 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Mar 7 00:53:06.895475 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 7 00:53:06.895482 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 7 00:53:06.895489 kernel: smp: Brought up 1 node, 2 CPUs
Mar 7 00:53:06.895496 kernel: SMP: Total of 2 processors activated.
Mar 7 00:53:06.895506 kernel: CPU features: detected: 32-bit EL0 Support
Mar 7 00:53:06.895513 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 7 00:53:06.895521 kernel: CPU features: detected: Common not Private translations
Mar 7 00:53:06.895534 kernel: CPU features: detected: CRC32 instructions
Mar 7 00:53:06.895543 kernel: CPU features: detected: Enhanced Virtualization Traps
Mar 7 00:53:06.895551 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 7 00:53:06.895558 kernel: CPU features: detected: LSE atomic instructions
Mar 7 00:53:06.895566 kernel: CPU features: detected: Privileged Access Never
Mar 7 00:53:06.895574 kernel: CPU features: detected: RAS Extension Support
Mar 7 00:53:06.895583 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 7 00:53:06.895591 kernel: CPU: All CPU(s) started at EL1
Mar 7 00:53:06.895598 kernel: alternatives: applying system-wide alternatives
Mar 7 00:53:06.895606 kernel: devtmpfs: initialized
Mar 7 00:53:06.895614 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 00:53:06.895621 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 7 00:53:06.895629 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 00:53:06.895636 kernel: SMBIOS 3.0.0 present.
Mar 7 00:53:06.895646 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Mar 7 00:53:06.895653 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 00:53:06.895661 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 7 00:53:06.895669 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 7 00:53:06.895676 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 7 00:53:06.895684 kernel: audit: initializing netlink subsys (disabled)
Mar 7 00:53:06.895691 kernel: audit: type=2000 audit(0.017:1): state=initialized audit_enabled=0 res=1
Mar 7 00:53:06.895699 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 00:53:06.895706 kernel: cpuidle: using governor menu
Mar 7 00:53:06.895715 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 7 00:53:06.895723 kernel: ASID allocator initialised with 32768 entries
Mar 7 00:53:06.895731 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 00:53:06.895739 kernel: Serial: AMBA PL011 UART driver
Mar 7 00:53:06.895746 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 7 00:53:06.895754 kernel: Modules: 0 pages in range for non-PLT usage
Mar 7 00:53:06.895761 kernel: Modules: 509008 pages in range for PLT usage
Mar 7 00:53:06.895769 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 00:53:06.895777 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 00:53:06.895786 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 7 00:53:06.895794 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 7 00:53:06.895802 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 00:53:06.895809 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 00:53:06.895817 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 7 00:53:06.895825 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 7 00:53:06.895832 kernel: ACPI: Added _OSI(Module Device)
Mar 7 00:53:06.895840 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 00:53:06.895847 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 00:53:06.895856 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 00:53:06.895864 kernel: ACPI: Interpreter enabled
Mar 7 00:53:06.895871 kernel: ACPI: Using GIC for interrupt routing
Mar 7 00:53:06.895879 kernel: ACPI: MCFG table detected, 1 entries
Mar 7 00:53:06.895887 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 7 00:53:06.895894 kernel: printk: console [ttyAMA0] enabled
Mar 7 00:53:06.895902 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 7 00:53:06.897099 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 00:53:06.897189 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 7 00:53:06.897258 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 7 00:53:06.897324 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 7 00:53:06.897391 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 7 00:53:06.897400 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 7 00:53:06.897408 kernel: PCI host bridge to bus 0000:00
Mar 7 00:53:06.897525 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 7 00:53:06.897598 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 7 00:53:06.897661 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 7 00:53:06.897723 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 00:53:06.897812 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 7 00:53:06.897892 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Mar 7 00:53:06.900262 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Mar 7 00:53:06.900366 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Mar 7 00:53:06.900510 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 7 00:53:06.900597 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Mar 7 00:53:06.900694 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 7 00:53:06.900764 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Mar 7 00:53:06.901011 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 7 00:53:06.901097 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Mar 7 00:53:06.901178 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 7 00:53:06.901248 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Mar 7 00:53:06.901322 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 7 00:53:06.901391 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Mar 7 00:53:06.901481 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 7 00:53:06.901555 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Mar 7 00:53:06.901635 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 7 00:53:06.901712 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Mar 7 00:53:06.901790 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 7 00:53:06.901859 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Mar 7 00:53:06.904032 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Mar 7 00:53:06.904139 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Mar 7 00:53:06.904229 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Mar 7 00:53:06.904300 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Mar 7 00:53:06.904381 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Mar 7 00:53:06.904484 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Mar 7 00:53:06.904569 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 7 00:53:06.904640 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 7 00:53:06.904720 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 7 00:53:06.904794 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Mar 7 00:53:06.904872 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Mar 7 00:53:06.904958 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Mar 7 00:53:06.905701 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Mar 7 00:53:06.905804 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Mar 7 00:53:06.905877 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Mar 7 00:53:06.906044 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 7 00:53:06.906125 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Mar 7 00:53:06.906195 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Mar 7 00:53:06.906272 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Mar 7 00:53:06.906342 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Mar 7 00:53:06.906411 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Mar 7 00:53:06.906541 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Mar 7 00:53:06.906620 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Mar 7 00:53:06.906691 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Mar 7 00:53:06.906762 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 7 00:53:06.906835 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Mar 7 00:53:06.906911 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Mar 7 00:53:06.906991 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Mar 7 00:53:06.907069 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Mar 7 00:53:06.907139 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Mar 7 00:53:06.907208 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Mar 7 00:53:06.907278 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Mar 7 00:53:06.907346 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Mar 7 00:53:06.907414 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Mar 7 00:53:06.907496 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Mar 7 00:53:06.907565 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Mar 7 00:53:06.907636 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Mar 7 00:53:06.907707 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Mar 7 00:53:06.907775 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Mar 7 00:53:06.907843 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Mar 7 00:53:06.907913 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Mar 7 00:53:06.908480 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Mar 7 00:53:06.908567 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Mar 7 00:53:06.908653 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 7 00:53:06.908723 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Mar 7 00:53:06.908792 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Mar 7 00:53:06.908863 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 7 00:53:06.908932 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Mar 7 00:53:06.909047 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Mar 7 00:53:06.909118 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 7 00:53:06.909361 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Mar 7 00:53:06.909437 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Mar 7 00:53:06.909560 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Mar 7 00:53:06.909633 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 7 00:53:06.909704 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Mar 7 00:53:06.909772 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 7 00:53:06.909840 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Mar 7 00:53:06.909913 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 7 00:53:06.910020 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Mar 7 00:53:06.910093 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 7 00:53:06.910163 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Mar 7 00:53:06.910232 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 7 00:53:06.910311 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Mar 7 00:53:06.910382 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 7 00:53:06.910467 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Mar 7 00:53:06.910541 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 7 00:53:06.910611 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Mar 7 00:53:06.910680 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 7 00:53:06.910748 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Mar 7 00:53:06.910817 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 7 00:53:06.910890 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Mar 7 00:53:06.911053 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Mar 7 00:53:06.911126 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Mar 7 00:53:06.911193 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Mar 7 00:53:06.911261 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Mar 7 00:53:06.911327 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Mar 7 00:53:06.911394 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Mar 7 00:53:06.911493 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Mar 7 00:53:06.911569 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Mar 7 00:53:06.911644 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Mar 7 00:53:06.911714 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Mar 7 00:53:06.911790 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Mar 7 00:53:06.911860 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Mar 7 00:53:06.911929 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Mar 7 00:53:06.912036 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Mar 7 00:53:06.912108 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Mar 7 00:53:06.912177 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Mar 7 00:53:06.912247 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Mar 7 00:53:06.912315 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Mar 7 00:53:06.912381 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Mar 7 00:53:06.912460 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Mar 7 00:53:06.912543 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Mar 7 00:53:06.912615 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 7 00:53:06.912685 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Mar 7 00:53:06.912753 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 7 00:53:06.912825 kernel: pci 0000:00:02.0:   bridge window [io 0x1000-0x1fff]
Mar 7 00:53:06.912896 kernel: pci 0000:00:02.0:   bridge window [mem 0x10000000-0x101fffff]
Mar 7 00:53:06.914083 kernel: pci 0000:00:02.0:   bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 7 00:53:06.914177 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Mar 7 00:53:06.914255 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 7 00:53:06.914323 kernel: pci 0000:00:02.1:   bridge window [io 0x2000-0x2fff]
Mar 7 00:53:06.914390 kernel: pci 0000:00:02.1:   bridge window [mem 0x10200000-0x103fffff]
Mar 7 00:53:06.914474 kernel: pci 0000:00:02.1:   bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 7 00:53:06.914562 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Mar 7 00:53:06.914638 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Mar 7 00:53:06.914709 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 7 00:53:06.914779 kernel: pci 0000:00:02.2:   bridge window [io 0x3000-0x3fff]
Mar 7 00:53:06.914854 kernel: pci 0000:00:02.2:   bridge window [mem 0x10400000-0x105fffff]
Mar 7 00:53:06.914929 kernel: pci 0000:00:02.2:   bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 7 00:53:06.916134 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Mar 7 00:53:06.916212 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 7 00:53:06.916281 kernel: pci 0000:00:02.3:   bridge window [io 0x4000-0x4fff]
Mar 7 00:53:06.916349 kernel: pci 0000:00:02.3:   bridge window [mem 0x10600000-0x107fffff]
Mar 7 00:53:06.916417 kernel: pci 0000:00:02.3:   bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 7 00:53:06.917061 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Mar 7 00:53:06.917153 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Mar 7 00:53:06.917225 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 7 00:53:06.917307 kernel: pci 0000:00:02.4:   bridge window [io 0x5000-0x5fff]
Mar 7 00:53:06.917377 kernel: pci 0000:00:02.4:   bridge window [mem 0x10800000-0x109fffff]
Mar 7 00:53:06.917445 kernel: pci 0000:00:02.4:   bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 7 00:53:06.917540 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Mar 7 00:53:06.917613 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Mar 7 00:53:06.917684 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 7 00:53:06.917757 kernel: pci 0000:00:02.5:   bridge window [io 0x6000-0x6fff]
Mar 7 00:53:06.917826 kernel: pci 0000:00:02.5:   bridge window [mem 0x10a00000-0x10bfffff]
Mar 7 00:53:06.917893 kernel: pci 0000:00:02.5:   bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 7 00:53:06.919051 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Mar 7 00:53:06.919154 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Mar 7 00:53:06.919227 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Mar 7 00:53:06.919299 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 7 00:53:06.919369 kernel: pci 0000:00:02.6:   bridge window [io 0x7000-0x7fff]
Mar 7 00:53:06.919443 kernel: pci 0000:00:02.6:   bridge window [mem 0x10c00000-0x10dfffff]
Mar 7 00:53:06.919556 kernel: pci 0000:00:02.6:   bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 7 00:53:06.919631 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 7 00:53:06.919699 kernel: pci 0000:00:02.7:   bridge window [io 0x8000-0x8fff]
Mar 7 00:53:06.919769 kernel: pci 0000:00:02.7:   bridge window [mem 0x10e00000-0x10ffffff]
Mar 7 00:53:06.919838 kernel: pci 0000:00:02.7:   bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 7 00:53:06.919910 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 7 00:53:06.921365 kernel: pci 0000:00:03.0:   bridge window [io 0x9000-0x9fff]
Mar 7 00:53:06.921493 kernel: pci 0000:00:03.0:   bridge window [mem 0x11000000-0x111fffff]
Mar 7 00:53:06.921576 kernel: pci 0000:00:03.0:   bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 7 00:53:06.921647 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 7 00:53:06.921708 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 7 00:53:06.921769 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 7 00:53:06.921852 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Mar 7 00:53:06.921916 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Mar 7 00:53:06.922041 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 7 00:53:06.922117 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Mar 7 00:53:06.922180 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Mar 7 00:53:06.922241 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 7 00:53:06.922312 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Mar 7 00:53:06.922376 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Mar 7 00:53:06.922442 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 7 00:53:06.922529 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Mar 7 00:53:06.922593 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Mar 7 00:53:06.922670 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 7 00:53:06.922740 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Mar 7 00:53:06.922805 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Mar 7 00:53:06.922868 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 7 00:53:06.922983 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Mar 7 00:53:06.923058 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Mar 7 00:53:06.923126 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 7 00:53:06.923195 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Mar 7 00:53:06.923266 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Mar 7 00:53:06.923329 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 7 00:53:06.923398 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Mar 7 00:53:06.923473 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Mar 7 00:53:06.923540 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 7 00:53:06.923615 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Mar 7 00:53:06.923683 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Mar 7 00:53:06.923757 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 7 00:53:06.923768 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 7 00:53:06.923777 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 7 00:53:06.923785 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 7 00:53:06.923793 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 7 00:53:06.923801 kernel: iommu: Default domain type: Translated
Mar 7 00:53:06.923809 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 7 00:53:06.923817 kernel: efivars: Registered efivars operations
Mar 7 00:53:06.923827 kernel: vgaarb: loaded
Mar 7 00:53:06.923835 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 7 00:53:06.923843 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 00:53:06.923851 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 00:53:06.923859 kernel: pnp: PnP ACPI init
Mar 7 00:53:06.923946 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 7 00:53:06.923959 kernel: pnp: PnP ACPI: found 1 devices
Mar 7 00:53:06.923967 kernel: NET: Registered PF_INET protocol family
Mar 7 00:53:06.923975 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 00:53:06.923986 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 7 00:53:06.923997 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 00:53:06.924005 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 00:53:06.924013 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 7 00:53:06.924021 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 7 00:53:06.924029 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 00:53:06.924037 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 00:53:06.924045 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 00:53:06.924125 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Mar 7 00:53:06.924139 kernel: PCI: CLS 0 bytes, default 64
Mar 7 00:53:06.924147 kernel: kvm [1]: HYP mode not available
Mar 7 00:53:06.924155 kernel: Initialise system trusted keyrings
Mar 7 00:53:06.924163 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 7 00:53:06.924171 kernel: Key type asymmetric registered
Mar 7 00:53:06.924178 kernel: Asymmetric key parser 'x509' registered
Mar 7 00:53:06.924186 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 7 00:53:06.924194 kernel: io scheduler mq-deadline registered
Mar 7 00:53:06.924204 kernel: io scheduler kyber registered
Mar 7 00:53:06.924213 kernel: io scheduler bfq registered
Mar 7 00:53:06.924222 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 7 00:53:06.924295 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Mar 7 00:53:06.924367 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Mar 7 00:53:06.924438 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 7 00:53:06.924552 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Mar 7 00:53:06.924632 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Mar 7 00:53:06.924713 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 7 00:53:06.924786 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Mar 7 00:53:06.924857 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Mar 7 00:53:06.924928 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 7 00:53:06.925015
kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Mar 7 00:53:06.925097 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Mar 7 00:53:06.925167 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 00:53:06.925239 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Mar 7 00:53:06.925311 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Mar 7 00:53:06.925381 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 00:53:06.925465 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Mar 7 00:53:06.925546 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Mar 7 00:53:06.925619 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 00:53:06.925693 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Mar 7 00:53:06.925765 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Mar 7 00:53:06.925835 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 00:53:06.925908 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Mar 7 00:53:06.925996 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Mar 7 00:53:06.926071 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 00:53:06.926082 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Mar 7 00:53:06.926152 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Mar 7 00:53:06.926227 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Mar 7 00:53:06.926295 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- 
IbPresDis- LLActRep+ Mar 7 00:53:06.926306 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Mar 7 00:53:06.926318 kernel: ACPI: button: Power Button [PWRB] Mar 7 00:53:06.926326 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 7 00:53:06.926400 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Mar 7 00:53:06.926487 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Mar 7 00:53:06.926499 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 7 00:53:06.926507 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Mar 7 00:53:06.926579 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Mar 7 00:53:06.926591 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Mar 7 00:53:06.926598 kernel: thunder_xcv, ver 1.0 Mar 7 00:53:06.926609 kernel: thunder_bgx, ver 1.0 Mar 7 00:53:06.926617 kernel: nicpf, ver 1.0 Mar 7 00:53:06.926624 kernel: nicvf, ver 1.0 Mar 7 00:53:06.926706 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 7 00:53:06.926772 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-07T00:53:06 UTC (1772844786) Mar 7 00:53:06.926783 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 7 00:53:06.926795 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Mar 7 00:53:06.926804 kernel: watchdog: Delayed init of the lockup detector failed: -19 Mar 7 00:53:06.926814 kernel: watchdog: Hard watchdog permanently disabled Mar 7 00:53:06.926822 kernel: NET: Registered PF_INET6 protocol family Mar 7 00:53:06.926830 kernel: Segment Routing with IPv6 Mar 7 00:53:06.926837 kernel: In-situ OAM (IOAM) with IPv6 Mar 7 00:53:06.926845 kernel: NET: Registered PF_PACKET protocol family Mar 7 00:53:06.926853 kernel: Key type dns_resolver registered Mar 7 00:53:06.926861 kernel: registered taskstats version 1 Mar 7 00:53:06.926869 kernel: Loading compiled-in X.509 certificates Mar 7 00:53:06.926877 kernel: Loaded X.509 cert 
'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: e62b4e4ebcb406beff1271ecc7444548c4ab67e9' Mar 7 00:53:06.926886 kernel: Key type .fscrypt registered Mar 7 00:53:06.926894 kernel: Key type fscrypt-provisioning registered Mar 7 00:53:06.926902 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 7 00:53:06.926910 kernel: ima: Allocated hash algorithm: sha1 Mar 7 00:53:06.926918 kernel: ima: No architecture policies found Mar 7 00:53:06.926926 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 7 00:53:06.926934 kernel: clk: Disabling unused clocks Mar 7 00:53:06.926969 kernel: Freeing unused kernel memory: 39424K Mar 7 00:53:06.926977 kernel: Run /init as init process Mar 7 00:53:06.926988 kernel: with arguments: Mar 7 00:53:06.926996 kernel: /init Mar 7 00:53:06.927003 kernel: with environment: Mar 7 00:53:06.927011 kernel: HOME=/ Mar 7 00:53:06.927019 kernel: TERM=linux Mar 7 00:53:06.927028 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 7 00:53:06.927039 systemd[1]: Detected virtualization kvm. Mar 7 00:53:06.927047 systemd[1]: Detected architecture arm64. Mar 7 00:53:06.927056 systemd[1]: Running in initrd. Mar 7 00:53:06.927065 systemd[1]: No hostname configured, using default hostname. Mar 7 00:53:06.927073 systemd[1]: Hostname set to . Mar 7 00:53:06.927081 systemd[1]: Initializing machine ID from VM UUID. Mar 7 00:53:06.927089 systemd[1]: Queued start job for default target initrd.target. Mar 7 00:53:06.927098 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Mar 7 00:53:06.927107 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 00:53:06.927116 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 7 00:53:06.927125 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 7 00:53:06.927134 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 7 00:53:06.927144 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 7 00:53:06.927154 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 7 00:53:06.927163 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 7 00:53:06.927171 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 00:53:06.927180 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 00:53:06.927190 systemd[1]: Reached target paths.target - Path Units. Mar 7 00:53:06.927198 systemd[1]: Reached target slices.target - Slice Units. Mar 7 00:53:06.927207 systemd[1]: Reached target swap.target - Swaps. Mar 7 00:53:06.927215 systemd[1]: Reached target timers.target - Timer Units. Mar 7 00:53:06.927224 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 00:53:06.927232 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 00:53:06.927241 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 7 00:53:06.927249 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 7 00:53:06.927259 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 7 00:53:06.927267 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Mar 7 00:53:06.927276 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 00:53:06.927284 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 00:53:06.927292 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 7 00:53:06.927301 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 7 00:53:06.927309 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 7 00:53:06.927318 systemd[1]: Starting systemd-fsck-usr.service... Mar 7 00:53:06.927326 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 7 00:53:06.927336 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 00:53:06.927344 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 00:53:06.927352 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 7 00:53:06.927361 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 00:53:06.927369 systemd[1]: Finished systemd-fsck-usr.service. Mar 7 00:53:06.927401 systemd-journald[236]: Collecting audit messages is disabled. Mar 7 00:53:06.927425 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 7 00:53:06.927434 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 7 00:53:06.927443 kernel: Bridge firewalling registered Mar 7 00:53:06.927480 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 00:53:06.927490 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 00:53:06.927499 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Mar 7 00:53:06.927509 systemd-journald[236]: Journal started Mar 7 00:53:06.927529 systemd-journald[236]: Runtime Journal (/run/log/journal/a593f43bbdaf4f4f8b20b52fd19412cf) is 8.0M, max 76.6M, 68.6M free. Mar 7 00:53:06.888020 systemd-modules-load[237]: Inserted module 'overlay' Mar 7 00:53:06.905731 systemd-modules-load[237]: Inserted module 'br_netfilter' Mar 7 00:53:06.931979 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 00:53:06.934991 systemd[1]: Started systemd-journald.service - Journal Service. Mar 7 00:53:06.936079 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 00:53:06.947410 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 00:53:06.950125 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 00:53:06.952976 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 00:53:06.961794 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 00:53:06.965009 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 00:53:06.973213 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 7 00:53:06.974138 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 00:53:06.979143 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Mar 7 00:53:06.986991 dracut-cmdline[273]: dracut-dracut-053 Mar 7 00:53:06.989334 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9d22c40559a0d209dc0fcc2dfdd5ddf9671e6da0cc59463f610ba522f01325a6 Mar 7 00:53:07.017338 systemd-resolved[277]: Positive Trust Anchors: Mar 7 00:53:07.017989 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 00:53:07.018022 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 00:53:07.027632 systemd-resolved[277]: Defaulting to hostname 'linux'. Mar 7 00:53:07.029540 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 00:53:07.030809 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 00:53:07.067959 kernel: SCSI subsystem initialized Mar 7 00:53:07.070988 kernel: Loading iSCSI transport class v2.0-870. Mar 7 00:53:07.079005 kernel: iscsi: registered transport (tcp) Mar 7 00:53:07.096997 kernel: iscsi: registered transport (qla4xxx) Mar 7 00:53:07.097068 kernel: QLogic iSCSI HBA Driver Mar 7 00:53:07.150722 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Mar 7 00:53:07.160189 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 7 00:53:07.181196 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 7 00:53:07.181269 kernel: device-mapper: uevent: version 1.0.3 Mar 7 00:53:07.182233 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 7 00:53:07.235996 kernel: raid6: neonx8 gen() 15663 MB/s Mar 7 00:53:07.249998 kernel: raid6: neonx4 gen() 15581 MB/s Mar 7 00:53:07.267033 kernel: raid6: neonx2 gen() 13158 MB/s Mar 7 00:53:07.283999 kernel: raid6: neonx1 gen() 10437 MB/s Mar 7 00:53:07.301001 kernel: raid6: int64x8 gen() 6930 MB/s Mar 7 00:53:07.318009 kernel: raid6: int64x4 gen() 7328 MB/s Mar 7 00:53:07.335003 kernel: raid6: int64x2 gen() 6108 MB/s Mar 7 00:53:07.352034 kernel: raid6: int64x1 gen() 5036 MB/s Mar 7 00:53:07.352100 kernel: raid6: using algorithm neonx8 gen() 15663 MB/s Mar 7 00:53:07.368996 kernel: raid6: .... xor() 11896 MB/s, rmw enabled Mar 7 00:53:07.369094 kernel: raid6: using neon recovery algorithm Mar 7 00:53:07.374145 kernel: xor: measuring software checksum speed Mar 7 00:53:07.374202 kernel: 8regs : 19750 MB/sec Mar 7 00:53:07.374222 kernel: 32regs : 19712 MB/sec Mar 7 00:53:07.374978 kernel: arm64_neon : 26963 MB/sec Mar 7 00:53:07.375008 kernel: xor: using function: arm64_neon (26963 MB/sec) Mar 7 00:53:07.427009 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 7 00:53:07.445236 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 7 00:53:07.452243 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 00:53:07.466271 systemd-udevd[458]: Using default interface naming scheme 'v255'. Mar 7 00:53:07.470407 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Mar 7 00:53:07.482260 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 7 00:53:07.497169 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation Mar 7 00:53:07.534162 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 00:53:07.539228 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 00:53:07.591634 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 00:53:07.598619 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 7 00:53:07.622041 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 7 00:53:07.623599 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 00:53:07.625474 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 00:53:07.627455 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 00:53:07.637640 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 7 00:53:07.655746 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 7 00:53:07.693852 kernel: scsi host0: Virtio SCSI HBA Mar 7 00:53:07.711166 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 7 00:53:07.712794 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Mar 7 00:53:07.717986 kernel: ACPI: bus type USB registered Mar 7 00:53:07.720214 kernel: usbcore: registered new interface driver usbfs Mar 7 00:53:07.720265 kernel: usbcore: registered new interface driver hub Mar 7 00:53:07.720276 kernel: usbcore: registered new device driver usb Mar 7 00:53:07.743681 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 00:53:07.743794 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 7 00:53:07.745645 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 00:53:07.748279 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 00:53:07.748447 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 00:53:07.749099 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 00:53:07.761378 kernel: sr 0:0:0:0: Power-on or device reset occurred Mar 7 00:53:07.758227 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 00:53:07.766971 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Mar 7 00:53:07.767155 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 7 00:53:07.769537 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Mar 7 00:53:07.773010 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 7 00:53:07.773199 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Mar 7 00:53:07.775984 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 7 00:53:07.776132 kernel: sd 0:0:0:1: Power-on or device reset occurred Mar 7 00:53:07.776546 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Mar 7 00:53:07.776643 kernel: sd 0:0:0:1: [sda] Write Protect is off Mar 7 00:53:07.776737 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Mar 7 00:53:07.776828 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Mar 7 00:53:07.779140 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 7 00:53:07.779286 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Mar 7 00:53:07.783257 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Mar 7 00:53:07.783414 kernel: hub 1-0:1.0: USB hub found Mar 7 00:53:07.783573 kernel: hub 1-0:1.0: 4 ports detected Mar 7 00:53:07.783666 kernel: usb usb2: We don't know the algorithms 
for LPM for this host, disabling LPM. Mar 7 00:53:07.783688 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 7 00:53:07.783698 kernel: GPT:17805311 != 80003071 Mar 7 00:53:07.783707 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 7 00:53:07.783717 kernel: GPT:17805311 != 80003071 Mar 7 00:53:07.783725 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 7 00:53:07.784072 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 00:53:07.785202 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Mar 7 00:53:07.785360 kernel: hub 2-0:1.0: USB hub found Mar 7 00:53:07.786024 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 00:53:07.790981 kernel: hub 2-0:1.0: 4 ports detected Mar 7 00:53:07.797204 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 00:53:07.817815 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 00:53:07.844541 kernel: BTRFS: device fsid 237c8587-8110-47ef-99f9-37e4ed4d3b31 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (525) Mar 7 00:53:07.846961 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (520) Mar 7 00:53:07.855702 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Mar 7 00:53:07.860636 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Mar 7 00:53:07.867070 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Mar 7 00:53:07.867763 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Mar 7 00:53:07.872806 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Mar 7 00:53:07.880146 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Mar 7 00:53:07.888728 disk-uuid[574]: Primary Header is updated. Mar 7 00:53:07.888728 disk-uuid[574]: Secondary Entries is updated. Mar 7 00:53:07.888728 disk-uuid[574]: Secondary Header is updated. Mar 7 00:53:07.903000 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 00:53:07.908966 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 00:53:08.023964 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 7 00:53:08.161965 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Mar 7 00:53:08.162050 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Mar 7 00:53:08.162370 kernel: usbcore: registered new interface driver usbhid Mar 7 00:53:08.162396 kernel: usbhid: USB HID core driver Mar 7 00:53:08.270025 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Mar 7 00:53:08.399992 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Mar 7 00:53:08.453984 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Mar 7 00:53:08.912496 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 00:53:08.912551 disk-uuid[576]: The operation has completed successfully. Mar 7 00:53:08.963634 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 7 00:53:08.964527 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 7 00:53:08.978157 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 7 00:53:08.985452 sh[591]: Success Mar 7 00:53:08.999973 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 7 00:53:09.054969 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Mar 7 00:53:09.073130 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 7 00:53:09.078126 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 7 00:53:09.101340 kernel: BTRFS info (device dm-0): first mount of filesystem 237c8587-8110-47ef-99f9-37e4ed4d3b31 Mar 7 00:53:09.101411 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 7 00:53:09.101432 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 7 00:53:09.101448 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 7 00:53:09.101463 kernel: BTRFS info (device dm-0): using free space tree Mar 7 00:53:09.110001 kernel: BTRFS info (device dm-0): enabling ssd optimizations Mar 7 00:53:09.111788 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 7 00:53:09.113214 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 7 00:53:09.120225 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 7 00:53:09.126273 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 7 00:53:09.137980 kernel: BTRFS info (device sda6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 00:53:09.138050 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 7 00:53:09.138090 kernel: BTRFS info (device sda6): using free space tree Mar 7 00:53:09.141315 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 7 00:53:09.141365 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 00:53:09.155034 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 7 00:53:09.157976 kernel: BTRFS info (device sda6): last unmount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 00:53:09.165286 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Mar 7 00:53:09.170167 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 7 00:53:09.262821 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 00:53:09.270045 ignition[669]: Ignition 2.19.0
Mar 7 00:53:09.270055 ignition[669]: Stage: fetch-offline
Mar 7 00:53:09.273109 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 00:53:09.270094 ignition[669]: no configs at "/usr/lib/ignition/base.d"
Mar 7 00:53:09.276056 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 00:53:09.270102 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 7 00:53:09.270297 ignition[669]: parsed url from cmdline: ""
Mar 7 00:53:09.270300 ignition[669]: no config URL provided
Mar 7 00:53:09.270305 ignition[669]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 00:53:09.270313 ignition[669]: no config at "/usr/lib/ignition/user.ign"
Mar 7 00:53:09.270317 ignition[669]: failed to fetch config: resource requires networking
Mar 7 00:53:09.270645 ignition[669]: Ignition finished successfully
Mar 7 00:53:09.293441 systemd-networkd[778]: lo: Link UP
Mar 7 00:53:09.293452 systemd-networkd[778]: lo: Gained carrier
Mar 7 00:53:09.295022 systemd-networkd[778]: Enumeration completed
Mar 7 00:53:09.295159 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 00:53:09.295704 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:53:09.295707 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 00:53:09.297227 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:53:09.297234 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 00:53:09.297795 systemd[1]: Reached target network.target - Network.
Mar 7 00:53:09.299903 systemd-networkd[778]: eth0: Link UP
Mar 7 00:53:09.299910 systemd-networkd[778]: eth0: Gained carrier
Mar 7 00:53:09.299924 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:53:09.304787 systemd-networkd[778]: eth1: Link UP
Mar 7 00:53:09.304794 systemd-networkd[778]: eth1: Gained carrier
Mar 7 00:53:09.304810 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:53:09.306112 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 7 00:53:09.335713 ignition[781]: Ignition 2.19.0
Mar 7 00:53:09.335724 ignition[781]: Stage: fetch
Mar 7 00:53:09.335900 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Mar 7 00:53:09.335909 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 7 00:53:09.336040 ignition[781]: parsed url from cmdline: ""
Mar 7 00:53:09.336044 ignition[781]: no config URL provided
Mar 7 00:53:09.336048 ignition[781]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 00:53:09.336056 ignition[781]: no config at "/usr/lib/ignition/user.ign"
Mar 7 00:53:09.336074 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Mar 7 00:53:09.336762 ignition[781]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 7 00:53:09.350069 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Mar 7 00:53:09.358077 systemd-networkd[778]: eth0: DHCPv4 address 116.202.20.89/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 7 00:53:09.537039 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Mar 7 00:53:09.542639 ignition[781]: GET result: OK
Mar 7 00:53:09.542732 ignition[781]: parsing config with SHA512: b15f6f61a2f1ec68e652f3e5770275b117fe9f64fc88cbe2324802abb30751b2d5a5d480346b5a182545d3cc9aaf7d6bb8568d8621c9a8f1bb8392afd6fcf6ae
Mar 7 00:53:09.548720 unknown[781]: fetched base config from "system"
Mar 7 00:53:09.548750 unknown[781]: fetched base config from "system"
Mar 7 00:53:09.548780 unknown[781]: fetched user config from "hetzner"
Mar 7 00:53:09.551355 ignition[781]: fetch: fetch complete
Mar 7 00:53:09.551372 ignition[781]: fetch: fetch passed
Mar 7 00:53:09.553611 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 7 00:53:09.551448 ignition[781]: Ignition finished successfully
Mar 7 00:53:09.562190 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 7 00:53:09.584831 ignition[788]: Ignition 2.19.0
Mar 7 00:53:09.584840 ignition[788]: Stage: kargs
Mar 7 00:53:09.585044 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Mar 7 00:53:09.585054 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 7 00:53:09.586136 ignition[788]: kargs: kargs passed
Mar 7 00:53:09.586192 ignition[788]: Ignition finished successfully
Mar 7 00:53:09.588710 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 00:53:09.597209 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 00:53:09.611761 ignition[794]: Ignition 2.19.0
Mar 7 00:53:09.611773 ignition[794]: Stage: disks
Mar 7 00:53:09.613037 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Mar 7 00:53:09.613053 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 7 00:53:09.614065 ignition[794]: disks: disks passed
Mar 7 00:53:09.614119 ignition[794]: Ignition finished successfully
Mar 7 00:53:09.615991 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 00:53:09.617629 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 00:53:09.618480 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 00:53:09.619133 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 00:53:09.620012 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 00:53:09.621008 systemd[1]: Reached target basic.target - Basic System.
Mar 7 00:53:09.627168 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 00:53:09.644099 systemd-fsck[802]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 7 00:53:09.650369 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 00:53:09.657114 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 00:53:09.709972 kernel: EXT4-fs (sda9): mounted filesystem 596a8ea8-9d3d-4d06-a56e-9d3ebd3cb76d r/w with ordered data mode. Quota mode: none.
Mar 7 00:53:09.710839 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 00:53:09.712506 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 00:53:09.720072 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 00:53:09.723472 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 00:53:09.731703 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 7 00:53:09.733334 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 00:53:09.739693 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (810)
Mar 7 00:53:09.739718 kernel: BTRFS info (device sda6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e
Mar 7 00:53:09.733401 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 00:53:09.741916 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 7 00:53:09.741985 kernel: BTRFS info (device sda6): using free space tree
Mar 7 00:53:09.741600 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 00:53:09.748579 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 00:53:09.748639 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 00:53:09.754132 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 00:53:09.758670 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 00:53:09.799037 coreos-metadata[812]: Mar 07 00:53:09.798 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Mar 7 00:53:09.800923 coreos-metadata[812]: Mar 07 00:53:09.800 INFO Fetch successful
Mar 7 00:53:09.801453 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 00:53:09.804814 coreos-metadata[812]: Mar 07 00:53:09.801 INFO wrote hostname ci-4081-3-6-n-e1f368ffcb to /sysroot/etc/hostname
Mar 7 00:53:09.805974 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 7 00:53:09.809689 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Mar 7 00:53:09.815503 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 00:53:09.819626 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 00:53:09.927872 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 00:53:09.937119 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 00:53:09.941124 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 00:53:09.949125 kernel: BTRFS info (device sda6): last unmount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e
Mar 7 00:53:09.973140 ignition[927]: INFO : Ignition 2.19.0
Mar 7 00:53:09.973140 ignition[927]: INFO : Stage: mount
Mar 7 00:53:09.974221 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 00:53:09.974221 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 7 00:53:09.976077 ignition[927]: INFO : mount: mount passed
Mar 7 00:53:09.976077 ignition[927]: INFO : Ignition finished successfully
Mar 7 00:53:09.975774 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 00:53:09.977786 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 00:53:09.984117 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 00:53:10.100743 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 00:53:10.108263 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 00:53:10.119982 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (938)
Mar 7 00:53:10.122059 kernel: BTRFS info (device sda6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e
Mar 7 00:53:10.122114 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 7 00:53:10.122139 kernel: BTRFS info (device sda6): using free space tree
Mar 7 00:53:10.126001 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 00:53:10.126064 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 00:53:10.129333 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 00:53:10.151363 ignition[955]: INFO : Ignition 2.19.0
Mar 7 00:53:10.151363 ignition[955]: INFO : Stage: files
Mar 7 00:53:10.152523 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 00:53:10.152523 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 7 00:53:10.154452 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 00:53:10.155090 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 00:53:10.155090 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 00:53:10.157852 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 00:53:10.158929 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 00:53:10.160120 unknown[955]: wrote ssh authorized keys file for user: core
Mar 7 00:53:10.161482 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 00:53:10.162524 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 7 00:53:10.163611 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 7 00:53:10.163611 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 7 00:53:10.163611 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 7 00:53:10.247016 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 7 00:53:10.987011 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 7 00:53:10.987011 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 7 00:53:10.987011 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 7 00:53:11.160442 systemd-networkd[778]: eth1: Gained IPv6LL
Mar 7 00:53:11.214172 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Mar 7 00:53:11.289280 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 7 00:53:11.289280 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 00:53:11.289280 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 00:53:11.289280 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 00:53:11.289280 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 00:53:11.289280 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 00:53:11.296204 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 00:53:11.296204 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 00:53:11.296204 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 00:53:11.296204 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 00:53:11.296204 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 00:53:11.296204 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 7 00:53:11.296204 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 7 00:53:11.296204 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 7 00:53:11.296204 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Mar 7 00:53:11.352430 systemd-networkd[778]: eth0: Gained IPv6LL
Mar 7 00:53:11.538149 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Mar 7 00:53:11.742777 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 7 00:53:11.742777 ignition[955]: INFO : files: op(d): [started] processing unit "containerd.service"
Mar 7 00:53:11.745854 ignition[955]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 7 00:53:11.745854 ignition[955]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 7 00:53:11.745854 ignition[955]: INFO : files: op(d): [finished] processing unit "containerd.service"
Mar 7 00:53:11.745854 ignition[955]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Mar 7 00:53:11.745854 ignition[955]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 00:53:11.745854 ignition[955]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 00:53:11.745854 ignition[955]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Mar 7 00:53:11.745854 ignition[955]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Mar 7 00:53:11.745854 ignition[955]: INFO : files: op(11): op(12): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 7 00:53:11.745854 ignition[955]: INFO : files: op(11): op(12): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 7 00:53:11.745854 ignition[955]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Mar 7 00:53:11.745854 ignition[955]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 00:53:11.745854 ignition[955]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 00:53:11.745854 ignition[955]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 00:53:11.762679 ignition[955]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 00:53:11.762679 ignition[955]: INFO : files: files passed
Mar 7 00:53:11.762679 ignition[955]: INFO : Ignition finished successfully
Mar 7 00:53:11.750007 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 00:53:11.760225 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 00:53:11.765279 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 00:53:11.768975 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 00:53:11.769100 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 00:53:11.784421 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 00:53:11.784421 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 00:53:11.787464 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 00:53:11.792992 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 00:53:11.794934 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 00:53:11.809628 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 00:53:11.839002 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 00:53:11.839178 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 00:53:11.841392 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 00:53:11.842877 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 00:53:11.844074 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 00:53:11.854372 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 00:53:11.874476 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 00:53:11.882230 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 00:53:11.894086 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 00:53:11.895612 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 00:53:11.897187 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 00:53:11.897828 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 00:53:11.897986 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 00:53:11.899611 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 00:53:11.900989 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 00:53:11.902091 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 00:53:11.903260 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 00:53:11.904312 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 00:53:11.905387 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 00:53:11.906401 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 00:53:11.907500 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 7 00:53:11.908557 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 7 00:53:11.909531 systemd[1]: Stopped target swap.target - Swaps.
Mar 7 00:53:11.910378 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 7 00:53:11.910554 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 00:53:11.911739 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 7 00:53:11.912829 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 00:53:11.913850 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 7 00:53:11.913969 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 00:53:11.915037 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 7 00:53:11.915197 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 7 00:53:11.916724 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 00:53:11.916880 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 00:53:11.917842 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 00:53:11.918002 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 00:53:11.918790 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 7 00:53:11.918954 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 7 00:53:11.930108 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 00:53:11.931015 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 00:53:11.931212 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 00:53:11.937282 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 00:53:11.937919 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 00:53:11.938149 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 00:53:11.940279 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 00:53:11.940568 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 00:53:11.950575 ignition[1007]: INFO : Ignition 2.19.0
Mar 7 00:53:11.954440 ignition[1007]: INFO : Stage: umount
Mar 7 00:53:11.954440 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 00:53:11.954440 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 7 00:53:11.954440 ignition[1007]: INFO : umount: umount passed
Mar 7 00:53:11.954440 ignition[1007]: INFO : Ignition finished successfully
Mar 7 00:53:11.952523 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 00:53:11.953892 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 00:53:11.955328 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 00:53:11.955501 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 00:53:11.959596 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 00:53:11.959676 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 00:53:11.961744 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 00:53:11.961798 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 00:53:11.962511 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 7 00:53:11.962554 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 7 00:53:11.963170 systemd[1]: Stopped target network.target - Network.
Mar 7 00:53:11.964081 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 00:53:11.964136 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 00:53:11.965877 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 00:53:11.969794 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 00:53:11.974028 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 00:53:11.975338 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 00:53:11.977570 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 00:53:11.979245 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 00:53:11.979309 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 00:53:11.981613 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 00:53:11.981675 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 00:53:11.982673 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 00:53:11.982729 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 00:53:11.983844 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 00:53:11.983893 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 00:53:11.986701 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 00:53:11.991680 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 00:53:11.994048 systemd-networkd[778]: eth0: DHCPv6 lease lost
Mar 7 00:53:12.000074 systemd-networkd[778]: eth1: DHCPv6 lease lost
Mar 7 00:53:12.004337 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 00:53:12.008022 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 00:53:12.008177 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 00:53:12.010518 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 00:53:12.010629 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 00:53:12.021004 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 00:53:12.021077 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 00:53:12.028095 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 00:53:12.028596 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 00:53:12.028662 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 00:53:12.029771 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 00:53:12.029816 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 00:53:12.031505 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 00:53:12.031552 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 00:53:12.033637 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 00:53:12.033684 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 00:53:12.035734 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 00:53:12.038604 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 00:53:12.038694 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 00:53:12.058067 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 00:53:12.058212 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 00:53:12.060823 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 00:53:12.061073 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 00:53:12.062631 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 00:53:12.062725 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 00:53:12.064250 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 00:53:12.064317 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 00:53:12.065581 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 00:53:12.065616 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 00:53:12.066625 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 00:53:12.066675 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 00:53:12.068196 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 00:53:12.068243 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 00:53:12.069645 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 00:53:12.069690 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 00:53:12.082277 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 00:53:12.083436 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 00:53:12.083540 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 00:53:12.086414 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 00:53:12.086477 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 00:53:12.094051 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 00:53:12.094848 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 00:53:12.096769 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 00:53:12.114537 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 00:53:12.124798 systemd[1]: Switching root.
Mar 7 00:53:12.156323 systemd-journald[236]: Journal stopped
Mar 7 00:53:13.063550 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Mar 7 00:53:13.063616 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 00:53:13.063628 kernel: SELinux: policy capability open_perms=1
Mar 7 00:53:13.063638 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 00:53:13.063648 kernel: SELinux: policy capability always_check_network=0
Mar 7 00:53:13.063657 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 00:53:13.063667 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 00:53:13.063676 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 00:53:13.063689 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 00:53:13.063699 kernel: audit: type=1403 audit(1772844792.332:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 00:53:13.063710 systemd[1]: Successfully loaded SELinux policy in 35.943ms.
Mar 7 00:53:13.063729 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.508ms.
Mar 7 00:53:13.063740 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 00:53:13.063751 systemd[1]: Detected virtualization kvm.
Mar 7 00:53:13.063761 systemd[1]: Detected architecture arm64.
Mar 7 00:53:13.063772 systemd[1]: Detected first boot.
Mar 7 00:53:13.063783 systemd[1]: Hostname set to .
Mar 7 00:53:13.063796 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 00:53:13.063806 zram_generator::config[1066]: No configuration found.
Mar 7 00:53:13.063821 systemd[1]: Populated /etc with preset unit settings.
Mar 7 00:53:13.063831 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 00:53:13.063842 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 7 00:53:13.063852 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 00:53:13.063863 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 00:53:13.063873 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 00:53:13.063885 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 00:53:13.063895 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 00:53:13.063906 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 00:53:13.063917 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 00:53:13.063927 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 00:53:13.063946 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 00:53:13.063957 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 00:53:13.063968 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 00:53:13.063980 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 00:53:13.063991 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 00:53:13.064002 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 00:53:13.064012 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 7 00:53:13.064022 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 00:53:13.064034 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 00:53:13.064044 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 00:53:13.064059 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 00:53:13.064070 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 00:53:13.064081 systemd[1]: Reached target swap.target - Swaps.
Mar 7 00:53:13.064091 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 00:53:13.064102 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 00:53:13.064112 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 00:53:13.064123 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 00:53:13.064133 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 00:53:13.064144 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 00:53:13.064156 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 00:53:13.064167 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 00:53:13.064177 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 00:53:13.064187 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 00:53:13.066023 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 00:53:13.066045 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 00:53:13.066057 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 00:53:13.066076 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 00:53:13.066088 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 00:53:13.066099 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 00:53:13.066113 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 00:53:13.066123 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 00:53:13.066134 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 00:53:13.066145 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 00:53:13.066158 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 00:53:13.066173 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 00:53:13.066184 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 00:53:13.066195 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 00:53:13.066206 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Mar 7 00:53:13.066217 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Mar 7 00:53:13.066228 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 00:53:13.066239 kernel: loop: module loaded
Mar 7 00:53:13.066252 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 00:53:13.066262 kernel: ACPI: bus type drm_connector registered
Mar 7 00:53:13.066273 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 00:53:13.066283 kernel: fuse: init (API version 7.39)
Mar 7 00:53:13.066293 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 00:53:13.066305 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 00:53:13.066382 systemd-journald[1152]: Collecting audit messages is disabled.
Mar 7 00:53:13.066414 systemd-journald[1152]: Journal started
Mar 7 00:53:13.066440 systemd-journald[1152]: Runtime Journal (/run/log/journal/a593f43bbdaf4f4f8b20b52fd19412cf) is 8.0M, max 76.6M, 68.6M free.
Mar 7 00:53:13.071959 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 00:53:13.072927 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 00:53:13.074487 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 00:53:13.076172 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 00:53:13.076865 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 00:53:13.078609 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 00:53:13.079750 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 00:53:13.082202 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 00:53:13.083217 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 00:53:13.083395 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 00:53:13.085352 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 00:53:13.085527 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 00:53:13.086442 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 00:53:13.086589 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 00:53:13.088437 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 00:53:13.088590 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 00:53:13.090083 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 00:53:13.090358 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 00:53:13.091864 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 00:53:13.092208 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 00:53:13.094481 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 00:53:13.096772 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 00:53:13.099286 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 00:53:13.100403 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 00:53:13.112882 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 00:53:13.120053 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 00:53:13.124065 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 00:53:13.124998 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 00:53:13.139161 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 00:53:13.143772 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 00:53:13.147473 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 00:53:13.155432 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 00:53:13.156549 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 00:53:13.159249 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 00:53:13.180336 systemd-journald[1152]: Time spent on flushing to /var/log/journal/a593f43bbdaf4f4f8b20b52fd19412cf is 39.880ms for 1112 entries.
Mar 7 00:53:13.180336 systemd-journald[1152]: System Journal (/var/log/journal/a593f43bbdaf4f4f8b20b52fd19412cf) is 8.0M, max 584.8M, 576.8M free.
Mar 7 00:53:13.245361 systemd-journald[1152]: Received client request to flush runtime journal.
Mar 7 00:53:13.174129 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 00:53:13.182906 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 00:53:13.184457 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 00:53:13.186385 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 00:53:13.188227 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 00:53:13.192846 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 00:53:13.203201 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 7 00:53:13.222212 udevadm[1213]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 7 00:53:13.224720 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 00:53:13.234591 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
Mar 7 00:53:13.234605 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
Mar 7 00:53:13.239281 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 00:53:13.251249 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 7 00:53:13.256895 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 7 00:53:13.285212 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 7 00:53:13.296338 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 00:53:13.313672 systemd-tmpfiles[1226]: ACLs are not supported, ignoring.
Mar 7 00:53:13.313696 systemd-tmpfiles[1226]: ACLs are not supported, ignoring.
Mar 7 00:53:13.320405 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 00:53:13.669812 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 7 00:53:13.681197 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 00:53:13.714024 systemd-udevd[1232]: Using default interface naming scheme 'v255'.
Mar 7 00:53:13.741702 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 00:53:13.752205 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 00:53:13.772260 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 7 00:53:13.822913 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 7 00:53:13.872124 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Mar 7 00:53:13.916567 systemd-networkd[1235]: lo: Link UP
Mar 7 00:53:13.916577 systemd-networkd[1235]: lo: Gained carrier
Mar 7 00:53:13.919161 systemd-networkd[1235]: Enumeration completed
Mar 7 00:53:13.919365 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 00:53:13.919924 systemd-networkd[1235]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:53:13.919928 systemd-networkd[1235]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 00:53:13.920668 systemd-networkd[1235]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:53:13.920672 systemd-networkd[1235]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 00:53:13.921130 systemd-networkd[1235]: eth0: Link UP
Mar 7 00:53:13.921134 systemd-networkd[1235]: eth0: Gained carrier
Mar 7 00:53:13.921148 systemd-networkd[1235]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:53:13.926733 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 7 00:53:13.935315 systemd-networkd[1235]: eth1: Link UP
Mar 7 00:53:13.935328 systemd-networkd[1235]: eth1: Gained carrier
Mar 7 00:53:13.935350 systemd-networkd[1235]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:53:13.958117 systemd-networkd[1235]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:53:13.963364 systemd-networkd[1235]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:53:13.970247 kernel: mousedev: PS/2 mouse device common for all mice
Mar 7 00:53:13.975864 systemd-networkd[1235]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Mar 7 00:53:13.996039 systemd-networkd[1235]: eth0: DHCPv4 address 116.202.20.89/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 7 00:53:14.016061 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 00:53:14.031971 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1245)
Mar 7 00:53:14.033448 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 00:53:14.036036 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 00:53:14.046435 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 00:53:14.048399 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 00:53:14.048448 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 00:53:14.052188 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 00:53:14.052419 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 00:53:14.064373 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Mar 7 00:53:14.064485 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Mar 7 00:53:14.064653 kernel: [drm] features: -context_init
Mar 7 00:53:14.065985 kernel: [drm] number of scanouts: 1
Mar 7 00:53:14.066280 kernel: [drm] number of cap sets: 0
Mar 7 00:53:14.067340 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Mar 7 00:53:14.077949 kernel: Console: switching to colour frame buffer device 160x50
Mar 7 00:53:14.105233 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Mar 7 00:53:14.102781 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 00:53:14.103156 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 00:53:14.126437 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 00:53:14.126652 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 00:53:14.141820 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 7 00:53:14.146165 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 00:53:14.146364 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 00:53:14.151104 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 00:53:14.225544 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 00:53:14.276397 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 7 00:53:14.290567 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 7 00:53:14.307969 lvm[1300]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 00:53:14.334315 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 7 00:53:14.336267 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 00:53:14.345236 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 7 00:53:14.351805 lvm[1303]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 00:53:14.382651 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 7 00:53:14.385433 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 00:53:14.387151 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 7 00:53:14.387303 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 00:53:14.388343 systemd[1]: Reached target machines.target - Containers.
Mar 7 00:53:14.391729 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 7 00:53:14.398421 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 7 00:53:14.403628 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 7 00:53:14.404675 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 00:53:14.420779 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 7 00:53:14.426168 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 7 00:53:14.429881 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 7 00:53:14.438758 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 7 00:53:14.447641 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 7 00:53:14.460191 kernel: loop0: detected capacity change from 0 to 8
Mar 7 00:53:14.468976 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 7 00:53:14.479498 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 7 00:53:14.480632 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 7 00:53:14.488343 kernel: loop1: detected capacity change from 0 to 209336
Mar 7 00:53:14.538996 kernel: loop2: detected capacity change from 0 to 114328
Mar 7 00:53:14.570984 kernel: loop3: detected capacity change from 0 to 114432
Mar 7 00:53:14.620093 kernel: loop4: detected capacity change from 0 to 8
Mar 7 00:53:14.626672 kernel: loop5: detected capacity change from 0 to 209336
Mar 7 00:53:14.644327 kernel: loop6: detected capacity change from 0 to 114328
Mar 7 00:53:14.660164 kernel: loop7: detected capacity change from 0 to 114432
Mar 7 00:53:14.670397 (sd-merge)[1325]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Mar 7 00:53:14.670875 (sd-merge)[1325]: Merged extensions into '/usr'.
Mar 7 00:53:14.676645 systemd[1]: Reloading requested from client PID 1311 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 7 00:53:14.676665 systemd[1]: Reloading...
Mar 7 00:53:14.744975 zram_generator::config[1349]: No configuration found.
Mar 7 00:53:14.866211 ldconfig[1307]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 7 00:53:14.910008 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 00:53:14.972215 systemd[1]: Reloading finished in 295 ms.
Mar 7 00:53:14.991507 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 7 00:53:14.992592 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 7 00:53:15.000081 systemd[1]: Starting ensure-sysext.service...
Mar 7 00:53:15.004302 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 00:53:15.009929 systemd[1]: Reloading requested from client PID 1397 ('systemctl') (unit ensure-sysext.service)...
Mar 7 00:53:15.009963 systemd[1]: Reloading...
Mar 7 00:53:15.040749 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 00:53:15.042239 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 00:53:15.043008 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 00:53:15.043227 systemd-tmpfiles[1398]: ACLs are not supported, ignoring.
Mar 7 00:53:15.043284 systemd-tmpfiles[1398]: ACLs are not supported, ignoring.
Mar 7 00:53:15.046808 systemd-tmpfiles[1398]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 00:53:15.046824 systemd-tmpfiles[1398]: Skipping /boot
Mar 7 00:53:15.056902 systemd-tmpfiles[1398]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 00:53:15.056919 systemd-tmpfiles[1398]: Skipping /boot
Mar 7 00:53:15.092963 zram_generator::config[1426]: No configuration found.
Mar 7 00:53:15.128076 systemd-networkd[1235]: eth0: Gained IPv6LL
Mar 7 00:53:15.204981 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 00:53:15.268028 systemd[1]: Reloading finished in 257 ms.
Mar 7 00:53:15.286150 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 7 00:53:15.287613 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 00:53:15.312538 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 00:53:15.323640 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 7 00:53:15.329364 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 7 00:53:15.340317 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 00:53:15.346077 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 7 00:53:15.355919 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 00:53:15.365052 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 00:53:15.377484 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 00:53:15.384740 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 00:53:15.387101 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 00:53:15.393494 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 7 00:53:15.398493 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 00:53:15.398655 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 00:53:15.420859 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 00:53:15.423651 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 00:53:15.425255 augenrules[1504]: No rules
Mar 7 00:53:15.426406 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 00:53:15.429259 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 00:53:15.432330 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 00:53:15.436689 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 7 00:53:15.443340 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 7 00:53:15.456716 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 00:53:15.462117 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 00:53:15.468112 systemd-resolved[1478]: Positive Trust Anchors:
Mar 7 00:53:15.468134 systemd-resolved[1478]: .
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 00:53:15.468169 systemd-resolved[1478]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 00:53:15.470307 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 00:53:15.478677 systemd-resolved[1478]: Using system hostname 'ci-4081-3-6-n-e1f368ffcb'.
Mar 7 00:53:15.479406 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 00:53:15.486126 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 00:53:15.487643 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 00:53:15.489079 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 7 00:53:15.490602 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 7 00:53:15.492668 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 00:53:15.495242 systemd[1]: Finished ensure-sysext.service.
Mar 7 00:53:15.496202 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 00:53:15.497208 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 00:53:15.498812 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 00:53:15.499483 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 00:53:15.502706 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 00:53:15.502853 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 00:53:15.504812 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 00:53:15.505168 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 00:53:15.509822 systemd[1]: Reached target network.target - Network.
Mar 7 00:53:15.511017 systemd[1]: Reached target network-online.target - Network is Online.
Mar 7 00:53:15.511757 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 00:53:15.512870 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 00:53:15.513160 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 00:53:15.521222 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 7 00:53:15.523308 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 7 00:53:15.572434 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 7 00:53:15.576539 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 00:53:15.578104 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 7 00:53:15.578831 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 7 00:53:15.579835 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 7 00:53:15.580844 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 7 00:53:15.580884 systemd[1]: Reached target paths.target - Path Units.
Mar 7 00:53:15.581437 systemd[1]: Reached target time-set.target - System Time Set.
Mar 7 00:53:15.582169 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 7 00:53:15.582823 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 7 00:53:15.583642 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 00:53:15.585195 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 7 00:53:15.587656 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 7 00:53:15.590199 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 7 00:53:15.592576 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 7 00:53:15.593306 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 00:53:15.593893 systemd[1]: Reached target basic.target - Basic System.
Mar 7 00:53:15.594833 systemd[1]: System is tainted: cgroupsv1
Mar 7 00:53:15.594882 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 7 00:53:15.594906 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 7 00:53:15.598098 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 7 00:53:15.605215 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 7 00:53:15.611140 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 7 00:53:15.612886 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 7 00:53:15.622133 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 7 00:53:15.626038 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 7 00:53:15.634012 jq[1543]: false
Mar 7 00:53:15.637237 systemd-timesyncd[1533]: Contacted time server 88.198.49.74:123 (0.flatcar.pool.ntp.org).
Mar 7 00:53:15.637341 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 00:53:15.639730 systemd-timesyncd[1533]: Initial clock synchronization to Sat 2026-03-07 00:53:15.710394 UTC.
Mar 7 00:53:15.646172 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 7 00:53:15.648749 coreos-metadata[1540]: Mar 07 00:53:15.647 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Mar 7 00:53:15.651494 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 7 00:53:15.654993 coreos-metadata[1540]: Mar 07 00:53:15.652 INFO Fetch successful
Mar 7 00:53:15.654993 coreos-metadata[1540]: Mar 07 00:53:15.653 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Mar 7 00:53:15.660292 coreos-metadata[1540]: Mar 07 00:53:15.660 INFO Fetch successful
Mar 7 00:53:15.661672 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 7 00:53:15.668635 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Mar 7 00:53:15.678751 dbus-daemon[1541]: [system] SELinux support is enabled
Mar 7 00:53:15.679100 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 7 00:53:15.690482 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 7 00:53:15.698250 extend-filesystems[1546]: Found loop4
Mar 7 00:53:15.698250 extend-filesystems[1546]: Found loop5
Mar 7 00:53:15.698250 extend-filesystems[1546]: Found loop6
Mar 7 00:53:15.698250 extend-filesystems[1546]: Found loop7
Mar 7 00:53:15.698250 extend-filesystems[1546]: Found sda
Mar 7 00:53:15.698250 extend-filesystems[1546]: Found sda1
Mar 7 00:53:15.698250 extend-filesystems[1546]: Found sda2
Mar 7 00:53:15.698250 extend-filesystems[1546]: Found sda3
Mar 7 00:53:15.698250 extend-filesystems[1546]: Found usr
Mar 7 00:53:15.698250 extend-filesystems[1546]: Found sda4
Mar 7 00:53:15.698250 extend-filesystems[1546]: Found sda6
Mar 7 00:53:15.698250 extend-filesystems[1546]: Found sda7
Mar 7 00:53:15.698250 extend-filesystems[1546]: Found sda9
Mar 7 00:53:15.698250 extend-filesystems[1546]: Checking size of /dev/sda9
Mar 7 00:53:15.701771 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 7 00:53:15.704672 systemd-networkd[1235]: eth1: Gained IPv6LL
Mar 7 00:53:15.705218 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 7 00:53:15.711410 systemd[1]: Starting update-engine.service - Update Engine...
Mar 7 00:53:15.721650 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 7 00:53:15.724781 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 7 00:53:15.750299 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 7 00:53:15.750553 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 7 00:53:15.801007 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Mar 7 00:53:15.779477 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 7 00:53:15.801199 jq[1568]: true
Mar 7 00:53:15.801436 extend-filesystems[1546]: Resized partition /dev/sda9
Mar 7 00:53:15.779709 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 7 00:53:15.825418 extend-filesystems[1589]: resize2fs 1.47.1 (20-May-2024)
Mar 7 00:53:15.806579 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 7 00:53:15.806630 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 7 00:53:15.810213 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 7 00:53:15.810236 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 7 00:53:15.812434 (ntainerd)[1588]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 7 00:53:15.817438 systemd[1]: motdgen.service: Deactivated successfully.
Mar 7 00:53:15.817713 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 7 00:53:15.829738 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 7 00:53:15.859299 tar[1583]: linux-arm64/LICENSE
Mar 7 00:53:15.859299 tar[1583]: linux-arm64/helm
Mar 7 00:53:15.866546 systemd-logind[1563]: New seat seat0.
Mar 7 00:53:15.871149 jq[1600]: true
Mar 7 00:53:15.886251 systemd-logind[1563]: Watching system buttons on /dev/input/event0 (Power Button)
Mar 7 00:53:15.891885 systemd-logind[1563]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Mar 7 00:53:15.893840 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 7 00:53:15.899977 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1240)
Mar 7 00:53:15.932397 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 7 00:53:15.934358 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 7 00:53:15.968478 update_engine[1566]: I20260307 00:53:15.959919 1566 main.cc:92] Flatcar Update Engine starting
Mar 7 00:53:15.974803 systemd[1]: Started update-engine.service - Update Engine.
Mar 7 00:53:15.976857 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 7 00:53:15.985112 update_engine[1566]: I20260307 00:53:15.980543 1566 update_check_scheduler.cc:74] Next update check in 3m5s
Mar 7 00:53:15.987317 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 7 00:53:16.033976 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Mar 7 00:53:16.052178 extend-filesystems[1589]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Mar 7 00:53:16.052178 extend-filesystems[1589]: old_desc_blocks = 1, new_desc_blocks = 5
Mar 7 00:53:16.052178 extend-filesystems[1589]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Mar 7 00:53:16.057033 extend-filesystems[1546]: Resized filesystem in /dev/sda9
Mar 7 00:53:16.057033 extend-filesystems[1546]: Found sr0
Mar 7 00:53:16.062047 bash[1635]: Updated "/home/core/.ssh/authorized_keys"
Mar 7 00:53:16.057840 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 7 00:53:16.058116 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 7 00:53:16.060425 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 7 00:53:16.081455 systemd[1]: Starting sshkeys.service...
Mar 7 00:53:16.098284 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 7 00:53:16.103587 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 7 00:53:16.157501 coreos-metadata[1646]: Mar 07 00:53:16.157 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Mar 7 00:53:16.160109 coreos-metadata[1646]: Mar 07 00:53:16.159 INFO Fetch successful
Mar 7 00:53:16.165350 unknown[1646]: wrote ssh authorized keys file for user: core
Mar 7 00:53:16.214011 update-ssh-keys[1654]: Updated "/home/core/.ssh/authorized_keys"
Mar 7 00:53:16.218257 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 7 00:53:16.230459 systemd[1]: Finished sshkeys.service.
Mar 7 00:53:16.236352 locksmithd[1634]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 7 00:53:16.332473 containerd[1588]: time="2026-03-07T00:53:16.332371942Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 7 00:53:16.423014 containerd[1588]: time="2026-03-07T00:53:16.421350449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 7 00:53:16.426633 containerd[1588]: time="2026-03-07T00:53:16.426584788Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 7 00:53:16.427969 containerd[1588]: time="2026-03-07T00:53:16.426730793Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 7 00:53:16.427969 containerd[1588]: time="2026-03-07T00:53:16.426754649Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 7 00:53:16.427969 containerd[1588]: time="2026-03-07T00:53:16.426916558Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 7 00:53:16.427969 containerd[1588]: time="2026-03-07T00:53:16.426934077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 7 00:53:16.427969 containerd[1588]: time="2026-03-07T00:53:16.427039797Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 00:53:16.427969 containerd[1588]: time="2026-03-07T00:53:16.427054773Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 7 00:53:16.427969 containerd[1588]: time="2026-03-07T00:53:16.427329021Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 00:53:16.427969 containerd[1588]: time="2026-03-07T00:53:16.427346298Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 7 00:53:16.427969 containerd[1588]: time="2026-03-07T00:53:16.427360467Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 00:53:16.427969 containerd[1588]: time="2026-03-07T00:53:16.427371971Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 7 00:53:16.427969 containerd[1588]: time="2026-03-07T00:53:16.427451170Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 7 00:53:16.427969 containerd[1588]: time="2026-03-07T00:53:16.427639801Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 7 00:53:16.428245 containerd[1588]: time="2026-03-07T00:53:16.427761546Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 00:53:16.428245 containerd[1588]: time="2026-03-07T00:53:16.427776926Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 7 00:53:16.428245 containerd[1588]: time="2026-03-07T00:53:16.427866539Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 7 00:53:16.428245 containerd[1588]: time="2026-03-07T00:53:16.427907511Z" level=info msg="metadata content store policy set" policy=shared
Mar 7 00:53:16.435414 containerd[1588]: time="2026-03-07T00:53:16.435381937Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 7 00:53:16.435544 containerd[1588]: time="2026-03-07T00:53:16.435528386Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 7 00:53:16.436049 containerd[1588]: time="2026-03-07T00:53:16.436032682Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 7 00:53:16.436126 containerd[1588]: time="2026-03-07T00:53:16.436112365Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 7 00:53:16.436183 containerd[1588]: time="2026-03-07T00:53:16.436171905Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 7 00:53:16.436400 containerd[1588]: time="2026-03-07T00:53:16.436380599Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 7 00:53:16.437351 containerd[1588]: time="2026-03-07T00:53:16.437326018Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 7 00:53:16.438967 containerd[1588]: time="2026-03-07T00:53:16.438084703Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 7 00:53:16.438967 containerd[1588]: time="2026-03-07T00:53:16.438109689Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 7 00:53:16.438967 containerd[1588]: time="2026-03-07T00:53:16.438123414Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 7 00:53:16.438967 containerd[1588]: time="2026-03-07T00:53:16.438144929Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 7 00:53:16.438967 containerd[1588]: time="2026-03-07T00:53:16.438160188Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 7 00:53:16.438967 containerd[1588]: time="2026-03-07T00:53:16.438185538Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 7 00:53:16.438967 containerd[1588]: time="2026-03-07T00:53:16.438200392Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 7 00:53:16.438967 containerd[1588]: time="2026-03-07T00:53:16.438240557Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 7 00:53:16.438967 containerd[1588]: time="2026-03-07T00:53:16.438256986Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 7 00:53:16.438967 containerd[1588]: time="2026-03-07T00:53:16.438270993Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 7 00:53:16.438967 containerd[1588]: time="2026-03-07T00:53:16.438283547Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 7 00:53:16.438967 containerd[1588]: time="2026-03-07T00:53:16.438304295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 7 00:53:16.438967 containerd[1588]: time="2026-03-07T00:53:16.438317657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 7 00:53:16.438967 containerd[1588]: time="2026-03-07T00:53:16.438330291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 7 00:53:16.439352 containerd[1588]: time="2026-03-07T00:53:16.438344581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 7 00:53:16.439352 containerd[1588]: time="2026-03-07T00:53:16.438358104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 7 00:53:16.439352 containerd[1588]: time="2026-03-07T00:53:16.438372232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 7 00:53:16.439352 containerd[1588]: time="2026-03-07T00:53:16.438384624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 7 00:53:16.439352 containerd[1588]: time="2026-03-07T00:53:16.438398551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 7 00:53:16.439352 containerd[1588]: time="2026-03-07T00:53:16.438413728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 7 00:53:16.439352 containerd[1588]: time="2026-03-07T00:53:16.438428502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 7 00:53:16.439352 containerd[1588]: time="2026-03-07T00:53:16.438442348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 7 00:53:16.439352 containerd[1588]: time="2026-03-07T00:53:16.438455709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 7 00:53:16.439352 containerd[1588]: time="2026-03-07T00:53:16.438468263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 7 00:53:16.439352 containerd[1588]: time="2026-03-07T00:53:16.438483643Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 7 00:53:16.439352 containerd[1588]: time="2026-03-07T00:53:16.438503705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 7 00:53:16.439352 containerd[1588]: time="2026-03-07T00:53:16.438516743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 7 00:53:16.439352 containerd[1588]: time="2026-03-07T00:53:16.438528288Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 7 00:53:16.439591 containerd[1588]: time="2026-03-07T00:53:16.438636429Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 7 00:53:16.439591 containerd[1588]: time="2026-03-07T00:53:16.438654877Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 7 00:53:16.439591 containerd[1588]: time="2026-03-07T00:53:16.438667067Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 7 00:53:16.439591 containerd[1588]: time="2026-03-07T00:53:16.438679944Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 7 00:53:16.439591 containerd[1588]: time="2026-03-07T00:53:16.438689995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 7 00:53:16.439591 containerd[1588]: time="2026-03-07T00:53:16.438703155Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 7 00:53:16.439591 containerd[1588]: time="2026-03-07T00:53:16.438713408Z" level=info msg="NRI interface is disabled by configuration."
Mar 7 00:53:16.439591 containerd[1588]: time="2026-03-07T00:53:16.438723620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 7 00:53:16.442995 containerd[1588]: time="2026-03-07T00:53:16.442128639Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 7 00:53:16.442995 containerd[1588]: time="2026-03-07T00:53:16.442207273Z" level=info msg="Connect containerd service"
Mar 7 00:53:16.442995 containerd[1588]: time="2026-03-07T00:53:16.442396752Z" level=info msg="using legacy CRI server"
Mar 7 00:53:16.442995 containerd[1588]: time="2026-03-07T00:53:16.442410032Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 7 00:53:16.442995 containerd[1588]: time="2026-03-07T00:53:16.442534118Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 7 00:53:16.443581 containerd[1588]: time="2026-03-07T00:53:16.443554215Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 7 00:53:16.449250 containerd[1588]: time="2026-03-07T00:53:16.446566833Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 7 00:53:16.449250 containerd[1588]: time="2026-03-07T00:53:16.446610469Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 7 00:53:16.449250 containerd[1588]: time="2026-03-07T00:53:16.446789171Z" level=info msg="Start subscribing containerd event"
Mar 7 00:53:16.449250 containerd[1588]: time="2026-03-07T00:53:16.446825420Z" level=info msg="Start recovering state"
Mar 7 00:53:16.449250 containerd[1588]: time="2026-03-07T00:53:16.446884839Z" level=info msg="Start event monitor"
Mar 7 00:53:16.449250 containerd[1588]: time="2026-03-07T00:53:16.446894850Z" level=info msg="Start snapshots syncer"
Mar 7 00:53:16.449250 containerd[1588]: time="2026-03-07T00:53:16.446903165Z" level=info msg="Start cni network conf syncer for default"
Mar 7 00:53:16.449250 containerd[1588]: time="2026-03-07T00:53:16.446910229Z" level=info msg="Start streaming server"
Mar 7 00:53:16.449250 containerd[1588]: time="2026-03-07T00:53:16.447875347Z" level=info msg="containerd successfully booted in 0.120009s"
Mar 7 00:53:16.447213 systemd[1]: Started containerd.service - containerd container runtime.
Mar 7 00:53:16.680433 sshd_keygen[1587]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 7 00:53:16.723362 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 7 00:53:16.735892 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 7 00:53:16.747328 systemd[1]: issuegen.service: Deactivated successfully.
Mar 7 00:53:16.747592 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 7 00:53:16.758396 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 7 00:53:16.773439 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 7 00:53:16.782686 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 7 00:53:16.790730 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Mar 7 00:53:16.792498 systemd[1]: Reached target getty.target - Login Prompts.
Mar 7 00:53:16.822688 tar[1583]: linux-arm64/README.md
Mar 7 00:53:16.840407 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 7 00:53:17.014324 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:53:17.016716 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 7 00:53:17.017613 systemd[1]: Startup finished in 6.416s (kernel) + 4.720s (userspace) = 11.136s.
Mar 7 00:53:17.024642 (kubelet)[1698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 00:53:17.601272 kubelet[1698]: E0307 00:53:17.601208 1698 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 00:53:17.607249 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 00:53:17.607610 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 00:53:24.246874 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 7 00:53:24.255417 systemd[1]: Started sshd@0-116.202.20.89:22-162.19.153.243:35670.service - OpenSSH per-connection server daemon (162.19.153.243:35670).
Mar 7 00:53:24.396775 sshd[1710]: Invalid user ll from 162.19.153.243 port 35670
Mar 7 00:53:24.411843 sshd[1710]: Received disconnect from 162.19.153.243 port 35670:11: Bye Bye [preauth]
Mar 7 00:53:24.411843 sshd[1710]: Disconnected from invalid user ll 162.19.153.243 port 35670 [preauth]
Mar 7 00:53:24.413444 systemd[1]: sshd@0-116.202.20.89:22-162.19.153.243:35670.service: Deactivated successfully.
Mar 7 00:53:27.858581 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 7 00:53:27.867421 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 00:53:27.992275 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:53:27.998919 (kubelet)[1727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 00:53:28.050817 kubelet[1727]: E0307 00:53:28.050749 1727 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 00:53:28.055122 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 00:53:28.055642 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 00:53:35.086861 systemd[1]: Started sshd@1-116.202.20.89:22-81.192.46.49:35106.service - OpenSSH per-connection server daemon (81.192.46.49:35106).
Mar 7 00:53:35.454926 sshd[1736]: Invalid user camera from 81.192.46.49 port 35106
Mar 7 00:53:35.512022 sshd[1736]: Received disconnect from 81.192.46.49 port 35106:11: Bye Bye [preauth]
Mar 7 00:53:35.512022 sshd[1736]: Disconnected from invalid user camera 81.192.46.49 port 35106 [preauth]
Mar 7 00:53:35.515395 systemd[1]: sshd@1-116.202.20.89:22-81.192.46.49:35106.service: Deactivated successfully.
Mar 7 00:53:38.306392 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 7 00:53:38.315546 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 00:53:38.454256 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:53:38.457928 (kubelet)[1754]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 00:53:38.507058 kubelet[1754]: E0307 00:53:38.506982 1754 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 00:53:38.512217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 00:53:38.512463 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 00:53:48.699232 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 7 00:53:48.709301 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 00:53:48.847139 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:53:48.866633 (kubelet)[1774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 00:53:48.910417 kubelet[1774]: E0307 00:53:48.910363 1774 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 00:53:48.913018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 00:53:48.913167 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 00:53:49.850695 systemd[1]: Started sshd@2-116.202.20.89:22-20.161.92.111:58948.service - OpenSSH per-connection server daemon (20.161.92.111:58948).
Mar 7 00:53:50.440976 sshd[1782]: Accepted publickey for core from 20.161.92.111 port 58948 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:53:50.443299 sshd[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:53:50.452343 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 7 00:53:50.461723 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 7 00:53:50.466775 systemd-logind[1563]: New session 1 of user core.
Mar 7 00:53:50.478433 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 7 00:53:50.486463 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 7 00:53:50.491134 (systemd)[1788]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 7 00:53:50.600934 systemd[1788]: Queued start job for default target default.target.
Mar 7 00:53:50.601714 systemd[1788]: Created slice app.slice - User Application Slice.
Mar 7 00:53:50.601750 systemd[1788]: Reached target paths.target - Paths.
Mar 7 00:53:50.601762 systemd[1788]: Reached target timers.target - Timers.
Mar 7 00:53:50.606147 systemd[1788]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 7 00:53:50.625669 systemd[1788]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 7 00:53:50.625730 systemd[1788]: Reached target sockets.target - Sockets.
Mar 7 00:53:50.625742 systemd[1788]: Reached target basic.target - Basic System.
Mar 7 00:53:50.625787 systemd[1788]: Reached target default.target - Main User Target.
Mar 7 00:53:50.625813 systemd[1788]: Startup finished in 127ms.
Mar 7 00:53:50.626066 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 7 00:53:50.643553 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 7 00:53:51.072461 systemd[1]: Started sshd@3-116.202.20.89:22-20.161.92.111:34430.service - OpenSSH per-connection server daemon (20.161.92.111:34430).
Mar 7 00:53:51.659686 sshd[1800]: Accepted publickey for core from 20.161.92.111 port 34430 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:53:51.662107 sshd[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:53:51.667529 systemd-logind[1563]: New session 2 of user core.
Mar 7 00:53:51.673231 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 7 00:53:52.079089 sshd[1800]: pam_unix(sshd:session): session closed for user core
Mar 7 00:53:52.086182 systemd[1]: sshd@3-116.202.20.89:22-20.161.92.111:34430.service: Deactivated successfully.
Mar 7 00:53:52.091625 systemd[1]: session-2.scope: Deactivated successfully.
Mar 7 00:53:52.092735 systemd-logind[1563]: Session 2 logged out. Waiting for processes to exit.
Mar 7 00:53:52.093854 systemd-logind[1563]: Removed session 2.
Mar 7 00:53:52.179906 systemd[1]: Started sshd@4-116.202.20.89:22-20.161.92.111:34438.service - OpenSSH per-connection server daemon (20.161.92.111:34438).
Mar 7 00:53:52.769001 sshd[1808]: Accepted publickey for core from 20.161.92.111 port 34438 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:53:52.770713 sshd[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:53:52.777424 systemd-logind[1563]: New session 3 of user core.
Mar 7 00:53:52.788504 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 7 00:53:53.182325 sshd[1808]: pam_unix(sshd:session): session closed for user core
Mar 7 00:53:53.186850 systemd[1]: sshd@4-116.202.20.89:22-20.161.92.111:34438.service: Deactivated successfully.
Mar 7 00:53:53.190095 systemd-logind[1563]: Session 3 logged out. Waiting for processes to exit.
Mar 7 00:53:53.192175 systemd[1]: session-3.scope: Deactivated successfully.
Mar 7 00:53:53.193677 systemd-logind[1563]: Removed session 3.
Mar 7 00:53:53.286390 systemd[1]: Started sshd@5-116.202.20.89:22-20.161.92.111:34444.service - OpenSSH per-connection server daemon (20.161.92.111:34444).
Mar 7 00:53:53.874380 sshd[1816]: Accepted publickey for core from 20.161.92.111 port 34444 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:53:53.875619 sshd[1816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:53:53.882680 systemd-logind[1563]: New session 4 of user core.
Mar 7 00:53:53.888559 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 7 00:53:54.293893 sshd[1816]: pam_unix(sshd:session): session closed for user core
Mar 7 00:53:54.302172 systemd-logind[1563]: Session 4 logged out. Waiting for processes to exit.
Mar 7 00:53:54.302456 systemd[1]: sshd@5-116.202.20.89:22-20.161.92.111:34444.service: Deactivated successfully.
Mar 7 00:53:54.306399 systemd[1]: session-4.scope: Deactivated successfully.
Mar 7 00:53:54.307571 systemd-logind[1563]: Removed session 4.
Mar 7 00:53:54.394515 systemd[1]: Started sshd@6-116.202.20.89:22-20.161.92.111:34448.service - OpenSSH per-connection server daemon (20.161.92.111:34448).
Mar 7 00:53:54.990991 sshd[1824]: Accepted publickey for core from 20.161.92.111 port 34448 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:53:54.993096 sshd[1824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:53:54.998521 systemd-logind[1563]: New session 5 of user core.
Mar 7 00:53:55.005398 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 7 00:53:55.324638 sudo[1828]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 7 00:53:55.325162 sudo[1828]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 00:53:55.343506 sudo[1828]: pam_unix(sudo:session): session closed for user root
Mar 7 00:53:55.439457 sshd[1824]: pam_unix(sshd:session): session closed for user core
Mar 7 00:53:55.446135 systemd-logind[1563]: Session 5 logged out. Waiting for processes to exit.
Mar 7 00:53:55.447309 systemd[1]: sshd@6-116.202.20.89:22-20.161.92.111:34448.service: Deactivated successfully.
Mar 7 00:53:55.450575 systemd[1]: session-5.scope: Deactivated successfully.
Mar 7 00:53:55.452176 systemd-logind[1563]: Removed session 5.
Mar 7 00:53:55.542426 systemd[1]: Started sshd@7-116.202.20.89:22-20.161.92.111:34462.service - OpenSSH per-connection server daemon (20.161.92.111:34462).
Mar 7 00:53:56.128286 sshd[1833]: Accepted publickey for core from 20.161.92.111 port 34462 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:53:56.130959 sshd[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:53:56.137037 systemd-logind[1563]: New session 6 of user core.
Mar 7 00:53:56.143532 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 7 00:53:56.455521 sudo[1838]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 7 00:53:56.455920 sudo[1838]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 00:53:56.461065 sudo[1838]: pam_unix(sudo:session): session closed for user root Mar 7 00:53:56.467873 sudo[1837]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 7 00:53:56.468295 sudo[1837]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 00:53:56.490325 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 7 00:53:56.492531 auditctl[1841]: No rules Mar 7 00:53:56.493172 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 00:53:56.493480 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 7 00:53:56.498604 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 00:53:56.541876 augenrules[1860]: No rules Mar 7 00:53:56.542867 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 00:53:56.546547 sudo[1837]: pam_unix(sudo:session): session closed for user root Mar 7 00:53:56.641552 sshd[1833]: pam_unix(sshd:session): session closed for user core Mar 7 00:53:56.645847 systemd-logind[1563]: Session 6 logged out. Waiting for processes to exit. Mar 7 00:53:56.649559 systemd[1]: sshd@7-116.202.20.89:22-20.161.92.111:34462.service: Deactivated successfully. Mar 7 00:53:56.652497 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 00:53:56.653657 systemd-logind[1563]: Removed session 6. Mar 7 00:53:56.746465 systemd[1]: Started sshd@8-116.202.20.89:22-20.161.92.111:34464.service - OpenSSH per-connection server daemon (20.161.92.111:34464). 
Mar 7 00:53:57.335546 sshd[1869]: Accepted publickey for core from 20.161.92.111 port 34464 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:53:57.337841 sshd[1869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:53:57.343209 systemd-logind[1563]: New session 7 of user core. Mar 7 00:53:57.351608 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 00:53:57.659783 sudo[1873]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 00:53:57.660096 sudo[1873]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 00:53:57.966430 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 7 00:53:57.967599 (dockerd)[1889]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 00:53:58.221500 dockerd[1889]: time="2026-03-07T00:53:58.221343495Z" level=info msg="Starting up" Mar 7 00:53:58.296060 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2674212425-merged.mount: Deactivated successfully. Mar 7 00:53:58.318510 dockerd[1889]: time="2026-03-07T00:53:58.318175043Z" level=info msg="Loading containers: start." Mar 7 00:53:58.431970 kernel: Initializing XFRM netlink socket Mar 7 00:53:58.527742 systemd-networkd[1235]: docker0: Link UP Mar 7 00:53:58.550026 dockerd[1889]: time="2026-03-07T00:53:58.549922060Z" level=info msg="Loading containers: done." 
Mar 7 00:53:58.573846 dockerd[1889]: time="2026-03-07T00:53:58.573762484Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 00:53:58.574107 dockerd[1889]: time="2026-03-07T00:53:58.573958021Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 7 00:53:58.574197 dockerd[1889]: time="2026-03-07T00:53:58.574162359Z" level=info msg="Daemon has completed initialization" Mar 7 00:53:58.610714 dockerd[1889]: time="2026-03-07T00:53:58.610492326Z" level=info msg="API listen on /run/docker.sock" Mar 7 00:53:58.611137 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 7 00:53:58.948498 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 7 00:53:58.958210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:53:59.097416 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 00:53:59.106436 (kubelet)[2038]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 00:53:59.144778 containerd[1588]: time="2026-03-07T00:53:59.143668795Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 7 00:53:59.169384 kubelet[2038]: E0307 00:53:59.169333 2038 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 00:53:59.174092 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 00:53:59.174279 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 00:53:59.715074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2743840615.mount: Deactivated successfully. 
Mar 7 00:54:00.655194 containerd[1588]: time="2026-03-07T00:54:00.655106188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:00.658891 containerd[1588]: time="2026-03-07T00:54:00.658035861Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=27390272" Mar 7 00:54:00.658891 containerd[1588]: time="2026-03-07T00:54:00.658398570Z" level=info msg="ImageCreate event name:\"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:00.662962 containerd[1588]: time="2026-03-07T00:54:00.662414530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:00.663751 containerd[1588]: time="2026-03-07T00:54:00.663712233Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"27386773\" in 1.519976153s" Mar 7 00:54:00.663818 containerd[1588]: time="2026-03-07T00:54:00.663752037Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\"" Mar 7 00:54:00.665616 containerd[1588]: time="2026-03-07T00:54:00.665585583Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 7 00:54:00.884082 update_engine[1566]: I20260307 00:54:00.883993 1566 update_attempter.cc:509] Updating boot flags... 
Mar 7 00:54:00.926986 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2112) Mar 7 00:54:01.737560 containerd[1588]: time="2026-03-07T00:54:01.737486706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:01.739131 containerd[1588]: time="2026-03-07T00:54:01.738869171Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=23552126" Mar 7 00:54:01.740970 containerd[1588]: time="2026-03-07T00:54:01.740013818Z" level=info msg="ImageCreate event name:\"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:01.744059 containerd[1588]: time="2026-03-07T00:54:01.743986958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:01.745961 containerd[1588]: time="2026-03-07T00:54:01.745193610Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"25136510\" in 1.07926828s" Mar 7 00:54:01.745961 containerd[1588]: time="2026-03-07T00:54:01.745236493Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\"" Mar 7 00:54:01.746379 containerd[1588]: time="2026-03-07T00:54:01.746349857Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 7 00:54:02.667416 containerd[1588]: time="2026-03-07T00:54:02.666570098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:02.668261 containerd[1588]: time="2026-03-07T00:54:02.668218057Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=18301325" Mar 7 00:54:02.669970 containerd[1588]: time="2026-03-07T00:54:02.669101001Z" level=info msg="ImageCreate event name:\"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:02.672843 containerd[1588]: time="2026-03-07T00:54:02.672762665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:02.674188 containerd[1588]: time="2026-03-07T00:54:02.674009954Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"19885727\" in 927.618454ms" Mar 7 00:54:02.674188 containerd[1588]: time="2026-03-07T00:54:02.674052438Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\"" Mar 7 00:54:02.674868 containerd[1588]: time="2026-03-07T00:54:02.674661961Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 7 00:54:03.557924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1917695874.mount: Deactivated successfully.
Mar 7 00:54:03.906391 containerd[1588]: time="2026-03-07T00:54:03.906230387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:03.907559 containerd[1588]: time="2026-03-07T00:54:03.907494113Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=28148896" Mar 7 00:54:03.908465 containerd[1588]: time="2026-03-07T00:54:03.908382134Z" level=info msg="ImageCreate event name:\"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:03.912023 containerd[1588]: time="2026-03-07T00:54:03.911235650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:03.912596 containerd[1588]: time="2026-03-07T00:54:03.912248600Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"28147889\" in 1.237550916s" Mar 7 00:54:03.912596 containerd[1588]: time="2026-03-07T00:54:03.912290283Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\"" Mar 7 00:54:03.913106 containerd[1588]: time="2026-03-07T00:54:03.912872202Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 7 00:54:04.411014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount720022837.mount: Deactivated successfully. 
Mar 7 00:54:05.128610 containerd[1588]: time="2026-03-07T00:54:05.128511232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:05.130876 containerd[1588]: time="2026-03-07T00:54:05.130797054Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209" Mar 7 00:54:05.132239 containerd[1588]: time="2026-03-07T00:54:05.132189541Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:05.136101 containerd[1588]: time="2026-03-07T00:54:05.136014620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:05.138959 containerd[1588]: time="2026-03-07T00:54:05.138670145Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.2257487s" Mar 7 00:54:05.138959 containerd[1588]: time="2026-03-07T00:54:05.138743670Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Mar 7 00:54:05.139298 containerd[1588]: time="2026-03-07T00:54:05.139234220Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 7 00:54:05.653362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3623134688.mount: Deactivated successfully. 
Mar 7 00:54:05.661209 containerd[1588]: time="2026-03-07T00:54:05.661152369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:05.662511 containerd[1588]: time="2026-03-07T00:54:05.662472971Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Mar 7 00:54:05.663913 containerd[1588]: time="2026-03-07T00:54:05.662961921Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:05.665813 containerd[1588]: time="2026-03-07T00:54:05.665759576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:05.666888 containerd[1588]: time="2026-03-07T00:54:05.666583467Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 527.310925ms" Mar 7 00:54:05.666888 containerd[1588]: time="2026-03-07T00:54:05.666618349Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Mar 7 00:54:05.667561 containerd[1588]: time="2026-03-07T00:54:05.667538887Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 7 00:54:06.211416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2948006487.mount: Deactivated successfully. 
Mar 7 00:54:06.992131 containerd[1588]: time="2026-03-07T00:54:06.992054437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:06.993547 containerd[1588]: time="2026-03-07T00:54:06.993341353Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885878" Mar 7 00:54:06.995167 containerd[1588]: time="2026-03-07T00:54:06.994773039Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:06.998059 containerd[1588]: time="2026-03-07T00:54:06.997958948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:06.999663 containerd[1588]: time="2026-03-07T00:54:06.999281587Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 1.331616092s" Mar 7 00:54:06.999663 containerd[1588]: time="2026-03-07T00:54:06.999323589Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\"" Mar 7 00:54:09.198326 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 7 00:54:09.207282 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:54:09.352253 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 00:54:09.357331 (kubelet)[2286]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 00:54:09.398951 kubelet[2286]: E0307 00:54:09.397426 2286 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 00:54:09.400314 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 00:54:09.400483 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 00:54:11.059578 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:54:11.068380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:54:11.102649 systemd[1]: Reloading requested from client PID 2302 ('systemctl') (unit session-7.scope)... Mar 7 00:54:11.102672 systemd[1]: Reloading... Mar 7 00:54:11.221990 zram_generator::config[2343]: No configuration found. Mar 7 00:54:11.323471 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 00:54:11.393761 systemd[1]: Reloading finished in 290 ms. Mar 7 00:54:11.448267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:54:11.451585 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:54:11.452796 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 00:54:11.453169 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:54:11.461317 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 7 00:54:11.601228 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:54:11.613492 (kubelet)[2406]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 00:54:11.657292 kubelet[2406]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 00:54:11.657292 kubelet[2406]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 00:54:11.657292 kubelet[2406]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 00:54:11.657807 kubelet[2406]: I0307 00:54:11.657364 2406 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 00:54:12.571529 kubelet[2406]: I0307 00:54:12.571475 2406 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 7 00:54:12.571666 kubelet[2406]: I0307 00:54:12.571552 2406 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 00:54:12.571849 kubelet[2406]: I0307 00:54:12.571813 2406 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 00:54:12.599447 kubelet[2406]: I0307 00:54:12.599414 2406 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 00:54:12.602671 kubelet[2406]: E0307 00:54:12.601306 2406 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://116.202.20.89:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 116.202.20.89:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 7 00:54:12.610778 kubelet[2406]: E0307 00:54:12.610693 2406 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 00:54:12.610778 kubelet[2406]: I0307 00:54:12.610765 2406 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 7 00:54:12.614353 kubelet[2406]: I0307 00:54:12.614326 2406 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 7 00:54:12.618351 kubelet[2406]: I0307 00:54:12.618240 2406 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 00:54:12.618671 kubelet[2406]: I0307 00:54:12.618337 2406 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-e1f368ffcb","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Mar 7 00:54:12.618772 kubelet[2406]: I0307 00:54:12.618680 2406 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 00:54:12.618772 kubelet[2406]: I0307 00:54:12.618701 2406 container_manager_linux.go:303] "Creating device plugin manager" Mar 7 00:54:12.619172 kubelet[2406]: I0307 00:54:12.619116 2406 state_mem.go:36] "Initialized new in-memory state store" Mar 7 00:54:12.623797 kubelet[2406]: I0307 00:54:12.623745 2406 kubelet.go:480] "Attempting to sync node with API server"
Mar 7 00:54:12.623797 kubelet[2406]: I0307 00:54:12.623786 2406 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 00:54:12.625559 kubelet[2406]: I0307 00:54:12.624817 2406 kubelet.go:386] "Adding apiserver pod source" Mar 7 00:54:12.625559 kubelet[2406]: I0307 00:54:12.624856 2406 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 00:54:12.632666 kubelet[2406]: E0307 00:54:12.632576 2406 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://116.202.20.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-e1f368ffcb&limit=500&resourceVersion=0\": dial tcp 116.202.20.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 00:54:12.634258 kubelet[2406]: E0307 00:54:12.634216 2406 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://116.202.20.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 116.202.20.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 00:54:12.634510 kubelet[2406]: I0307 00:54:12.634494 2406 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 00:54:12.635329 kubelet[2406]: I0307 00:54:12.635309 2406 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 00:54:12.635537 kubelet[2406]: W0307 00:54:12.635526 2406 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 7 00:54:12.638158 kubelet[2406]: I0307 00:54:12.638120 2406 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 7 00:54:12.638246 kubelet[2406]: I0307 00:54:12.638187 2406 server.go:1289] "Started kubelet" Mar 7 00:54:12.638969 kubelet[2406]: I0307 00:54:12.638309 2406 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 00:54:12.639378 kubelet[2406]: I0307 00:54:12.639357 2406 server.go:317] "Adding debug handlers to kubelet server" Mar 7 00:54:12.640534 kubelet[2406]: I0307 00:54:12.640467 2406 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 00:54:12.640829 kubelet[2406]: I0307 00:54:12.640802 2406 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 00:54:12.642393 kubelet[2406]: E0307 00:54:12.640931 2406 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://116.202.20.89:6443/api/v1/namespaces/default/events\": dial tcp 116.202.20.89:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-e1f368ffcb.189a69002360710f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-e1f368ffcb,UID:ci-4081-3-6-n-e1f368ffcb,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-e1f368ffcb,},FirstTimestamp:2026-03-07 00:54:12.638150927 +0000 UTC m=+1.020023647,LastTimestamp:2026-03-07 00:54:12.638150927 +0000 UTC m=+1.020023647,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-e1f368ffcb,}" Mar 7 00:54:12.644484 kubelet[2406]: I0307 00:54:12.644106 2406 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 00:54:12.645752 kubelet[2406]: I0307 00:54:12.645703 2406 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 00:54:12.647552 kubelet[2406]: I0307 00:54:12.647269 2406 factory.go:223] Registration of the systemd container factory successfully Mar 7 00:54:12.651726 kubelet[2406]: I0307 00:54:12.651698 2406 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 7 00:54:12.655339 kubelet[2406]: I0307 00:54:12.651854 2406 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 7 00:54:12.655913 kubelet[2406]: E0307 00:54:12.651898 2406 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-e1f368ffcb\" not found" Mar 7 00:54:12.655913 kubelet[2406]: I0307 00:54:12.654772 2406 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 00:54:12.657910 kubelet[2406]: I0307 00:54:12.657709 2406 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 7 00:54:12.658366 kubelet[2406]: I0307 00:54:12.658350 2406 reconciler.go:26] "Reconciler: start to sync state" Mar 7 00:54:12.658844 kubelet[2406]: E0307 00:54:12.658823 2406 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://116.202.20.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 116.202.20.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 00:54:12.659027 kubelet[2406]: E0307 00:54:12.659004 2406 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://116.202.20.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-e1f368ffcb?timeout=10s\": dial tcp 116.202.20.89:6443: connect: connection refused" interval="200ms" Mar 7 00:54:12.660766 kubelet[2406]: E0307 00:54:12.660744 2406 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 00:54:12.661131 kubelet[2406]: I0307 00:54:12.661117 2406 factory.go:223] Registration of the containerd container factory successfully Mar 7 00:54:12.676517 kubelet[2406]: I0307 00:54:12.676452 2406 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 7 00:54:12.676517 kubelet[2406]: I0307 00:54:12.676500 2406 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 7 00:54:12.676517 kubelet[2406]: I0307 00:54:12.676523 2406 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 7 00:54:12.676689 kubelet[2406]: I0307 00:54:12.676531 2406 kubelet.go:2436] "Starting kubelet main sync loop" Mar 7 00:54:12.676689 kubelet[2406]: E0307 00:54:12.676580 2406 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 00:54:12.683153 kubelet[2406]: E0307 00:54:12.683086 2406 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://116.202.20.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 116.202.20.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 00:54:12.700289 kubelet[2406]: I0307 00:54:12.700247 2406 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 00:54:12.700289 kubelet[2406]: I0307 00:54:12.700268 2406 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 00:54:12.700289 kubelet[2406]: I0307 00:54:12.700289 2406 state_mem.go:36] "Initialized new in-memory state store" Mar 7 00:54:12.702172 kubelet[2406]: I0307 00:54:12.702131 2406 policy_none.go:49] "None policy: Start" Mar 7 00:54:12.702241 kubelet[2406]: I0307 00:54:12.702180 2406 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 7 00:54:12.702241 kubelet[2406]: I0307 00:54:12.702207 2406 state_mem.go:35] "Initializing new in-memory state store" Mar 7 00:54:12.706813 kubelet[2406]: E0307 00:54:12.706774 2406 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 00:54:12.707040 kubelet[2406]: I0307 00:54:12.707023 2406 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 00:54:12.707080 kubelet[2406]: I0307 00:54:12.707043 2406 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 00:54:12.708665 kubelet[2406]: I0307 
00:54:12.708630 2406 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 00:54:12.711015 kubelet[2406]: E0307 00:54:12.710900 2406 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 00:54:12.711015 kubelet[2406]: E0307 00:54:12.710984 2406 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-e1f368ffcb\" not found" Mar 7 00:54:12.749261 systemd[1]: Started sshd@9-116.202.20.89:22-202.4.106.201:38194.service - OpenSSH per-connection server daemon (202.4.106.201:38194). Mar 7 00:54:12.786722 kubelet[2406]: E0307 00:54:12.786694 2406 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-e1f368ffcb\" not found" node="ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:12.794454 kubelet[2406]: E0307 00:54:12.794046 2406 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-e1f368ffcb\" not found" node="ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:12.798001 kubelet[2406]: E0307 00:54:12.797232 2406 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-e1f368ffcb\" not found" node="ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:12.813268 kubelet[2406]: I0307 00:54:12.813142 2406 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:12.813755 kubelet[2406]: E0307 00:54:12.813663 2406 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://116.202.20.89:6443/api/v1/nodes\": dial tcp 116.202.20.89:6443: connect: connection refused" node="ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:12.861043 kubelet[2406]: I0307 00:54:12.859698 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/24f3533cadceea8e6fdb92c40d5cd41c-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-e1f368ffcb\" (UID: \"24f3533cadceea8e6fdb92c40d5cd41c\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:12.861043 kubelet[2406]: I0307 00:54:12.859774 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/24f3533cadceea8e6fdb92c40d5cd41c-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-e1f368ffcb\" (UID: \"24f3533cadceea8e6fdb92c40d5cd41c\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:12.861043 kubelet[2406]: I0307 00:54:12.859810 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/24f3533cadceea8e6fdb92c40d5cd41c-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-e1f368ffcb\" (UID: \"24f3533cadceea8e6fdb92c40d5cd41c\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:12.861043 kubelet[2406]: I0307 00:54:12.859868 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/24f3533cadceea8e6fdb92c40d5cd41c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-e1f368ffcb\" (UID: \"24f3533cadceea8e6fdb92c40d5cd41c\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:12.861043 kubelet[2406]: I0307 00:54:12.859902 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c2a0d1ef19cc363114efc9ed6a299e75-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-e1f368ffcb\" (UID: \"c2a0d1ef19cc363114efc9ed6a299e75\") " 
pod="kube-system/kube-scheduler-ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:12.861390 kubelet[2406]: I0307 00:54:12.859959 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33d4df733b6ec815f73a0809206328fa-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-e1f368ffcb\" (UID: \"33d4df733b6ec815f73a0809206328fa\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:12.861390 kubelet[2406]: E0307 00:54:12.859984 2406 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://116.202.20.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-e1f368ffcb?timeout=10s\": dial tcp 116.202.20.89:6443: connect: connection refused" interval="400ms" Mar 7 00:54:12.861390 kubelet[2406]: I0307 00:54:12.860005 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33d4df733b6ec815f73a0809206328fa-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-e1f368ffcb\" (UID: \"33d4df733b6ec815f73a0809206328fa\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:12.861390 kubelet[2406]: I0307 00:54:12.860059 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33d4df733b6ec815f73a0809206328fa-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-e1f368ffcb\" (UID: \"33d4df733b6ec815f73a0809206328fa\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:12.861390 kubelet[2406]: I0307 00:54:12.860090 2406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/24f3533cadceea8e6fdb92c40d5cd41c-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-e1f368ffcb\" (UID: 
\"24f3533cadceea8e6fdb92c40d5cd41c\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:13.017304 kubelet[2406]: I0307 00:54:13.017244 2406 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:13.017763 kubelet[2406]: E0307 00:54:13.017728 2406 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://116.202.20.89:6443/api/v1/nodes\": dial tcp 116.202.20.89:6443: connect: connection refused" node="ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:13.090972 containerd[1588]: time="2026-03-07T00:54:13.089411949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-e1f368ffcb,Uid:33d4df733b6ec815f73a0809206328fa,Namespace:kube-system,Attempt:0,}" Mar 7 00:54:13.095093 containerd[1588]: time="2026-03-07T00:54:13.095058118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-e1f368ffcb,Uid:24f3533cadceea8e6fdb92c40d5cd41c,Namespace:kube-system,Attempt:0,}" Mar 7 00:54:13.098122 containerd[1588]: time="2026-03-07T00:54:13.098063330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-e1f368ffcb,Uid:c2a0d1ef19cc363114efc9ed6a299e75,Namespace:kube-system,Attempt:0,}" Mar 7 00:54:13.260736 kubelet[2406]: E0307 00:54:13.260669 2406 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://116.202.20.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-e1f368ffcb?timeout=10s\": dial tcp 116.202.20.89:6443: connect: connection refused" interval="800ms" Mar 7 00:54:13.421129 kubelet[2406]: I0307 00:54:13.420672 2406 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:13.421129 kubelet[2406]: E0307 00:54:13.421069 2406 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://116.202.20.89:6443/api/v1/nodes\": dial tcp 
116.202.20.89:6443: connect: connection refused" node="ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:13.542885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2945097650.mount: Deactivated successfully. Mar 7 00:54:13.549098 containerd[1588]: time="2026-03-07T00:54:13.549053603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:54:13.550494 containerd[1588]: time="2026-03-07T00:54:13.550362981Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Mar 7 00:54:13.554344 containerd[1588]: time="2026-03-07T00:54:13.553961700Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:54:13.556468 containerd[1588]: time="2026-03-07T00:54:13.555446685Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:54:13.556468 containerd[1588]: time="2026-03-07T00:54:13.556390207Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 00:54:13.557556 containerd[1588]: time="2026-03-07T00:54:13.557522497Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:54:13.558341 containerd[1588]: time="2026-03-07T00:54:13.558309611Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 00:54:13.560224 containerd[1588]: time="2026-03-07T00:54:13.560157253Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:54:13.561039 containerd[1588]: time="2026-03-07T00:54:13.561009570Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 465.871089ms" Mar 7 00:54:13.565867 containerd[1588]: time="2026-03-07T00:54:13.565761060Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 476.240146ms" Mar 7 00:54:13.570342 containerd[1588]: time="2026-03-07T00:54:13.570308780Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 472.170927ms" Mar 7 00:54:13.689117 containerd[1588]: time="2026-03-07T00:54:13.689011411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:13.689626 containerd[1588]: time="2026-03-07T00:54:13.689486032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:13.689626 containerd[1588]: time="2026-03-07T00:54:13.689561475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:13.690860 containerd[1588]: time="2026-03-07T00:54:13.690683364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:13.690860 containerd[1588]: time="2026-03-07T00:54:13.690829571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:13.691111 containerd[1588]: time="2026-03-07T00:54:13.690858812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:13.693965 containerd[1588]: time="2026-03-07T00:54:13.693380883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:13.693965 containerd[1588]: time="2026-03-07T00:54:13.693099951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:13.699158 containerd[1588]: time="2026-03-07T00:54:13.698891566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:13.699158 containerd[1588]: time="2026-03-07T00:54:13.698986330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:13.699158 containerd[1588]: time="2026-03-07T00:54:13.699001611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:13.699158 containerd[1588]: time="2026-03-07T00:54:13.699103415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:13.742191 kubelet[2406]: E0307 00:54:13.741552 2406 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://116.202.20.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-e1f368ffcb&limit=500&resourceVersion=0\": dial tcp 116.202.20.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 00:54:13.748357 kubelet[2406]: E0307 00:54:13.748304 2406 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://116.202.20.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 116.202.20.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 00:54:13.779794 containerd[1588]: time="2026-03-07T00:54:13.779754969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-e1f368ffcb,Uid:33d4df733b6ec815f73a0809206328fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"47ccb09d38441283ac12135cd79a3f3a9ae275482205fb196914e627cc3eadc3\"" Mar 7 00:54:13.786648 kubelet[2406]: E0307 00:54:13.786257 2406 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://116.202.20.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 116.202.20.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 00:54:13.788264 containerd[1588]: time="2026-03-07T00:54:13.788219702Z" level=info msg="CreateContainer within sandbox 
\"47ccb09d38441283ac12135cd79a3f3a9ae275482205fb196914e627cc3eadc3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 00:54:13.788880 containerd[1588]: time="2026-03-07T00:54:13.788850170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-e1f368ffcb,Uid:24f3533cadceea8e6fdb92c40d5cd41c,Namespace:kube-system,Attempt:0,} returns sandbox id \"db289c3c49bd280945f2d2c9688964a1d2ab151ca3beb94234dc54c9b2664f24\"" Mar 7 00:54:13.792654 containerd[1588]: time="2026-03-07T00:54:13.792625897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-e1f368ffcb,Uid:c2a0d1ef19cc363114efc9ed6a299e75,Namespace:kube-system,Attempt:0,} returns sandbox id \"331d3b10867cd8b2692003f634f0156ba78b3cba57b165580092c4b31cf07fe1\"" Mar 7 00:54:13.797260 containerd[1588]: time="2026-03-07T00:54:13.797043371Z" level=info msg="CreateContainer within sandbox \"db289c3c49bd280945f2d2c9688964a1d2ab151ca3beb94234dc54c9b2664f24\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 00:54:13.799710 containerd[1588]: time="2026-03-07T00:54:13.799363713Z" level=info msg="CreateContainer within sandbox \"331d3b10867cd8b2692003f634f0156ba78b3cba57b165580092c4b31cf07fe1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 00:54:13.810511 containerd[1588]: time="2026-03-07T00:54:13.810426481Z" level=info msg="CreateContainer within sandbox \"47ccb09d38441283ac12135cd79a3f3a9ae275482205fb196914e627cc3eadc3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2ce888059441ef46b2b5e8710907d5654992417655bb4c99af9b5549974c669f\"" Mar 7 00:54:13.812287 containerd[1588]: time="2026-03-07T00:54:13.812135756Z" level=info msg="StartContainer for \"2ce888059441ef46b2b5e8710907d5654992417655bb4c99af9b5549974c669f\"" Mar 7 00:54:13.825289 containerd[1588]: time="2026-03-07T00:54:13.825088807Z" level=info msg="CreateContainer within sandbox 
\"331d3b10867cd8b2692003f634f0156ba78b3cba57b165580092c4b31cf07fe1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5d6a1a93a8e6056e3048b8ba034e984e77701ab9bd882013d9708cce500c89de\"" Mar 7 00:54:13.825988 containerd[1588]: time="2026-03-07T00:54:13.825631591Z" level=info msg="StartContainer for \"5d6a1a93a8e6056e3048b8ba034e984e77701ab9bd882013d9708cce500c89de\"" Mar 7 00:54:13.827068 containerd[1588]: time="2026-03-07T00:54:13.827037253Z" level=info msg="CreateContainer within sandbox \"db289c3c49bd280945f2d2c9688964a1d2ab151ca3beb94234dc54c9b2664f24\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"23f3728b8925a62c84334615b5ac88aa8352a060f6c35ccf9443a414aadddd41\"" Mar 7 00:54:13.829016 containerd[1588]: time="2026-03-07T00:54:13.827760085Z" level=info msg="StartContainer for \"23f3728b8925a62c84334615b5ac88aa8352a060f6c35ccf9443a414aadddd41\"" Mar 7 00:54:13.831387 sshd[2440]: Invalid user ll from 202.4.106.201 port 38194 Mar 7 00:54:13.923839 containerd[1588]: time="2026-03-07T00:54:13.923797237Z" level=info msg="StartContainer for \"2ce888059441ef46b2b5e8710907d5654992417655bb4c99af9b5549974c669f\" returns successfully" Mar 7 00:54:13.933826 containerd[1588]: time="2026-03-07T00:54:13.933783957Z" level=info msg="StartContainer for \"23f3728b8925a62c84334615b5ac88aa8352a060f6c35ccf9443a414aadddd41\" returns successfully" Mar 7 00:54:13.942041 containerd[1588]: time="2026-03-07T00:54:13.941678985Z" level=info msg="StartContainer for \"5d6a1a93a8e6056e3048b8ba034e984e77701ab9bd882013d9708cce500c89de\" returns successfully" Mar 7 00:54:14.031325 sshd[2440]: Received disconnect from 202.4.106.201 port 38194:11: Bye Bye [preauth] Mar 7 00:54:14.031465 sshd[2440]: Disconnected from invalid user ll 202.4.106.201 port 38194 [preauth] Mar 7 00:54:14.035114 systemd[1]: sshd@9-116.202.20.89:22-202.4.106.201:38194.service: Deactivated successfully. 
Mar 7 00:54:14.062659 kubelet[2406]: E0307 00:54:14.061690 2406 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://116.202.20.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-e1f368ffcb?timeout=10s\": dial tcp 116.202.20.89:6443: connect: connection refused" interval="1.6s" Mar 7 00:54:14.223004 kubelet[2406]: I0307 00:54:14.222970 2406 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:14.701688 kubelet[2406]: E0307 00:54:14.701654 2406 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-e1f368ffcb\" not found" node="ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:14.703054 kubelet[2406]: E0307 00:54:14.702817 2406 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-e1f368ffcb\" not found" node="ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:14.708059 kubelet[2406]: E0307 00:54:14.708027 2406 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-e1f368ffcb\" not found" node="ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:15.710668 kubelet[2406]: E0307 00:54:15.710628 2406 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-e1f368ffcb\" not found" node="ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:15.711022 kubelet[2406]: E0307 00:54:15.711006 2406 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-e1f368ffcb\" not found" node="ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:16.566281 kubelet[2406]: E0307 00:54:16.566214 2406 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-n-e1f368ffcb\" not found" node="ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:16.586765 kubelet[2406]: I0307 
00:54:16.586720 2406 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:16.637492 kubelet[2406]: I0307 00:54:16.637254 2406 apiserver.go:52] "Watching apiserver" Mar 7 00:54:16.652809 kubelet[2406]: I0307 00:54:16.652281 2406 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:16.656562 kubelet[2406]: I0307 00:54:16.656524 2406 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 00:54:16.665486 kubelet[2406]: E0307 00:54:16.665275 2406 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-e1f368ffcb\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:16.665486 kubelet[2406]: I0307 00:54:16.665307 2406 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:16.667382 kubelet[2406]: E0307 00:54:16.667345 2406 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-e1f368ffcb\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:16.667382 kubelet[2406]: I0307 00:54:16.667373 2406 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:16.671121 kubelet[2406]: E0307 00:54:16.669868 2406 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-e1f368ffcb\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:17.785218 kubelet[2406]: I0307 00:54:17.785065 2406 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:18.566050 systemd[1]: Reloading requested from client PID 2694 ('systemctl') (unit session-7.scope)... Mar 7 00:54:18.566672 systemd[1]: Reloading... Mar 7 00:54:18.652979 zram_generator::config[2733]: No configuration found. Mar 7 00:54:18.693158 kubelet[2406]: I0307 00:54:18.693096 2406 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:18.798829 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 00:54:18.810498 kubelet[2406]: I0307 00:54:18.810411 2406 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-e1f368ffcb" Mar 7 00:54:18.874686 systemd[1]: Reloading finished in 307 ms. Mar 7 00:54:18.912141 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:54:18.922840 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 00:54:18.923542 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:54:18.931496 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:54:19.060159 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:54:19.070868 (kubelet)[2788]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 00:54:19.123656 kubelet[2788]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 00:54:19.123656 kubelet[2788]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Mar 7 00:54:19.123656 kubelet[2788]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 00:54:19.124078 kubelet[2788]: I0307 00:54:19.123743 2788 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 00:54:19.134865 kubelet[2788]: I0307 00:54:19.134049 2788 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 7 00:54:19.134865 kubelet[2788]: I0307 00:54:19.134292 2788 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 00:54:19.134865 kubelet[2788]: I0307 00:54:19.134569 2788 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 00:54:19.136427 kubelet[2788]: I0307 00:54:19.136395 2788 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 00:54:19.138989 kubelet[2788]: I0307 00:54:19.138889 2788 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 00:54:19.142878 kubelet[2788]: E0307 00:54:19.142844 2788 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 00:54:19.142878 kubelet[2788]: I0307 00:54:19.142877 2788 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 7 00:54:19.145095 kubelet[2788]: I0307 00:54:19.145065 2788 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /"
Mar 7 00:54:19.145520 kubelet[2788]: I0307 00:54:19.145488 2788 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 7 00:54:19.145676 kubelet[2788]: I0307 00:54:19.145517 2788 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-e1f368ffcb","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Mar 7 00:54:19.145676 kubelet[2788]: I0307 00:54:19.145670 2788 topology_manager.go:138] "Creating topology manager with none policy"
Mar 7 00:54:19.145676 kubelet[2788]: I0307 00:54:19.145680 2788 container_manager_linux.go:303] "Creating device plugin manager"
Mar 7 00:54:19.145816 kubelet[2788]: I0307 00:54:19.145726 2788 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 00:54:19.145927 kubelet[2788]: I0307 00:54:19.145888 2788 kubelet.go:480] "Attempting to sync node with API server"
Mar 7 00:54:19.145927 kubelet[2788]: I0307 00:54:19.145902 2788 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 7 00:54:19.145927 kubelet[2788]: I0307 00:54:19.145925 2788 kubelet.go:386] "Adding apiserver pod source"
Mar 7 00:54:19.146125 kubelet[2788]: I0307 00:54:19.145987 2788 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 7 00:54:19.147093 kubelet[2788]: I0307 00:54:19.147037 2788 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 7 00:54:19.147093 kubelet[2788]: I0307 00:54:19.147721 2788 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 7 00:54:19.150366 kubelet[2788]: I0307 00:54:19.150342 2788 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 7 00:54:19.150523 kubelet[2788]: I0307 00:54:19.150510 2788 server.go:1289] "Started kubelet"
Mar 7 00:54:19.157032 kubelet[2788]: I0307 00:54:19.156992 2788 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 7 00:54:19.169838 kubelet[2788]: I0307 00:54:19.169785 2788 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 7 00:54:19.170951 kubelet[2788]: I0307 00:54:19.170128 2788 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 00:54:19.170951 kubelet[2788]: I0307 00:54:19.170466 2788 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 7 00:54:19.174850 kubelet[2788]: I0307 00:54:19.174820 2788 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 00:54:19.177654 kubelet[2788]: I0307 00:54:19.177629 2788 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 7 00:54:19.179188 kubelet[2788]: E0307 00:54:19.179159 2788 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-e1f368ffcb\" not found"
Mar 7 00:54:19.182564 kubelet[2788]: I0307 00:54:19.182539 2788 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 7 00:54:19.182776 kubelet[2788]: I0307 00:54:19.182765 2788 reconciler.go:26] "Reconciler: start to sync state"
Mar 7 00:54:19.187667 kubelet[2788]: I0307 00:54:19.187633 2788 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 7 00:54:19.189792 kubelet[2788]: I0307 00:54:19.189762 2788 server.go:317] "Adding debug handlers to kubelet server"
Mar 7 00:54:19.190092 kubelet[2788]: I0307 00:54:19.190075 2788 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 7 00:54:19.190187 kubelet[2788]: I0307 00:54:19.190177 2788 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 7 00:54:19.190244 kubelet[2788]: I0307 00:54:19.190235 2788 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 7 00:54:19.190334 kubelet[2788]: I0307 00:54:19.190324 2788 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 7 00:54:19.190478 kubelet[2788]: E0307 00:54:19.190461 2788 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 7 00:54:19.194545 kubelet[2788]: I0307 00:54:19.194501 2788 factory.go:223] Registration of the systemd container factory successfully
Mar 7 00:54:19.194676 kubelet[2788]: I0307 00:54:19.194646 2788 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 7 00:54:19.210963 kubelet[2788]: I0307 00:54:19.209507 2788 factory.go:223] Registration of the containerd container factory successfully
Mar 7 00:54:19.270320 kubelet[2788]: I0307 00:54:19.270242 2788 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 7 00:54:19.270320 kubelet[2788]: I0307 00:54:19.270314 2788 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 7 00:54:19.270499 kubelet[2788]: I0307 00:54:19.270339 2788 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 00:54:19.270599 kubelet[2788]: I0307 00:54:19.270514 2788 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 7 00:54:19.270599 kubelet[2788]: I0307 00:54:19.270528 2788 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 7 00:54:19.270599 kubelet[2788]: I0307 00:54:19.270572 2788 policy_none.go:49] "None policy: Start"
Mar 7 00:54:19.270599 kubelet[2788]: I0307 00:54:19.270585 2788 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 7 00:54:19.270599 kubelet[2788]: I0307 00:54:19.270599 2788 state_mem.go:35] "Initializing new in-memory state store"
Mar 7 00:54:19.270806 kubelet[2788]: I0307 00:54:19.270706 2788 state_mem.go:75] "Updated machine memory state"
Mar 7 00:54:19.272065 kubelet[2788]: E0307 00:54:19.272037 2788 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 7 00:54:19.272229 kubelet[2788]: I0307 00:54:19.272209 2788 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 7 00:54:19.272308 kubelet[2788]: I0307 00:54:19.272226 2788 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 7 00:54:19.274158 kubelet[2788]: I0307 00:54:19.273317 2788 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 7 00:54:19.275647 kubelet[2788]: E0307 00:54:19.275021 2788 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 7 00:54:19.291633 kubelet[2788]: I0307 00:54:19.291589 2788 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-e1f368ffcb"
Mar 7 00:54:19.293475 kubelet[2788]: I0307 00:54:19.292102 2788 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e1f368ffcb"
Mar 7 00:54:19.293475 kubelet[2788]: I0307 00:54:19.292551 2788 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-e1f368ffcb"
Mar 7 00:54:19.302708 kubelet[2788]: E0307 00:54:19.302668 2788 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-e1f368ffcb\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-e1f368ffcb"
Mar 7 00:54:19.303304 kubelet[2788]: E0307 00:54:19.302708 2788 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-e1f368ffcb\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-e1f368ffcb"
Mar 7 00:54:19.303861 kubelet[2788]: E0307 00:54:19.303698 2788 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-e1f368ffcb\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e1f368ffcb"
Mar 7 00:54:19.377473 kubelet[2788]: I0307 00:54:19.377343 2788 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-e1f368ffcb"
Mar 7 00:54:19.387932 kubelet[2788]: I0307 00:54:19.387534 2788 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-e1f368ffcb"
Mar 7 00:54:19.387932 kubelet[2788]: I0307 00:54:19.387623 2788 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-e1f368ffcb"
Mar 7 00:54:19.484114 kubelet[2788]: I0307 00:54:19.483792 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/24f3533cadceea8e6fdb92c40d5cd41c-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-e1f368ffcb\" (UID: \"24f3533cadceea8e6fdb92c40d5cd41c\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e1f368ffcb"
Mar 7 00:54:19.484114 kubelet[2788]: I0307 00:54:19.483847 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/24f3533cadceea8e6fdb92c40d5cd41c-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-e1f368ffcb\" (UID: \"24f3533cadceea8e6fdb92c40d5cd41c\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e1f368ffcb"
Mar 7 00:54:19.484114 kubelet[2788]: I0307 00:54:19.483996 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/24f3533cadceea8e6fdb92c40d5cd41c-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-e1f368ffcb\" (UID: \"24f3533cadceea8e6fdb92c40d5cd41c\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e1f368ffcb"
Mar 7 00:54:19.484114 kubelet[2788]: I0307 00:54:19.484072 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/24f3533cadceea8e6fdb92c40d5cd41c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-e1f368ffcb\" (UID: \"24f3533cadceea8e6fdb92c40d5cd41c\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e1f368ffcb"
Mar 7 00:54:19.484114 kubelet[2788]: I0307 00:54:19.484116 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33d4df733b6ec815f73a0809206328fa-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-e1f368ffcb\" (UID: \"33d4df733b6ec815f73a0809206328fa\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-e1f368ffcb"
Mar 7 00:54:19.484650 kubelet[2788]: I0307 00:54:19.484157 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33d4df733b6ec815f73a0809206328fa-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-e1f368ffcb\" (UID: \"33d4df733b6ec815f73a0809206328fa\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-e1f368ffcb"
Mar 7 00:54:19.484650 kubelet[2788]: I0307 00:54:19.484189 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/24f3533cadceea8e6fdb92c40d5cd41c-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-e1f368ffcb\" (UID: \"24f3533cadceea8e6fdb92c40d5cd41c\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e1f368ffcb"
Mar 7 00:54:19.484650 kubelet[2788]: I0307 00:54:19.484230 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c2a0d1ef19cc363114efc9ed6a299e75-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-e1f368ffcb\" (UID: \"c2a0d1ef19cc363114efc9ed6a299e75\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-e1f368ffcb"
Mar 7 00:54:19.484650 kubelet[2788]: I0307 00:54:19.484258 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33d4df733b6ec815f73a0809206328fa-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-e1f368ffcb\" (UID: \"33d4df733b6ec815f73a0809206328fa\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-e1f368ffcb"
Mar 7 00:54:19.568607 sudo[2824]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 7 00:54:19.569414 sudo[2824]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 7 00:54:20.097261 sudo[2824]: pam_unix(sudo:session): session closed for user root
Mar 7 00:54:20.152101 kubelet[2788]: I0307 00:54:20.152045 2788 apiserver.go:52] "Watching apiserver"
Mar 7 00:54:20.183206 kubelet[2788]: I0307 00:54:20.183155 2788 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 7 00:54:20.246722 kubelet[2788]: I0307 00:54:20.246560 2788 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-e1f368ffcb"
Mar 7 00:54:20.254062 kubelet[2788]: E0307 00:54:20.253895 2788 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-e1f368ffcb\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-e1f368ffcb"
Mar 7 00:54:20.290997 kubelet[2788]: I0307 00:54:20.290530 2788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-e1f368ffcb" podStartSLOduration=3.290514124 podStartE2EDuration="3.290514124s" podCreationTimestamp="2026-03-07 00:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:54:20.27696842 +0000 UTC m=+1.201406381" watchObservedRunningTime="2026-03-07 00:54:20.290514124 +0000 UTC m=+1.214952045"
Mar 7 00:54:20.303377 kubelet[2788]: I0307 00:54:20.303128 2788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-e1f368ffcb" podStartSLOduration=2.303109755 podStartE2EDuration="2.303109755s" podCreationTimestamp="2026-03-07 00:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:54:20.291329312 +0000 UTC m=+1.215767233" watchObservedRunningTime="2026-03-07 00:54:20.303109755 +0000 UTC m=+1.227547716"
Mar 7 00:54:20.316395 kubelet[2788]: I0307 00:54:20.316110 2788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-e1f368ffcb" podStartSLOduration=2.316091999 podStartE2EDuration="2.316091999s" podCreationTimestamp="2026-03-07 00:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:54:20.303890102 +0000 UTC m=+1.228328023" watchObservedRunningTime="2026-03-07 00:54:20.316091999 +0000 UTC m=+1.240529880"
Mar 7 00:54:21.921331 sudo[1873]: pam_unix(sudo:session): session closed for user root
Mar 7 00:54:22.016213 sshd[1869]: pam_unix(sshd:session): session closed for user core
Mar 7 00:54:22.022221 systemd-logind[1563]: Session 7 logged out. Waiting for processes to exit.
Mar 7 00:54:22.023165 systemd[1]: sshd@8-116.202.20.89:22-20.161.92.111:34464.service: Deactivated successfully.
Mar 7 00:54:22.027235 systemd[1]: session-7.scope: Deactivated successfully.
Mar 7 00:54:22.028711 systemd-logind[1563]: Removed session 7.
Mar 7 00:54:24.512574 kubelet[2788]: I0307 00:54:24.512521 2788 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 7 00:54:24.513609 containerd[1588]: time="2026-03-07T00:54:24.513507325Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 7 00:54:24.514768 kubelet[2788]: I0307 00:54:24.513856 2788 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 7 00:54:25.130971 kubelet[2788]: I0307 00:54:25.127585 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f701310b-7efd-4978-bf58-f2fe39b3a515-kube-proxy\") pod \"kube-proxy-2x9sx\" (UID: \"f701310b-7efd-4978-bf58-f2fe39b3a515\") " pod="kube-system/kube-proxy-2x9sx"
Mar 7 00:54:25.130971 kubelet[2788]: I0307 00:54:25.127654 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f701310b-7efd-4978-bf58-f2fe39b3a515-xtables-lock\") pod \"kube-proxy-2x9sx\" (UID: \"f701310b-7efd-4978-bf58-f2fe39b3a515\") " pod="kube-system/kube-proxy-2x9sx"
Mar 7 00:54:25.130971 kubelet[2788]: I0307 00:54:25.127676 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f701310b-7efd-4978-bf58-f2fe39b3a515-lib-modules\") pod \"kube-proxy-2x9sx\" (UID: \"f701310b-7efd-4978-bf58-f2fe39b3a515\") " pod="kube-system/kube-proxy-2x9sx"
Mar 7 00:54:25.130971 kubelet[2788]: I0307 00:54:25.127739 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf59k\" (UniqueName: \"kubernetes.io/projected/f701310b-7efd-4978-bf58-f2fe39b3a515-kube-api-access-gf59k\") pod \"kube-proxy-2x9sx\" (UID: \"f701310b-7efd-4978-bf58-f2fe39b3a515\") " pod="kube-system/kube-proxy-2x9sx"
Mar 7 00:54:25.228679 kubelet[2788]: I0307 00:54:25.228617 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-cni-path\") pod \"cilium-jvc64\" (UID: \"7145851b-6cdc-459f-9b34-c2ccab7019e7\") " pod="kube-system/cilium-jvc64"
Mar 7 00:54:25.229001 kubelet[2788]: I0307 00:54:25.228970 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-etc-cni-netd\") pod \"cilium-jvc64\" (UID: \"7145851b-6cdc-459f-9b34-c2ccab7019e7\") " pod="kube-system/cilium-jvc64"
Mar 7 00:54:25.229204 kubelet[2788]: I0307 00:54:25.229174 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-xtables-lock\") pod \"cilium-jvc64\" (UID: \"7145851b-6cdc-459f-9b34-c2ccab7019e7\") " pod="kube-system/cilium-jvc64"
Mar 7 00:54:25.229375 kubelet[2788]: I0307 00:54:25.229347 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7145851b-6cdc-459f-9b34-c2ccab7019e7-cilium-config-path\") pod \"cilium-jvc64\" (UID: \"7145851b-6cdc-459f-9b34-c2ccab7019e7\") " pod="kube-system/cilium-jvc64"
Mar 7 00:54:25.229774 kubelet[2788]: I0307 00:54:25.229712 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-cilium-run\") pod \"cilium-jvc64\" (UID: \"7145851b-6cdc-459f-9b34-c2ccab7019e7\") " pod="kube-system/cilium-jvc64"
Mar 7 00:54:25.229880 kubelet[2788]: I0307 00:54:25.229792 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-cilium-cgroup\") pod \"cilium-jvc64\" (UID: \"7145851b-6cdc-459f-9b34-c2ccab7019e7\") " pod="kube-system/cilium-jvc64"
Mar 7 00:54:25.229880 kubelet[2788]: I0307 00:54:25.229843 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-host-proc-sys-kernel\") pod \"cilium-jvc64\" (UID: \"7145851b-6cdc-459f-9b34-c2ccab7019e7\") " pod="kube-system/cilium-jvc64"
Mar 7 00:54:25.230051 kubelet[2788]: I0307 00:54:25.229883 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7145851b-6cdc-459f-9b34-c2ccab7019e7-hubble-tls\") pod \"cilium-jvc64\" (UID: \"7145851b-6cdc-459f-9b34-c2ccab7019e7\") " pod="kube-system/cilium-jvc64"
Mar 7 00:54:25.230051 kubelet[2788]: I0307 00:54:25.229983 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-hostproc\") pod \"cilium-jvc64\" (UID: \"7145851b-6cdc-459f-9b34-c2ccab7019e7\") " pod="kube-system/cilium-jvc64"
Mar 7 00:54:25.230051 kubelet[2788]: I0307 00:54:25.230023 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-host-proc-sys-net\") pod \"cilium-jvc64\" (UID: \"7145851b-6cdc-459f-9b34-c2ccab7019e7\") " pod="kube-system/cilium-jvc64"
Mar 7 00:54:25.230207 kubelet[2788]: I0307 00:54:25.230062 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6vbr\" (UniqueName: \"kubernetes.io/projected/7145851b-6cdc-459f-9b34-c2ccab7019e7-kube-api-access-x6vbr\") pod \"cilium-jvc64\" (UID: \"7145851b-6cdc-459f-9b34-c2ccab7019e7\") " pod="kube-system/cilium-jvc64"
Mar 7 00:54:25.230207 kubelet[2788]: I0307 00:54:25.230128 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-bpf-maps\") pod \"cilium-jvc64\" (UID: \"7145851b-6cdc-459f-9b34-c2ccab7019e7\") " pod="kube-system/cilium-jvc64"
Mar 7 00:54:25.230207 kubelet[2788]: I0307 00:54:25.230164 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-lib-modules\") pod \"cilium-jvc64\" (UID: \"7145851b-6cdc-459f-9b34-c2ccab7019e7\") " pod="kube-system/cilium-jvc64"
Mar 7 00:54:25.230207 kubelet[2788]: I0307 00:54:25.230201 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7145851b-6cdc-459f-9b34-c2ccab7019e7-clustermesh-secrets\") pod \"cilium-jvc64\" (UID: \"7145851b-6cdc-459f-9b34-c2ccab7019e7\") " pod="kube-system/cilium-jvc64"
Mar 7 00:54:25.238898 kubelet[2788]: E0307 00:54:25.238861 2788 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Mar 7 00:54:25.238898 kubelet[2788]: E0307 00:54:25.238898 2788 projected.go:194] Error preparing data for projected volume kube-api-access-gf59k for pod kube-system/kube-proxy-2x9sx: configmap "kube-root-ca.crt" not found
Mar 7 00:54:25.239076 kubelet[2788]: E0307 00:54:25.238988 2788 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f701310b-7efd-4978-bf58-f2fe39b3a515-kube-api-access-gf59k podName:f701310b-7efd-4978-bf58-f2fe39b3a515 nodeName:}" failed. No retries permitted until 2026-03-07 00:54:25.73896187 +0000 UTC m=+6.663399791 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gf59k" (UniqueName: "kubernetes.io/projected/f701310b-7efd-4978-bf58-f2fe39b3a515-kube-api-access-gf59k") pod "kube-proxy-2x9sx" (UID: "f701310b-7efd-4978-bf58-f2fe39b3a515") : configmap "kube-root-ca.crt" not found
Mar 7 00:54:25.345417 kubelet[2788]: E0307 00:54:25.342267 2788 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Mar 7 00:54:25.345417 kubelet[2788]: E0307 00:54:25.342298 2788 projected.go:194] Error preparing data for projected volume kube-api-access-x6vbr for pod kube-system/cilium-jvc64: configmap "kube-root-ca.crt" not found
Mar 7 00:54:25.345417 kubelet[2788]: E0307 00:54:25.342367 2788 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7145851b-6cdc-459f-9b34-c2ccab7019e7-kube-api-access-x6vbr podName:7145851b-6cdc-459f-9b34-c2ccab7019e7 nodeName:}" failed. No retries permitted until 2026-03-07 00:54:25.842348117 +0000 UTC m=+6.766786038 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x6vbr" (UniqueName: "kubernetes.io/projected/7145851b-6cdc-459f-9b34-c2ccab7019e7-kube-api-access-x6vbr") pod "cilium-jvc64" (UID: "7145851b-6cdc-459f-9b34-c2ccab7019e7") : configmap "kube-root-ca.crt" not found
Mar 7 00:54:25.734274 kubelet[2788]: I0307 00:54:25.734091 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/487c5252-9add-42cb-a3be-bc78474f583b-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-nrcr6\" (UID: \"487c5252-9add-42cb-a3be-bc78474f583b\") " pod="kube-system/cilium-operator-6c4d7847fc-nrcr6"
Mar 7 00:54:25.734274 kubelet[2788]: I0307 00:54:25.734207 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvgsk\" (UniqueName: \"kubernetes.io/projected/487c5252-9add-42cb-a3be-bc78474f583b-kube-api-access-dvgsk\") pod \"cilium-operator-6c4d7847fc-nrcr6\" (UID: \"487c5252-9add-42cb-a3be-bc78474f583b\") " pod="kube-system/cilium-operator-6c4d7847fc-nrcr6"
Mar 7 00:54:26.008136 containerd[1588]: time="2026-03-07T00:54:26.007578157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2x9sx,Uid:f701310b-7efd-4978-bf58-f2fe39b3a515,Namespace:kube-system,Attempt:0,}"
Mar 7 00:54:26.023202 containerd[1588]: time="2026-03-07T00:54:26.022678070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-nrcr6,Uid:487c5252-9add-42cb-a3be-bc78474f583b,Namespace:kube-system,Attempt:0,}"
Mar 7 00:54:26.037187 containerd[1588]: time="2026-03-07T00:54:26.037081644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 00:54:26.037429 containerd[1588]: time="2026-03-07T00:54:26.037160806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 00:54:26.037429 containerd[1588]: time="2026-03-07T00:54:26.037177526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 00:54:26.037429 containerd[1588]: time="2026-03-07T00:54:26.037292730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 00:54:26.041071 containerd[1588]: time="2026-03-07T00:54:26.041036797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jvc64,Uid:7145851b-6cdc-459f-9b34-c2ccab7019e7,Namespace:kube-system,Attempt:0,}"
Mar 7 00:54:26.071339 containerd[1588]: time="2026-03-07T00:54:26.071075699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 00:54:26.071339 containerd[1588]: time="2026-03-07T00:54:26.071148101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 00:54:26.071339 containerd[1588]: time="2026-03-07T00:54:26.071160101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 00:54:26.071339 containerd[1588]: time="2026-03-07T00:54:26.071250184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 00:54:26.099080 containerd[1588]: time="2026-03-07T00:54:26.094758338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 00:54:26.099080 containerd[1588]: time="2026-03-07T00:54:26.095518600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 00:54:26.099080 containerd[1588]: time="2026-03-07T00:54:26.095710526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 00:54:26.099080 containerd[1588]: time="2026-03-07T00:54:26.096387585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 00:54:26.103456 containerd[1588]: time="2026-03-07T00:54:26.103401226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2x9sx,Uid:f701310b-7efd-4978-bf58-f2fe39b3a515,Namespace:kube-system,Attempt:0,} returns sandbox id \"48a5d7860248f847f79212ed9f6e31f50e9e7817cd8f1e9a477591c5f897ad62\""
Mar 7 00:54:26.113247 containerd[1588]: time="2026-03-07T00:54:26.113209428Z" level=info msg="CreateContainer within sandbox \"48a5d7860248f847f79212ed9f6e31f50e9e7817cd8f1e9a477591c5f897ad62\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 7 00:54:26.144920 containerd[1588]: time="2026-03-07T00:54:26.144876496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-nrcr6,Uid:487c5252-9add-42cb-a3be-bc78474f583b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f9603d4a883ba03f7ba523440431724b3b10b0c06fcb6201c56644f72d830ee\""
Mar 7 00:54:26.145678 containerd[1588]: time="2026-03-07T00:54:26.145649119Z" level=info msg="CreateContainer within sandbox \"48a5d7860248f847f79212ed9f6e31f50e9e7817cd8f1e9a477591c5f897ad62\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9a8e542ed4d7de2d1d3726c7a16c6d66a21f98a9108016eb94a8850b3e5f2510\""
Mar 7 00:54:26.147579 containerd[1588]: time="2026-03-07T00:54:26.146873914Z" level=info msg="StartContainer for \"9a8e542ed4d7de2d1d3726c7a16c6d66a21f98a9108016eb94a8850b3e5f2510\""
Mar 7 00:54:26.148727 containerd[1588]: time="2026-03-07T00:54:26.148694606Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 7 00:54:26.158396 containerd[1588]: time="2026-03-07T00:54:26.158359443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jvc64,Uid:7145851b-6cdc-459f-9b34-c2ccab7019e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6da42a593a82b546f12b2f7ff7347e6a39e64099a5aa60f44afd435fe693737\""
Mar 7 00:54:26.216168 containerd[1588]: time="2026-03-07T00:54:26.216114420Z" level=info msg="StartContainer for \"9a8e542ed4d7de2d1d3726c7a16c6d66a21f98a9108016eb94a8850b3e5f2510\" returns successfully"
Mar 7 00:54:27.534365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2242288403.mount: Deactivated successfully.
Mar 7 00:54:27.980877 containerd[1588]: time="2026-03-07T00:54:27.980562488Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:54:27.981875 containerd[1588]: time="2026-03-07T00:54:27.981312909Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Mar 7 00:54:27.982434 containerd[1588]: time="2026-03-07T00:54:27.982380259Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:54:27.984273 containerd[1588]: time="2026-03-07T00:54:27.984127748Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.835280377s"
Mar 7 00:54:27.984273 containerd[1588]: time="2026-03-07T00:54:27.984171309Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Mar 7 00:54:27.986117 containerd[1588]: time="2026-03-07T00:54:27.986074762Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 7 00:54:27.990078 containerd[1588]: time="2026-03-07T00:54:27.989869028Z" level=info msg="CreateContainer within sandbox \"7f9603d4a883ba03f7ba523440431724b3b10b0c06fcb6201c56644f72d830ee\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 7 00:54:28.015189 containerd[1588]: time="2026-03-07T00:54:28.015132965Z" level=info msg="CreateContainer within sandbox \"7f9603d4a883ba03f7ba523440431724b3b10b0c06fcb6201c56644f72d830ee\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"baf13a55ff722a1889a29b87605456023be778ea2e8421a1893083880f837718\""
Mar 7 00:54:28.016815 containerd[1588]: time="2026-03-07T00:54:28.015899786Z" level=info msg="StartContainer for \"baf13a55ff722a1889a29b87605456023be778ea2e8421a1893083880f837718\""
Mar 7 00:54:28.073876 containerd[1588]: time="2026-03-07T00:54:28.073760404Z" level=info msg="StartContainer for \"baf13a55ff722a1889a29b87605456023be778ea2e8421a1893083880f837718\" returns successfully"
Mar 7 00:54:28.304517 kubelet[2788]: I0307 00:54:28.304370 2788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2x9sx" podStartSLOduration=3.304333493 podStartE2EDuration="3.304333493s" podCreationTimestamp="2026-03-07 00:54:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:54:26.280565349 +0000 UTC m=+7.205003270" watchObservedRunningTime="2026-03-07 00:54:28.304333493 +0000 UTC m=+9.228771414"
Mar 7 00:54:31.471436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1922492561.mount: Deactivated successfully.
Mar 7 00:54:31.503598 kubelet[2788]: I0307 00:54:31.503512 2788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-nrcr6" podStartSLOduration=4.664418665 podStartE2EDuration="6.50349399s" podCreationTimestamp="2026-03-07 00:54:25 +0000 UTC" firstStartedPulling="2026-03-07 00:54:26.146369219 +0000 UTC m=+7.070807140" lastFinishedPulling="2026-03-07 00:54:27.985444584 +0000 UTC m=+8.909882465" observedRunningTime="2026-03-07 00:54:28.305272118 +0000 UTC m=+9.229710039" watchObservedRunningTime="2026-03-07 00:54:31.50349399 +0000 UTC m=+12.427931911"
Mar 7 00:54:32.921849 containerd[1588]: time="2026-03-07T00:54:32.920907756Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:54:32.922865 containerd[1588]: time="2026-03-07T00:54:32.922830124Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Mar 7 00:54:32.924424 containerd[1588]: time="2026-03-07T00:54:32.924371482Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:54:32.928541 containerd[1588]: time="2026-03-07T00:54:32.928373982Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.942240578s"
Mar 7 00:54:32.928541 containerd[1588]: time="2026-03-07T00:54:32.928427663Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Mar 7 00:54:32.933532 containerd[1588]: time="2026-03-07T00:54:32.933493150Z" level=info msg="CreateContainer within sandbox \"b6da42a593a82b546f12b2f7ff7347e6a39e64099a5aa60f44afd435fe693737\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 7 00:54:32.947053 containerd[1588]: time="2026-03-07T00:54:32.946540075Z" level=info msg="CreateContainer within sandbox \"b6da42a593a82b546f12b2f7ff7347e6a39e64099a5aa60f44afd435fe693737\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"239192929ba8c42ee3d759494a25287b2d6a533ff51daeb30bd718316b671ae3\""
Mar 7 00:54:32.948283 containerd[1588]: time="2026-03-07T00:54:32.948226997Z" level=info msg="StartContainer for \"239192929ba8c42ee3d759494a25287b2d6a533ff51daeb30bd718316b671ae3\""
Mar 7 00:54:33.004609 containerd[1588]: time="2026-03-07T00:54:33.004545919Z" level=info msg="StartContainer for \"239192929ba8c42ee3d759494a25287b2d6a533ff51daeb30bd718316b671ae3\" returns successfully"
Mar 7 00:54:33.176229 containerd[1588]: time="2026-03-07T00:54:33.175884667Z" level=info msg="shim disconnected" id=239192929ba8c42ee3d759494a25287b2d6a533ff51daeb30bd718316b671ae3 namespace=k8s.io
Mar 7 00:54:33.176229 containerd[1588]: time="2026-03-07T00:54:33.175955988Z" level=warning msg="cleaning up after shim disconnected" id=239192929ba8c42ee3d759494a25287b2d6a533ff51daeb30bd718316b671ae3 namespace=k8s.io
Mar 7 00:54:33.176229 containerd[1588]: time="2026-03-07T00:54:33.175966629Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:54:33.298203 containerd[1588]: time="2026-03-07T00:54:33.298076733Z" level=info msg="CreateContainer within sandbox \"b6da42a593a82b546f12b2f7ff7347e6a39e64099a5aa60f44afd435fe693737\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 7 00:54:33.313321 containerd[1588]: time="2026-03-07T00:54:33.312123636Z" level=info msg="CreateContainer within sandbox \"b6da42a593a82b546f12b2f7ff7347e6a39e64099a5aa60f44afd435fe693737\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"568964fb12e94b25ed9abde207a3119750588963e3a6d77ee9361dc11984e492\""
Mar 7 00:54:33.315008 containerd[1588]: time="2026-03-07T00:54:33.314848663Z" level=info msg="StartContainer for \"568964fb12e94b25ed9abde207a3119750588963e3a6d77ee9361dc11984e492\""
Mar 7 00:54:33.368289 containerd[1588]: time="2026-03-07T00:54:33.368245168Z" level=info msg="StartContainer for \"568964fb12e94b25ed9abde207a3119750588963e3a6d77ee9361dc11984e492\" returns successfully"
Mar 7 00:54:33.381044 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 00:54:33.381833 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 00:54:33.381898 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 7 00:54:33.388154 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 00:54:33.410354 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 00:54:33.421418 containerd[1588]: time="2026-03-07T00:54:33.421263503Z" level=info msg="shim disconnected" id=568964fb12e94b25ed9abde207a3119750588963e3a6d77ee9361dc11984e492 namespace=k8s.io Mar 7 00:54:33.421418 containerd[1588]: time="2026-03-07T00:54:33.421409907Z" level=warning msg="cleaning up after shim disconnected" id=568964fb12e94b25ed9abde207a3119750588963e3a6d77ee9361dc11984e492 namespace=k8s.io Mar 7 00:54:33.421418 containerd[1588]: time="2026-03-07T00:54:33.421419747Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:54:33.944838 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-239192929ba8c42ee3d759494a25287b2d6a533ff51daeb30bd718316b671ae3-rootfs.mount: Deactivated successfully. Mar 7 00:54:34.300590 containerd[1588]: time="2026-03-07T00:54:34.300475250Z" level=info msg="CreateContainer within sandbox \"b6da42a593a82b546f12b2f7ff7347e6a39e64099a5aa60f44afd435fe693737\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 7 00:54:34.323173 containerd[1588]: time="2026-03-07T00:54:34.323120873Z" level=info msg="CreateContainer within sandbox \"b6da42a593a82b546f12b2f7ff7347e6a39e64099a5aa60f44afd435fe693737\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e1af6c27eb97491b16f4c21d3c68b2f944b03db18897d8b1b120c31d1539f02e\"" Mar 7 00:54:34.325846 containerd[1588]: time="2026-03-07T00:54:34.325489130Z" level=info msg="StartContainer for \"e1af6c27eb97491b16f4c21d3c68b2f944b03db18897d8b1b120c31d1539f02e\"" Mar 7 00:54:34.389082 containerd[1588]: time="2026-03-07T00:54:34.389012612Z" level=info msg="StartContainer for \"e1af6c27eb97491b16f4c21d3c68b2f944b03db18897d8b1b120c31d1539f02e\" returns successfully" Mar 7 00:54:34.455617 containerd[1588]: time="2026-03-07T00:54:34.455509367Z" level=info msg="shim disconnected" id=e1af6c27eb97491b16f4c21d3c68b2f944b03db18897d8b1b120c31d1539f02e namespace=k8s.io Mar 7 00:54:34.455617 containerd[1588]: 
time="2026-03-07T00:54:34.455564568Z" level=warning msg="cleaning up after shim disconnected" id=e1af6c27eb97491b16f4c21d3c68b2f944b03db18897d8b1b120c31d1539f02e namespace=k8s.io Mar 7 00:54:34.455617 containerd[1588]: time="2026-03-07T00:54:34.455572768Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:54:34.945053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1af6c27eb97491b16f4c21d3c68b2f944b03db18897d8b1b120c31d1539f02e-rootfs.mount: Deactivated successfully. Mar 7 00:54:35.307796 containerd[1588]: time="2026-03-07T00:54:35.307680862Z" level=info msg="CreateContainer within sandbox \"b6da42a593a82b546f12b2f7ff7347e6a39e64099a5aa60f44afd435fe693737\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 7 00:54:35.325034 containerd[1588]: time="2026-03-07T00:54:35.324985749Z" level=info msg="CreateContainer within sandbox \"b6da42a593a82b546f12b2f7ff7347e6a39e64099a5aa60f44afd435fe693737\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c84801b4a02a3065fa42fc568cb6b42bb8b7e23406f6669f3340f02472c8ec24\"" Mar 7 00:54:35.326180 containerd[1588]: time="2026-03-07T00:54:35.326134096Z" level=info msg="StartContainer for \"c84801b4a02a3065fa42fc568cb6b42bb8b7e23406f6669f3340f02472c8ec24\"" Mar 7 00:54:35.371288 containerd[1588]: time="2026-03-07T00:54:35.371224877Z" level=info msg="StartContainer for \"c84801b4a02a3065fa42fc568cb6b42bb8b7e23406f6669f3340f02472c8ec24\" returns successfully" Mar 7 00:54:35.394482 containerd[1588]: time="2026-03-07T00:54:35.394416543Z" level=info msg="shim disconnected" id=c84801b4a02a3065fa42fc568cb6b42bb8b7e23406f6669f3340f02472c8ec24 namespace=k8s.io Mar 7 00:54:35.394482 containerd[1588]: time="2026-03-07T00:54:35.394474625Z" level=warning msg="cleaning up after shim disconnected" id=c84801b4a02a3065fa42fc568cb6b42bb8b7e23406f6669f3340f02472c8ec24 namespace=k8s.io Mar 7 00:54:35.394482 containerd[1588]: time="2026-03-07T00:54:35.394483545Z" 
level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:54:35.945253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c84801b4a02a3065fa42fc568cb6b42bb8b7e23406f6669f3340f02472c8ec24-rootfs.mount: Deactivated successfully. Mar 7 00:54:36.317472 containerd[1588]: time="2026-03-07T00:54:36.316901806Z" level=info msg="CreateContainer within sandbox \"b6da42a593a82b546f12b2f7ff7347e6a39e64099a5aa60f44afd435fe693737\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 7 00:54:36.346671 containerd[1588]: time="2026-03-07T00:54:36.346529411Z" level=info msg="CreateContainer within sandbox \"b6da42a593a82b546f12b2f7ff7347e6a39e64099a5aa60f44afd435fe693737\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ec8717d16dae724b618ca9c1200d04a86542e50a138e8b0c12f29529d8c69bbd\"" Mar 7 00:54:36.348442 containerd[1588]: time="2026-03-07T00:54:36.347474993Z" level=info msg="StartContainer for \"ec8717d16dae724b618ca9c1200d04a86542e50a138e8b0c12f29529d8c69bbd\"" Mar 7 00:54:36.401541 containerd[1588]: time="2026-03-07T00:54:36.401460201Z" level=info msg="StartContainer for \"ec8717d16dae724b618ca9c1200d04a86542e50a138e8b0c12f29529d8c69bbd\" returns successfully" Mar 7 00:54:36.532095 kubelet[2788]: I0307 00:54:36.530307 2788 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 7 00:54:36.610342 kubelet[2788]: I0307 00:54:36.610122 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7qmh\" (UniqueName: \"kubernetes.io/projected/f1fc4675-6d71-4967-9782-36e3018a9799-kube-api-access-m7qmh\") pod \"coredns-674b8bbfcf-4bxdv\" (UID: \"f1fc4675-6d71-4967-9782-36e3018a9799\") " pod="kube-system/coredns-674b8bbfcf-4bxdv" Mar 7 00:54:36.611489 kubelet[2788]: I0307 00:54:36.611237 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/f1fc4675-6d71-4967-9782-36e3018a9799-config-volume\") pod \"coredns-674b8bbfcf-4bxdv\" (UID: \"f1fc4675-6d71-4967-9782-36e3018a9799\") " pod="kube-system/coredns-674b8bbfcf-4bxdv" Mar 7 00:54:36.612674 kubelet[2788]: I0307 00:54:36.612578 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e0becd2-f3df-4cd5-a0b4-545dac0ba9fe-config-volume\") pod \"coredns-674b8bbfcf-qkvg4\" (UID: \"4e0becd2-f3df-4cd5-a0b4-545dac0ba9fe\") " pod="kube-system/coredns-674b8bbfcf-qkvg4" Mar 7 00:54:36.612674 kubelet[2788]: I0307 00:54:36.612611 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6l96\" (UniqueName: \"kubernetes.io/projected/4e0becd2-f3df-4cd5-a0b4-545dac0ba9fe-kube-api-access-t6l96\") pod \"coredns-674b8bbfcf-qkvg4\" (UID: \"4e0becd2-f3df-4cd5-a0b4-545dac0ba9fe\") " pod="kube-system/coredns-674b8bbfcf-qkvg4" Mar 7 00:54:36.882279 containerd[1588]: time="2026-03-07T00:54:36.881398141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4bxdv,Uid:f1fc4675-6d71-4967-9782-36e3018a9799,Namespace:kube-system,Attempt:0,}" Mar 7 00:54:36.886961 containerd[1588]: time="2026-03-07T00:54:36.885255070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qkvg4,Uid:4e0becd2-f3df-4cd5-a0b4-545dac0ba9fe,Namespace:kube-system,Attempt:0,}" Mar 7 00:54:38.556978 systemd-networkd[1235]: cilium_host: Link UP Mar 7 00:54:38.558656 systemd-networkd[1235]: cilium_net: Link UP Mar 7 00:54:38.558849 systemd-networkd[1235]: cilium_net: Gained carrier Mar 7 00:54:38.562694 systemd-networkd[1235]: cilium_host: Gained carrier Mar 7 00:54:38.675229 systemd-networkd[1235]: cilium_vxlan: Link UP Mar 7 00:54:38.676293 systemd-networkd[1235]: cilium_vxlan: Gained carrier Mar 7 00:54:38.963409 kernel: NET: Registered PF_ALG protocol family Mar 7 
00:54:39.024253 systemd-networkd[1235]: cilium_host: Gained IPv6LL Mar 7 00:54:39.480612 systemd-networkd[1235]: cilium_net: Gained IPv6LL Mar 7 00:54:39.715141 systemd-networkd[1235]: lxc_health: Link UP Mar 7 00:54:39.721847 systemd-networkd[1235]: lxc_health: Gained carrier Mar 7 00:54:39.932227 systemd-networkd[1235]: lxcd58900df8b32: Link UP Mar 7 00:54:39.942492 kernel: eth0: renamed from tmp15284 Mar 7 00:54:39.947687 systemd-networkd[1235]: lxcd58900df8b32: Gained carrier Mar 7 00:54:39.957040 systemd-networkd[1235]: lxc8a58b12ca9cc: Link UP Mar 7 00:54:39.965989 kernel: eth0: renamed from tmpbd0dc Mar 7 00:54:39.976106 systemd-networkd[1235]: lxc8a58b12ca9cc: Gained carrier Mar 7 00:54:40.065683 kubelet[2788]: I0307 00:54:40.064979 2788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jvc64" podStartSLOduration=8.297602294 podStartE2EDuration="15.064962072s" podCreationTimestamp="2026-03-07 00:54:25 +0000 UTC" firstStartedPulling="2026-03-07 00:54:26.162270035 +0000 UTC m=+7.086707956" lastFinishedPulling="2026-03-07 00:54:32.929629813 +0000 UTC m=+13.854067734" observedRunningTime="2026-03-07 00:54:37.335558196 +0000 UTC m=+18.259996197" watchObservedRunningTime="2026-03-07 00:54:40.064962072 +0000 UTC m=+20.989399993" Mar 7 00:54:40.248123 systemd-networkd[1235]: cilium_vxlan: Gained IPv6LL Mar 7 00:54:41.208171 systemd-networkd[1235]: lxcd58900df8b32: Gained IPv6LL Mar 7 00:54:41.334151 kubelet[2788]: I0307 00:54:41.333085 2788 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 00:54:41.592991 systemd-networkd[1235]: lxc_health: Gained IPv6LL Mar 7 00:54:41.914026 systemd-networkd[1235]: lxc8a58b12ca9cc: Gained IPv6LL Mar 7 00:54:43.830782 containerd[1588]: time="2026-03-07T00:54:43.830471938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:43.831331 containerd[1588]: time="2026-03-07T00:54:43.830760824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:43.831331 containerd[1588]: time="2026-03-07T00:54:43.831009909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:43.831904 containerd[1588]: time="2026-03-07T00:54:43.831548241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:43.873924 containerd[1588]: time="2026-03-07T00:54:43.873656201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:43.873924 containerd[1588]: time="2026-03-07T00:54:43.873717122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:43.873924 containerd[1588]: time="2026-03-07T00:54:43.873732122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:43.873924 containerd[1588]: time="2026-03-07T00:54:43.873819884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:43.928527 containerd[1588]: time="2026-03-07T00:54:43.928490306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qkvg4,Uid:4e0becd2-f3df-4cd5-a0b4-545dac0ba9fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd0dcd89c6c8a1b0d1380dd811409fc8215df299949cf3d19daca6f11fe4b3bc\"" Mar 7 00:54:43.938901 containerd[1588]: time="2026-03-07T00:54:43.938863043Z" level=info msg="CreateContainer within sandbox \"bd0dcd89c6c8a1b0d1380dd811409fc8215df299949cf3d19daca6f11fe4b3bc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 00:54:43.950464 containerd[1588]: time="2026-03-07T00:54:43.950422005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4bxdv,Uid:f1fc4675-6d71-4967-9782-36e3018a9799,Namespace:kube-system,Attempt:0,} returns sandbox id \"15284b5759ae6e042a88544750edb01b2dc04be864fc0b9cf97442b853c4fdd1\"" Mar 7 00:54:43.966950 containerd[1588]: time="2026-03-07T00:54:43.966713145Z" level=info msg="CreateContainer within sandbox \"15284b5759ae6e042a88544750edb01b2dc04be864fc0b9cf97442b853c4fdd1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 00:54:43.974568 containerd[1588]: time="2026-03-07T00:54:43.974524348Z" level=info msg="CreateContainer within sandbox \"bd0dcd89c6c8a1b0d1380dd811409fc8215df299949cf3d19daca6f11fe4b3bc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"469511315bd4563e013ed1ee0ff963d2e2c8ce97e1331456034b279f33ed3e75\"" Mar 7 00:54:43.975411 containerd[1588]: time="2026-03-07T00:54:43.975139521Z" level=info msg="StartContainer for \"469511315bd4563e013ed1ee0ff963d2e2c8ce97e1331456034b279f33ed3e75\"" Mar 7 00:54:43.986589 containerd[1588]: time="2026-03-07T00:54:43.986497079Z" level=info msg="CreateContainer within sandbox \"15284b5759ae6e042a88544750edb01b2dc04be864fc0b9cf97442b853c4fdd1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"db8b455021a843d6545d25d090c5f41c057d41648e8081c15f5bb02cc3a6333c\"" Mar 7 00:54:43.988749 containerd[1588]: time="2026-03-07T00:54:43.988636763Z" level=info msg="StartContainer for \"db8b455021a843d6545d25d090c5f41c057d41648e8081c15f5bb02cc3a6333c\"" Mar 7 00:54:44.048596 containerd[1588]: time="2026-03-07T00:54:44.048528203Z" level=info msg="StartContainer for \"469511315bd4563e013ed1ee0ff963d2e2c8ce97e1331456034b279f33ed3e75\" returns successfully" Mar 7 00:54:44.073016 containerd[1588]: time="2026-03-07T00:54:44.069899525Z" level=info msg="StartContainer for \"db8b455021a843d6545d25d090c5f41c057d41648e8081c15f5bb02cc3a6333c\" returns successfully" Mar 7 00:54:44.354217 kubelet[2788]: I0307 00:54:44.354036 2788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qkvg4" podStartSLOduration=19.354018272 podStartE2EDuration="19.354018272s" podCreationTimestamp="2026-03-07 00:54:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:54:44.352883769 +0000 UTC m=+25.277321730" watchObservedRunningTime="2026-03-07 00:54:44.354018272 +0000 UTC m=+25.278456193" Mar 7 00:54:44.372586 kubelet[2788]: I0307 00:54:44.369810 2788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4bxdv" podStartSLOduration=19.36893574 podStartE2EDuration="19.36893574s" podCreationTimestamp="2026-03-07 00:54:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:54:44.367176784 +0000 UTC m=+25.291614785" watchObservedRunningTime="2026-03-07 00:54:44.36893574 +0000 UTC m=+25.293373661" Mar 7 00:56:07.499544 systemd[1]: Started sshd@10-116.202.20.89:22-80.94.92.183:34886.service - OpenSSH per-connection server daemon (80.94.92.183:34886). 
Mar 7 00:56:07.547858 sshd[4190]: Connection closed by 80.94.92.183 port 34886 Mar 7 00:56:07.549623 systemd[1]: sshd@10-116.202.20.89:22-80.94.92.183:34886.service: Deactivated successfully. Mar 7 00:56:20.906274 update_engine[1566]: I20260307 00:56:20.904110 1566 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 7 00:56:20.906274 update_engine[1566]: I20260307 00:56:20.904187 1566 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 7 00:56:20.906274 update_engine[1566]: I20260307 00:56:20.904563 1566 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 7 00:56:20.906274 update_engine[1566]: I20260307 00:56:20.905447 1566 omaha_request_params.cc:62] Current group set to lts Mar 7 00:56:20.908053 update_engine[1566]: I20260307 00:56:20.907990 1566 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 7 00:56:20.908053 update_engine[1566]: I20260307 00:56:20.908041 1566 update_attempter.cc:643] Scheduling an action processor start. 
Mar 7 00:56:20.908164 update_engine[1566]: I20260307 00:56:20.908073 1566 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 7 00:56:20.908164 update_engine[1566]: I20260307 00:56:20.908135 1566 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 7 00:56:20.908290 update_engine[1566]: I20260307 00:56:20.908241 1566 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 7 00:56:20.908290 update_engine[1566]: I20260307 00:56:20.908263 1566 omaha_request_action.cc:272] Request: Mar 7 00:56:20.908290 update_engine[1566]: Mar 7 00:56:20.908290 update_engine[1566]: Mar 7 00:56:20.908290 update_engine[1566]: Mar 7 00:56:20.908290 update_engine[1566]: Mar 7 00:56:20.908290 update_engine[1566]: Mar 7 00:56:20.908290 update_engine[1566]: Mar 7 00:56:20.908290 update_engine[1566]: Mar 7 00:56:20.908290 update_engine[1566]: Mar 7 00:56:20.908290 update_engine[1566]: I20260307 00:56:20.908274 1566 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 00:56:20.908794 locksmithd[1634]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 7 00:56:20.910282 update_engine[1566]: I20260307 00:56:20.910234 1566 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 00:56:20.910675 update_engine[1566]: I20260307 00:56:20.910629 1566 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 7 00:56:20.914449 update_engine[1566]: E20260307 00:56:20.914364 1566 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 00:56:20.914570 update_engine[1566]: I20260307 00:56:20.914495 1566 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 7 00:56:30.890977 update_engine[1566]: I20260307 00:56:30.890790 1566 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 00:56:30.891645 update_engine[1566]: I20260307 00:56:30.891196 1566 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 00:56:30.891645 update_engine[1566]: I20260307 00:56:30.891534 1566 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 7 00:56:30.892711 update_engine[1566]: E20260307 00:56:30.892618 1566 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 00:56:30.892850 update_engine[1566]: I20260307 00:56:30.892722 1566 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 7 00:56:40.891962 update_engine[1566]: I20260307 00:56:40.891827 1566 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 00:56:40.892789 update_engine[1566]: I20260307 00:56:40.892155 1566 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 00:56:40.892789 update_engine[1566]: I20260307 00:56:40.892432 1566 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 7 00:56:40.893538 update_engine[1566]: E20260307 00:56:40.893465 1566 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 00:56:40.893538 update_engine[1566]: I20260307 00:56:40.893540 1566 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 7 00:56:45.793471 systemd[1]: Started sshd@11-116.202.20.89:22-20.161.92.111:37392.service - OpenSSH per-connection server daemon (20.161.92.111:37392). 
Mar 7 00:56:46.349378 systemd[1]: Started sshd@12-116.202.20.89:22-223.244.25.6:43118.service - OpenSSH per-connection server daemon (223.244.25.6:43118). Mar 7 00:56:46.375890 sshd[4198]: Accepted publickey for core from 20.161.92.111 port 37392 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:56:46.378255 sshd[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:46.384542 systemd-logind[1563]: New session 8 of user core. Mar 7 00:56:46.389245 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 7 00:56:46.894421 sshd[4198]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:46.899931 systemd[1]: sshd@11-116.202.20.89:22-20.161.92.111:37392.service: Deactivated successfully. Mar 7 00:56:46.906838 systemd[1]: session-8.scope: Deactivated successfully. Mar 7 00:56:46.908158 systemd-logind[1563]: Session 8 logged out. Waiting for processes to exit. Mar 7 00:56:46.909470 systemd-logind[1563]: Removed session 8. Mar 7 00:56:50.885187 update_engine[1566]: I20260307 00:56:50.885058 1566 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 00:56:50.885731 update_engine[1566]: I20260307 00:56:50.885488 1566 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 00:56:50.885827 update_engine[1566]: I20260307 00:56:50.885802 1566 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 7 00:56:50.886828 update_engine[1566]: E20260307 00:56:50.886756 1566 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 00:56:50.886985 update_engine[1566]: I20260307 00:56:50.886852 1566 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 7 00:56:50.886985 update_engine[1566]: I20260307 00:56:50.886868 1566 omaha_request_action.cc:617] Omaha request response: Mar 7 00:56:50.887083 update_engine[1566]: E20260307 00:56:50.886996 1566 omaha_request_action.cc:636] Omaha request network transfer failed. Mar 7 00:56:50.887083 update_engine[1566]: I20260307 00:56:50.887024 1566 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Mar 7 00:56:50.887083 update_engine[1566]: I20260307 00:56:50.887034 1566 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 7 00:56:50.887083 update_engine[1566]: I20260307 00:56:50.887043 1566 update_attempter.cc:306] Processing Done. Mar 7 00:56:50.887083 update_engine[1566]: E20260307 00:56:50.887065 1566 update_attempter.cc:619] Update failed. Mar 7 00:56:50.887083 update_engine[1566]: I20260307 00:56:50.887075 1566 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 7 00:56:50.887787 update_engine[1566]: I20260307 00:56:50.887084 1566 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 7 00:56:50.887787 update_engine[1566]: I20260307 00:56:50.887095 1566 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Mar 7 00:56:50.887787 update_engine[1566]: I20260307 00:56:50.887197 1566 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 7 00:56:50.887787 update_engine[1566]: I20260307 00:56:50.887231 1566 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 7 00:56:50.887787 update_engine[1566]: I20260307 00:56:50.887241 1566 omaha_request_action.cc:272] Request: Mar 7 00:56:50.887787 update_engine[1566]: Mar 7 00:56:50.887787 update_engine[1566]: Mar 7 00:56:50.887787 update_engine[1566]: Mar 7 00:56:50.887787 update_engine[1566]: Mar 7 00:56:50.887787 update_engine[1566]: Mar 7 00:56:50.887787 update_engine[1566]: Mar 7 00:56:50.887787 update_engine[1566]: I20260307 00:56:50.887275 1566 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 00:56:50.887787 update_engine[1566]: I20260307 00:56:50.887516 1566 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 00:56:50.887787 update_engine[1566]: I20260307 00:56:50.887749 1566 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 7 00:56:50.888435 locksmithd[1634]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Mar 7 00:56:50.888930 update_engine[1566]: E20260307 00:56:50.888602 1566 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 00:56:50.888930 update_engine[1566]: I20260307 00:56:50.888671 1566 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 7 00:56:50.888930 update_engine[1566]: I20260307 00:56:50.888684 1566 omaha_request_action.cc:617] Omaha request response: Mar 7 00:56:50.888930 update_engine[1566]: I20260307 00:56:50.888695 1566 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 7 00:56:50.888930 update_engine[1566]: I20260307 00:56:50.888705 1566 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 7 00:56:50.888930 update_engine[1566]: I20260307 00:56:50.888716 1566 update_attempter.cc:306] Processing Done. Mar 7 00:56:50.888930 update_engine[1566]: I20260307 00:56:50.888726 1566 update_attempter.cc:310] Error event sent. Mar 7 00:56:50.888930 update_engine[1566]: I20260307 00:56:50.888741 1566 update_check_scheduler.cc:74] Next update check in 43m34s Mar 7 00:56:50.889689 locksmithd[1634]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Mar 7 00:56:52.001733 systemd[1]: Started sshd@13-116.202.20.89:22-20.161.92.111:44410.service - OpenSSH per-connection server daemon (20.161.92.111:44410). Mar 7 00:56:52.588987 sshd[4214]: Accepted publickey for core from 20.161.92.111 port 44410 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:56:52.590875 sshd[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:52.599458 systemd-logind[1563]: New session 9 of user core. 
Mar 7 00:56:52.611516 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 7 00:56:53.085423 sshd[4214]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:53.091672 systemd[1]: sshd@13-116.202.20.89:22-20.161.92.111:44410.service: Deactivated successfully. Mar 7 00:56:53.096435 systemd[1]: session-9.scope: Deactivated successfully. Mar 7 00:56:53.097656 systemd-logind[1563]: Session 9 logged out. Waiting for processes to exit. Mar 7 00:56:53.098794 systemd-logind[1563]: Removed session 9. Mar 7 00:56:58.188734 systemd[1]: Started sshd@14-116.202.20.89:22-20.161.92.111:44420.service - OpenSSH per-connection server daemon (20.161.92.111:44420). Mar 7 00:56:58.773115 sshd[4230]: Accepted publickey for core from 20.161.92.111 port 44420 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:56:58.774664 sshd[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:58.780036 systemd-logind[1563]: New session 10 of user core. Mar 7 00:56:58.786460 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 7 00:56:59.263193 sshd[4230]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:59.269339 systemd-logind[1563]: Session 10 logged out. Waiting for processes to exit. Mar 7 00:56:59.269916 systemd[1]: sshd@14-116.202.20.89:22-20.161.92.111:44420.service: Deactivated successfully. Mar 7 00:56:59.272804 systemd[1]: session-10.scope: Deactivated successfully. Mar 7 00:56:59.275425 systemd-logind[1563]: Removed session 10. Mar 7 00:56:59.367757 systemd[1]: Started sshd@15-116.202.20.89:22-20.161.92.111:44430.service - OpenSSH per-connection server daemon (20.161.92.111:44430). 
Mar 7 00:56:59.955978 sshd[4245]: Accepted publickey for core from 20.161.92.111 port 44430 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:56:59.957821 sshd[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:59.965009 systemd-logind[1563]: New session 11 of user core. Mar 7 00:56:59.969825 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 7 00:57:00.491363 sshd[4245]: pam_unix(sshd:session): session closed for user core Mar 7 00:57:00.497029 systemd-logind[1563]: Session 11 logged out. Waiting for processes to exit. Mar 7 00:57:00.497539 systemd[1]: sshd@15-116.202.20.89:22-20.161.92.111:44430.service: Deactivated successfully. Mar 7 00:57:00.503618 systemd[1]: session-11.scope: Deactivated successfully. Mar 7 00:57:00.505326 systemd-logind[1563]: Removed session 11. Mar 7 00:57:00.594338 systemd[1]: Started sshd@16-116.202.20.89:22-20.161.92.111:51550.service - OpenSSH per-connection server daemon (20.161.92.111:51550). Mar 7 00:57:01.179699 sshd[4257]: Accepted publickey for core from 20.161.92.111 port 51550 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:57:01.183034 sshd[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:57:01.188790 systemd-logind[1563]: New session 12 of user core. Mar 7 00:57:01.194020 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 7 00:57:01.671357 sshd[4257]: pam_unix(sshd:session): session closed for user core Mar 7 00:57:01.678518 systemd[1]: sshd@16-116.202.20.89:22-20.161.92.111:51550.service: Deactivated successfully. Mar 7 00:57:01.682738 systemd-logind[1563]: Session 12 logged out. Waiting for processes to exit. Mar 7 00:57:01.683071 systemd[1]: session-12.scope: Deactivated successfully. Mar 7 00:57:01.686916 systemd-logind[1563]: Removed session 12. 
Mar 7 00:57:06.774381 systemd[1]: Started sshd@17-116.202.20.89:22-20.161.92.111:51558.service - OpenSSH per-connection server daemon (20.161.92.111:51558).
Mar 7 00:57:07.365987 sshd[4270]: Accepted publickey for core from 20.161.92.111 port 51558 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:57:07.368208 sshd[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:57:07.373489 systemd-logind[1563]: New session 13 of user core.
Mar 7 00:57:07.384625 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 7 00:57:07.855547 sshd[4270]: pam_unix(sshd:session): session closed for user core
Mar 7 00:57:07.862434 systemd[1]: sshd@17-116.202.20.89:22-20.161.92.111:51558.service: Deactivated successfully.
Mar 7 00:57:07.866893 systemd[1]: session-13.scope: Deactivated successfully.
Mar 7 00:57:07.867949 systemd-logind[1563]: Session 13 logged out. Waiting for processes to exit.
Mar 7 00:57:07.869199 systemd-logind[1563]: Removed session 13.
Mar 7 00:57:12.959355 systemd[1]: Started sshd@18-116.202.20.89:22-20.161.92.111:56568.service - OpenSSH per-connection server daemon (20.161.92.111:56568).
Mar 7 00:57:13.549002 sshd[4283]: Accepted publickey for core from 20.161.92.111 port 56568 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:57:13.551087 sshd[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:57:13.556449 systemd-logind[1563]: New session 14 of user core.
Mar 7 00:57:13.561275 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 7 00:57:14.038390 sshd[4283]: pam_unix(sshd:session): session closed for user core
Mar 7 00:57:14.043961 systemd-logind[1563]: Session 14 logged out. Waiting for processes to exit.
Mar 7 00:57:14.044746 systemd[1]: sshd@18-116.202.20.89:22-20.161.92.111:56568.service: Deactivated successfully.
Mar 7 00:57:14.047812 systemd[1]: session-14.scope: Deactivated successfully.
Mar 7 00:57:14.049560 systemd-logind[1563]: Removed session 14.
Mar 7 00:57:14.139224 systemd[1]: Started sshd@19-116.202.20.89:22-20.161.92.111:56582.service - OpenSSH per-connection server daemon (20.161.92.111:56582).
Mar 7 00:57:14.728986 sshd[4297]: Accepted publickey for core from 20.161.92.111 port 56582 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:57:14.730432 sshd[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:57:14.736220 systemd-logind[1563]: New session 15 of user core.
Mar 7 00:57:14.740230 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 7 00:57:15.266237 sshd[4297]: pam_unix(sshd:session): session closed for user core
Mar 7 00:57:15.272203 systemd[1]: sshd@19-116.202.20.89:22-20.161.92.111:56582.service: Deactivated successfully.
Mar 7 00:57:15.277218 systemd-logind[1563]: Session 15 logged out. Waiting for processes to exit.
Mar 7 00:57:15.277522 systemd[1]: session-15.scope: Deactivated successfully.
Mar 7 00:57:15.280468 systemd-logind[1563]: Removed session 15.
Mar 7 00:57:15.367220 systemd[1]: Started sshd@20-116.202.20.89:22-20.161.92.111:56592.service - OpenSSH per-connection server daemon (20.161.92.111:56592).
Mar 7 00:57:15.952978 sshd[4309]: Accepted publickey for core from 20.161.92.111 port 56592 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:57:15.954720 sshd[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:57:15.959781 systemd-logind[1563]: New session 16 of user core.
Mar 7 00:57:15.966246 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 7 00:57:17.011265 sshd[4309]: pam_unix(sshd:session): session closed for user core
Mar 7 00:57:17.018374 systemd-logind[1563]: Session 16 logged out. Waiting for processes to exit.
Mar 7 00:57:17.019695 systemd[1]: sshd@20-116.202.20.89:22-20.161.92.111:56592.service: Deactivated successfully.
Mar 7 00:57:17.026035 systemd[1]: session-16.scope: Deactivated successfully.
Mar 7 00:57:17.027193 systemd-logind[1563]: Removed session 16.
Mar 7 00:57:17.115325 systemd[1]: Started sshd@21-116.202.20.89:22-20.161.92.111:56598.service - OpenSSH per-connection server daemon (20.161.92.111:56598).
Mar 7 00:57:17.703208 sshd[4328]: Accepted publickey for core from 20.161.92.111 port 56598 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:57:17.705628 sshd[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:57:17.712031 systemd-logind[1563]: New session 17 of user core.
Mar 7 00:57:17.724478 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 7 00:57:18.329176 sshd[4328]: pam_unix(sshd:session): session closed for user core
Mar 7 00:57:18.332625 systemd[1]: sshd@21-116.202.20.89:22-20.161.92.111:56598.service: Deactivated successfully.
Mar 7 00:57:18.340660 systemd[1]: session-17.scope: Deactivated successfully.
Mar 7 00:57:18.342923 systemd-logind[1563]: Session 17 logged out. Waiting for processes to exit.
Mar 7 00:57:18.344591 systemd-logind[1563]: Removed session 17.
Mar 7 00:57:18.429299 systemd[1]: Started sshd@22-116.202.20.89:22-20.161.92.111:56600.service - OpenSSH per-connection server daemon (20.161.92.111:56600).
Mar 7 00:57:19.014991 sshd[4340]: Accepted publickey for core from 20.161.92.111 port 56600 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:57:19.016204 sshd[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:57:19.020933 systemd-logind[1563]: New session 18 of user core.
Mar 7 00:57:19.033563 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 7 00:57:19.499291 sshd[4340]: pam_unix(sshd:session): session closed for user core
Mar 7 00:57:19.504638 systemd-logind[1563]: Session 18 logged out. Waiting for processes to exit.
Mar 7 00:57:19.505414 systemd[1]: sshd@22-116.202.20.89:22-20.161.92.111:56600.service: Deactivated successfully.
Mar 7 00:57:19.510530 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 00:57:19.512019 systemd-logind[1563]: Removed session 18.
Mar 7 00:57:24.602488 systemd[1]: Started sshd@23-116.202.20.89:22-20.161.92.111:42916.service - OpenSSH per-connection server daemon (20.161.92.111:42916).
Mar 7 00:57:25.191702 sshd[4357]: Accepted publickey for core from 20.161.92.111 port 42916 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:57:25.193170 sshd[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:57:25.201039 systemd-logind[1563]: New session 19 of user core.
Mar 7 00:57:25.206350 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 00:57:25.683137 sshd[4357]: pam_unix(sshd:session): session closed for user core
Mar 7 00:57:25.690160 systemd[1]: sshd@23-116.202.20.89:22-20.161.92.111:42916.service: Deactivated successfully.
Mar 7 00:57:25.695213 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 00:57:25.696193 systemd-logind[1563]: Session 19 logged out. Waiting for processes to exit.
Mar 7 00:57:25.697387 systemd-logind[1563]: Removed session 19.
Mar 7 00:57:30.786705 systemd[1]: Started sshd@24-116.202.20.89:22-20.161.92.111:53076.service - OpenSSH per-connection server daemon (20.161.92.111:53076).
Mar 7 00:57:31.375849 sshd[4372]: Accepted publickey for core from 20.161.92.111 port 53076 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:57:31.377693 sshd[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:57:31.383007 systemd-logind[1563]: New session 20 of user core.
Mar 7 00:57:31.390714 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 7 00:57:31.865273 sshd[4372]: pam_unix(sshd:session): session closed for user core
Mar 7 00:57:31.870591 systemd-logind[1563]: Session 20 logged out. Waiting for processes to exit.
Mar 7 00:57:31.871386 systemd[1]: sshd@24-116.202.20.89:22-20.161.92.111:53076.service: Deactivated successfully.
Mar 7 00:57:31.875829 systemd[1]: session-20.scope: Deactivated successfully.
Mar 7 00:57:31.878268 systemd-logind[1563]: Removed session 20.
Mar 7 00:57:31.965248 systemd[1]: Started sshd@25-116.202.20.89:22-20.161.92.111:53086.service - OpenSSH per-connection server daemon (20.161.92.111:53086).
Mar 7 00:57:32.565167 sshd[4387]: Accepted publickey for core from 20.161.92.111 port 53086 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:57:32.567691 sshd[4387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:57:32.573781 systemd-logind[1563]: New session 21 of user core.
Mar 7 00:57:32.578352 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 7 00:57:35.194442 containerd[1588]: time="2026-03-07T00:57:35.194356635Z" level=info msg="StopContainer for \"baf13a55ff722a1889a29b87605456023be778ea2e8421a1893083880f837718\" with timeout 30 (s)"
Mar 7 00:57:35.196954 containerd[1588]: time="2026-03-07T00:57:35.195574103Z" level=info msg="Stop container \"baf13a55ff722a1889a29b87605456023be778ea2e8421a1893083880f837718\" with signal terminated"
Mar 7 00:57:35.223281 containerd[1588]: time="2026-03-07T00:57:35.223207465Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 7 00:57:35.231595 containerd[1588]: time="2026-03-07T00:57:35.231535219Z" level=info msg="StopContainer for \"ec8717d16dae724b618ca9c1200d04a86542e50a138e8b0c12f29529d8c69bbd\" with timeout 2 (s)"
Mar 7 00:57:35.232047 containerd[1588]: time="2026-03-07T00:57:35.231888747Z" level=info msg="Stop container \"ec8717d16dae724b618ca9c1200d04a86542e50a138e8b0c12f29529d8c69bbd\" with signal terminated"
Mar 7 00:57:35.240595 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-baf13a55ff722a1889a29b87605456023be778ea2e8421a1893083880f837718-rootfs.mount: Deactivated successfully.
Mar 7 00:57:35.248287 systemd-networkd[1235]: lxc_health: Link DOWN
Mar 7 00:57:35.248307 systemd-networkd[1235]: lxc_health: Lost carrier
Mar 7 00:57:35.271019 containerd[1588]: time="2026-03-07T00:57:35.269386259Z" level=info msg="shim disconnected" id=baf13a55ff722a1889a29b87605456023be778ea2e8421a1893083880f837718 namespace=k8s.io
Mar 7 00:57:35.271019 containerd[1588]: time="2026-03-07T00:57:35.269450140Z" level=warning msg="cleaning up after shim disconnected" id=baf13a55ff722a1889a29b87605456023be778ea2e8421a1893083880f837718 namespace=k8s.io
Mar 7 00:57:35.271019 containerd[1588]: time="2026-03-07T00:57:35.269459620Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:57:35.302956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec8717d16dae724b618ca9c1200d04a86542e50a138e8b0c12f29529d8c69bbd-rootfs.mount: Deactivated successfully.
Mar 7 00:57:35.309651 containerd[1588]: time="2026-03-07T00:57:35.309607993Z" level=info msg="StopContainer for \"baf13a55ff722a1889a29b87605456023be778ea2e8421a1893083880f837718\" returns successfully"
Mar 7 00:57:35.310665 containerd[1588]: time="2026-03-07T00:57:35.310491934Z" level=info msg="StopPodSandbox for \"7f9603d4a883ba03f7ba523440431724b3b10b0c06fcb6201c56644f72d830ee\""
Mar 7 00:57:35.310869 containerd[1588]: time="2026-03-07T00:57:35.310812181Z" level=info msg="Container to stop \"baf13a55ff722a1889a29b87605456023be778ea2e8421a1893083880f837718\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 00:57:35.311134 containerd[1588]: time="2026-03-07T00:57:35.310771221Z" level=info msg="shim disconnected" id=ec8717d16dae724b618ca9c1200d04a86542e50a138e8b0c12f29529d8c69bbd namespace=k8s.io
Mar 7 00:57:35.313065 containerd[1588]: time="2026-03-07T00:57:35.313026793Z" level=warning msg="cleaning up after shim disconnected" id=ec8717d16dae724b618ca9c1200d04a86542e50a138e8b0c12f29529d8c69bbd namespace=k8s.io
Mar 7 00:57:35.313206 containerd[1588]: time="2026-03-07T00:57:35.313189957Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:57:35.313990 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7f9603d4a883ba03f7ba523440431724b3b10b0c06fcb6201c56644f72d830ee-shm.mount: Deactivated successfully.
Mar 7 00:57:35.345693 containerd[1588]: time="2026-03-07T00:57:35.345631671Z" level=info msg="StopContainer for \"ec8717d16dae724b618ca9c1200d04a86542e50a138e8b0c12f29529d8c69bbd\" returns successfully"
Mar 7 00:57:35.346609 containerd[1588]: time="2026-03-07T00:57:35.346544332Z" level=info msg="StopPodSandbox for \"b6da42a593a82b546f12b2f7ff7347e6a39e64099a5aa60f44afd435fe693737\""
Mar 7 00:57:35.346966 containerd[1588]: time="2026-03-07T00:57:35.346878100Z" level=info msg="Container to stop \"239192929ba8c42ee3d759494a25287b2d6a533ff51daeb30bd718316b671ae3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 00:57:35.346966 containerd[1588]: time="2026-03-07T00:57:35.346902420Z" level=info msg="Container to stop \"e1af6c27eb97491b16f4c21d3c68b2f944b03db18897d8b1b120c31d1539f02e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 00:57:35.346966 containerd[1588]: time="2026-03-07T00:57:35.346913421Z" level=info msg="Container to stop \"c84801b4a02a3065fa42fc568cb6b42bb8b7e23406f6669f3340f02472c8ec24\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 00:57:35.347222 containerd[1588]: time="2026-03-07T00:57:35.346923981Z" level=info msg="Container to stop \"ec8717d16dae724b618ca9c1200d04a86542e50a138e8b0c12f29529d8c69bbd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 00:57:35.347574 containerd[1588]: time="2026-03-07T00:57:35.347286869Z" level=info msg="Container to stop \"568964fb12e94b25ed9abde207a3119750588963e3a6d77ee9361dc11984e492\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 00:57:35.369393 containerd[1588]: time="2026-03-07T00:57:35.369282620Z" level=info msg="shim disconnected" id=7f9603d4a883ba03f7ba523440431724b3b10b0c06fcb6201c56644f72d830ee namespace=k8s.io
Mar 7 00:57:35.369393 containerd[1588]: time="2026-03-07T00:57:35.369367462Z" level=warning msg="cleaning up after shim disconnected" id=7f9603d4a883ba03f7ba523440431724b3b10b0c06fcb6201c56644f72d830ee namespace=k8s.io
Mar 7 00:57:35.369393 containerd[1588]: time="2026-03-07T00:57:35.369376823Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:57:35.388970 containerd[1588]: time="2026-03-07T00:57:35.388374464Z" level=info msg="shim disconnected" id=b6da42a593a82b546f12b2f7ff7347e6a39e64099a5aa60f44afd435fe693737 namespace=k8s.io
Mar 7 00:57:35.388970 containerd[1588]: time="2026-03-07T00:57:35.388652431Z" level=warning msg="cleaning up after shim disconnected" id=b6da42a593a82b546f12b2f7ff7347e6a39e64099a5aa60f44afd435fe693737 namespace=k8s.io
Mar 7 00:57:35.388970 containerd[1588]: time="2026-03-07T00:57:35.388664871Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:57:35.389785 containerd[1588]: time="2026-03-07T00:57:35.389755056Z" level=info msg="TearDown network for sandbox \"7f9603d4a883ba03f7ba523440431724b3b10b0c06fcb6201c56644f72d830ee\" successfully"
Mar 7 00:57:35.389820 containerd[1588]: time="2026-03-07T00:57:35.389786577Z" level=info msg="StopPodSandbox for \"7f9603d4a883ba03f7ba523440431724b3b10b0c06fcb6201c56644f72d830ee\" returns successfully"
Mar 7 00:57:35.413046 containerd[1588]: time="2026-03-07T00:57:35.412978556Z" level=info msg="TearDown network for sandbox \"b6da42a593a82b546f12b2f7ff7347e6a39e64099a5aa60f44afd435fe693737\" successfully"
Mar 7 00:57:35.413046 containerd[1588]: time="2026-03-07T00:57:35.413022557Z" level=info msg="StopPodSandbox for \"b6da42a593a82b546f12b2f7ff7347e6a39e64099a5aa60f44afd435fe693737\" returns successfully"
Mar 7 00:57:35.501030 kubelet[2788]: I0307 00:57:35.500878 2788 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/487c5252-9add-42cb-a3be-bc78474f583b-cilium-config-path\") pod \"487c5252-9add-42cb-a3be-bc78474f583b\" (UID: \"487c5252-9add-42cb-a3be-bc78474f583b\") "
Mar 7 00:57:35.502803 kubelet[2788]: I0307 00:57:35.502021 2788 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvgsk\" (UniqueName: \"kubernetes.io/projected/487c5252-9add-42cb-a3be-bc78474f583b-kube-api-access-dvgsk\") pod \"487c5252-9add-42cb-a3be-bc78474f583b\" (UID: \"487c5252-9add-42cb-a3be-bc78474f583b\") "
Mar 7 00:57:35.506411 kubelet[2788]: I0307 00:57:35.506294 2788 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/487c5252-9add-42cb-a3be-bc78474f583b-kube-api-access-dvgsk" (OuterVolumeSpecName: "kube-api-access-dvgsk") pod "487c5252-9add-42cb-a3be-bc78474f583b" (UID: "487c5252-9add-42cb-a3be-bc78474f583b"). InnerVolumeSpecName "kube-api-access-dvgsk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 7 00:57:35.507585 kubelet[2788]: I0307 00:57:35.507537 2788 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/487c5252-9add-42cb-a3be-bc78474f583b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "487c5252-9add-42cb-a3be-bc78474f583b" (UID: "487c5252-9add-42cb-a3be-bc78474f583b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 7 00:57:35.602785 kubelet[2788]: I0307 00:57:35.602723 2788 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-etc-cni-netd\") pod \"7145851b-6cdc-459f-9b34-c2ccab7019e7\" (UID: \"7145851b-6cdc-459f-9b34-c2ccab7019e7\") "
Mar 7 00:57:35.603446 kubelet[2788]: I0307 00:57:35.602909 2788 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-xtables-lock\") pod \"7145851b-6cdc-459f-9b34-c2ccab7019e7\" (UID: \"7145851b-6cdc-459f-9b34-c2ccab7019e7\") "
Mar 7 00:57:35.603446 kubelet[2788]: I0307 00:57:35.602990 2788 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7145851b-6cdc-459f-9b34-c2ccab7019e7-clustermesh-secrets\") pod \"7145851b-6cdc-459f-9b34-c2ccab7019e7\" (UID: \"7145851b-6cdc-459f-9b34-c2ccab7019e7\") "
Mar 7 00:57:35.603446 kubelet[2788]: I0307 00:57:35.603025 2788 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-host-proc-sys-net\") pod \"7145851b-6cdc-459f-9b34-c2ccab7019e7\" (UID: \"7145851b-6cdc-459f-9b34-c2ccab7019e7\") "
Mar 7 00:57:35.603446 kubelet[2788]: I0307 00:57:35.603060 2788 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-bpf-maps\") pod \"7145851b-6cdc-459f-9b34-c2ccab7019e7\" (UID: \"7145851b-6cdc-459f-9b34-c2ccab7019e7\") "
Mar 7 00:57:35.603446 kubelet[2788]: I0307 00:57:35.603094 2788 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-lib-modules\") pod \"7145851b-6cdc-459f-9b34-c2ccab7019e7\" (UID: \"7145851b-6cdc-459f-9b34-c2ccab7019e7\") "
Mar 7 00:57:35.603446 kubelet[2788]: I0307 00:57:35.603153 2788 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-cni-path\") pod \"7145851b-6cdc-459f-9b34-c2ccab7019e7\" (UID: \"7145851b-6cdc-459f-9b34-c2ccab7019e7\") "
Mar 7 00:57:35.604255 kubelet[2788]: I0307 00:57:35.603186 2788 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-cilium-cgroup\") pod \"7145851b-6cdc-459f-9b34-c2ccab7019e7\" (UID: \"7145851b-6cdc-459f-9b34-c2ccab7019e7\") "
Mar 7 00:57:35.604255 kubelet[2788]: I0307 00:57:35.603622 2788 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-host-proc-sys-kernel\") pod \"7145851b-6cdc-459f-9b34-c2ccab7019e7\" (UID: \"7145851b-6cdc-459f-9b34-c2ccab7019e7\") "
Mar 7 00:57:35.604255 kubelet[2788]: I0307 00:57:35.603788 2788 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-hostproc\") pod \"7145851b-6cdc-459f-9b34-c2ccab7019e7\" (UID: \"7145851b-6cdc-459f-9b34-c2ccab7019e7\") "
Mar 7 00:57:35.604255 kubelet[2788]: I0307 00:57:35.603993 2788 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7145851b-6cdc-459f-9b34-c2ccab7019e7-hubble-tls\") pod \"7145851b-6cdc-459f-9b34-c2ccab7019e7\" (UID: \"7145851b-6cdc-459f-9b34-c2ccab7019e7\") "
Mar 7 00:57:35.604255 kubelet[2788]: I0307 00:57:35.604044 2788 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7145851b-6cdc-459f-9b34-c2ccab7019e7-cilium-config-path\") pod \"7145851b-6cdc-459f-9b34-c2ccab7019e7\" (UID: \"7145851b-6cdc-459f-9b34-c2ccab7019e7\") "
Mar 7 00:57:35.604255 kubelet[2788]: I0307 00:57:35.604075 2788 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-cilium-run\") pod \"7145851b-6cdc-459f-9b34-c2ccab7019e7\" (UID: \"7145851b-6cdc-459f-9b34-c2ccab7019e7\") "
Mar 7 00:57:35.605161 kubelet[2788]: I0307 00:57:35.604637 2788 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6vbr\" (UniqueName: \"kubernetes.io/projected/7145851b-6cdc-459f-9b34-c2ccab7019e7-kube-api-access-x6vbr\") pod \"7145851b-6cdc-459f-9b34-c2ccab7019e7\" (UID: \"7145851b-6cdc-459f-9b34-c2ccab7019e7\") "
Mar 7 00:57:35.605161 kubelet[2788]: I0307 00:57:35.604703 2788 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-cni-path" (OuterVolumeSpecName: "cni-path") pod "7145851b-6cdc-459f-9b34-c2ccab7019e7" (UID: "7145851b-6cdc-459f-9b34-c2ccab7019e7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:57:35.605161 kubelet[2788]: I0307 00:57:35.604715 2788 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/487c5252-9add-42cb-a3be-bc78474f583b-cilium-config-path\") on node \"ci-4081-3-6-n-e1f368ffcb\" DevicePath \"\""
Mar 7 00:57:35.605161 kubelet[2788]: I0307 00:57:35.604787 2788 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dvgsk\" (UniqueName: \"kubernetes.io/projected/487c5252-9add-42cb-a3be-bc78474f583b-kube-api-access-dvgsk\") on node \"ci-4081-3-6-n-e1f368ffcb\" DevicePath \"\""
Mar 7 00:57:35.605161 kubelet[2788]: I0307 00:57:35.604817 2788 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7145851b-6cdc-459f-9b34-c2ccab7019e7" (UID: "7145851b-6cdc-459f-9b34-c2ccab7019e7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:57:35.607365 kubelet[2788]: I0307 00:57:35.607073 2788 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7145851b-6cdc-459f-9b34-c2ccab7019e7" (UID: "7145851b-6cdc-459f-9b34-c2ccab7019e7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:57:35.607365 kubelet[2788]: I0307 00:57:35.607131 2788 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7145851b-6cdc-459f-9b34-c2ccab7019e7" (UID: "7145851b-6cdc-459f-9b34-c2ccab7019e7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:57:35.607365 kubelet[2788]: I0307 00:57:35.607151 2788 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-hostproc" (OuterVolumeSpecName: "hostproc") pod "7145851b-6cdc-459f-9b34-c2ccab7019e7" (UID: "7145851b-6cdc-459f-9b34-c2ccab7019e7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:57:35.607365 kubelet[2788]: I0307 00:57:35.607150 2788 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7145851b-6cdc-459f-9b34-c2ccab7019e7" (UID: "7145851b-6cdc-459f-9b34-c2ccab7019e7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:57:35.607365 kubelet[2788]: I0307 00:57:35.607162 2788 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7145851b-6cdc-459f-9b34-c2ccab7019e7" (UID: "7145851b-6cdc-459f-9b34-c2ccab7019e7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:57:35.607535 kubelet[2788]: I0307 00:57:35.607185 2788 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7145851b-6cdc-459f-9b34-c2ccab7019e7" (UID: "7145851b-6cdc-459f-9b34-c2ccab7019e7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:57:35.607535 kubelet[2788]: I0307 00:57:35.607214 2788 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7145851b-6cdc-459f-9b34-c2ccab7019e7" (UID: "7145851b-6cdc-459f-9b34-c2ccab7019e7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:57:35.609013 kubelet[2788]: I0307 00:57:35.608753 2788 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7145851b-6cdc-459f-9b34-c2ccab7019e7" (UID: "7145851b-6cdc-459f-9b34-c2ccab7019e7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:57:35.609897 kubelet[2788]: I0307 00:57:35.609857 2788 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7145851b-6cdc-459f-9b34-c2ccab7019e7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7145851b-6cdc-459f-9b34-c2ccab7019e7" (UID: "7145851b-6cdc-459f-9b34-c2ccab7019e7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 7 00:57:35.610658 kubelet[2788]: I0307 00:57:35.610635 2788 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7145851b-6cdc-459f-9b34-c2ccab7019e7-kube-api-access-x6vbr" (OuterVolumeSpecName: "kube-api-access-x6vbr") pod "7145851b-6cdc-459f-9b34-c2ccab7019e7" (UID: "7145851b-6cdc-459f-9b34-c2ccab7019e7"). InnerVolumeSpecName "kube-api-access-x6vbr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 7 00:57:35.610850 kubelet[2788]: I0307 00:57:35.610660 2788 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7145851b-6cdc-459f-9b34-c2ccab7019e7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7145851b-6cdc-459f-9b34-c2ccab7019e7" (UID: "7145851b-6cdc-459f-9b34-c2ccab7019e7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 7 00:57:35.611639 kubelet[2788]: I0307 00:57:35.611606 2788 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7145851b-6cdc-459f-9b34-c2ccab7019e7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7145851b-6cdc-459f-9b34-c2ccab7019e7" (UID: "7145851b-6cdc-459f-9b34-c2ccab7019e7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 7 00:57:35.706006 kubelet[2788]: I0307 00:57:35.705802 2788 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7145851b-6cdc-459f-9b34-c2ccab7019e7-cilium-config-path\") on node \"ci-4081-3-6-n-e1f368ffcb\" DevicePath \"\""
Mar 7 00:57:35.706006 kubelet[2788]: I0307 00:57:35.705846 2788 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-cilium-run\") on node \"ci-4081-3-6-n-e1f368ffcb\" DevicePath \"\""
Mar 7 00:57:35.706006 kubelet[2788]: I0307 00:57:35.705865 2788 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x6vbr\" (UniqueName: \"kubernetes.io/projected/7145851b-6cdc-459f-9b34-c2ccab7019e7-kube-api-access-x6vbr\") on node \"ci-4081-3-6-n-e1f368ffcb\" DevicePath \"\""
Mar 7 00:57:35.706006 kubelet[2788]: I0307 00:57:35.705880 2788 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-etc-cni-netd\") on node \"ci-4081-3-6-n-e1f368ffcb\" DevicePath \"\""
Mar 7 00:57:35.706006 kubelet[2788]: I0307 00:57:35.705899 2788 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-xtables-lock\") on node \"ci-4081-3-6-n-e1f368ffcb\" DevicePath \"\""
Mar 7 00:57:35.706006 kubelet[2788]: I0307 00:57:35.705913 2788 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7145851b-6cdc-459f-9b34-c2ccab7019e7-clustermesh-secrets\") on node \"ci-4081-3-6-n-e1f368ffcb\" DevicePath \"\""
Mar 7 00:57:35.706006 kubelet[2788]: I0307 00:57:35.705929 2788 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-host-proc-sys-net\") on node \"ci-4081-3-6-n-e1f368ffcb\" DevicePath \"\""
Mar 7 00:57:35.706006 kubelet[2788]: I0307 00:57:35.705971 2788 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-bpf-maps\") on node \"ci-4081-3-6-n-e1f368ffcb\" DevicePath \"\""
Mar 7 00:57:35.706529 kubelet[2788]: I0307 00:57:35.705987 2788 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-lib-modules\") on node \"ci-4081-3-6-n-e1f368ffcb\" DevicePath \"\""
Mar 7 00:57:35.706529 kubelet[2788]: I0307 00:57:35.706004 2788 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-cni-path\") on node \"ci-4081-3-6-n-e1f368ffcb\" DevicePath \"\""
Mar 7 00:57:35.706529 kubelet[2788]: I0307 00:57:35.706019 2788 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-cilium-cgroup\") on node \"ci-4081-3-6-n-e1f368ffcb\" DevicePath \"\""
Mar 7 00:57:35.706529 kubelet[2788]: I0307 00:57:35.706033 2788 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-host-proc-sys-kernel\") on node \"ci-4081-3-6-n-e1f368ffcb\" DevicePath \"\""
Mar 7 00:57:35.706529 kubelet[2788]: I0307 00:57:35.706046 2788 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7145851b-6cdc-459f-9b34-c2ccab7019e7-hostproc\") on node \"ci-4081-3-6-n-e1f368ffcb\" DevicePath \"\""
Mar 7 00:57:35.706529 kubelet[2788]: I0307 00:57:35.706060 2788 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7145851b-6cdc-459f-9b34-c2ccab7019e7-hubble-tls\") on node \"ci-4081-3-6-n-e1f368ffcb\" DevicePath \"\""
Mar 7 00:57:35.772498 kubelet[2788]: I0307 00:57:35.772260 2788 scope.go:117] "RemoveContainer" containerID="baf13a55ff722a1889a29b87605456023be778ea2e8421a1893083880f837718"
Mar 7 00:57:35.776981 containerd[1588]: time="2026-03-07T00:57:35.776897454Z" level=info msg="RemoveContainer for \"baf13a55ff722a1889a29b87605456023be778ea2e8421a1893083880f837718\""
Mar 7 00:57:35.784763 containerd[1588]: time="2026-03-07T00:57:35.784627394Z" level=info msg="RemoveContainer for \"baf13a55ff722a1889a29b87605456023be778ea2e8421a1893083880f837718\" returns successfully"
Mar 7 00:57:35.785827 containerd[1588]: time="2026-03-07T00:57:35.785522575Z" level=error msg="ContainerStatus for \"baf13a55ff722a1889a29b87605456023be778ea2e8421a1893083880f837718\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"baf13a55ff722a1889a29b87605456023be778ea2e8421a1893083880f837718\": not found"
Mar 7 00:57:35.786608 kubelet[2788]: I0307 00:57:35.785080 2788 scope.go:117] "RemoveContainer"
containerID="baf13a55ff722a1889a29b87605456023be778ea2e8421a1893083880f837718" Mar 7 00:57:35.786608 kubelet[2788]: E0307 00:57:35.785662 2788 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"baf13a55ff722a1889a29b87605456023be778ea2e8421a1893083880f837718\": not found" containerID="baf13a55ff722a1889a29b87605456023be778ea2e8421a1893083880f837718" Mar 7 00:57:35.786608 kubelet[2788]: I0307 00:57:35.785711 2788 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"baf13a55ff722a1889a29b87605456023be778ea2e8421a1893083880f837718"} err="failed to get container status \"baf13a55ff722a1889a29b87605456023be778ea2e8421a1893083880f837718\": rpc error: code = NotFound desc = an error occurred when try to find container \"baf13a55ff722a1889a29b87605456023be778ea2e8421a1893083880f837718\": not found" Mar 7 00:57:35.786608 kubelet[2788]: I0307 00:57:35.785753 2788 scope.go:117] "RemoveContainer" containerID="ec8717d16dae724b618ca9c1200d04a86542e50a138e8b0c12f29529d8c69bbd" Mar 7 00:57:35.788377 containerd[1588]: time="2026-03-07T00:57:35.788132115Z" level=info msg="RemoveContainer for \"ec8717d16dae724b618ca9c1200d04a86542e50a138e8b0c12f29529d8c69bbd\"" Mar 7 00:57:35.796598 containerd[1588]: time="2026-03-07T00:57:35.796336946Z" level=info msg="RemoveContainer for \"ec8717d16dae724b618ca9c1200d04a86542e50a138e8b0c12f29529d8c69bbd\" returns successfully" Mar 7 00:57:35.797267 kubelet[2788]: I0307 00:57:35.797241 2788 scope.go:117] "RemoveContainer" containerID="c84801b4a02a3065fa42fc568cb6b42bb8b7e23406f6669f3340f02472c8ec24" Mar 7 00:57:35.802240 containerd[1588]: time="2026-03-07T00:57:35.802194482Z" level=info msg="RemoveContainer for \"c84801b4a02a3065fa42fc568cb6b42bb8b7e23406f6669f3340f02472c8ec24\"" Mar 7 00:57:35.810969 containerd[1588]: time="2026-03-07T00:57:35.807624688Z" level=info msg="RemoveContainer for 
\"c84801b4a02a3065fa42fc568cb6b42bb8b7e23406f6669f3340f02472c8ec24\" returns successfully" Mar 7 00:57:35.811100 kubelet[2788]: I0307 00:57:35.809249 2788 scope.go:117] "RemoveContainer" containerID="e1af6c27eb97491b16f4c21d3c68b2f944b03db18897d8b1b120c31d1539f02e" Mar 7 00:57:35.818840 containerd[1588]: time="2026-03-07T00:57:35.817985289Z" level=info msg="RemoveContainer for \"e1af6c27eb97491b16f4c21d3c68b2f944b03db18897d8b1b120c31d1539f02e\"" Mar 7 00:57:35.825256 containerd[1588]: time="2026-03-07T00:57:35.825215257Z" level=info msg="RemoveContainer for \"e1af6c27eb97491b16f4c21d3c68b2f944b03db18897d8b1b120c31d1539f02e\" returns successfully" Mar 7 00:57:35.825701 kubelet[2788]: I0307 00:57:35.825673 2788 scope.go:117] "RemoveContainer" containerID="568964fb12e94b25ed9abde207a3119750588963e3a6d77ee9361dc11984e492" Mar 7 00:57:35.827059 containerd[1588]: time="2026-03-07T00:57:35.827033779Z" level=info msg="RemoveContainer for \"568964fb12e94b25ed9abde207a3119750588963e3a6d77ee9361dc11984e492\"" Mar 7 00:57:35.830242 containerd[1588]: time="2026-03-07T00:57:35.830136531Z" level=info msg="RemoveContainer for \"568964fb12e94b25ed9abde207a3119750588963e3a6d77ee9361dc11984e492\" returns successfully" Mar 7 00:57:35.830513 kubelet[2788]: I0307 00:57:35.830481 2788 scope.go:117] "RemoveContainer" containerID="239192929ba8c42ee3d759494a25287b2d6a533ff51daeb30bd718316b671ae3" Mar 7 00:57:35.831727 containerd[1588]: time="2026-03-07T00:57:35.831585125Z" level=info msg="RemoveContainer for \"239192929ba8c42ee3d759494a25287b2d6a533ff51daeb30bd718316b671ae3\"" Mar 7 00:57:35.834865 containerd[1588]: time="2026-03-07T00:57:35.834776559Z" level=info msg="RemoveContainer for \"239192929ba8c42ee3d759494a25287b2d6a533ff51daeb30bd718316b671ae3\" returns successfully" Mar 7 00:57:35.835126 kubelet[2788]: I0307 00:57:35.835039 2788 scope.go:117] "RemoveContainer" containerID="ec8717d16dae724b618ca9c1200d04a86542e50a138e8b0c12f29529d8c69bbd" Mar 7 00:57:35.835322 containerd[1588]: 
time="2026-03-07T00:57:35.835262051Z" level=error msg="ContainerStatus for \"ec8717d16dae724b618ca9c1200d04a86542e50a138e8b0c12f29529d8c69bbd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ec8717d16dae724b618ca9c1200d04a86542e50a138e8b0c12f29529d8c69bbd\": not found" Mar 7 00:57:35.835552 kubelet[2788]: E0307 00:57:35.835438 2788 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ec8717d16dae724b618ca9c1200d04a86542e50a138e8b0c12f29529d8c69bbd\": not found" containerID="ec8717d16dae724b618ca9c1200d04a86542e50a138e8b0c12f29529d8c69bbd" Mar 7 00:57:35.835552 kubelet[2788]: I0307 00:57:35.835466 2788 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ec8717d16dae724b618ca9c1200d04a86542e50a138e8b0c12f29529d8c69bbd"} err="failed to get container status \"ec8717d16dae724b618ca9c1200d04a86542e50a138e8b0c12f29529d8c69bbd\": rpc error: code = NotFound desc = an error occurred when try to find container \"ec8717d16dae724b618ca9c1200d04a86542e50a138e8b0c12f29529d8c69bbd\": not found" Mar 7 00:57:35.835552 kubelet[2788]: I0307 00:57:35.835491 2788 scope.go:117] "RemoveContainer" containerID="c84801b4a02a3065fa42fc568cb6b42bb8b7e23406f6669f3340f02472c8ec24" Mar 7 00:57:35.835764 containerd[1588]: time="2026-03-07T00:57:35.835691541Z" level=error msg="ContainerStatus for \"c84801b4a02a3065fa42fc568cb6b42bb8b7e23406f6669f3340f02472c8ec24\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c84801b4a02a3065fa42fc568cb6b42bb8b7e23406f6669f3340f02472c8ec24\": not found" Mar 7 00:57:35.835904 kubelet[2788]: E0307 00:57:35.835889 2788 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c84801b4a02a3065fa42fc568cb6b42bb8b7e23406f6669f3340f02472c8ec24\": not 
found" containerID="c84801b4a02a3065fa42fc568cb6b42bb8b7e23406f6669f3340f02472c8ec24" Mar 7 00:57:35.836084 kubelet[2788]: I0307 00:57:35.835955 2788 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c84801b4a02a3065fa42fc568cb6b42bb8b7e23406f6669f3340f02472c8ec24"} err="failed to get container status \"c84801b4a02a3065fa42fc568cb6b42bb8b7e23406f6669f3340f02472c8ec24\": rpc error: code = NotFound desc = an error occurred when try to find container \"c84801b4a02a3065fa42fc568cb6b42bb8b7e23406f6669f3340f02472c8ec24\": not found" Mar 7 00:57:35.836084 kubelet[2788]: I0307 00:57:35.835971 2788 scope.go:117] "RemoveContainer" containerID="e1af6c27eb97491b16f4c21d3c68b2f944b03db18897d8b1b120c31d1539f02e" Mar 7 00:57:35.836177 containerd[1588]: time="2026-03-07T00:57:35.836143231Z" level=error msg="ContainerStatus for \"e1af6c27eb97491b16f4c21d3c68b2f944b03db18897d8b1b120c31d1539f02e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e1af6c27eb97491b16f4c21d3c68b2f944b03db18897d8b1b120c31d1539f02e\": not found" Mar 7 00:57:35.836402 kubelet[2788]: E0307 00:57:35.836295 2788 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e1af6c27eb97491b16f4c21d3c68b2f944b03db18897d8b1b120c31d1539f02e\": not found" containerID="e1af6c27eb97491b16f4c21d3c68b2f944b03db18897d8b1b120c31d1539f02e" Mar 7 00:57:35.836402 kubelet[2788]: I0307 00:57:35.836333 2788 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e1af6c27eb97491b16f4c21d3c68b2f944b03db18897d8b1b120c31d1539f02e"} err="failed to get container status \"e1af6c27eb97491b16f4c21d3c68b2f944b03db18897d8b1b120c31d1539f02e\": rpc error: code = NotFound desc = an error occurred when try to find container \"e1af6c27eb97491b16f4c21d3c68b2f944b03db18897d8b1b120c31d1539f02e\": not found" Mar 7 
00:57:35.836402 kubelet[2788]: I0307 00:57:35.836347 2788 scope.go:117] "RemoveContainer" containerID="568964fb12e94b25ed9abde207a3119750588963e3a6d77ee9361dc11984e492" Mar 7 00:57:35.836731 containerd[1588]: time="2026-03-07T00:57:35.836663283Z" level=error msg="ContainerStatus for \"568964fb12e94b25ed9abde207a3119750588963e3a6d77ee9361dc11984e492\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"568964fb12e94b25ed9abde207a3119750588963e3a6d77ee9361dc11984e492\": not found" Mar 7 00:57:35.836927 kubelet[2788]: E0307 00:57:35.836780 2788 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"568964fb12e94b25ed9abde207a3119750588963e3a6d77ee9361dc11984e492\": not found" containerID="568964fb12e94b25ed9abde207a3119750588963e3a6d77ee9361dc11984e492" Mar 7 00:57:35.836927 kubelet[2788]: I0307 00:57:35.836802 2788 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"568964fb12e94b25ed9abde207a3119750588963e3a6d77ee9361dc11984e492"} err="failed to get container status \"568964fb12e94b25ed9abde207a3119750588963e3a6d77ee9361dc11984e492\": rpc error: code = NotFound desc = an error occurred when try to find container \"568964fb12e94b25ed9abde207a3119750588963e3a6d77ee9361dc11984e492\": not found" Mar 7 00:57:35.836927 kubelet[2788]: I0307 00:57:35.836816 2788 scope.go:117] "RemoveContainer" containerID="239192929ba8c42ee3d759494a25287b2d6a533ff51daeb30bd718316b671ae3" Mar 7 00:57:35.837047 containerd[1588]: time="2026-03-07T00:57:35.837014051Z" level=error msg="ContainerStatus for \"239192929ba8c42ee3d759494a25287b2d6a533ff51daeb30bd718316b671ae3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"239192929ba8c42ee3d759494a25287b2d6a533ff51daeb30bd718316b671ae3\": not found" Mar 7 00:57:35.837178 kubelet[2788]: E0307 00:57:35.837128 2788 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"239192929ba8c42ee3d759494a25287b2d6a533ff51daeb30bd718316b671ae3\": not found" containerID="239192929ba8c42ee3d759494a25287b2d6a533ff51daeb30bd718316b671ae3" Mar 7 00:57:35.837178 kubelet[2788]: I0307 00:57:35.837153 2788 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"239192929ba8c42ee3d759494a25287b2d6a533ff51daeb30bd718316b671ae3"} err="failed to get container status \"239192929ba8c42ee3d759494a25287b2d6a533ff51daeb30bd718316b671ae3\": rpc error: code = NotFound desc = an error occurred when try to find container \"239192929ba8c42ee3d759494a25287b2d6a533ff51daeb30bd718316b671ae3\": not found" Mar 7 00:57:36.205099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6da42a593a82b546f12b2f7ff7347e6a39e64099a5aa60f44afd435fe693737-rootfs.mount: Deactivated successfully. Mar 7 00:57:36.205403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f9603d4a883ba03f7ba523440431724b3b10b0c06fcb6201c56644f72d830ee-rootfs.mount: Deactivated successfully. Mar 7 00:57:36.205519 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b6da42a593a82b546f12b2f7ff7347e6a39e64099a5aa60f44afd435fe693737-shm.mount: Deactivated successfully. Mar 7 00:57:36.205637 systemd[1]: var-lib-kubelet-pods-7145851b\x2d6cdc\x2d459f\x2d9b34\x2dc2ccab7019e7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx6vbr.mount: Deactivated successfully. Mar 7 00:57:36.205746 systemd[1]: var-lib-kubelet-pods-487c5252\x2d9add\x2d42cb\x2da3be\x2dbc78474f583b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddvgsk.mount: Deactivated successfully. Mar 7 00:57:36.205863 systemd[1]: var-lib-kubelet-pods-7145851b\x2d6cdc\x2d459f\x2d9b34\x2dc2ccab7019e7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Mar 7 00:57:36.206488 systemd[1]: var-lib-kubelet-pods-7145851b\x2d6cdc\x2d459f\x2d9b34\x2dc2ccab7019e7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 7 00:57:37.194592 kubelet[2788]: I0307 00:57:37.194495 2788 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="487c5252-9add-42cb-a3be-bc78474f583b" path="/var/lib/kubelet/pods/487c5252-9add-42cb-a3be-bc78474f583b/volumes" Mar 7 00:57:37.195120 kubelet[2788]: I0307 00:57:37.195087 2788 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7145851b-6cdc-459f-9b34-c2ccab7019e7" path="/var/lib/kubelet/pods/7145851b-6cdc-459f-9b34-c2ccab7019e7/volumes" Mar 7 00:57:37.229477 sshd[4387]: pam_unix(sshd:session): session closed for user core Mar 7 00:57:37.237416 systemd[1]: sshd@25-116.202.20.89:22-20.161.92.111:53086.service: Deactivated successfully. Mar 7 00:57:37.241230 systemd-logind[1563]: Session 21 logged out. Waiting for processes to exit. Mar 7 00:57:37.241738 systemd[1]: session-21.scope: Deactivated successfully. Mar 7 00:57:37.245076 systemd-logind[1563]: Removed session 21. Mar 7 00:57:37.331468 systemd[1]: Started sshd@26-116.202.20.89:22-20.161.92.111:53092.service - OpenSSH per-connection server daemon (20.161.92.111:53092). Mar 7 00:57:37.918831 sshd[4555]: Accepted publickey for core from 20.161.92.111 port 53092 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:57:37.922589 sshd[4555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:57:37.930748 systemd-logind[1563]: New session 22 of user core. Mar 7 00:57:37.940523 systemd[1]: Started session-22.scope - Session 22 of User core. 
Mar 7 00:57:39.330269 kubelet[2788]: E0307 00:57:39.330218 2788 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 7 00:57:39.487182 sshd[4555]: pam_unix(sshd:session): session closed for user core Mar 7 00:57:39.492834 systemd[1]: sshd@26-116.202.20.89:22-20.161.92.111:53092.service: Deactivated successfully. Mar 7 00:57:39.498141 systemd[1]: session-22.scope: Deactivated successfully. Mar 7 00:57:39.499505 systemd-logind[1563]: Session 22 logged out. Waiting for processes to exit. Mar 7 00:57:39.501080 systemd-logind[1563]: Removed session 22. Mar 7 00:57:39.530083 kubelet[2788]: I0307 00:57:39.530024 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/963e9030-0644-4e4b-b059-54934a81490c-xtables-lock\") pod \"cilium-2wlmm\" (UID: \"963e9030-0644-4e4b-b059-54934a81490c\") " pod="kube-system/cilium-2wlmm" Mar 7 00:57:39.530083 kubelet[2788]: I0307 00:57:39.530101 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/963e9030-0644-4e4b-b059-54934a81490c-cilium-config-path\") pod \"cilium-2wlmm\" (UID: \"963e9030-0644-4e4b-b059-54934a81490c\") " pod="kube-system/cilium-2wlmm" Mar 7 00:57:39.530083 kubelet[2788]: I0307 00:57:39.530207 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/963e9030-0644-4e4b-b059-54934a81490c-cilium-ipsec-secrets\") pod \"cilium-2wlmm\" (UID: \"963e9030-0644-4e4b-b059-54934a81490c\") " pod="kube-system/cilium-2wlmm" Mar 7 00:57:39.530590 kubelet[2788]: I0307 00:57:39.530290 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/963e9030-0644-4e4b-b059-54934a81490c-hubble-tls\") pod \"cilium-2wlmm\" (UID: \"963e9030-0644-4e4b-b059-54934a81490c\") " pod="kube-system/cilium-2wlmm" Mar 7 00:57:39.530590 kubelet[2788]: I0307 00:57:39.530380 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/963e9030-0644-4e4b-b059-54934a81490c-bpf-maps\") pod \"cilium-2wlmm\" (UID: \"963e9030-0644-4e4b-b059-54934a81490c\") " pod="kube-system/cilium-2wlmm" Mar 7 00:57:39.530590 kubelet[2788]: I0307 00:57:39.530429 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/963e9030-0644-4e4b-b059-54934a81490c-lib-modules\") pod \"cilium-2wlmm\" (UID: \"963e9030-0644-4e4b-b059-54934a81490c\") " pod="kube-system/cilium-2wlmm" Mar 7 00:57:39.530590 kubelet[2788]: I0307 00:57:39.530471 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/963e9030-0644-4e4b-b059-54934a81490c-host-proc-sys-net\") pod \"cilium-2wlmm\" (UID: \"963e9030-0644-4e4b-b059-54934a81490c\") " pod="kube-system/cilium-2wlmm" Mar 7 00:57:39.530590 kubelet[2788]: I0307 00:57:39.530509 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/963e9030-0644-4e4b-b059-54934a81490c-hostproc\") pod \"cilium-2wlmm\" (UID: \"963e9030-0644-4e4b-b059-54934a81490c\") " pod="kube-system/cilium-2wlmm" Mar 7 00:57:39.530590 kubelet[2788]: I0307 00:57:39.530550 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/963e9030-0644-4e4b-b059-54934a81490c-etc-cni-netd\") pod \"cilium-2wlmm\" (UID: 
\"963e9030-0644-4e4b-b059-54934a81490c\") " pod="kube-system/cilium-2wlmm" Mar 7 00:57:39.530915 kubelet[2788]: I0307 00:57:39.530602 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/963e9030-0644-4e4b-b059-54934a81490c-cilium-cgroup\") pod \"cilium-2wlmm\" (UID: \"963e9030-0644-4e4b-b059-54934a81490c\") " pod="kube-system/cilium-2wlmm" Mar 7 00:57:39.530915 kubelet[2788]: I0307 00:57:39.530643 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/963e9030-0644-4e4b-b059-54934a81490c-cilium-run\") pod \"cilium-2wlmm\" (UID: \"963e9030-0644-4e4b-b059-54934a81490c\") " pod="kube-system/cilium-2wlmm" Mar 7 00:57:39.530915 kubelet[2788]: I0307 00:57:39.530707 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/963e9030-0644-4e4b-b059-54934a81490c-cni-path\") pod \"cilium-2wlmm\" (UID: \"963e9030-0644-4e4b-b059-54934a81490c\") " pod="kube-system/cilium-2wlmm" Mar 7 00:57:39.530915 kubelet[2788]: I0307 00:57:39.530748 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/963e9030-0644-4e4b-b059-54934a81490c-host-proc-sys-kernel\") pod \"cilium-2wlmm\" (UID: \"963e9030-0644-4e4b-b059-54934a81490c\") " pod="kube-system/cilium-2wlmm" Mar 7 00:57:39.530915 kubelet[2788]: I0307 00:57:39.530802 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/963e9030-0644-4e4b-b059-54934a81490c-clustermesh-secrets\") pod \"cilium-2wlmm\" (UID: \"963e9030-0644-4e4b-b059-54934a81490c\") " pod="kube-system/cilium-2wlmm" Mar 7 00:57:39.530915 kubelet[2788]: I0307 
00:57:39.530845 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdjfg\" (UniqueName: \"kubernetes.io/projected/963e9030-0644-4e4b-b059-54934a81490c-kube-api-access-mdjfg\") pod \"cilium-2wlmm\" (UID: \"963e9030-0644-4e4b-b059-54934a81490c\") " pod="kube-system/cilium-2wlmm" Mar 7 00:57:39.585867 systemd[1]: Started sshd@27-116.202.20.89:22-20.161.92.111:53094.service - OpenSSH per-connection server daemon (20.161.92.111:53094). Mar 7 00:57:39.719831 containerd[1588]: time="2026-03-07T00:57:39.719075173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2wlmm,Uid:963e9030-0644-4e4b-b059-54934a81490c,Namespace:kube-system,Attempt:0,}" Mar 7 00:57:39.742544 containerd[1588]: time="2026-03-07T00:57:39.742268307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:57:39.742544 containerd[1588]: time="2026-03-07T00:57:39.742333828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:57:39.742544 containerd[1588]: time="2026-03-07T00:57:39.742345069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:57:39.742544 containerd[1588]: time="2026-03-07T00:57:39.742436671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:57:39.779553 containerd[1588]: time="2026-03-07T00:57:39.779490923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2wlmm,Uid:963e9030-0644-4e4b-b059-54934a81490c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d84f49a661f57dbd6912baa6ede4b4fc4232cfd278234ad25ab4cc3a4b06ab6\"" Mar 7 00:57:39.787777 containerd[1588]: time="2026-03-07T00:57:39.787731193Z" level=info msg="CreateContainer within sandbox \"1d84f49a661f57dbd6912baa6ede4b4fc4232cfd278234ad25ab4cc3a4b06ab6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 7 00:57:39.801911 containerd[1588]: time="2026-03-07T00:57:39.801806876Z" level=info msg="CreateContainer within sandbox \"1d84f49a661f57dbd6912baa6ede4b4fc4232cfd278234ad25ab4cc3a4b06ab6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e2f6a90e113507be502fdd78c0ecb3d337796215510710454dac9aca3fa23139\"" Mar 7 00:57:39.805825 containerd[1588]: time="2026-03-07T00:57:39.802724777Z" level=info msg="StartContainer for \"e2f6a90e113507be502fdd78c0ecb3d337796215510710454dac9aca3fa23139\"" Mar 7 00:57:39.861849 containerd[1588]: time="2026-03-07T00:57:39.861715614Z" level=info msg="StartContainer for \"e2f6a90e113507be502fdd78c0ecb3d337796215510710454dac9aca3fa23139\" returns successfully" Mar 7 00:57:39.899892 containerd[1588]: time="2026-03-07T00:57:39.899814130Z" level=info msg="shim disconnected" id=e2f6a90e113507be502fdd78c0ecb3d337796215510710454dac9aca3fa23139 namespace=k8s.io Mar 7 00:57:39.899892 containerd[1588]: time="2026-03-07T00:57:39.899869652Z" level=warning msg="cleaning up after shim disconnected" id=e2f6a90e113507be502fdd78c0ecb3d337796215510710454dac9aca3fa23139 namespace=k8s.io Mar 7 00:57:39.899892 containerd[1588]: time="2026-03-07T00:57:39.899878932Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:57:40.191343 sshd[4568]: Accepted publickey for core from 20.161.92.111 port 53094 
ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:57:40.192862 sshd[4568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:57:40.198611 systemd-logind[1563]: New session 23 of user core. Mar 7 00:57:40.204441 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 7 00:57:40.607285 sshd[4568]: pam_unix(sshd:session): session closed for user core Mar 7 00:57:40.613490 systemd[1]: sshd@27-116.202.20.89:22-20.161.92.111:53094.service: Deactivated successfully. Mar 7 00:57:40.617575 systemd[1]: session-23.scope: Deactivated successfully. Mar 7 00:57:40.618778 systemd-logind[1563]: Session 23 logged out. Waiting for processes to exit. Mar 7 00:57:40.619971 systemd-logind[1563]: Removed session 23. Mar 7 00:57:40.709375 systemd[1]: Started sshd@28-116.202.20.89:22-20.161.92.111:59740.service - OpenSSH per-connection server daemon (20.161.92.111:59740). Mar 7 00:57:40.810642 containerd[1588]: time="2026-03-07T00:57:40.810277103Z" level=info msg="CreateContainer within sandbox \"1d84f49a661f57dbd6912baa6ede4b4fc4232cfd278234ad25ab4cc3a4b06ab6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 7 00:57:40.828494 containerd[1588]: time="2026-03-07T00:57:40.828271115Z" level=info msg="CreateContainer within sandbox \"1d84f49a661f57dbd6912baa6ede4b4fc4232cfd278234ad25ab4cc3a4b06ab6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"897cdfb39d3ba6b763d6a7bff3893feff19816f0d41d254fd0682bce6c340d4b\"" Mar 7 00:57:40.830988 containerd[1588]: time="2026-03-07T00:57:40.829346940Z" level=info msg="StartContainer for \"897cdfb39d3ba6b763d6a7bff3893feff19816f0d41d254fd0682bce6c340d4b\"" Mar 7 00:57:40.889467 containerd[1588]: time="2026-03-07T00:57:40.889225994Z" level=info msg="StartContainer for \"897cdfb39d3ba6b763d6a7bff3893feff19816f0d41d254fd0682bce6c340d4b\" returns successfully" Mar 7 00:57:40.925415 containerd[1588]: 
time="2026-03-07T00:57:40.925188019Z" level=info msg="shim disconnected" id=897cdfb39d3ba6b763d6a7bff3893feff19816f0d41d254fd0682bce6c340d4b namespace=k8s.io Mar 7 00:57:40.925415 containerd[1588]: time="2026-03-07T00:57:40.925240580Z" level=warning msg="cleaning up after shim disconnected" id=897cdfb39d3ba6b763d6a7bff3893feff19816f0d41d254fd0682bce6c340d4b namespace=k8s.io Mar 7 00:57:40.925415 containerd[1588]: time="2026-03-07T00:57:40.925250460Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:57:41.296894 sshd[4684]: Accepted publickey for core from 20.161.92.111 port 59740 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:57:41.299327 sshd[4684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:57:41.304776 systemd-logind[1563]: New session 24 of user core. Mar 7 00:57:41.310551 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 7 00:57:41.639698 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-897cdfb39d3ba6b763d6a7bff3893feff19816f0d41d254fd0682bce6c340d4b-rootfs.mount: Deactivated successfully. 
Mar 7 00:57:41.813183 containerd[1588]: time="2026-03-07T00:57:41.812969538Z" level=info msg="CreateContainer within sandbox \"1d84f49a661f57dbd6912baa6ede4b4fc4232cfd278234ad25ab4cc3a4b06ab6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 7 00:57:41.830742 containerd[1588]: time="2026-03-07T00:57:41.830672103Z" level=info msg="CreateContainer within sandbox \"1d84f49a661f57dbd6912baa6ede4b4fc4232cfd278234ad25ab4cc3a4b06ab6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1c82d29d162b836f946e45ace16d16e2841748a00c5947c0967c2a822c95b9de\"" Mar 7 00:57:41.832597 containerd[1588]: time="2026-03-07T00:57:41.832302981Z" level=info msg="StartContainer for \"1c82d29d162b836f946e45ace16d16e2841748a00c5947c0967c2a822c95b9de\"" Mar 7 00:57:41.902408 containerd[1588]: time="2026-03-07T00:57:41.902026416Z" level=info msg="StartContainer for \"1c82d29d162b836f946e45ace16d16e2841748a00c5947c0967c2a822c95b9de\" returns successfully" Mar 7 00:57:41.937976 containerd[1588]: time="2026-03-07T00:57:41.937636471Z" level=info msg="shim disconnected" id=1c82d29d162b836f946e45ace16d16e2841748a00c5947c0967c2a822c95b9de namespace=k8s.io Mar 7 00:57:41.937976 containerd[1588]: time="2026-03-07T00:57:41.937718073Z" level=warning msg="cleaning up after shim disconnected" id=1c82d29d162b836f946e45ace16d16e2841748a00c5947c0967c2a822c95b9de namespace=k8s.io Mar 7 00:57:41.937976 containerd[1588]: time="2026-03-07T00:57:41.937732993Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:57:42.641784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c82d29d162b836f946e45ace16d16e2841748a00c5947c0967c2a822c95b9de-rootfs.mount: Deactivated successfully. 
Mar 7 00:57:42.820907 containerd[1588]: time="2026-03-07T00:57:42.820792874Z" level=info msg="CreateContainer within sandbox \"1d84f49a661f57dbd6912baa6ede4b4fc4232cfd278234ad25ab4cc3a4b06ab6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 7 00:57:42.834780 containerd[1588]: time="2026-03-07T00:57:42.834733993Z" level=info msg="CreateContainer within sandbox \"1d84f49a661f57dbd6912baa6ede4b4fc4232cfd278234ad25ab4cc3a4b06ab6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b41ae565679ba44be2d99be16d1751ad58897cab63012ea866f162e654a8ba7a\"" Mar 7 00:57:42.839020 containerd[1588]: time="2026-03-07T00:57:42.837124487Z" level=info msg="StartContainer for \"b41ae565679ba44be2d99be16d1751ad58897cab63012ea866f162e654a8ba7a\"" Mar 7 00:57:42.895588 containerd[1588]: time="2026-03-07T00:57:42.895473979Z" level=info msg="StartContainer for \"b41ae565679ba44be2d99be16d1751ad58897cab63012ea866f162e654a8ba7a\" returns successfully" Mar 7 00:57:42.916610 containerd[1588]: time="2026-03-07T00:57:42.916545460Z" level=info msg="shim disconnected" id=b41ae565679ba44be2d99be16d1751ad58897cab63012ea866f162e654a8ba7a namespace=k8s.io Mar 7 00:57:42.916826 containerd[1588]: time="2026-03-07T00:57:42.916808626Z" level=warning msg="cleaning up after shim disconnected" id=b41ae565679ba44be2d99be16d1751ad58897cab63012ea866f162e654a8ba7a namespace=k8s.io Mar 7 00:57:42.916889 containerd[1588]: time="2026-03-07T00:57:42.916876588Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:57:43.641091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b41ae565679ba44be2d99be16d1751ad58897cab63012ea866f162e654a8ba7a-rootfs.mount: Deactivated successfully. 
Mar 7 00:57:43.788021 kubelet[2788]: I0307 00:57:43.785114 2788 setters.go:618] "Node became not ready" node="ci-4081-3-6-n-e1f368ffcb" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T00:57:43Z","lastTransitionTime":"2026-03-07T00:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 7 00:57:43.829550 containerd[1588]: time="2026-03-07T00:57:43.829499494Z" level=info msg="CreateContainer within sandbox \"1d84f49a661f57dbd6912baa6ede4b4fc4232cfd278234ad25ab4cc3a4b06ab6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 7 00:57:43.861814 containerd[1588]: time="2026-03-07T00:57:43.861055652Z" level=info msg="CreateContainer within sandbox \"1d84f49a661f57dbd6912baa6ede4b4fc4232cfd278234ad25ab4cc3a4b06ab6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3c0fc464a5e8a6b11fa5ed4f81a7931c82f1a840d341574cee192ad96facf329\"" Mar 7 00:57:43.864476 containerd[1588]: time="2026-03-07T00:57:43.862244959Z" level=info msg="StartContainer for \"3c0fc464a5e8a6b11fa5ed4f81a7931c82f1a840d341574cee192ad96facf329\"" Mar 7 00:57:43.958342 containerd[1588]: time="2026-03-07T00:57:43.958257426Z" level=info msg="StartContainer for \"3c0fc464a5e8a6b11fa5ed4f81a7931c82f1a840d341574cee192ad96facf329\" returns successfully" Mar 7 00:57:44.265589 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Mar 7 00:57:44.642367 systemd[1]: run-containerd-runc-k8s.io-3c0fc464a5e8a6b11fa5ed4f81a7931c82f1a840d341574cee192ad96facf329-runc.whHL8V.mount: Deactivated successfully. 
Mar 7 00:57:44.847549 kubelet[2788]: I0307 00:57:44.847120 2788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2wlmm" podStartSLOduration=5.847095579 podStartE2EDuration="5.847095579s" podCreationTimestamp="2026-03-07 00:57:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:57:44.846609208 +0000 UTC m=+205.771047169" watchObservedRunningTime="2026-03-07 00:57:44.847095579 +0000 UTC m=+205.771533540"
Mar 7 00:57:45.891980 kubelet[2788]: E0307 00:57:45.890041 2788 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:49052->127.0.0.1:46269: write tcp 127.0.0.1:49052->127.0.0.1:46269: write: broken pipe
Mar 7 00:57:47.212081 systemd-networkd[1235]: lxc_health: Link UP
Mar 7 00:57:47.224130 systemd-networkd[1235]: lxc_health: Gained carrier
Mar 7 00:57:48.728147 systemd-networkd[1235]: lxc_health: Gained IPv6LL
Mar 7 00:57:50.290772 kubelet[2788]: E0307 00:57:50.290720 2788 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:39708->127.0.0.1:46269: write tcp 127.0.0.1:39708->127.0.0.1:46269: write: broken pipe
Mar 7 00:57:52.541194 sshd[4684]: pam_unix(sshd:session): session closed for user core
Mar 7 00:57:52.549630 systemd[1]: sshd@28-116.202.20.89:22-20.161.92.111:59740.service: Deactivated successfully.
Mar 7 00:57:52.553381 systemd-logind[1563]: Session 24 logged out. Waiting for processes to exit.
Mar 7 00:57:52.553616 systemd[1]: session-24.scope: Deactivated successfully.
Mar 7 00:57:52.555599 systemd-logind[1563]: Removed session 24.
Mar 7 00:57:56.565822 systemd[1]: Started sshd@29-116.202.20.89:22-162.19.153.243:44606.service - OpenSSH per-connection server daemon (162.19.153.243:44606).
Mar 7 00:57:56.699267 sshd[5514]: Invalid user opc from 162.19.153.243 port 44606
Mar 7 00:57:56.709979 sshd[5514]: Received disconnect from 162.19.153.243 port 44606:11: Bye Bye [preauth]
Mar 7 00:57:56.709979 sshd[5514]: Disconnected from invalid user opc 162.19.153.243 port 44606 [preauth]
Mar 7 00:57:56.712715 systemd[1]: sshd@29-116.202.20.89:22-162.19.153.243:44606.service: Deactivated successfully.
Mar 7 00:58:07.868189 kubelet[2788]: E0307 00:58:07.868126 2788 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:45352->10.0.0.2:2379: read: connection timed out"
Mar 7 00:58:07.899876 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d6a1a93a8e6056e3048b8ba034e984e77701ab9bd882013d9708cce500c89de-rootfs.mount: Deactivated successfully.
Mar 7 00:58:07.916453 containerd[1588]: time="2026-03-07T00:58:07.916231315Z" level=info msg="shim disconnected" id=5d6a1a93a8e6056e3048b8ba034e984e77701ab9bd882013d9708cce500c89de namespace=k8s.io
Mar 7 00:58:07.916453 containerd[1588]: time="2026-03-07T00:58:07.916303996Z" level=warning msg="cleaning up after shim disconnected" id=5d6a1a93a8e6056e3048b8ba034e984e77701ab9bd882013d9708cce500c89de namespace=k8s.io
Mar 7 00:58:07.916453 containerd[1588]: time="2026-03-07T00:58:07.916333877Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:58:08.458813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23f3728b8925a62c84334615b5ac88aa8352a060f6c35ccf9443a414aadddd41-rootfs.mount: Deactivated successfully.
Mar 7 00:58:08.462237 containerd[1588]: time="2026-03-07T00:58:08.462142366Z" level=info msg="shim disconnected" id=23f3728b8925a62c84334615b5ac88aa8352a060f6c35ccf9443a414aadddd41 namespace=k8s.io
Mar 7 00:58:08.462555 containerd[1588]: time="2026-03-07T00:58:08.462225128Z" level=warning msg="cleaning up after shim disconnected" id=23f3728b8925a62c84334615b5ac88aa8352a060f6c35ccf9443a414aadddd41 namespace=k8s.io
Mar 7 00:58:08.462555 containerd[1588]: time="2026-03-07T00:58:08.462413852Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:58:08.896856 kubelet[2788]: I0307 00:58:08.896718 2788 scope.go:117] "RemoveContainer" containerID="5d6a1a93a8e6056e3048b8ba034e984e77701ab9bd882013d9708cce500c89de"
Mar 7 00:58:08.899280 containerd[1588]: time="2026-03-07T00:58:08.899227500Z" level=info msg="CreateContainer within sandbox \"331d3b10867cd8b2692003f634f0156ba78b3cba57b165580092c4b31cf07fe1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 7 00:58:08.900486 kubelet[2788]: I0307 00:58:08.900459 2788 scope.go:117] "RemoveContainer" containerID="23f3728b8925a62c84334615b5ac88aa8352a060f6c35ccf9443a414aadddd41"
Mar 7 00:58:08.902272 containerd[1588]: time="2026-03-07T00:58:08.902178684Z" level=info msg="CreateContainer within sandbox \"db289c3c49bd280945f2d2c9688964a1d2ab151ca3beb94234dc54c9b2664f24\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 7 00:58:08.918831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3336467567.mount: Deactivated successfully.
Mar 7 00:58:08.921531 containerd[1588]: time="2026-03-07T00:58:08.921486661Z" level=info msg="CreateContainer within sandbox \"331d3b10867cd8b2692003f634f0156ba78b3cba57b165580092c4b31cf07fe1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"ee1b34cdb17ce147dd6d2ff214f79d8833e52b13d5b7720968585c31b28bd08e\""
Mar 7 00:58:08.922117 containerd[1588]: time="2026-03-07T00:58:08.922092035Z" level=info msg="StartContainer for \"ee1b34cdb17ce147dd6d2ff214f79d8833e52b13d5b7720968585c31b28bd08e\""
Mar 7 00:58:08.928058 containerd[1588]: time="2026-03-07T00:58:08.927991282Z" level=info msg="CreateContainer within sandbox \"db289c3c49bd280945f2d2c9688964a1d2ab151ca3beb94234dc54c9b2664f24\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2088a828ad4f132641f77a59008e54c57025d296e6d44955f343c518ad20155d\""
Mar 7 00:58:08.928737 containerd[1588]: time="2026-03-07T00:58:08.928703418Z" level=info msg="StartContainer for \"2088a828ad4f132641f77a59008e54c57025d296e6d44955f343c518ad20155d\""
Mar 7 00:58:09.011599 containerd[1588]: time="2026-03-07T00:58:09.010221300Z" level=info msg="StartContainer for \"2088a828ad4f132641f77a59008e54c57025d296e6d44955f343c518ad20155d\" returns successfully"
Mar 7 00:58:09.011599 containerd[1588]: time="2026-03-07T00:58:09.011358525Z" level=info msg="StartContainer for \"ee1b34cdb17ce147dd6d2ff214f79d8833e52b13d5b7720968585c31b28bd08e\" returns successfully"
Mar 7 00:58:11.224236 systemd[1]: Started sshd@30-116.202.20.89:22-81.192.46.49:43388.service - OpenSSH per-connection server daemon (81.192.46.49:43388).
Mar 7 00:58:11.638901 sshd[5648]: Received disconnect from 81.192.46.49 port 43388:11: Bye Bye [preauth]
Mar 7 00:58:11.638901 sshd[5648]: Disconnected from authenticating user root 81.192.46.49 port 43388 [preauth]
Mar 7 00:58:11.642258 systemd[1]: sshd@30-116.202.20.89:22-81.192.46.49:43388.service: Deactivated successfully.
Mar 7 00:58:11.707370 kubelet[2788]: E0307 00:58:11.706685 2788 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:45146->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-6-n-e1f368ffcb.189a69355e95be35 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-6-n-e1f368ffcb,UID:33d4df733b6ec815f73a0809206328fa,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-e1f368ffcb,},FirstTimestamp:2026-03-07 00:58:01.264766517 +0000 UTC m=+222.189204438,LastTimestamp:2026-03-07 00:58:01.264766517 +0000 UTC m=+222.189204438,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-e1f368ffcb,}"