Apr 30 12:41:12.873707 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Apr 30 12:41:12.873730 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Tue Apr 29 22:28:35 -00 2025 Apr 30 12:41:12.873740 kernel: KASLR enabled Apr 30 12:41:12.873746 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Apr 30 12:41:12.873752 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218 Apr 30 12:41:12.873758 kernel: random: crng init done Apr 30 12:41:12.873764 kernel: secureboot: Secure boot disabled Apr 30 12:41:12.873771 kernel: ACPI: Early table checksum verification disabled Apr 30 12:41:12.873776 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Apr 30 12:41:12.873784 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Apr 30 12:41:12.873825 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 12:41:12.873832 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 12:41:12.873838 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 12:41:12.873843 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 12:41:12.873851 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 12:41:12.873860 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 12:41:12.873866 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 12:41:12.873873 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 12:41:12.873879 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 12:41:12.873885 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Apr 30 12:41:12.873891 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Apr 30 12:41:12.873897 kernel: NUMA: Failed to initialise from firmware Apr 30 12:41:12.873903 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Apr 30 12:41:12.873909 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff] Apr 30 12:41:12.873938 kernel: Zone ranges: Apr 30 12:41:12.873948 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Apr 30 12:41:12.873954 kernel: DMA32 empty Apr 30 12:41:12.873959 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Apr 30 12:41:12.873965 kernel: Movable zone start for each node Apr 30 12:41:12.873971 kernel: Early memory node ranges Apr 30 12:41:12.873977 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff] Apr 30 12:41:12.873983 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff] Apr 30 12:41:12.873989 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff] Apr 30 12:41:12.873995 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Apr 30 12:41:12.874001 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Apr 30 12:41:12.874007 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Apr 30 12:41:12.874013 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Apr 30 12:41:12.874020 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Apr 30 12:41:12.874026 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Apr 30 12:41:12.874032 kernel: Initmem setup node 0 
[mem 0x0000000040000000-0x0000000139ffffff] Apr 30 12:41:12.874042 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Apr 30 12:41:12.874048 kernel: psci: probing for conduit method from ACPI. Apr 30 12:41:12.874055 kernel: psci: PSCIv1.1 detected in firmware. Apr 30 12:41:12.874063 kernel: psci: Using standard PSCI v0.2 function IDs Apr 30 12:41:12.874069 kernel: psci: Trusted OS migration not required Apr 30 12:41:12.874075 kernel: psci: SMC Calling Convention v1.1 Apr 30 12:41:12.874082 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Apr 30 12:41:12.874089 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Apr 30 12:41:12.874095 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Apr 30 12:41:12.874102 kernel: pcpu-alloc: [0] 0 [0] 1 Apr 30 12:41:12.874109 kernel: Detected PIPT I-cache on CPU0 Apr 30 12:41:12.874115 kernel: CPU features: detected: GIC system register CPU interface Apr 30 12:41:12.874122 kernel: CPU features: detected: Hardware dirty bit management Apr 30 12:41:12.874129 kernel: CPU features: detected: Spectre-v4 Apr 30 12:41:12.874136 kernel: CPU features: detected: Spectre-BHB Apr 30 12:41:12.874142 kernel: CPU features: kernel page table isolation forced ON by KASLR Apr 30 12:41:12.874149 kernel: CPU features: detected: Kernel page table isolation (KPTI) Apr 30 12:41:12.874155 kernel: CPU features: detected: ARM erratum 1418040 Apr 30 12:41:12.874162 kernel: CPU features: detected: SSBS not fully self-synchronizing Apr 30 12:41:12.874168 kernel: alternatives: applying boot alternatives Apr 30 12:41:12.874176 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=984055eb0c340c9cf0fb51b368030ed72e75b7f2e065edc13766888ef0b42074 Apr 30 12:41:12.874183 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 30 12:41:12.874189 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 30 12:41:12.874196 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 12:41:12.874204 kernel: Fallback order for Node 0: 0 Apr 30 12:41:12.874211 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Apr 30 12:41:12.874217 kernel: Policy zone: Normal Apr 30 12:41:12.874224 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 30 12:41:12.874231 kernel: software IO TLB: area num 2. Apr 30 12:41:12.874237 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Apr 30 12:41:12.874244 kernel: Memory: 3883832K/4096000K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38336K init, 897K bss, 212168K reserved, 0K cma-reserved) Apr 30 12:41:12.874250 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 30 12:41:12.874257 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 30 12:41:12.874264 kernel: rcu: RCU event tracing is enabled. Apr 30 12:41:12.874271 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 30 12:41:12.874278 kernel: Trampoline variant of Tasks RCU enabled. Apr 30 12:41:12.874286 kernel: Tracing variant of Tasks RCU enabled. 
Apr 30 12:41:12.874293 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 30 12:41:12.874299 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 30 12:41:12.874306 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Apr 30 12:41:12.874312 kernel: GICv3: 256 SPIs implemented Apr 30 12:41:12.874319 kernel: GICv3: 0 Extended SPIs implemented Apr 30 12:41:12.874325 kernel: Root IRQ handler: gic_handle_irq Apr 30 12:41:12.874332 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Apr 30 12:41:12.874338 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Apr 30 12:41:12.874345 kernel: ITS [mem 0x08080000-0x0809ffff] Apr 30 12:41:12.874351 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Apr 30 12:41:12.874359 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Apr 30 12:41:12.874366 kernel: GICv3: using LPI property table @0x00000001000e0000 Apr 30 12:41:12.874373 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Apr 30 12:41:12.874379 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 30 12:41:12.874386 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 30 12:41:12.874392 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Apr 30 12:41:12.874399 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Apr 30 12:41:12.874406 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Apr 30 12:41:12.874412 kernel: Console: colour dummy device 80x25 Apr 30 12:41:12.874419 kernel: ACPI: Core revision 20230628 Apr 30 12:41:12.874426 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Apr 30 12:41:12.874434 kernel: pid_max: default: 32768 minimum: 301 Apr 30 12:41:12.874441 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 30 12:41:12.874448 kernel: landlock: Up and running. Apr 30 12:41:12.874455 kernel: SELinux: Initializing. Apr 30 12:41:12.874461 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 30 12:41:12.874468 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 30 12:41:12.874475 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 12:41:12.874482 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 12:41:12.874488 kernel: rcu: Hierarchical SRCU implementation. Apr 30 12:41:12.874497 kernel: rcu: Max phase no-delay instances is 400. Apr 30 12:41:12.874504 kernel: Platform MSI: ITS@0x8080000 domain created Apr 30 12:41:12.874510 kernel: PCI/MSI: ITS@0x8080000 domain created Apr 30 12:41:12.874517 kernel: Remapping and enabling EFI services. Apr 30 12:41:12.874523 kernel: smp: Bringing up secondary CPUs ... 
Apr 30 12:41:12.874530 kernel: Detected PIPT I-cache on CPU1 Apr 30 12:41:12.874537 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Apr 30 12:41:12.874543 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Apr 30 12:41:12.874550 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 30 12:41:12.874558 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Apr 30 12:41:12.874565 kernel: smp: Brought up 1 node, 2 CPUs Apr 30 12:41:12.874577 kernel: SMP: Total of 2 processors activated. Apr 30 12:41:12.874586 kernel: CPU features: detected: 32-bit EL0 Support Apr 30 12:41:12.874593 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Apr 30 12:41:12.874600 kernel: CPU features: detected: Common not Private translations Apr 30 12:41:12.874607 kernel: CPU features: detected: CRC32 instructions Apr 30 12:41:12.874613 kernel: CPU features: detected: Enhanced Virtualization Traps Apr 30 12:41:12.874621 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Apr 30 12:41:12.874629 kernel: CPU features: detected: LSE atomic instructions Apr 30 12:41:12.874636 kernel: CPU features: detected: Privileged Access Never Apr 30 12:41:12.874643 kernel: CPU features: detected: RAS Extension Support Apr 30 12:41:12.874650 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Apr 30 12:41:12.874657 kernel: CPU: All CPU(s) started at EL1 Apr 30 12:41:12.874664 kernel: alternatives: applying system-wide alternatives Apr 30 12:41:12.874671 kernel: devtmpfs: initialized Apr 30 12:41:12.874678 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 30 12:41:12.874687 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 30 12:41:12.874694 kernel: pinctrl core: initialized pinctrl subsystem Apr 30 12:41:12.874701 kernel: SMBIOS 3.0.0 present. Apr 30 12:41:12.874708 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Apr 30 12:41:12.874715 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 30 12:41:12.874722 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Apr 30 12:41:12.874729 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Apr 30 12:41:12.874736 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Apr 30 12:41:12.874743 kernel: audit: initializing netlink subsys (disabled) Apr 30 12:41:12.874752 kernel: audit: type=2000 audit(0.013:1): state=initialized audit_enabled=0 res=1 Apr 30 12:41:12.874759 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 30 12:41:12.874766 kernel: cpuidle: using governor menu Apr 30 12:41:12.874773 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Apr 30 12:41:12.874779 kernel: ASID allocator initialised with 32768 entries Apr 30 12:41:12.874786 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 30 12:41:12.874805 kernel: Serial: AMBA PL011 UART driver Apr 30 12:41:12.874812 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Apr 30 12:41:12.874821 kernel: Modules: 0 pages in range for non-PLT usage Apr 30 12:41:12.874831 kernel: Modules: 509264 pages in range for PLT usage Apr 30 12:41:12.874838 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 30 12:41:12.874845 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Apr 30 12:41:12.874852 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Apr 30 12:41:12.874859 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Apr 30 12:41:12.874866 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 30 12:41:12.874873 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Apr 30 12:41:12.874880 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Apr 30 12:41:12.874887 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Apr 30 12:41:12.874896 kernel: ACPI: Added _OSI(Module Device) Apr 30 12:41:12.874903 kernel: ACPI: Added _OSI(Processor Device) Apr 30 12:41:12.874910 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 30 12:41:12.874928 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 30 12:41:12.874936 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 30 12:41:12.874943 kernel: ACPI: Interpreter enabled Apr 30 12:41:12.874949 kernel: ACPI: Using GIC for interrupt routing Apr 30 12:41:12.874956 kernel: ACPI: MCFG table detected, 1 entries Apr 30 12:41:12.874964 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Apr 30 12:41:12.874973 kernel: printk: console [ttyAMA0] enabled Apr 30 12:41:12.874980 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 30 12:41:12.875128 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 30 12:41:12.875202 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Apr 30 12:41:12.875269 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Apr 30 12:41:12.875333 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Apr 30 12:41:12.875396 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Apr 30 12:41:12.875408 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Apr 30 12:41:12.875416 kernel: PCI host bridge to bus 0000:00 Apr 30 12:41:12.875486 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Apr 30 12:41:12.875546 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Apr 30 12:41:12.875606 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Apr 30 12:41:12.875665 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 30 12:41:12.875743 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Apr 30 12:41:12.875868 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Apr 30 12:41:12.876052 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Apr 30 12:41:12.876124 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Apr 30 12:41:12.876205 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Apr 30 12:41:12.876273 kernel: pci 
0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Apr 30 12:41:12.876346 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Apr 30 12:41:12.876418 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Apr 30 12:41:12.876493 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Apr 30 12:41:12.876562 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Apr 30 12:41:12.876634 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Apr 30 12:41:12.876701 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Apr 30 12:41:12.876774 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Apr 30 12:41:12.876862 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Apr 30 12:41:12.876956 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Apr 30 12:41:12.877026 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Apr 30 12:41:12.877099 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Apr 30 12:41:12.877165 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Apr 30 12:41:12.877238 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Apr 30 12:41:12.877310 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Apr 30 12:41:12.877384 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Apr 30 12:41:12.877450 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Apr 30 12:41:12.877528 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Apr 30 12:41:12.877597 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] Apr 30 12:41:12.877672 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Apr 30 12:41:12.877741 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Apr 30 12:41:12.877825 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Apr 30 12:41:12.877896 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Apr 30 12:41:12.877991 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Apr 30 12:41:12.878061 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Apr 30 12:41:12.878137 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Apr 30 12:41:12.878206 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Apr 30 12:41:12.878279 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Apr 30 12:41:12.878354 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Apr 30 12:41:12.878424 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Apr 30 12:41:12.878498 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Apr 30 12:41:12.878566 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff] Apr 30 12:41:12.878633 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Apr 30 12:41:12.878707 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Apr 30 12:41:12.878778 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Apr 30 12:41:12.878888 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Apr 30 12:41:12.879065 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Apr 30 12:41:12.879139 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Apr 30 12:41:12.879204 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Apr 30 12:41:12.879269 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Apr 30 12:41:12.879342 kernel: pci 0000:00:02.0: bridge 
window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Apr 30 12:41:12.879406 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Apr 30 12:41:12.879470 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Apr 30 12:41:12.879535 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Apr 30 12:41:12.879598 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Apr 30 12:41:12.879661 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Apr 30 12:41:12.879726 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Apr 30 12:41:12.879804 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Apr 30 12:41:12.879873 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Apr 30 12:41:12.879950 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Apr 30 12:41:12.880014 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Apr 30 12:41:12.880076 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Apr 30 12:41:12.880141 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Apr 30 12:41:12.880208 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Apr 30 12:41:12.880273 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Apr 30 12:41:12.880342 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Apr 30 12:41:12.880404 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Apr 30 12:41:12.880468 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Apr 30 12:41:12.880532 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Apr 30 12:41:12.880595 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Apr 30 12:41:12.880657 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Apr 30 12:41:12.880722 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Apr 30 12:41:12.880817 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Apr 30 12:41:12.880897 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Apr 30 12:41:12.881026 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Apr 30 12:41:12.881095 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Apr 30 12:41:12.881159 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Apr 30 12:41:12.881222 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 
0x10000000-0x101fffff] Apr 30 12:41:12.881287 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] Apr 30 12:41:12.881358 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Apr 30 12:41:12.881423 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Apr 30 12:41:12.881488 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Apr 30 12:41:12.881551 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Apr 30 12:41:12.881615 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Apr 30 12:41:12.881677 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Apr 30 12:41:12.881742 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Apr 30 12:41:12.881826 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Apr 30 12:41:12.881892 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Apr 30 12:41:12.882642 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Apr 30 12:41:12.882726 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Apr 30 12:41:12.882837 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Apr 30 12:41:12.882989 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Apr 30 12:41:12.883076 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Apr 30 12:41:12.883149 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Apr 30 12:41:12.883213 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Apr 30 12:41:12.883280 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Apr 30 12:41:12.883343 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Apr 30 12:41:12.883407 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Apr 30 12:41:12.883470 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Apr 30 12:41:12.883536 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Apr 30 12:41:12.883605 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Apr 30 12:41:12.883671 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Apr 30 12:41:12.883733 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Apr 30 12:41:12.883812 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Apr 30 12:41:12.883879 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Apr 30 12:41:12.883961 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Apr 30 12:41:12.884027 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Apr 30 12:41:12.884092 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Apr 30 12:41:12.884161 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Apr 30 12:41:12.885082 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Apr 30 12:41:12.885169 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Apr 30 12:41:12.885239 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Apr 30 12:41:12.885305 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Apr 30 12:41:12.885370 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Apr 30 12:41:12.885433 kernel: pci 0000:00:03.0: BAR 13: assigned [io 
0x9000-0x9fff] Apr 30 12:41:12.885502 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Apr 30 12:41:12.885584 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Apr 30 12:41:12.885650 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Apr 30 12:41:12.885716 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Apr 30 12:41:12.885780 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Apr 30 12:41:12.885862 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Apr 30 12:41:12.885940 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Apr 30 12:41:12.886006 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Apr 30 12:41:12.886077 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Apr 30 12:41:12.886149 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Apr 30 12:41:12.886213 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Apr 30 12:41:12.886278 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Apr 30 12:41:12.886341 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Apr 30 12:41:12.886416 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Apr 30 12:41:12.886484 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Apr 30 12:41:12.886551 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Apr 30 12:41:12.886616 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Apr 30 12:41:12.886681 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Apr 30 12:41:12.886745 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Apr 30 12:41:12.886836 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Apr 30 12:41:12.889633 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Apr 30 12:41:12.889836 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Apr 30 12:41:12.889941 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Apr 30 12:41:12.890019 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Apr 30 12:41:12.890098 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Apr 30 12:41:12.890165 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff] Apr 30 12:41:12.890233 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Apr 30 12:41:12.890298 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Apr 30 12:41:12.890362 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Apr 30 12:41:12.890432 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Apr 30 12:41:12.890505 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Apr 30 12:41:12.890575 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Apr 30 12:41:12.890640 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Apr 30 12:41:12.890706 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Apr 30 12:41:12.890769 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Apr 30 12:41:12.890849 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Apr 30 12:41:12.890945 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Apr 30 12:41:12.891026 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Apr 30 12:41:12.891097 kernel: pci 
0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Apr 30 12:41:12.891164 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Apr 30 12:41:12.891229 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Apr 30 12:41:12.891295 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Apr 30 12:41:12.891360 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Apr 30 12:41:12.891430 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Apr 30 12:41:12.891497 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Apr 30 12:41:12.891566 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Apr 30 12:41:12.891632 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Apr 30 12:41:12.891702 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Apr 30 12:41:12.891797 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Apr 30 12:41:12.891896 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Apr 30 12:41:12.892116 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Apr 30 12:41:12.892187 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Apr 30 12:41:12.892245 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Apr 30 12:41:12.892308 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Apr 30 12:41:12.892386 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Apr 30 12:41:12.892458 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Apr 30 12:41:12.892517 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Apr 30 12:41:12.892595 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Apr 30 12:41:12.892657 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Apr 30 12:41:12.892720 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Apr 30 12:41:12.892848 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Apr 30 12:41:12.894532 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Apr 30 12:41:12.894616 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Apr 30 12:41:12.894695 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Apr 30 12:41:12.894762 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Apr 30 12:41:12.894845 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Apr 30 12:41:12.894947 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Apr 30 12:41:12.895016 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Apr 30 12:41:12.895084 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Apr 30 12:41:12.895157 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Apr 30 12:41:12.895226 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Apr 30 12:41:12.895293 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Apr 30 12:41:12.895366 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Apr 30 12:41:12.895430 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Apr 30 12:41:12.895494 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Apr 30 12:41:12.895564 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Apr 30 12:41:12.895629 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Apr 30 12:41:12.895694 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Apr 30 12:41:12.895766 kernel: 
pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Apr 30 12:41:12.895870 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Apr 30 12:41:12.898017 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Apr 30 12:41:12.898041 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Apr 30 12:41:12.898050 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Apr 30 12:41:12.898058 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Apr 30 12:41:12.898072 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Apr 30 12:41:12.898080 kernel: iommu: Default domain type: Translated Apr 30 12:41:12.898087 kernel: iommu: DMA domain TLB invalidation policy: strict mode Apr 30 12:41:12.898095 kernel: efivars: Registered efivars operations Apr 30 12:41:12.898102 kernel: vgaarb: loaded Apr 30 12:41:12.898110 kernel: clocksource: Switched to clocksource arch_sys_counter Apr 30 12:41:12.898118 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 12:41:12.898126 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 12:41:12.898133 kernel: pnp: PnP ACPI init Apr 30 12:41:12.898225 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Apr 30 12:41:12.898236 kernel: pnp: PnP ACPI: found 1 devices Apr 30 12:41:12.898244 kernel: NET: Registered PF_INET protocol family Apr 30 12:41:12.898252 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 30 12:41:12.898260 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 30 12:41:12.898268 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 12:41:12.898275 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 30 12:41:12.898283 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 30 12:41:12.898291 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 30 12:41:12.898301 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 30 12:41:12.898309 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 30 12:41:12.898317 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 12:41:12.898394 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Apr 30 12:41:12.898405 kernel: PCI: CLS 0 bytes, default 64 Apr 30 12:41:12.898413 kernel: kvm [1]: HYP mode not available Apr 30 12:41:12.898420 kernel: Initialise system trusted keyrings Apr 30 12:41:12.898428 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 30 12:41:12.898436 kernel: Key type asymmetric registered Apr 30 12:41:12.898445 kernel: Asymmetric key parser 'x509' registered Apr 30 12:41:12.898453 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 30 12:41:12.898460 kernel: io scheduler mq-deadline registered Apr 30 12:41:12.898468 kernel: io scheduler kyber registered Apr 30 12:41:12.898475 kernel: io scheduler bfq registered Apr 30 12:41:12.898483 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Apr 30 12:41:12.898555 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Apr 30 12:41:12.898623 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Apr 30 12:41:12.898694 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 12:41:12.898765 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Apr 30 12:41:12.898855 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 51 Apr 30 12:41:12.898944 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 12:41:12.899021 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Apr 30 12:41:12.899090 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Apr 30 12:41:12.899161 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 12:41:12.899230 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Apr 30 12:41:12.899298 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Apr 30 12:41:12.899363 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 12:41:12.899433 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Apr 30 12:41:12.899500 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Apr 30 12:41:12.899569 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 12:41:12.899639 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Apr 30 12:41:12.899707 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Apr 30 12:41:12.899774 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 12:41:12.899863 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Apr 30 12:41:12.902349 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Apr 30 12:41:12.902467 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 12:41:12.902539 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Apr 30 12:41:12.902605 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Apr 30 12:41:12.902670 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 12:41:12.902681 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Apr 30 12:41:12.902747 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Apr 30 12:41:12.902867 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Apr 30 12:41:12.902954 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 12:41:12.902966 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Apr 30 12:41:12.902974 kernel: ACPI: button: Power Button [PWRB] Apr 30 12:41:12.902982 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Apr 30 12:41:12.903057 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Apr 30 12:41:12.903130 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Apr 30 12:41:12.903141 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 12:41:12.903153 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Apr 30 12:41:12.903223 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Apr 30 12:41:12.903233 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Apr 30 12:41:12.903241 kernel: thunder_xcv, ver 1.0 Apr 30 12:41:12.903248 kernel: thunder_bgx, ver 1.0 Apr 30 12:41:12.903256 kernel: nicpf, ver 1.0 Apr 30 12:41:12.903263 kernel: nicvf, ver 
1.0 Apr 30 12:41:12.903405 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 30 12:41:12.903480 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-04-30T12:41:12 UTC (1746016872) Apr 30 12:41:12.903491 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 30 12:41:12.903499 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Apr 30 12:41:12.903507 kernel: watchdog: Delayed init of the lockup detector failed: -19 Apr 30 12:41:12.903514 kernel: watchdog: Hard watchdog permanently disabled Apr 30 12:41:12.903522 kernel: NET: Registered PF_INET6 protocol family Apr 30 12:41:12.903529 kernel: Segment Routing with IPv6 Apr 30 12:41:12.903537 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 12:41:12.903544 kernel: NET: Registered PF_PACKET protocol family Apr 30 12:41:12.903555 kernel: Key type dns_resolver registered Apr 30 12:41:12.903562 kernel: registered taskstats version 1 Apr 30 12:41:12.903570 kernel: Loading compiled-in X.509 certificates Apr 30 12:41:12.903577 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4e3d8be893bce81adbd52ab54fa98214a1a14a2e' Apr 30 12:41:12.903585 kernel: Key type .fscrypt registered Apr 30 12:41:12.903592 kernel: Key type fscrypt-provisioning registered Apr 30 12:41:12.903600 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 30 12:41:12.903608 kernel: ima: Allocated hash algorithm: sha1 Apr 30 12:41:12.903615 kernel: ima: No architecture policies found Apr 30 12:41:12.903625 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 30 12:41:12.903635 kernel: clk: Disabling unused clocks Apr 30 12:41:12.903642 kernel: Freeing unused kernel memory: 38336K Apr 30 12:41:12.903650 kernel: Run /init as init process Apr 30 12:41:12.903657 kernel: with arguments: Apr 30 12:41:12.903665 kernel: /init Apr 30 12:41:12.903672 kernel: with environment: Apr 30 12:41:12.903679 kernel: HOME=/ Apr 30 12:41:12.903686 kernel: TERM=linux Apr 30 12:41:12.903695 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 12:41:12.903703 systemd[1]: Successfully made /usr/ read-only. Apr 30 12:41:12.903714 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 30 12:41:12.903722 systemd[1]: Detected virtualization kvm. Apr 30 12:41:12.903730 systemd[1]: Detected architecture arm64. Apr 30 12:41:12.903738 systemd[1]: Running in initrd. Apr 30 12:41:12.903745 systemd[1]: No hostname configured, using default hostname. Apr 30 12:41:12.903755 systemd[1]: Hostname set to . Apr 30 12:41:12.903763 systemd[1]: Initializing machine ID from VM UUID. Apr 30 12:41:12.903770 systemd[1]: Queued start job for default target initrd.target. Apr 30 12:41:12.903778 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 12:41:12.903786 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 12:41:12.903808 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 30 12:41:12.903816 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Apr 30 12:41:12.903825 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 12:41:12.903836 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 12:41:12.903846 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 12:41:12.903854 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 12:41:12.903862 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 12:41:12.903869 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 12:41:12.903877 systemd[1]: Reached target paths.target - Path Units. Apr 30 12:41:12.903885 systemd[1]: Reached target slices.target - Slice Units. Apr 30 12:41:12.903894 systemd[1]: Reached target swap.target - Swaps. Apr 30 12:41:12.903902 systemd[1]: Reached target timers.target - Timer Units. Apr 30 12:41:12.903910 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 12:41:12.905252 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 12:41:12.905268 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 12:41:12.905276 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 30 12:41:12.905285 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 12:41:12.905293 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 12:41:12.905301 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 12:41:12.905315 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 12:41:12.905324 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 12:41:12.905332 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 12:41:12.905340 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 12:41:12.905348 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 12:41:12.905356 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 12:41:12.905364 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 12:41:12.905372 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:41:12.905383 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 12:41:12.905423 systemd-journald[237]: Collecting audit messages is disabled. Apr 30 12:41:12.905445 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 12:41:12.905456 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 12:41:12.905464 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 12:41:12.905473 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:41:12.905481 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 12:41:12.905491 systemd-journald[237]: Journal started Apr 30 12:41:12.905512 systemd-journald[237]: Runtime Journal (/run/log/journal/58c1f397d44d443c91395bae54fae033) is 8M, max 76.6M, 68.6M free. 
Apr 30 12:41:12.889907 systemd-modules-load[238]: Inserted module 'overlay' Apr 30 12:41:12.908075 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 12:41:12.910012 kernel: Bridge firewalling registered Apr 30 12:41:12.909674 systemd-modules-load[238]: Inserted module 'br_netfilter' Apr 30 12:41:12.913694 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 12:41:12.917207 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 12:41:12.918351 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 12:41:12.921370 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 12:41:12.933145 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 12:41:12.935362 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 12:41:12.936778 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 12:41:12.937671 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 12:41:12.943311 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 12:41:12.949103 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 12:41:12.953418 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 12:41:12.961112 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 12:41:12.967832 dracut-cmdline[270]: dracut-dracut-053 Apr 30 12:41:12.972781 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=984055eb0c340c9cf0fb51b368030ed72e75b7f2e065edc13766888ef0b42074 Apr 30 12:41:13.002877 systemd-resolved[275]: Positive Trust Anchors: Apr 30 12:41:13.003526 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 12:41:13.003559 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 12:41:13.013772 systemd-resolved[275]: Defaulting to hostname 'linux'. Apr 30 12:41:13.014851 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 12:41:13.015871 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 12:41:13.063963 kernel: SCSI subsystem initialized Apr 30 12:41:13.068975 kernel: Loading iSCSI transport class v2.0-870. 
Apr 30 12:41:13.076975 kernel: iscsi: registered transport (tcp) Apr 30 12:41:13.089982 kernel: iscsi: registered transport (qla4xxx) Apr 30 12:41:13.090058 kernel: QLogic iSCSI HBA Driver Apr 30 12:41:13.135576 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 12:41:13.143128 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 12:41:13.160172 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 30 12:41:13.160245 kernel: device-mapper: uevent: version 1.0.3 Apr 30 12:41:13.160257 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 12:41:13.210978 kernel: raid6: neonx8 gen() 15711 MB/s Apr 30 12:41:13.227984 kernel: raid6: neonx4 gen() 15726 MB/s Apr 30 12:41:13.244982 kernel: raid6: neonx2 gen() 13136 MB/s Apr 30 12:41:13.261986 kernel: raid6: neonx1 gen() 10463 MB/s Apr 30 12:41:13.278973 kernel: raid6: int64x8 gen() 6758 MB/s Apr 30 12:41:13.295997 kernel: raid6: int64x4 gen() 7308 MB/s Apr 30 12:41:13.313008 kernel: raid6: int64x2 gen() 6077 MB/s Apr 30 12:41:13.329972 kernel: raid6: int64x1 gen() 5034 MB/s Apr 30 12:41:13.330064 kernel: raid6: using algorithm neonx4 gen() 15726 MB/s Apr 30 12:41:13.347000 kernel: raid6: .... xor() 12344 MB/s, rmw enabled Apr 30 12:41:13.347074 kernel: raid6: using neon recovery algorithm Apr 30 12:41:13.350955 kernel: xor: measuring software checksum speed Apr 30 12:41:13.352111 kernel: 8regs : 19898 MB/sec Apr 30 12:41:13.352138 kernel: 32regs : 21670 MB/sec Apr 30 12:41:13.352148 kernel: arm64_neon : 26441 MB/sec Apr 30 12:41:13.352158 kernel: xor: using function: arm64_neon (26441 MB/sec) Apr 30 12:41:13.400974 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 12:41:13.416960 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 12:41:13.424163 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 12:41:13.441894 systemd-udevd[456]: Using default interface naming scheme 'v255'. Apr 30 12:41:13.445841 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 12:41:13.459251 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 12:41:13.476027 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation Apr 30 12:41:13.515506 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 12:41:13.525201 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 12:41:13.573850 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 12:41:13.581220 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 12:41:13.601461 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 12:41:13.603736 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 12:41:13.605728 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 12:41:13.607410 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 12:41:13.617209 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 12:41:13.636014 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Apr 30 12:41:13.664473 kernel: scsi host0: Virtio SCSI HBA Apr 30 12:41:13.667077 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 30 12:41:13.667146 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 30 12:41:13.706728 kernel: ACPI: bus type USB registered Apr 30 12:41:13.706799 kernel: usbcore: registered new interface driver usbfs Apr 30 12:41:13.706813 kernel: usbcore: registered new interface driver hub Apr 30 12:41:13.706822 kernel: usbcore: registered new device driver usb Apr 30 12:41:13.705505 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 12:41:13.705635 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 12:41:13.707387 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 12:41:13.709559 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 12:41:13.709715 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:41:13.712897 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:41:13.724188 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:41:13.731030 kernel: sd 0:0:0:1: Power-on or device reset occurred Apr 30 12:41:13.740662 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Apr 30 12:41:13.740845 kernel: sd 0:0:0:1: [sda] Write Protect is off Apr 30 12:41:13.740990 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Apr 30 12:41:13.741215 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 30 12:41:13.741324 kernel: sr 0:0:0:0: Power-on or device reset occurred Apr 30 12:41:13.741909 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 30 12:41:13.741962 kernel: GPT:17805311 != 80003071 Apr 30 12:41:13.741975 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 30 12:41:13.741985 kernel: GPT:17805311 != 80003071 Apr 30 12:41:13.742002 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 30 12:41:13.742012 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 12:41:13.742023 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Apr 30 12:41:13.742212 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 30 12:41:13.742224 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Apr 30 12:41:13.742310 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Apr 30 12:41:13.751279 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:41:13.759281 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 30 12:41:13.767659 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Apr 30 12:41:13.768053 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 30 12:41:13.768180 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 30 12:41:13.768266 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Apr 30 12:41:13.768346 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Apr 30 12:41:13.768430 kernel: hub 1-0:1.0: USB hub found Apr 30 12:41:13.768536 kernel: hub 1-0:1.0: 4 ports detected Apr 30 12:41:13.768615 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Apr 30 12:41:13.768706 kernel: hub 2-0:1.0: USB hub found Apr 30 12:41:13.768854 kernel: hub 2-0:1.0: 4 ports detected Apr 30 12:41:13.762340 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 12:41:13.786740 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 12:41:13.804949 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (503) Apr 30 12:41:13.808965 kernel: BTRFS: device fsid 8f86a166-b3d6-49f7-a49d-597eaeb9f5e5 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (504) Apr 30 12:41:13.819454 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 30 12:41:13.838163 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 30 12:41:13.851805 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 30 12:41:13.852536 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 30 12:41:13.865432 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 30 12:41:13.872097 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 12:41:13.880598 disk-uuid[574]: Primary Header is updated. Apr 30 12:41:13.880598 disk-uuid[574]: Secondary Entries is updated. Apr 30 12:41:13.880598 disk-uuid[574]: Secondary Header is updated. Apr 30 12:41:13.888325 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 12:41:13.891947 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 12:41:14.007877 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 30 12:41:14.249013 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Apr 30 12:41:14.387629 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Apr 30 12:41:14.387712 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 30 12:41:14.390938 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Apr 30 12:41:14.443983 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Apr 30 12:41:14.444216 kernel: usbcore: registered new interface driver usbhid Apr 30 12:41:14.445172 kernel: usbhid: USB HID core driver Apr 30 12:41:14.899000 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 12:41:14.899588 disk-uuid[575]: The operation has completed successfully. Apr 30 12:41:14.952381 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 12:41:14.952505 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 12:41:14.995210 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 12:41:15.000259 sh[590]: Success Apr 30 12:41:15.011969 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 30 12:41:15.067087 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 12:41:15.081131 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 12:41:15.088076 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 30 12:41:15.101487 kernel: BTRFS info (device dm-0): first mount of filesystem 8f86a166-b3d6-49f7-a49d-597eaeb9f5e5 Apr 30 12:41:15.101553 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 30 12:41:15.101568 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 12:41:15.101581 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 12:41:15.102184 kernel: BTRFS info (device dm-0): using free space tree Apr 30 12:41:15.107950 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 30 12:41:15.110655 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 12:41:15.112569 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 12:41:15.119220 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 12:41:15.124117 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 12:41:15.141818 kernel: BTRFS info (device sda6): first mount of filesystem 8d8cccbd-965f-4336-afa9-06a510e76633 Apr 30 12:41:15.141868 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 12:41:15.141880 kernel: BTRFS info (device sda6): using free space tree Apr 30 12:41:15.149954 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 12:41:15.150019 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 12:41:15.155055 kernel: BTRFS info (device sda6): last unmount of filesystem 8d8cccbd-965f-4336-afa9-06a510e76633 Apr 30 12:41:15.158522 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 12:41:15.164194 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 12:41:15.240107 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 12:41:15.247176 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 12:41:15.264592 ignition[683]: Ignition 2.20.0 Apr 30 12:41:15.264607 ignition[683]: Stage: fetch-offline Apr 30 12:41:15.264644 ignition[683]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:41:15.264654 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 12:41:15.264810 ignition[683]: parsed url from cmdline: "" Apr 30 12:41:15.264813 ignition[683]: no config URL provided Apr 30 12:41:15.264818 ignition[683]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 12:41:15.264825 ignition[683]: no config at "/usr/lib/ignition/user.ign" Apr 30 12:41:15.269199 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 12:41:15.264829 ignition[683]: failed to fetch config: resource requires networking Apr 30 12:41:15.265018 ignition[683]: Ignition finished successfully Apr 30 12:41:15.279358 systemd-networkd[773]: lo: Link UP Apr 30 12:41:15.279371 systemd-networkd[773]: lo: Gained carrier Apr 30 12:41:15.281075 systemd-networkd[773]: Enumeration completed Apr 30 12:41:15.281181 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 12:41:15.281826 systemd[1]: Reached target network.target - Network. Apr 30 12:41:15.282577 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 30 12:41:15.282581 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 12:41:15.283278 systemd-networkd[773]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:41:15.283282 systemd-networkd[773]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 12:41:15.283723 systemd-networkd[773]: eth0: Link UP Apr 30 12:41:15.283726 systemd-networkd[773]: eth0: Gained carrier Apr 30 12:41:15.283732 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:41:15.288171 systemd-networkd[773]: eth1: Link UP Apr 30 12:41:15.288174 systemd-networkd[773]: eth1: Gained carrier Apr 30 12:41:15.288181 systemd-networkd[773]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:41:15.292903 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 30 12:41:15.304676 ignition[779]: Ignition 2.20.0 Apr 30 12:41:15.304689 ignition[779]: Stage: fetch Apr 30 12:41:15.304866 ignition[779]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:41:15.304876 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 12:41:15.304989 ignition[779]: parsed url from cmdline: "" Apr 30 12:41:15.304992 ignition[779]: no config URL provided Apr 30 12:41:15.304997 ignition[779]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 12:41:15.305005 ignition[779]: no config at "/usr/lib/ignition/user.ign" Apr 30 12:41:15.305087 ignition[779]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Apr 30 12:41:15.305883 ignition[779]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 30 12:41:15.314999 systemd-networkd[773]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 12:41:15.349049 systemd-networkd[773]: eth0: DHCPv4 address 91.99.82.124/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 30 12:41:15.506681 ignition[779]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Apr 30 12:41:15.512141 ignition[779]: GET result: OK Apr 30 12:41:15.512264 ignition[779]: parsing config with SHA512: e9939790d0375958ae9dff9d104a68e0a579a74cf36b94dd187b4fa3fcdcadcb96913d82040339667736cfcd0cbf72b981dac1fc33994d15e20cb103da5c7827 Apr 30 12:41:15.519704 unknown[779]: fetched base config from "system" Apr 30 12:41:15.519714 unknown[779]: fetched base config from "system" Apr 30 12:41:15.520196 ignition[779]: fetch: fetch complete Apr 30 12:41:15.519720 unknown[779]: fetched user config from "hetzner" Apr 30 12:41:15.520202 ignition[779]: fetch: fetch passed Apr 30 12:41:15.521661 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 30 12:41:15.520248 ignition[779]: Ignition finished successfully Apr 30 12:41:15.530992 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Apr 30 12:41:15.548652 ignition[786]: Ignition 2.20.0 Apr 30 12:41:15.548665 ignition[786]: Stage: kargs Apr 30 12:41:15.548855 ignition[786]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:41:15.548865 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 12:41:15.550956 ignition[786]: kargs: kargs passed Apr 30 12:41:15.551011 ignition[786]: Ignition finished successfully Apr 30 12:41:15.553156 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 12:41:15.561131 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 12:41:15.575967 ignition[793]: Ignition 2.20.0 Apr 30 12:41:15.575981 ignition[793]: Stage: disks Apr 30 12:41:15.576192 ignition[793]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:41:15.579739 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 12:41:15.576205 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 12:41:15.577399 ignition[793]: disks: disks passed Apr 30 12:41:15.582813 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 12:41:15.577459 ignition[793]: Ignition finished successfully Apr 30 12:41:15.584493 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 12:41:15.585136 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 12:41:15.586280 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 12:41:15.588071 systemd[1]: Reached target basic.target - Basic System. Apr 30 12:41:15.600220 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 12:41:15.618134 systemd-fsck[802]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Apr 30 12:41:15.621669 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 12:41:15.627101 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 12:41:15.676931 kernel: EXT4-fs (sda9): mounted filesystem 597557b0-8ae6-4a5a-8e98-f3f884fcfe65 r/w with ordered data mode. Quota mode: none. Apr 30 12:41:15.678137 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 12:41:15.679525 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 12:41:15.690136 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 12:41:15.694015 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 12:41:15.697237 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 30 12:41:15.699039 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 12:41:15.699088 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 12:41:15.701135 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 12:41:15.708166 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 30 12:41:15.711523 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (810) Apr 30 12:41:15.713991 kernel: BTRFS info (device sda6): first mount of filesystem 8d8cccbd-965f-4336-afa9-06a510e76633 Apr 30 12:41:15.714050 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 12:41:15.715045 kernel: BTRFS info (device sda6): using free space tree Apr 30 12:41:15.724680 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 12:41:15.724785 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 12:41:15.728603 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 12:41:15.764841 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 12:41:15.766236 coreos-metadata[812]: Apr 30 12:41:15.766 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Apr 30 12:41:15.768161 coreos-metadata[812]: Apr 30 12:41:15.767 INFO Fetch successful Apr 30 12:41:15.768161 coreos-metadata[812]: Apr 30 12:41:15.768 INFO wrote hostname ci-4230-1-1-7-cef124738e to /sysroot/etc/hostname Apr 30 12:41:15.771544 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 12:41:15.774175 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Apr 30 12:41:15.778725 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 12:41:15.783329 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 12:41:15.880903 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 12:41:15.887074 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 12:41:15.890141 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 12:41:15.898960 kernel: BTRFS info (device sda6): last unmount of filesystem 8d8cccbd-965f-4336-afa9-06a510e76633 Apr 30 12:41:15.918993 ignition[927]: INFO : Ignition 2.20.0 Apr 30 12:41:15.918993 ignition[927]: INFO : Stage: mount Apr 30 12:41:15.920965 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 12:41:15.920965 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 12:41:15.920965 ignition[927]: INFO : mount: mount passed Apr 30 12:41:15.920965 ignition[927]: INFO : Ignition finished successfully Apr 30 12:41:15.923093 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 12:41:15.928180 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 12:41:15.930945 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 12:41:16.101806 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 12:41:16.110234 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 12:41:16.119977 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (939) Apr 30 12:41:16.121963 kernel: BTRFS info (device sda6): first mount of filesystem 8d8cccbd-965f-4336-afa9-06a510e76633 Apr 30 12:41:16.122022 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 12:41:16.122041 kernel: BTRFS info (device sda6): using free space tree Apr 30 12:41:16.125710 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 12:41:16.125778 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 12:41:16.130449 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 12:41:16.151827 ignition[955]: INFO : Ignition 2.20.0 Apr 30 12:41:16.151827 ignition[955]: INFO : Stage: files Apr 30 12:41:16.153189 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 12:41:16.153189 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 12:41:16.153189 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Apr 30 12:41:16.156387 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 12:41:16.156387 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 12:41:16.162209 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 12:41:16.162209 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 12:41:16.162209 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 12:41:16.162047 unknown[955]: wrote ssh authorized keys file for user: core Apr 30 12:41:16.167573 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Apr 30 12:41:16.167573 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Apr 30 12:41:16.279008 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 12:41:16.501984 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Apr 30 12:41:16.501984 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 30 12:41:16.504405 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Apr 30 12:41:16.537069 systemd-networkd[773]: eth1: Gained IPv6LL Apr 30 12:41:16.729171 systemd-networkd[773]: eth0: Gained IPv6LL Apr 30 12:41:16.959979 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 30 12:41:17.033957 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 30 12:41:17.035229 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 30 12:41:17.035229 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 12:41:17.035229 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 12:41:17.035229 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 12:41:17.035229 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 12:41:17.035229 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 12:41:17.035229 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 12:41:17.035229 ignition[955]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 12:41:17.035229 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 12:41:17.035229 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 12:41:17.035229 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Apr 30 12:41:17.035229 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Apr 30 12:41:17.035229 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Apr 30 12:41:17.035229 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 Apr 30 12:41:17.313329 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 30 12:41:17.552446 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Apr 30 12:41:17.552446 ignition[955]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Apr 30 12:41:17.554897 ignition[955]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 12:41:17.554897 ignition[955]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 12:41:17.554897 ignition[955]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Apr 30 12:41:17.554897 ignition[955]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Apr 30 12:41:17.554897 ignition[955]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 30 12:41:17.554897 ignition[955]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 30 12:41:17.554897 ignition[955]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Apr 30 12:41:17.554897 ignition[955]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Apr 30 12:41:17.554897 ignition[955]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 12:41:17.554897 ignition[955]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 12:41:17.554897 ignition[955]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 12:41:17.554897 ignition[955]: INFO : files: files passed Apr 30 12:41:17.554897 ignition[955]: INFO : Ignition finished successfully Apr 30 12:41:17.556597 systemd[1]: Finished ignition-files.service - Ignition (files). 
Apr 30 12:41:17.565468 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 12:41:17.568278 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 30 12:41:17.569738 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 12:41:17.569894 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 30 12:41:17.584489 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 12:41:17.584489 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 12:41:17.586885 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 12:41:17.589204 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 12:41:17.590087 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 12:41:17.595140 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 12:41:17.623185 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 12:41:17.623355 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 12:41:17.625261 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 12:41:17.627038 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 12:41:17.629828 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 12:41:17.636181 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 12:41:17.649010 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 12:41:17.655109 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 12:41:17.666988 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 12:41:17.667845 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 12:41:17.669271 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 12:41:17.670300 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 12:41:17.670434 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 12:41:17.671799 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 12:41:17.672466 systemd[1]: Stopped target basic.target - Basic System. Apr 30 12:41:17.674029 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 12:41:17.674965 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 12:41:17.677487 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 12:41:17.678603 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 12:41:17.679582 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 12:41:17.680658 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 12:41:17.681714 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 12:41:17.682661 systemd[1]: Stopped target swap.target - Swaps. Apr 30 12:41:17.683493 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 12:41:17.683619 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Apr 30 12:41:17.684905 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 12:41:17.685538 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 12:41:17.686547 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 12:41:17.689956 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 12:41:17.690618 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 12:41:17.690734 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 12:41:17.693659 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 12:41:17.693962 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 12:41:17.696001 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 12:41:17.696217 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 12:41:17.697887 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 30 12:41:17.698121 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 12:41:17.711371 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 12:41:17.712578 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 12:41:17.712881 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 12:41:17.716160 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 12:41:17.716601 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 12:41:17.716715 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 12:41:17.719888 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 12:41:17.720007 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 12:41:17.731989 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 12:41:17.732086 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 12:41:17.738935 ignition[1008]: INFO : Ignition 2.20.0 Apr 30 12:41:17.738935 ignition[1008]: INFO : Stage: umount Apr 30 12:41:17.738935 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 12:41:17.738935 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 12:41:17.747082 ignition[1008]: INFO : umount: umount passed Apr 30 12:41:17.747082 ignition[1008]: INFO : Ignition finished successfully Apr 30 12:41:17.743752 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 12:41:17.748632 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 12:41:17.748723 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 12:41:17.754354 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 12:41:17.754453 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 12:41:17.756274 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 12:41:17.756332 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 12:41:17.760004 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 30 12:41:17.760055 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 30 12:41:17.763044 systemd[1]: Stopped target network.target - Network. Apr 30 12:41:17.763545 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Apr 30 12:41:17.763606 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 12:41:17.765457 systemd[1]: Stopped target paths.target - Path Units. Apr 30 12:41:17.768699 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 12:41:17.772530 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 12:41:17.773362 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 12:41:17.773877 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 12:41:17.774488 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 12:41:17.774535 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 12:41:17.777111 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 12:41:17.777150 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 12:41:17.777678 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 12:41:17.777725 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 12:41:17.778368 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 12:41:17.778404 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 12:41:17.780092 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 12:41:17.780800 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 12:41:17.787541 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 12:41:17.787733 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 12:41:17.795296 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Apr 30 12:41:17.796497 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 12:41:17.797978 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 12:41:17.800183 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Apr 30 12:41:17.800426 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 12:41:17.800514 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 12:41:17.802188 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 12:41:17.802250 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 12:41:17.803426 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 12:41:17.803478 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 12:41:17.815197 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 12:41:17.816383 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 12:41:17.816511 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 12:41:17.819641 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 12:41:17.819697 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 12:41:17.821068 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 12:41:17.821130 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 12:41:17.821799 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 12:41:17.821841 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 12:41:17.823726 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Apr 30 12:41:17.825876 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 30 12:41:17.828601 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Apr 30 12:41:17.838114 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 12:41:17.838993 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 12:41:17.842649 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 12:41:17.844448 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 12:41:17.847287 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 12:41:17.847443 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 12:41:17.849820 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 12:41:17.849854 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 12:41:17.850457 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 12:41:17.850502 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 12:41:17.852070 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 12:41:17.852113 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 12:41:17.853533 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 12:41:17.853574 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 12:41:17.861102 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 12:41:17.862461 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 12:41:17.862525 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 12:41:17.865374 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 12:41:17.865461 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:41:17.868091 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Apr 30 12:41:17.868149 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Apr 30 12:41:17.868492 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 12:41:17.868575 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 12:41:17.870041 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 12:41:17.875120 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 12:41:17.885192 systemd[1]: Switching root. Apr 30 12:41:17.920772 systemd-journald[237]: Journal stopped Apr 30 12:41:18.849015 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
Apr 30 12:41:18.849119 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 12:41:18.849134 kernel: SELinux: policy capability open_perms=1 Apr 30 12:41:18.849143 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 12:41:18.849152 kernel: SELinux: policy capability always_check_network=0 Apr 30 12:41:18.849161 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 12:41:18.849170 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 12:41:18.849179 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 12:41:18.849191 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 12:41:18.849206 kernel: audit: type=1403 audit(1746016878.024:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 12:41:18.849220 systemd[1]: Successfully loaded SELinux policy in 33.390ms. Apr 30 12:41:18.849240 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.063ms. Apr 30 12:41:18.849251 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 30 12:41:18.849261 systemd[1]: Detected virtualization kvm. Apr 30 12:41:18.849272 systemd[1]: Detected architecture arm64. Apr 30 12:41:18.849282 systemd[1]: Detected first boot. Apr 30 12:41:18.849296 systemd[1]: Hostname set to . Apr 30 12:41:18.849307 systemd[1]: Initializing machine ID from VM UUID. Apr 30 12:41:18.849317 zram_generator::config[1052]: No configuration found. Apr 30 12:41:18.849327 kernel: NET: Registered PF_VSOCK protocol family Apr 30 12:41:18.849337 systemd[1]: Populated /etc with preset unit settings. Apr 30 12:41:18.849347 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Apr 30 12:41:18.849357 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 30 12:41:18.849367 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 30 12:41:18.849377 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 30 12:41:18.849389 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 12:41:18.849399 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 12:41:18.849408 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 12:41:18.849419 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 12:41:18.849429 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 12:41:18.849439 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 12:41:18.849449 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 12:41:18.849459 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 12:41:18.849471 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 12:41:18.849482 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 12:41:18.849493 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 12:41:18.849502 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Apr 30 12:41:18.849517 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 30 12:41:18.849531 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 12:41:18.849542 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Apr 30 12:41:18.849553 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 12:41:18.849564 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 30 12:41:18.849574 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 30 12:41:18.849584 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 30 12:41:18.849594 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 12:41:18.849604 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 12:41:18.849614 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 12:41:18.849623 systemd[1]: Reached target slices.target - Slice Units. Apr 30 12:41:18.849633 systemd[1]: Reached target swap.target - Swaps. Apr 30 12:41:18.849645 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 12:41:18.849655 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 30 12:41:18.849665 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Apr 30 12:41:18.849678 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 12:41:18.849690 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 12:41:18.849700 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 12:41:18.849712 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 12:41:18.849722 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 12:41:18.849743 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 12:41:18.849755 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 12:41:18.849765 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 12:41:18.849775 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 12:41:18.849785 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 12:41:18.849796 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 12:41:18.849808 systemd[1]: Reached target machines.target - Containers. Apr 30 12:41:18.849819 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 12:41:18.849829 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 12:41:18.849839 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 12:41:18.849850 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 12:41:18.849860 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 12:41:18.849870 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 12:41:18.849880 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Apr 30 12:41:18.849891 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 12:41:18.849903 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 12:41:18.849913 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 12:41:18.851040 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 30 12:41:18.851056 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 30 12:41:18.851067 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 30 12:41:18.851077 systemd[1]: Stopped systemd-fsck-usr.service. Apr 30 12:41:18.851088 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 30 12:41:18.851099 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 12:41:18.851116 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 12:41:18.851127 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 12:41:18.851137 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 12:41:18.851148 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Apr 30 12:41:18.851158 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 12:41:18.851170 systemd[1]: verity-setup.service: Deactivated successfully. Apr 30 12:41:18.851180 systemd[1]: Stopped verity-setup.service. Apr 30 12:41:18.851191 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 30 12:41:18.851200 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 12:41:18.851211 systemd[1]: Mounted media.mount - External Media Directory. Apr 30 12:41:18.851221 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 12:41:18.851232 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 30 12:41:18.851243 kernel: fuse: init (API version 7.39) Apr 30 12:41:18.851254 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 12:41:18.851264 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 12:41:18.851274 kernel: ACPI: bus type drm_connector registered Apr 30 12:41:18.851283 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 12:41:18.851293 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 12:41:18.851304 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 12:41:18.851315 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 12:41:18.851326 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 12:41:18.851336 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 12:41:18.851347 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 12:41:18.851360 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 12:41:18.851370 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 12:41:18.851381 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Apr 30 12:41:18.851392 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 12:41:18.851404 kernel: loop: module loaded Apr 30 12:41:18.851413 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 12:41:18.851423 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 12:41:18.851433 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 12:41:18.851476 systemd-journald[1119]: Collecting audit messages is disabled. Apr 30 12:41:18.851500 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 12:41:18.851511 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 12:41:18.851523 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 12:41:18.851535 systemd-journald[1119]: Journal started Apr 30 12:41:18.851556 systemd-journald[1119]: Runtime Journal (/run/log/journal/58c1f397d44d443c91395bae54fae033) is 8M, max 76.6M, 68.6M free. Apr 30 12:41:18.577054 systemd[1]: Queued start job for default target multi-user.target. Apr 30 12:41:18.590334 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 30 12:41:18.591155 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 30 12:41:18.854287 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 12:41:18.855833 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 12:41:18.856889 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Apr 30 12:41:18.858814 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 12:41:18.859609 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 12:41:18.874607 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 12:41:18.875357 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 12:41:18.875389 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 12:41:18.877086 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Apr 30 12:41:18.882545 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 12:41:18.885320 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 12:41:18.887192 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 12:41:18.896124 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 12:41:18.898755 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 12:41:18.900240 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 12:41:18.902667 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 12:41:18.904101 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 12:41:18.912173 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 12:41:18.917187 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Apr 30 12:41:18.926181 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 12:41:18.932569 systemd-journald[1119]: Time spent on flushing to /var/log/journal/58c1f397d44d443c91395bae54fae033 is 77.966ms for 1142 entries. Apr 30 12:41:18.932569 systemd-journald[1119]: System Journal (/var/log/journal/58c1f397d44d443c91395bae54fae033) is 8M, max 584.8M, 576.8M free. Apr 30 12:41:19.040176 systemd-journald[1119]: Received client request to flush runtime journal. Apr 30 12:41:19.040227 kernel: loop0: detected capacity change from 0 to 123192 Apr 30 12:41:19.040242 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 12:41:19.040253 kernel: loop1: detected capacity change from 0 to 201592 Apr 30 12:41:18.929530 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 12:41:18.943435 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 12:41:18.946612 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 12:41:18.951152 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 30 12:41:18.969270 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 12:41:18.991022 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 12:41:19.006104 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 12:41:19.010960 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Apr 30 12:41:19.034720 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 12:41:19.047167 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 12:41:19.049486 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 12:41:19.058823 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 30 12:41:19.086959 kernel: loop2: detected capacity change from 0 to 8 Apr 30 12:41:19.089321 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Apr 30 12:41:19.089342 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Apr 30 12:41:19.104738 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 12:41:19.111393 kernel: loop3: detected capacity change from 0 to 113512 Apr 30 12:41:19.154953 kernel: loop4: detected capacity change from 0 to 123192 Apr 30 12:41:19.173947 kernel: loop5: detected capacity change from 0 to 201592 Apr 30 12:41:19.198031 kernel: loop6: detected capacity change from 0 to 8 Apr 30 12:41:19.202100 kernel: loop7: detected capacity change from 0 to 113512 Apr 30 12:41:19.217611 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Apr 30 12:41:19.218122 (sd-merge)[1198]: Merged extensions into '/usr'. Apr 30 12:41:19.226300 systemd[1]: Reload requested from client PID 1174 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 12:41:19.226314 systemd[1]: Reloading... Apr 30 12:41:19.335061 zram_generator::config[1224]: No configuration found. Apr 30 12:41:19.377983 ldconfig[1169]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Apr 30 12:41:19.482896 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:41:19.545996 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 12:41:19.546456 systemd[1]: Reloading finished in 318 ms. Apr 30 12:41:19.571714 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 12:41:19.573384 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 12:41:19.582288 systemd[1]: Starting ensure-sysext.service... Apr 30 12:41:19.587069 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 12:41:19.610336 systemd[1]: Reload requested from client PID 1265 ('systemctl') (unit ensure-sysext.service)... Apr 30 12:41:19.610446 systemd[1]: Reloading... Apr 30 12:41:19.627428 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 12:41:19.627621 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 12:41:19.628292 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 12:41:19.628490 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. Apr 30 12:41:19.628532 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. Apr 30 12:41:19.634410 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 12:41:19.634518 systemd-tmpfiles[1266]: Skipping /boot Apr 30 12:41:19.647277 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 12:41:19.647393 systemd-tmpfiles[1266]: Skipping /boot Apr 30 12:41:19.694948 zram_generator::config[1295]: No configuration found. Apr 30 12:41:19.795139 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:41:19.856107 systemd[1]: Reloading finished in 245 ms. Apr 30 12:41:19.872973 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 12:41:19.886962 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 12:41:19.899244 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 30 12:41:19.906033 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 12:41:19.911060 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 12:41:19.914054 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 12:41:19.923202 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 12:41:19.933212 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 12:41:19.938635 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 12:41:19.941893 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 12:41:19.947240 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Apr 30 12:41:19.958232 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 12:41:19.959139 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 12:41:19.959254 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 30 12:41:19.963512 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 12:41:19.968939 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 12:41:19.969146 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 12:41:19.969230 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 30 12:41:19.979475 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 12:41:19.979673 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 12:41:19.981376 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 12:41:19.981561 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 12:41:19.983871 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 12:41:19.985124 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 12:41:19.992326 systemd[1]: Finished ensure-sysext.service. Apr 30 12:41:19.993991 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 12:41:19.999319 systemd-udevd[1342]: Using default interface naming scheme 'v255'. Apr 30 12:41:20.000182 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 12:41:20.007171 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 12:41:20.013979 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 12:41:20.016123 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 12:41:20.016173 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 30 12:41:20.016210 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 12:41:20.016260 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 12:41:20.029426 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 30 12:41:20.031997 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 12:41:20.034021 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Apr 30 12:41:20.037187 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 12:41:20.037853 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 12:41:20.039308 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 12:41:20.040977 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 12:41:20.053507 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 12:41:20.058270 augenrules[1381]: No rules Apr 30 12:41:20.059211 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 12:41:20.059434 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 30 12:41:20.069068 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 12:41:20.085992 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 12:41:20.195131 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Apr 30 12:41:20.222999 systemd-resolved[1338]: Positive Trust Anchors: Apr 30 12:41:20.223301 systemd-resolved[1338]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 12:41:20.223395 systemd-resolved[1338]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 12:41:20.229025 systemd-resolved[1338]: Using system hostname 'ci-4230-1-1-7-cef124738e'. Apr 30 12:41:20.231438 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 12:41:20.232417 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 12:41:20.236244 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 12:41:20.236985 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 12:41:20.256478 systemd-networkd[1375]: lo: Link UP Apr 30 12:41:20.256770 systemd-networkd[1375]: lo: Gained carrier Apr 30 12:41:20.259321 systemd-networkd[1375]: Enumeration completed Apr 30 12:41:20.259480 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 12:41:20.261565 systemd[1]: Reached target network.target - Network. Apr 30 12:41:20.266172 systemd-networkd[1375]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:41:20.266176 systemd-networkd[1375]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 12:41:20.267029 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 30 12:41:20.268986 systemd-networkd[1375]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 30 12:41:20.269011 systemd-networkd[1375]: eth1: Link UP Apr 30 12:41:20.269014 systemd-networkd[1375]: eth1: Gained carrier Apr 30 12:41:20.269023 systemd-networkd[1375]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:41:20.270327 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 12:41:20.293993 systemd-networkd[1375]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 12:41:20.295087 systemd-timesyncd[1367]: Network configuration changed, trying to establish connection. Apr 30 12:41:20.297424 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:41:20.297532 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 12:41:20.298627 systemd-timesyncd[1367]: Network configuration changed, trying to establish connection. Apr 30 12:41:20.299524 systemd-networkd[1375]: eth0: Link UP Apr 30 12:41:20.299530 systemd-networkd[1375]: eth0: Gained carrier Apr 30 12:41:20.299551 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:41:20.303417 systemd-timesyncd[1367]: Network configuration changed, trying to establish connection. Apr 30 12:41:20.306961 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 30 12:41:20.323989 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1394) Apr 30 12:41:20.327992 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 12:41:20.343662 systemd-networkd[1375]: eth0: DHCPv4 address 91.99.82.124/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 30 12:41:20.344106 systemd-timesyncd[1367]: Network configuration changed, trying to establish connection. Apr 30 12:41:20.344692 systemd-timesyncd[1367]: Network configuration changed, trying to establish connection. Apr 30 12:41:20.379470 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 30 12:41:20.387614 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 12:41:20.395096 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Apr 30 12:41:20.395251 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 12:41:20.397246 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 12:41:20.400104 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 12:41:20.406015 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 12:41:20.406650 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 12:41:20.406683 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
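systemd-networkd notes above that eth0 and eth1 are being matched by the catch-all /usr/lib/systemd/network/zz-default.network "based on potentially unpredictable interface name". Pinning an interface with an explicit .network file under /etc wins over the shipped catch-all, since networkd applies the first matching file in lexical order; a minimal sketch (the file name and DHCP choice are assumptions, not taken from this host):

  cat <<'EOF' | sudo tee /etc/systemd/network/10-eth1.network
  [Match]
  Name=eth1

  [Network]
  # DHCPv4 only, matching the lease the log shows eth1 acquiring.
  DHCP=ipv4
  EOF
  sudo systemctl restart systemd-networkd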
Apr 30 12:41:20.406703 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 12:41:20.411425 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 12:41:20.411633 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 12:41:20.412905 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 12:41:20.420103 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 12:41:20.422678 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 12:41:20.422943 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 12:41:20.424326 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 12:41:20.425077 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 12:41:20.429812 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 12:41:20.437235 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Apr 30 12:41:20.437283 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Apr 30 12:41:20.437295 kernel: [drm] features: -context_init Apr 30 12:41:20.438963 kernel: [drm] number of scanouts: 1 Apr 30 12:41:20.439000 kernel: [drm] number of cap sets: 0 Apr 30 12:41:20.441967 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Apr 30 12:41:20.447491 kernel: Console: switching to colour frame buffer device 160x50 Apr 30 12:41:20.456933 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Apr 30 12:41:20.502680 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:41:20.511661 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 12:41:20.513057 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:41:20.523331 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:41:20.588004 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:41:20.633994 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 12:41:20.646243 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 12:41:20.659501 lvm[1454]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 12:41:20.689735 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 12:41:20.691983 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 12:41:20.692622 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 12:41:20.693356 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 12:41:20.694116 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 12:41:20.694945 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 12:41:20.695578 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Apr 30 12:41:20.696240 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 12:41:20.696876 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 12:41:20.696910 systemd[1]: Reached target paths.target - Path Units. Apr 30 12:41:20.697593 systemd[1]: Reached target timers.target - Timer Units. Apr 30 12:41:20.699591 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 12:41:20.701744 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 12:41:20.705289 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 30 12:41:20.706108 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Apr 30 12:41:20.706741 systemd[1]: Reached target ssh-access.target - SSH Access Available. Apr 30 12:41:20.715207 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 12:41:20.717079 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 30 12:41:20.723129 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 12:41:20.724755 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 12:41:20.726325 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 12:41:20.727520 systemd[1]: Reached target basic.target - Basic System. Apr 30 12:41:20.728773 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 12:41:20.728832 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 12:41:20.729806 lvm[1458]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 12:41:20.736304 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 12:41:20.739168 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 12:41:20.748700 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 12:41:20.752325 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 12:41:20.765471 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 12:41:20.769265 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 12:41:20.772002 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 12:41:20.772669 jq[1464]: false Apr 30 12:41:20.776073 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 12:41:20.779829 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Apr 30 12:41:20.783173 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 12:41:20.786125 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 12:41:20.790422 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 12:41:20.792677 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 12:41:20.793256 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Apr 30 12:41:20.795559 coreos-metadata[1460]: Apr 30 12:41:20.794 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Apr 30 12:41:20.795096 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 12:41:20.798057 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 12:41:20.807884 coreos-metadata[1460]: Apr 30 12:41:20.799 INFO Fetch successful Apr 30 12:41:20.807884 coreos-metadata[1460]: Apr 30 12:41:20.799 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Apr 30 12:41:20.807884 coreos-metadata[1460]: Apr 30 12:41:20.805 INFO Fetch successful Apr 30 12:41:20.800975 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 12:41:20.805261 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 12:41:20.805440 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 12:41:20.835089 extend-filesystems[1465]: Found loop4 Apr 30 12:41:20.835089 extend-filesystems[1465]: Found loop5 Apr 30 12:41:20.835089 extend-filesystems[1465]: Found loop6 Apr 30 12:41:20.835089 extend-filesystems[1465]: Found loop7 Apr 30 12:41:20.835089 extend-filesystems[1465]: Found sda Apr 30 12:41:20.835089 extend-filesystems[1465]: Found sda1 Apr 30 12:41:20.835089 extend-filesystems[1465]: Found sda2 Apr 30 12:41:20.835089 extend-filesystems[1465]: Found sda3 Apr 30 12:41:20.835089 extend-filesystems[1465]: Found usr Apr 30 12:41:20.835089 extend-filesystems[1465]: Found sda4 Apr 30 12:41:20.835089 extend-filesystems[1465]: Found sda6 Apr 30 12:41:20.835089 extend-filesystems[1465]: Found sda7 Apr 30 12:41:20.835089 extend-filesystems[1465]: Found sda9 Apr 30 12:41:20.835089 extend-filesystems[1465]: Checking size of /dev/sda9 Apr 30 12:41:20.890618 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Apr 30 12:41:20.836507 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 12:41:20.902054 extend-filesystems[1465]: Resized partition /dev/sda9 Apr 30 12:41:20.853477 dbus-daemon[1461]: [system] SELinux support is enabled Apr 30 12:41:20.908324 tar[1482]: linux-arm64/LICENSE Apr 30 12:41:20.908324 tar[1482]: linux-arm64/helm Apr 30 12:41:20.837157 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 12:41:20.908531 jq[1474]: true Apr 30 12:41:20.910483 extend-filesystems[1501]: resize2fs 1.47.1 (20-May-2024) Apr 30 12:41:20.853936 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 12:41:20.859150 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 12:41:20.919558 jq[1502]: true Apr 30 12:41:20.859341 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 12:41:20.894385 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 12:41:20.894447 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 12:41:20.898256 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 12:41:20.898277 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Apr 30 12:41:20.901645 (ntainerd)[1498]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 12:41:20.937610 update_engine[1473]: I20250430 12:41:20.937467 1473 main.cc:92] Flatcar Update Engine starting Apr 30 12:41:20.939941 systemd[1]: Started update-engine.service - Update Engine. Apr 30 12:41:20.942055 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 12:41:20.944373 update_engine[1473]: I20250430 12:41:20.944207 1473 update_check_scheduler.cc:74] Next update check in 5m27s Apr 30 12:41:21.001956 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Apr 30 12:41:21.008615 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 12:41:21.009726 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 12:41:21.018432 extend-filesystems[1501]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 30 12:41:21.018432 extend-filesystems[1501]: old_desc_blocks = 1, new_desc_blocks = 5 Apr 30 12:41:21.018432 extend-filesystems[1501]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Apr 30 12:41:21.051345 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1394) Apr 30 12:41:21.051383 extend-filesystems[1465]: Resized filesystem in /dev/sda9 Apr 30 12:41:21.051383 extend-filesystems[1465]: Found sr0 Apr 30 12:41:21.019653 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 12:41:21.019914 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 12:41:21.021428 systemd-logind[1472]: New seat seat0. Apr 30 12:41:21.059846 systemd-logind[1472]: Watching system buttons on /dev/input/event0 (Power Button) Apr 30 12:41:21.069058 bash[1532]: Updated "/home/core/.ssh/authorized_keys" Apr 30 12:41:21.059862 systemd-logind[1472]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Apr 30 12:41:21.069595 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 12:41:21.072967 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 12:41:21.081068 systemd[1]: Starting sshkeys.service... Apr 30 12:41:21.138137 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 30 12:41:21.146310 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 30 12:41:21.197123 coreos-metadata[1540]: Apr 30 12:41:21.196 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Apr 30 12:41:21.198535 coreos-metadata[1540]: Apr 30 12:41:21.198 INFO Fetch successful Apr 30 12:41:21.205523 unknown[1540]: wrote ssh authorized keys file for user: core Apr 30 12:41:21.238945 update-ssh-keys[1549]: Updated "/home/core/.ssh/authorized_keys" Apr 30 12:41:21.238208 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 30 12:41:21.246770 systemd[1]: Finished sshkeys.service. 
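The extend-filesystems/resize2fs entries above record an online grow of the root filesystem on /dev/sda9 from 1617920 to 9393147 4k blocks. The same operation can be reproduced by hand, since resize2fs can grow a mounted ext4 filesystem in place (device name taken from the log; run only after the underlying partition has been enlarged):

  # With no explicit size argument, resize2fs grows the filesystem to fill the block device.
  sudo resize2fs /dev/sda9
  df -h /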
Apr 30 12:41:21.246780 locksmithd[1513]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 12:41:21.338144 containerd[1498]: time="2025-04-30T12:41:21.336375520Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Apr 30 12:41:21.403092 containerd[1498]: time="2025-04-30T12:41:21.403043280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 12:41:21.406006 containerd[1498]: time="2025-04-30T12:41:21.405898080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:41:21.406108 containerd[1498]: time="2025-04-30T12:41:21.406091360Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 12:41:21.406164 containerd[1498]: time="2025-04-30T12:41:21.406151280Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 12:41:21.407384 containerd[1498]: time="2025-04-30T12:41:21.407350480Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 12:41:21.407472 containerd[1498]: time="2025-04-30T12:41:21.407457720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 12:41:21.407618 containerd[1498]: time="2025-04-30T12:41:21.407589320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:41:21.407685 containerd[1498]: time="2025-04-30T12:41:21.407660720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 12:41:21.408067 containerd[1498]: time="2025-04-30T12:41:21.408046520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:41:21.409269 containerd[1498]: time="2025-04-30T12:41:21.408956880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 12:41:21.409269 containerd[1498]: time="2025-04-30T12:41:21.408985520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:41:21.409269 containerd[1498]: time="2025-04-30T12:41:21.408995920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 12:41:21.409269 containerd[1498]: time="2025-04-30T12:41:21.409122960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 12:41:21.409571 containerd[1498]: time="2025-04-30T12:41:21.409542920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 12:41:21.410025 containerd[1498]: time="2025-04-30T12:41:21.410003280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:41:21.410108 containerd[1498]: time="2025-04-30T12:41:21.410094040Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 12:41:21.410611 containerd[1498]: time="2025-04-30T12:41:21.410592400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 12:41:21.410809 containerd[1498]: time="2025-04-30T12:41:21.410790760Z" level=info msg="metadata content store policy set" policy=shared Apr 30 12:41:21.419951 containerd[1498]: time="2025-04-30T12:41:21.418952360Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 12:41:21.419951 containerd[1498]: time="2025-04-30T12:41:21.419022800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 12:41:21.419951 containerd[1498]: time="2025-04-30T12:41:21.419042440Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 12:41:21.419951 containerd[1498]: time="2025-04-30T12:41:21.419059160Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 12:41:21.419951 containerd[1498]: time="2025-04-30T12:41:21.419074040Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 12:41:21.419951 containerd[1498]: time="2025-04-30T12:41:21.419254560Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 12:41:21.419951 containerd[1498]: time="2025-04-30T12:41:21.419515320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 12:41:21.419951 containerd[1498]: time="2025-04-30T12:41:21.419619360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 12:41:21.419951 containerd[1498]: time="2025-04-30T12:41:21.419636680Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 12:41:21.419951 containerd[1498]: time="2025-04-30T12:41:21.419651200Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 12:41:21.419951 containerd[1498]: time="2025-04-30T12:41:21.419664880Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 12:41:21.419951 containerd[1498]: time="2025-04-30T12:41:21.419676360Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 12:41:21.419951 containerd[1498]: time="2025-04-30T12:41:21.419688160Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 12:41:21.419951 containerd[1498]: time="2025-04-30T12:41:21.419714440Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 12:41:21.420265 containerd[1498]: time="2025-04-30T12:41:21.419731840Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Apr 30 12:41:21.420265 containerd[1498]: time="2025-04-30T12:41:21.419744920Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 12:41:21.420265 containerd[1498]: time="2025-04-30T12:41:21.419756680Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 12:41:21.420265 containerd[1498]: time="2025-04-30T12:41:21.419768160Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 12:41:21.420265 containerd[1498]: time="2025-04-30T12:41:21.419794800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 12:41:21.420265 containerd[1498]: time="2025-04-30T12:41:21.419809280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 12:41:21.420265 containerd[1498]: time="2025-04-30T12:41:21.419822040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 12:41:21.420265 containerd[1498]: time="2025-04-30T12:41:21.419834880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 12:41:21.420265 containerd[1498]: time="2025-04-30T12:41:21.419847720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 12:41:21.420265 containerd[1498]: time="2025-04-30T12:41:21.419861720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 12:41:21.420265 containerd[1498]: time="2025-04-30T12:41:21.419874160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 12:41:21.420265 containerd[1498]: time="2025-04-30T12:41:21.419887200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 12:41:21.420265 containerd[1498]: time="2025-04-30T12:41:21.419901240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 12:41:21.421176 containerd[1498]: time="2025-04-30T12:41:21.421150440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 12:41:21.421254 containerd[1498]: time="2025-04-30T12:41:21.421240000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 12:41:21.421308 containerd[1498]: time="2025-04-30T12:41:21.421296320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 12:41:21.421362 containerd[1498]: time="2025-04-30T12:41:21.421350080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 12:41:21.421441 containerd[1498]: time="2025-04-30T12:41:21.421425960Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 12:41:21.421511 containerd[1498]: time="2025-04-30T12:41:21.421497560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 12:41:21.421564 containerd[1498]: time="2025-04-30T12:41:21.421553120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Apr 30 12:41:21.421615 containerd[1498]: time="2025-04-30T12:41:21.421603120Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 12:41:21.421938 containerd[1498]: time="2025-04-30T12:41:21.421840960Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 12:41:21.421938 containerd[1498]: time="2025-04-30T12:41:21.421873920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 12:41:21.421938 containerd[1498]: time="2025-04-30T12:41:21.421887760Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 12:41:21.421938 containerd[1498]: time="2025-04-30T12:41:21.421901680Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 12:41:21.422946 containerd[1498]: time="2025-04-30T12:41:21.421911880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 12:41:21.422946 containerd[1498]: time="2025-04-30T12:41:21.422063640Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 12:41:21.422946 containerd[1498]: time="2025-04-30T12:41:21.422076920Z" level=info msg="NRI interface is disabled by configuration." Apr 30 12:41:21.422946 containerd[1498]: time="2025-04-30T12:41:21.422088800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 12:41:21.423042 containerd[1498]: time="2025-04-30T12:41:21.422423080Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 12:41:21.423042 containerd[1498]: time="2025-04-30T12:41:21.422468600Z" level=info msg="Connect containerd service" Apr 30 12:41:21.423042 containerd[1498]: time="2025-04-30T12:41:21.422506160Z" level=info msg="using legacy CRI server" Apr 30 12:41:21.423042 containerd[1498]: time="2025-04-30T12:41:21.422512760Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 12:41:21.423042 containerd[1498]: time="2025-04-30T12:41:21.422783160Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 12:41:21.425617 containerd[1498]: time="2025-04-30T12:41:21.425588320Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 12:41:21.426198 containerd[1498]: time="2025-04-30T12:41:21.426131480Z" level=info msg="Start subscribing containerd event" Apr 30 12:41:21.426198 containerd[1498]: time="2025-04-30T12:41:21.426192680Z" level=info msg="Start recovering state" Apr 30 12:41:21.426280 containerd[1498]: time="2025-04-30T12:41:21.426262800Z" level=info msg="Start event monitor" Apr 30 12:41:21.426306 containerd[1498]: time="2025-04-30T12:41:21.426279840Z" level=info msg="Start snapshots syncer" Apr 30 12:41:21.426306 containerd[1498]: time="2025-04-30T12:41:21.426291040Z" level=info msg="Start cni network conf syncer for default" Apr 30 12:41:21.426306 containerd[1498]: time="2025-04-30T12:41:21.426299360Z" level=info msg="Start streaming server" Apr 30 12:41:21.427161 containerd[1498]: time="2025-04-30T12:41:21.427140440Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 12:41:21.427260 containerd[1498]: time="2025-04-30T12:41:21.427247080Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 12:41:21.427443 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 12:41:21.428809 containerd[1498]: time="2025-04-30T12:41:21.428786320Z" level=info msg="containerd successfully booted in 0.097204s" Apr 30 12:41:21.664207 tar[1482]: linux-arm64/README.md Apr 30 12:41:21.677556 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 12:41:21.721071 systemd-networkd[1375]: eth1: Gained IPv6LL Apr 30 12:41:21.721638 systemd-timesyncd[1367]: Network configuration changed, trying to establish connection. Apr 30 12:41:21.726244 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 12:41:21.728241 systemd[1]: Reached target network-online.target - Network is Online. 
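The CRI plugin dump above shows the settings this image ships containerd with: the overlayfs snapshotter and runc driven through io.containerd.runc.v2 with SystemdCgroup:true. On a machine configured by hand, one common way to arrive at the same cgroup setting is to start from containerd's own default configuration and flip that one flag; a sketch, assuming containerd 1.7 and the default /etc/containerd/config.toml location (not how this image was actually built):

  # Write out the full default configuration, then enable the systemd cgroup driver for runc.
  containerd config default | sudo tee /etc/containerd/config.toml >/dev/null
  sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
  sudo systemctl restart containerd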
Apr 30 12:41:21.738551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:41:21.741343 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 12:41:21.788459 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 12:41:21.998931 sshd_keygen[1496]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 12:41:22.017987 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 12:41:22.024296 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 12:41:22.047219 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 12:41:22.048451 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 12:41:22.059950 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 12:41:22.068354 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 12:41:22.077268 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 12:41:22.080376 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Apr 30 12:41:22.082050 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 12:41:22.297175 systemd-networkd[1375]: eth0: Gained IPv6LL Apr 30 12:41:22.298375 systemd-timesyncd[1367]: Network configuration changed, trying to establish connection. Apr 30 12:41:22.449162 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:41:22.451076 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 12:41:22.455861 systemd[1]: Startup finished in 751ms (kernel) + 5.341s (initrd) + 4.464s (userspace) = 10.557s. Apr 30 12:41:22.457303 (kubelet)[1592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:41:22.940904 kubelet[1592]: E0430 12:41:22.940775 1592 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:41:22.944234 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:41:22.944380 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:41:22.944739 systemd[1]: kubelet.service: Consumed 807ms CPU time, 250.2M memory peak. Apr 30 12:41:33.195431 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 12:41:33.206272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:41:33.307994 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:41:33.312399 (kubelet)[1610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:41:33.360325 kubelet[1610]: E0430 12:41:33.360195 1610 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:41:33.363627 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:41:33.363780 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
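The kubelet crash loop above (and repeated below at each scheduled restart, roughly every ten seconds of back-off) is the unit starting before the node has been configured: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join, so until then the service exits with status 1. Purely for illustration, a minimal KubeletConfiguration of the kind kubeadm generates looks roughly like the sketch below; the field values are assumptions, and writing the file by hand only removes the missing-file error, since a real node still needs the kubeconfig and credentials kubeadm provides:

  sudo mkdir -p /var/lib/kubelet
  cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  # Match the SystemdCgroup = true runc option shown in the containerd dump above.
  cgroupDriver: systemd
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
  EOF
  sudo systemctl restart kubelet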
Apr 30 12:41:33.364307 systemd[1]: kubelet.service: Consumed 142ms CPU time, 102.4M memory peak. Apr 30 12:41:43.448278 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 12:41:43.456190 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:41:43.569034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:41:43.573928 (kubelet)[1626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:41:43.615942 kubelet[1626]: E0430 12:41:43.614939 1626 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:41:43.618682 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:41:43.618831 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:41:43.619197 systemd[1]: kubelet.service: Consumed 134ms CPU time, 104.2M memory peak. Apr 30 12:41:53.083239 systemd-resolved[1338]: Clock change detected. Flushing caches. Apr 30 12:41:53.083363 systemd-timesyncd[1367]: Contacted time server 144.76.137.152:123 (2.flatcar.pool.ntp.org). Apr 30 12:41:53.083431 systemd-timesyncd[1367]: Initial clock synchronization to Wed 2025-04-30 12:41:53.083174 UTC. Apr 30 12:41:54.191673 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 30 12:41:54.209970 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:41:54.333883 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:41:54.333951 (kubelet)[1641]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:41:54.379172 kubelet[1641]: E0430 12:41:54.379116 1641 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:41:54.382153 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:41:54.382367 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:41:54.382857 systemd[1]: kubelet.service: Consumed 134ms CPU time, 100.8M memory peak. Apr 30 12:41:59.284878 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 12:41:59.294068 systemd[1]: Started sshd@0-91.99.82.124:22-139.178.89.65:45656.service - OpenSSH per-connection server daemon (139.178.89.65:45656). Apr 30 12:42:00.301428 sshd[1649]: Accepted publickey for core from 139.178.89.65 port 45656 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:42:00.303727 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:42:00.315159 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 12:42:00.327252 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 12:42:00.331781 systemd-logind[1472]: New session 1 of user core. 
Apr 30 12:42:00.343372 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 12:42:00.349887 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 12:42:00.362252 (systemd)[1653]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 12:42:00.366736 systemd-logind[1472]: New session c1 of user core. Apr 30 12:42:00.498315 systemd[1653]: Queued start job for default target default.target. Apr 30 12:42:00.510407 systemd[1653]: Created slice app.slice - User Application Slice. Apr 30 12:42:00.510463 systemd[1653]: Reached target paths.target - Paths. Apr 30 12:42:00.510531 systemd[1653]: Reached target timers.target - Timers. Apr 30 12:42:00.512741 systemd[1653]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 12:42:00.525740 systemd[1653]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 12:42:00.525868 systemd[1653]: Reached target sockets.target - Sockets. Apr 30 12:42:00.525918 systemd[1653]: Reached target basic.target - Basic System. Apr 30 12:42:00.525947 systemd[1653]: Reached target default.target - Main User Target. Apr 30 12:42:00.525987 systemd[1653]: Startup finished in 150ms. Apr 30 12:42:00.526358 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 12:42:00.534947 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 12:42:01.240950 systemd[1]: Started sshd@1-91.99.82.124:22-139.178.89.65:45670.service - OpenSSH per-connection server daemon (139.178.89.65:45670). Apr 30 12:42:02.224871 sshd[1664]: Accepted publickey for core from 139.178.89.65 port 45670 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:42:02.227234 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:42:02.233902 systemd-logind[1472]: New session 2 of user core. Apr 30 12:42:02.239869 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 12:42:02.907884 sshd[1666]: Connection closed by 139.178.89.65 port 45670 Apr 30 12:42:02.907761 sshd-session[1664]: pam_unix(sshd:session): session closed for user core Apr 30 12:42:02.912468 systemd[1]: sshd@1-91.99.82.124:22-139.178.89.65:45670.service: Deactivated successfully. Apr 30 12:42:02.915227 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 12:42:02.917884 systemd-logind[1472]: Session 2 logged out. Waiting for processes to exit. Apr 30 12:42:02.919255 systemd-logind[1472]: Removed session 2. Apr 30 12:42:03.085028 systemd[1]: Started sshd@2-91.99.82.124:22-139.178.89.65:45678.service - OpenSSH per-connection server daemon (139.178.89.65:45678). Apr 30 12:42:04.070816 sshd[1672]: Accepted publickey for core from 139.178.89.65 port 45678 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:42:04.072704 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:42:04.078606 systemd-logind[1472]: New session 3 of user core. Apr 30 12:42:04.084876 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 12:42:04.441467 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 30 12:42:04.447779 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:42:04.559789 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 12:42:04.560101 (kubelet)[1683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:42:04.599652 kubelet[1683]: E0430 12:42:04.599547 1683 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:42:04.602839 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:42:04.603199 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:42:04.603677 systemd[1]: kubelet.service: Consumed 128ms CPU time, 104.2M memory peak. Apr 30 12:42:04.747781 sshd[1674]: Connection closed by 139.178.89.65 port 45678 Apr 30 12:42:04.748740 sshd-session[1672]: pam_unix(sshd:session): session closed for user core Apr 30 12:42:04.753074 systemd[1]: sshd@2-91.99.82.124:22-139.178.89.65:45678.service: Deactivated successfully. Apr 30 12:42:04.757168 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 12:42:04.758364 systemd-logind[1472]: Session 3 logged out. Waiting for processes to exit. Apr 30 12:42:04.760635 systemd-logind[1472]: Removed session 3. Apr 30 12:42:04.936224 systemd[1]: Started sshd@3-91.99.82.124:22-139.178.89.65:45690.service - OpenSSH per-connection server daemon (139.178.89.65:45690). Apr 30 12:42:05.934312 sshd[1696]: Accepted publickey for core from 139.178.89.65 port 45690 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:42:05.936300 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:42:05.943506 systemd-logind[1472]: New session 4 of user core. Apr 30 12:42:05.948902 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 12:42:06.393570 update_engine[1473]: I20250430 12:42:06.392705 1473 update_attempter.cc:509] Updating boot flags... Apr 30 12:42:06.437622 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1708) Apr 30 12:42:06.504606 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1709) Apr 30 12:42:06.633668 sshd[1698]: Connection closed by 139.178.89.65 port 45690 Apr 30 12:42:06.634545 sshd-session[1696]: pam_unix(sshd:session): session closed for user core Apr 30 12:42:06.640702 systemd[1]: sshd@3-91.99.82.124:22-139.178.89.65:45690.service: Deactivated successfully. Apr 30 12:42:06.643688 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 12:42:06.644734 systemd-logind[1472]: Session 4 logged out. Waiting for processes to exit. Apr 30 12:42:06.645920 systemd-logind[1472]: Removed session 4. Apr 30 12:42:06.818125 systemd[1]: Started sshd@4-91.99.82.124:22-139.178.89.65:35838.service - OpenSSH per-connection server daemon (139.178.89.65:35838). Apr 30 12:42:07.806115 sshd[1722]: Accepted publickey for core from 139.178.89.65 port 35838 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:42:07.807989 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:42:07.814710 systemd-logind[1472]: New session 5 of user core. Apr 30 12:42:07.819826 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 30 12:42:08.336793 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 12:42:08.337084 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 12:42:08.353953 sudo[1725]: pam_unix(sudo:session): session closed for user root Apr 30 12:42:08.515615 sshd[1724]: Connection closed by 139.178.89.65 port 35838 Apr 30 12:42:08.514489 sshd-session[1722]: pam_unix(sshd:session): session closed for user core Apr 30 12:42:08.519641 systemd-logind[1472]: Session 5 logged out. Waiting for processes to exit. Apr 30 12:42:08.520184 systemd[1]: sshd@4-91.99.82.124:22-139.178.89.65:35838.service: Deactivated successfully. Apr 30 12:42:08.523259 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 12:42:08.524643 systemd-logind[1472]: Removed session 5. Apr 30 12:42:08.684199 systemd[1]: Started sshd@5-91.99.82.124:22-139.178.89.65:35840.service - OpenSSH per-connection server daemon (139.178.89.65:35840). Apr 30 12:42:09.675220 sshd[1731]: Accepted publickey for core from 139.178.89.65 port 35840 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:42:09.677247 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:42:09.683943 systemd-logind[1472]: New session 6 of user core. Apr 30 12:42:09.689939 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 12:42:10.194362 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 12:42:10.194934 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 12:42:10.199389 sudo[1735]: pam_unix(sudo:session): session closed for user root Apr 30 12:42:10.205821 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 30 12:42:10.206117 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 12:42:10.226137 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 30 12:42:10.255046 augenrules[1757]: No rules Apr 30 12:42:10.256722 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 12:42:10.257112 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 30 12:42:10.258376 sudo[1734]: pam_unix(sudo:session): session closed for user root Apr 30 12:42:10.417352 sshd[1733]: Connection closed by 139.178.89.65 port 35840 Apr 30 12:42:10.418125 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Apr 30 12:42:10.424224 systemd-logind[1472]: Session 6 logged out. Waiting for processes to exit. Apr 30 12:42:10.425160 systemd[1]: sshd@5-91.99.82.124:22-139.178.89.65:35840.service: Deactivated successfully. Apr 30 12:42:10.427144 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 12:42:10.428391 systemd-logind[1472]: Removed session 6. Apr 30 12:42:10.603054 systemd[1]: Started sshd@6-91.99.82.124:22-139.178.89.65:35850.service - OpenSSH per-connection server daemon (139.178.89.65:35850). Apr 30 12:42:11.590644 sshd[1766]: Accepted publickey for core from 139.178.89.65 port 35850 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:42:11.592412 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:42:11.597521 systemd-logind[1472]: New session 7 of user core. Apr 30 12:42:11.604987 systemd[1]: Started session-7.scope - Session 7 of User core. 
Apr 30 12:42:12.113937 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 12:42:12.114245 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 12:42:12.426953 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 12:42:12.427001 (dockerd)[1787]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 12:42:12.648899 dockerd[1787]: time="2025-04-30T12:42:12.648812547Z" level=info msg="Starting up" Apr 30 12:42:12.745474 dockerd[1787]: time="2025-04-30T12:42:12.745399587Z" level=info msg="Loading containers: start." Apr 30 12:42:12.908635 kernel: Initializing XFRM netlink socket Apr 30 12:42:12.988532 systemd-networkd[1375]: docker0: Link UP Apr 30 12:42:13.023749 dockerd[1787]: time="2025-04-30T12:42:13.023486587Z" level=info msg="Loading containers: done." Apr 30 12:42:13.042651 dockerd[1787]: time="2025-04-30T12:42:13.041989467Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 12:42:13.042651 dockerd[1787]: time="2025-04-30T12:42:13.042125787Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Apr 30 12:42:13.042651 dockerd[1787]: time="2025-04-30T12:42:13.042328547Z" level=info msg="Daemon has completed initialization" Apr 30 12:42:13.080903 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 12:42:13.082059 dockerd[1787]: time="2025-04-30T12:42:13.081648067Z" level=info msg="API listen on /run/docker.sock" Apr 30 12:42:14.086812 containerd[1498]: time="2025-04-30T12:42:14.086760027Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" Apr 30 12:42:14.676410 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 30 12:42:14.685057 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:42:14.698995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2438398247.mount: Deactivated successfully. Apr 30 12:42:14.814843 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:42:14.817197 (kubelet)[1996]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:42:14.861967 kubelet[1996]: E0430 12:42:14.858684 1996 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:42:14.862984 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:42:14.863131 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:42:14.864717 systemd[1]: kubelet.service: Consumed 135ms CPU time, 104.1M memory peak. 
Apr 30 12:42:16.206119 containerd[1498]: time="2025-04-30T12:42:16.206058267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:16.208834 containerd[1498]: time="2025-04-30T12:42:16.208740787Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233210" Apr 30 12:42:16.210008 containerd[1498]: time="2025-04-30T12:42:16.209919787Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:16.214979 containerd[1498]: time="2025-04-30T12:42:16.214885347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:16.216391 containerd[1498]: time="2025-04-30T12:42:16.216120147Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 2.1293126s" Apr 30 12:42:16.216391 containerd[1498]: time="2025-04-30T12:42:16.216155587Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" Apr 30 12:42:16.217048 containerd[1498]: time="2025-04-30T12:42:16.217028027Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" Apr 30 12:42:17.980775 containerd[1498]: time="2025-04-30T12:42:17.980694427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:17.981664 containerd[1498]: time="2025-04-30T12:42:17.981628707Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529591" Apr 30 12:42:17.983253 containerd[1498]: time="2025-04-30T12:42:17.983207027Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:17.987390 containerd[1498]: time="2025-04-30T12:42:17.987346347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:17.988507 containerd[1498]: time="2025-04-30T12:42:17.988471067Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.77133956s" Apr 30 12:42:17.988507 containerd[1498]: time="2025-04-30T12:42:17.988505707Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" Apr 30 12:42:17.989705 
containerd[1498]: time="2025-04-30T12:42:17.989680987Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" Apr 30 12:42:19.416644 containerd[1498]: time="2025-04-30T12:42:19.416561627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:19.418518 containerd[1498]: time="2025-04-30T12:42:19.418463707Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482193" Apr 30 12:42:19.421060 containerd[1498]: time="2025-04-30T12:42:19.421013267Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:19.422757 containerd[1498]: time="2025-04-30T12:42:19.422712707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:19.424206 containerd[1498]: time="2025-04-30T12:42:19.424163107Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.43444316s" Apr 30 12:42:19.424469 containerd[1498]: time="2025-04-30T12:42:19.424320227Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" Apr 30 12:42:19.425097 containerd[1498]: time="2025-04-30T12:42:19.424990147Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" Apr 30 12:42:20.332732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3894381971.mount: Deactivated successfully. 
Apr 30 12:42:20.767684 containerd[1498]: time="2025-04-30T12:42:20.767628867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:20.768991 containerd[1498]: time="2025-04-30T12:42:20.768938467Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370377" Apr 30 12:42:20.769322 containerd[1498]: time="2025-04-30T12:42:20.769290107Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:20.772320 containerd[1498]: time="2025-04-30T12:42:20.772273987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:20.773228 containerd[1498]: time="2025-04-30T12:42:20.773192907Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.34816264s" Apr 30 12:42:20.773565 containerd[1498]: time="2025-04-30T12:42:20.773355467Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" Apr 30 12:42:20.773991 containerd[1498]: time="2025-04-30T12:42:20.773968147Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Apr 30 12:42:21.382017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount287089481.mount: Deactivated successfully. 
Apr 30 12:42:22.038620 containerd[1498]: time="2025-04-30T12:42:22.038491907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:22.040463 containerd[1498]: time="2025-04-30T12:42:22.040408867Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Apr 30 12:42:22.041116 containerd[1498]: time="2025-04-30T12:42:22.041016787Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:22.046413 containerd[1498]: time="2025-04-30T12:42:22.046376027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:22.048635 containerd[1498]: time="2025-04-30T12:42:22.048380947Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.27437636s" Apr 30 12:42:22.048635 containerd[1498]: time="2025-04-30T12:42:22.048428467Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Apr 30 12:42:22.049809 containerd[1498]: time="2025-04-30T12:42:22.049451987Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 30 12:42:22.582941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1656172893.mount: Deactivated successfully. 
Apr 30 12:42:22.590686 containerd[1498]: time="2025-04-30T12:42:22.590629707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:22.591465 containerd[1498]: time="2025-04-30T12:42:22.591422947Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Apr 30 12:42:22.592566 containerd[1498]: time="2025-04-30T12:42:22.592184427Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:22.597608 containerd[1498]: time="2025-04-30T12:42:22.597506507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:22.599279 containerd[1498]: time="2025-04-30T12:42:22.598359467Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 548.86636ms" Apr 30 12:42:22.599279 containerd[1498]: time="2025-04-30T12:42:22.598404507Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Apr 30 12:42:22.599803 containerd[1498]: time="2025-04-30T12:42:22.599776947Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Apr 30 12:42:23.229604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4291016638.mount: Deactivated successfully. Apr 30 12:42:24.682905 containerd[1498]: time="2025-04-30T12:42:24.682834427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:24.687608 containerd[1498]: time="2025-04-30T12:42:24.686560907Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812537" Apr 30 12:42:24.687608 containerd[1498]: time="2025-04-30T12:42:24.687255227Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:24.694385 containerd[1498]: time="2025-04-30T12:42:24.694329907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:24.695438 containerd[1498]: time="2025-04-30T12:42:24.695394347Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.09549756s" Apr 30 12:42:24.695438 containerd[1498]: time="2025-04-30T12:42:24.695431507Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Apr 30 12:42:24.941767 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
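
The pull sequence above (kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy at v1.32.4, coredns v1.11.3, pause 3.10, etcd 3.5.16-0) is the standard control-plane image set for a v1.32 cluster, consistent with a kubeadm-style pre-pull such as kubeadm config images pull issued from the install script run earlier; the exact tool is not shown in the log. A hedged sketch of the ClusterConfiguration fields that would select exactly these tags:

apiVersion: kubeadm.k8s.io/v1beta4   # assumption: v1beta4 is the current schema for kubeadm 1.32
kind: ClusterConfiguration
kubernetesVersion: v1.32.4           # matches the kube-* image tags pulled above
imageRepository: registry.k8s.io     # default registry, matching the image names in the log
etcd:
  local:
    imageTag: 3.5.16-0               # normally derived automatically; pinned here only for clarity
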
Apr 30 12:42:24.950008 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:42:25.063917 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:42:25.063987 (kubelet)[2184]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:42:25.107444 kubelet[2184]: E0430 12:42:25.107381 2184 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:42:25.110432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:42:25.110798 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:42:25.111223 systemd[1]: kubelet.service: Consumed 134ms CPU time, 102M memory peak. Apr 30 12:42:28.985051 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:42:28.985220 systemd[1]: kubelet.service: Consumed 134ms CPU time, 102M memory peak. Apr 30 12:42:28.992088 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:42:29.022397 systemd[1]: Reload requested from client PID 2211 ('systemctl') (unit session-7.scope)... Apr 30 12:42:29.022418 systemd[1]: Reloading... Apr 30 12:42:29.169610 zram_generator::config[2262]: No configuration found. Apr 30 12:42:29.255195 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:42:29.347260 systemd[1]: Reloading finished in 324 ms. Apr 30 12:42:29.392925 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:42:29.399235 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:42:29.405526 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 12:42:29.405871 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:42:29.405951 systemd[1]: kubelet.service: Consumed 101ms CPU time, 90.1M memory peak. Apr 30 12:42:29.409223 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:42:29.538803 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:42:29.542896 (kubelet)[2306]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 12:42:29.583620 kubelet[2306]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 12:42:29.583620 kubelet[2306]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 30 12:42:29.583620 kubelet[2306]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 12:42:29.585440 kubelet[2306]: I0430 12:42:29.584155 2306 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 12:42:31.095399 kubelet[2306]: I0430 12:42:31.095353 2306 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Apr 30 12:42:31.096418 kubelet[2306]: I0430 12:42:31.095860 2306 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 12:42:31.096700 kubelet[2306]: I0430 12:42:31.096681 2306 server.go:954] "Client rotation is on, will bootstrap in background" Apr 30 12:42:31.127210 kubelet[2306]: E0430 12:42:31.127165 2306 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://91.99.82.124:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 91.99.82.124:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:42:31.129355 kubelet[2306]: I0430 12:42:31.129327 2306 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 12:42:31.138799 kubelet[2306]: E0430 12:42:31.138725 2306 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 12:42:31.138799 kubelet[2306]: I0430 12:42:31.138786 2306 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 12:42:31.141839 kubelet[2306]: I0430 12:42:31.141731 2306 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 12:42:31.143997 kubelet[2306]: I0430 12:42:31.143923 2306 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 12:42:31.144192 kubelet[2306]: I0430 12:42:31.143989 2306 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-1-1-7-cef124738e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 12:42:31.144329 kubelet[2306]: I0430 12:42:31.144247 2306 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 12:42:31.144329 kubelet[2306]: I0430 12:42:31.144256 2306 container_manager_linux.go:304] "Creating device plugin manager" Apr 30 12:42:31.144514 kubelet[2306]: I0430 12:42:31.144474 2306 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:42:31.147799 kubelet[2306]: I0430 12:42:31.147742 2306 kubelet.go:446] "Attempting to sync node with API server" Apr 30 12:42:31.147799 kubelet[2306]: I0430 12:42:31.147789 2306 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 12:42:31.147924 kubelet[2306]: I0430 12:42:31.147811 2306 kubelet.go:352] "Adding apiserver pod source" Apr 30 12:42:31.147924 kubelet[2306]: I0430 12:42:31.147822 2306 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 12:42:31.152627 kubelet[2306]: W0430 12:42:31.152184 2306 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.99.82.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.99.82.124:6443: connect: connection refused Apr 30 12:42:31.152627 kubelet[2306]: E0430 12:42:31.152244 2306 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.99.82.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.82.124:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:42:31.152627 kubelet[2306]: W0430 
12:42:31.152312 2306 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.82.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-7-cef124738e&limit=500&resourceVersion=0": dial tcp 91.99.82.124:6443: connect: connection refused Apr 30 12:42:31.152627 kubelet[2306]: E0430 12:42:31.152335 2306 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.99.82.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-7-cef124738e&limit=500&resourceVersion=0\": dial tcp 91.99.82.124:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:42:31.152976 kubelet[2306]: I0430 12:42:31.152834 2306 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 12:42:31.154632 kubelet[2306]: I0430 12:42:31.153855 2306 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 12:42:31.154632 kubelet[2306]: W0430 12:42:31.153994 2306 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 30 12:42:31.156226 kubelet[2306]: I0430 12:42:31.156191 2306 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 30 12:42:31.156226 kubelet[2306]: I0430 12:42:31.156234 2306 server.go:1287] "Started kubelet" Apr 30 12:42:31.159973 kubelet[2306]: I0430 12:42:31.159914 2306 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 12:42:31.161025 kubelet[2306]: I0430 12:42:31.160658 2306 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 12:42:31.161424 kubelet[2306]: I0430 12:42:31.161361 2306 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 12:42:31.166614 kubelet[2306]: I0430 12:42:31.165902 2306 server.go:490] "Adding debug handlers to kubelet server" Apr 30 12:42:31.168304 kubelet[2306]: E0430 12:42:31.168036 2306 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.99.82.124:6443/api/v1/namespaces/default/events\": dial tcp 91.99.82.124:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-1-1-7-cef124738e.183b192a917a0de1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-1-1-7-cef124738e,UID:ci-4230-1-1-7-cef124738e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-1-1-7-cef124738e,},FirstTimestamp:2025-04-30 12:42:31.156215265 +0000 UTC m=+1.609311291,LastTimestamp:2025-04-30 12:42:31.156215265 +0000 UTC m=+1.609311291,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-1-1-7-cef124738e,}" Apr 30 12:42:31.169881 kubelet[2306]: I0430 12:42:31.169826 2306 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 12:42:31.172336 kubelet[2306]: I0430 12:42:31.172300 2306 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 12:42:31.173545 kubelet[2306]: I0430 12:42:31.173519 2306 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 30 12:42:31.173941 kubelet[2306]: E0430 12:42:31.173913 2306 
kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-1-1-7-cef124738e\" not found" Apr 30 12:42:31.174460 kubelet[2306]: I0430 12:42:31.174436 2306 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 12:42:31.174527 kubelet[2306]: I0430 12:42:31.174512 2306 reconciler.go:26] "Reconciler: start to sync state" Apr 30 12:42:31.176562 kubelet[2306]: W0430 12:42:31.176510 2306 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.99.82.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.82.124:6443: connect: connection refused Apr 30 12:42:31.176738 kubelet[2306]: E0430 12:42:31.176717 2306 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.99.82.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.82.124:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:42:31.177449 kubelet[2306]: E0430 12:42:31.177429 2306 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 12:42:31.178381 kubelet[2306]: I0430 12:42:31.178360 2306 factory.go:221] Registration of the systemd container factory successfully Apr 30 12:42:31.178561 kubelet[2306]: I0430 12:42:31.178543 2306 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 12:42:31.180545 kubelet[2306]: E0430 12:42:31.180503 2306 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.82.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-7-cef124738e?timeout=10s\": dial tcp 91.99.82.124:6443: connect: connection refused" interval="200ms" Apr 30 12:42:31.181406 kubelet[2306]: I0430 12:42:31.181382 2306 factory.go:221] Registration of the containerd container factory successfully Apr 30 12:42:31.191832 kubelet[2306]: I0430 12:42:31.191639 2306 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 12:42:31.193085 kubelet[2306]: I0430 12:42:31.193055 2306 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 12:42:31.193707 kubelet[2306]: I0430 12:42:31.193207 2306 status_manager.go:227] "Starting to sync pod status with apiserver" Apr 30 12:42:31.193707 kubelet[2306]: I0430 12:42:31.193238 2306 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 30 12:42:31.193707 kubelet[2306]: I0430 12:42:31.193249 2306 kubelet.go:2388] "Starting kubelet main sync loop" Apr 30 12:42:31.193707 kubelet[2306]: E0430 12:42:31.193295 2306 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 12:42:31.201447 kubelet[2306]: W0430 12:42:31.201379 2306 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.82.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.82.124:6443: connect: connection refused Apr 30 12:42:31.202154 kubelet[2306]: E0430 12:42:31.202082 2306 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.99.82.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.82.124:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:42:31.214838 kubelet[2306]: I0430 12:42:31.214789 2306 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 30 12:42:31.214838 kubelet[2306]: I0430 12:42:31.214818 2306 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 30 12:42:31.214838 kubelet[2306]: I0430 12:42:31.214839 2306 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:42:31.216815 kubelet[2306]: I0430 12:42:31.216787 2306 policy_none.go:49] "None policy: Start" Apr 30 12:42:31.216815 kubelet[2306]: I0430 12:42:31.216815 2306 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 30 12:42:31.216926 kubelet[2306]: I0430 12:42:31.216828 2306 state_mem.go:35] "Initializing new in-memory state store" Apr 30 12:42:31.222804 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 12:42:31.234378 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 12:42:31.238422 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 30 12:42:31.243518 kubelet[2306]: I0430 12:42:31.243475 2306 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 12:42:31.244144 kubelet[2306]: I0430 12:42:31.244096 2306 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 12:42:31.244367 kubelet[2306]: I0430 12:42:31.244222 2306 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 12:42:31.245449 kubelet[2306]: I0430 12:42:31.245432 2306 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 12:42:31.247358 kubelet[2306]: E0430 12:42:31.247327 2306 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 30 12:42:31.247448 kubelet[2306]: E0430 12:42:31.247380 2306 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-1-1-7-cef124738e\" not found" Apr 30 12:42:31.311287 systemd[1]: Created slice kubepods-burstable-podc42c4048d6a34e0a250f51b3901677e4.slice - libcontainer container kubepods-burstable-podc42c4048d6a34e0a250f51b3901677e4.slice. 
Apr 30 12:42:31.323139 kubelet[2306]: E0430 12:42:31.323069 2306 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-7-cef124738e\" not found" node="ci-4230-1-1-7-cef124738e" Apr 30 12:42:31.327341 systemd[1]: Created slice kubepods-burstable-podd81e906803847ef2e55b70d87a69caa1.slice - libcontainer container kubepods-burstable-podd81e906803847ef2e55b70d87a69caa1.slice. Apr 30 12:42:31.330152 kubelet[2306]: E0430 12:42:31.330115 2306 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-7-cef124738e\" not found" node="ci-4230-1-1-7-cef124738e" Apr 30 12:42:31.333424 systemd[1]: Created slice kubepods-burstable-podbc3579d0a1d99d11ff29aa04a3ee990e.slice - libcontainer container kubepods-burstable-podbc3579d0a1d99d11ff29aa04a3ee990e.slice. Apr 30 12:42:31.335671 kubelet[2306]: E0430 12:42:31.335637 2306 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-7-cef124738e\" not found" node="ci-4230-1-1-7-cef124738e" Apr 30 12:42:31.348159 kubelet[2306]: I0430 12:42:31.347974 2306 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-1-1-7-cef124738e" Apr 30 12:42:31.349358 kubelet[2306]: E0430 12:42:31.349314 2306 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://91.99.82.124:6443/api/v1/nodes\": dial tcp 91.99.82.124:6443: connect: connection refused" node="ci-4230-1-1-7-cef124738e" Apr 30 12:42:31.382275 kubelet[2306]: E0430 12:42:31.382205 2306 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.82.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-7-cef124738e?timeout=10s\": dial tcp 91.99.82.124:6443: connect: connection refused" interval="400ms" Apr 30 12:42:31.476226 kubelet[2306]: I0430 12:42:31.475833 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d81e906803847ef2e55b70d87a69caa1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-1-7-cef124738e\" (UID: \"d81e906803847ef2e55b70d87a69caa1\") " pod="kube-system/kube-apiserver-ci-4230-1-1-7-cef124738e" Apr 30 12:42:31.476226 kubelet[2306]: I0430 12:42:31.475902 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bc3579d0a1d99d11ff29aa04a3ee990e-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-1-1-7-cef124738e\" (UID: \"bc3579d0a1d99d11ff29aa04a3ee990e\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-7-cef124738e" Apr 30 12:42:31.476226 kubelet[2306]: I0430 12:42:31.475937 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bc3579d0a1d99d11ff29aa04a3ee990e-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-1-7-cef124738e\" (UID: \"bc3579d0a1d99d11ff29aa04a3ee990e\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-7-cef124738e" Apr 30 12:42:31.476226 kubelet[2306]: I0430 12:42:31.475967 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c42c4048d6a34e0a250f51b3901677e4-kubeconfig\") pod \"kube-scheduler-ci-4230-1-1-7-cef124738e\" (UID: 
\"c42c4048d6a34e0a250f51b3901677e4\") " pod="kube-system/kube-scheduler-ci-4230-1-1-7-cef124738e" Apr 30 12:42:31.476226 kubelet[2306]: I0430 12:42:31.475999 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc3579d0a1d99d11ff29aa04a3ee990e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-1-1-7-cef124738e\" (UID: \"bc3579d0a1d99d11ff29aa04a3ee990e\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-7-cef124738e" Apr 30 12:42:31.476626 kubelet[2306]: I0430 12:42:31.476026 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d81e906803847ef2e55b70d87a69caa1-ca-certs\") pod \"kube-apiserver-ci-4230-1-1-7-cef124738e\" (UID: \"d81e906803847ef2e55b70d87a69caa1\") " pod="kube-system/kube-apiserver-ci-4230-1-1-7-cef124738e" Apr 30 12:42:31.476626 kubelet[2306]: I0430 12:42:31.476052 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d81e906803847ef2e55b70d87a69caa1-k8s-certs\") pod \"kube-apiserver-ci-4230-1-1-7-cef124738e\" (UID: \"d81e906803847ef2e55b70d87a69caa1\") " pod="kube-system/kube-apiserver-ci-4230-1-1-7-cef124738e" Apr 30 12:42:31.476626 kubelet[2306]: I0430 12:42:31.476079 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc3579d0a1d99d11ff29aa04a3ee990e-ca-certs\") pod \"kube-controller-manager-ci-4230-1-1-7-cef124738e\" (UID: \"bc3579d0a1d99d11ff29aa04a3ee990e\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-7-cef124738e" Apr 30 12:42:31.476626 kubelet[2306]: I0430 12:42:31.476111 2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc3579d0a1d99d11ff29aa04a3ee990e-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-1-7-cef124738e\" (UID: \"bc3579d0a1d99d11ff29aa04a3ee990e\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-7-cef124738e" Apr 30 12:42:31.552698 kubelet[2306]: I0430 12:42:31.552638 2306 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-1-1-7-cef124738e" Apr 30 12:42:31.553185 kubelet[2306]: E0430 12:42:31.553146 2306 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://91.99.82.124:6443/api/v1/nodes\": dial tcp 91.99.82.124:6443: connect: connection refused" node="ci-4230-1-1-7-cef124738e" Apr 30 12:42:31.626188 containerd[1498]: time="2025-04-30T12:42:31.625516388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-1-7-cef124738e,Uid:c42c4048d6a34e0a250f51b3901677e4,Namespace:kube-system,Attempt:0,}" Apr 30 12:42:31.632504 containerd[1498]: time="2025-04-30T12:42:31.632191617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-1-7-cef124738e,Uid:d81e906803847ef2e55b70d87a69caa1,Namespace:kube-system,Attempt:0,}" Apr 30 12:42:31.637790 containerd[1498]: time="2025-04-30T12:42:31.637725149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-1-7-cef124738e,Uid:bc3579d0a1d99d11ff29aa04a3ee990e,Namespace:kube-system,Attempt:0,}" Apr 30 12:42:31.783194 kubelet[2306]: E0430 12:42:31.783136 2306 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://91.99.82.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-7-cef124738e?timeout=10s\": dial tcp 91.99.82.124:6443: connect: connection refused" interval="800ms" Apr 30 12:42:31.956798 kubelet[2306]: I0430 12:42:31.956170 2306 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-1-1-7-cef124738e" Apr 30 12:42:31.956798 kubelet[2306]: E0430 12:42:31.956703 2306 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://91.99.82.124:6443/api/v1/nodes\": dial tcp 91.99.82.124:6443: connect: connection refused" node="ci-4230-1-1-7-cef124738e" Apr 30 12:42:31.972707 kubelet[2306]: W0430 12:42:31.972574 2306 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.82.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-7-cef124738e&limit=500&resourceVersion=0": dial tcp 91.99.82.124:6443: connect: connection refused Apr 30 12:42:31.973021 kubelet[2306]: E0430 12:42:31.972981 2306 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.99.82.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-7-cef124738e&limit=500&resourceVersion=0\": dial tcp 91.99.82.124:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:42:32.075647 kubelet[2306]: W0430 12:42:32.075564 2306 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.82.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.82.124:6443: connect: connection refused Apr 30 12:42:32.075770 kubelet[2306]: E0430 12:42:32.075652 2306 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.99.82.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.82.124:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:42:32.152311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3119704310.mount: Deactivated successfully. 
Apr 30 12:42:32.160616 containerd[1498]: time="2025-04-30T12:42:32.159706695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:42:32.163418 containerd[1498]: time="2025-04-30T12:42:32.163336071Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Apr 30 12:42:32.169308 containerd[1498]: time="2025-04-30T12:42:32.169245082Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:42:32.171371 containerd[1498]: time="2025-04-30T12:42:32.171319314Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:42:32.172931 containerd[1498]: time="2025-04-30T12:42:32.172727616Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:42:32.174340 containerd[1498]: time="2025-04-30T12:42:32.174023676Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 12:42:32.174340 containerd[1498]: time="2025-04-30T12:42:32.174226959Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 12:42:32.176642 containerd[1498]: time="2025-04-30T12:42:32.176516474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:42:32.181614 containerd[1498]: time="2025-04-30T12:42:32.179479040Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 553.83609ms" Apr 30 12:42:32.184361 containerd[1498]: time="2025-04-30T12:42:32.184295434Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 552.010495ms" Apr 30 12:42:32.185848 containerd[1498]: time="2025-04-30T12:42:32.185787617Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 547.966547ms" Apr 30 12:42:32.315239 containerd[1498]: time="2025-04-30T12:42:32.314164638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:42:32.315239 containerd[1498]: time="2025-04-30T12:42:32.314434522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:42:32.315239 containerd[1498]: time="2025-04-30T12:42:32.314447482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:42:32.315239 containerd[1498]: time="2025-04-30T12:42:32.314532164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:42:32.319919 containerd[1498]: time="2025-04-30T12:42:32.319715644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:42:32.319919 containerd[1498]: time="2025-04-30T12:42:32.319865526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:42:32.320259 containerd[1498]: time="2025-04-30T12:42:32.319975328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:42:32.322688 containerd[1498]: time="2025-04-30T12:42:32.322319364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:42:32.322688 containerd[1498]: time="2025-04-30T12:42:32.322386925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:42:32.322688 containerd[1498]: time="2025-04-30T12:42:32.322398125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:42:32.322688 containerd[1498]: time="2025-04-30T12:42:32.322522207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:42:32.323005 containerd[1498]: time="2025-04-30T12:42:32.322227322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:42:32.344887 systemd[1]: Started cri-containerd-50c764033a1ec65c5b4b8fe4e42b423f24a237a4a1c50757cd219d1879185b9d.scope - libcontainer container 50c764033a1ec65c5b4b8fe4e42b423f24a237a4a1c50757cd219d1879185b9d. Apr 30 12:42:32.353168 systemd[1]: Started cri-containerd-227f19d8fa3d5aca668652abc1acfe07e44d8246e9b809cd1972f8d27e188bfa.scope - libcontainer container 227f19d8fa3d5aca668652abc1acfe07e44d8246e9b809cd1972f8d27e188bfa. Apr 30 12:42:32.355961 systemd[1]: Started cri-containerd-45e5535c512dafc916240b5ea5a88ce4eacd3928475ba3b56c07f2e610c3807d.scope - libcontainer container 45e5535c512dafc916240b5ea5a88ce4eacd3928475ba3b56c07f2e610c3807d. 
Apr 30 12:42:32.424148 containerd[1498]: time="2025-04-30T12:42:32.424035813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-1-7-cef124738e,Uid:d81e906803847ef2e55b70d87a69caa1,Namespace:kube-system,Attempt:0,} returns sandbox id \"50c764033a1ec65c5b4b8fe4e42b423f24a237a4a1c50757cd219d1879185b9d\"" Apr 30 12:42:32.425736 containerd[1498]: time="2025-04-30T12:42:32.425146590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-1-7-cef124738e,Uid:bc3579d0a1d99d11ff29aa04a3ee990e,Namespace:kube-system,Attempt:0,} returns sandbox id \"45e5535c512dafc916240b5ea5a88ce4eacd3928475ba3b56c07f2e610c3807d\"" Apr 30 12:42:32.429777 containerd[1498]: time="2025-04-30T12:42:32.429720901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-1-7-cef124738e,Uid:c42c4048d6a34e0a250f51b3901677e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"227f19d8fa3d5aca668652abc1acfe07e44d8246e9b809cd1972f8d27e188bfa\"" Apr 30 12:42:32.432132 containerd[1498]: time="2025-04-30T12:42:32.431872614Z" level=info msg="CreateContainer within sandbox \"50c764033a1ec65c5b4b8fe4e42b423f24a237a4a1c50757cd219d1879185b9d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 12:42:32.432881 containerd[1498]: time="2025-04-30T12:42:32.432446943Z" level=info msg="CreateContainer within sandbox \"45e5535c512dafc916240b5ea5a88ce4eacd3928475ba3b56c07f2e610c3807d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 12:42:32.448837 containerd[1498]: time="2025-04-30T12:42:32.448794715Z" level=info msg="CreateContainer within sandbox \"227f19d8fa3d5aca668652abc1acfe07e44d8246e9b809cd1972f8d27e188bfa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 12:42:32.456478 containerd[1498]: time="2025-04-30T12:42:32.456293751Z" level=info msg="CreateContainer within sandbox \"45e5535c512dafc916240b5ea5a88ce4eacd3928475ba3b56c07f2e610c3807d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"034c8bd0c126281d2bfa9aaa8a85c1f94a428118342aa7691e7f7a8b61eb3f5e\"" Apr 30 12:42:32.457271 containerd[1498]: time="2025-04-30T12:42:32.457238845Z" level=info msg="StartContainer for \"034c8bd0c126281d2bfa9aaa8a85c1f94a428118342aa7691e7f7a8b61eb3f5e\"" Apr 30 12:42:32.476944 kubelet[2306]: W0430 12:42:32.476827 2306 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.99.82.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.99.82.124:6443: connect: connection refused Apr 30 12:42:32.476944 kubelet[2306]: E0430 12:42:32.476901 2306 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.99.82.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.82.124:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:42:32.477916 containerd[1498]: time="2025-04-30T12:42:32.477527638Z" level=info msg="CreateContainer within sandbox \"227f19d8fa3d5aca668652abc1acfe07e44d8246e9b809cd1972f8d27e188bfa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cc2569cd29291a9b2e5370812690176c48451c90e96bbf6992db987fc2e0b16e\"" Apr 30 12:42:32.485687 containerd[1498]: time="2025-04-30T12:42:32.485366319Z" level=info msg="CreateContainer within sandbox 
\"50c764033a1ec65c5b4b8fe4e42b423f24a237a4a1c50757cd219d1879185b9d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9ec5ad2c2b2b6a60cc0a87074d61adbe7e8ab6bad3612f89dd1e0f2e02bcc783\"" Apr 30 12:42:32.488617 containerd[1498]: time="2025-04-30T12:42:32.487796637Z" level=info msg="StartContainer for \"9ec5ad2c2b2b6a60cc0a87074d61adbe7e8ab6bad3612f89dd1e0f2e02bcc783\"" Apr 30 12:42:32.489206 containerd[1498]: time="2025-04-30T12:42:32.489181338Z" level=info msg="StartContainer for \"cc2569cd29291a9b2e5370812690176c48451c90e96bbf6992db987fc2e0b16e\"" Apr 30 12:42:32.494368 systemd[1]: Started cri-containerd-034c8bd0c126281d2bfa9aaa8a85c1f94a428118342aa7691e7f7a8b61eb3f5e.scope - libcontainer container 034c8bd0c126281d2bfa9aaa8a85c1f94a428118342aa7691e7f7a8b61eb3f5e. Apr 30 12:42:32.525870 systemd[1]: Started cri-containerd-cc2569cd29291a9b2e5370812690176c48451c90e96bbf6992db987fc2e0b16e.scope - libcontainer container cc2569cd29291a9b2e5370812690176c48451c90e96bbf6992db987fc2e0b16e. Apr 30 12:42:32.530658 systemd[1]: Started cri-containerd-9ec5ad2c2b2b6a60cc0a87074d61adbe7e8ab6bad3612f89dd1e0f2e02bcc783.scope - libcontainer container 9ec5ad2c2b2b6a60cc0a87074d61adbe7e8ab6bad3612f89dd1e0f2e02bcc783. Apr 30 12:42:32.534171 kubelet[2306]: W0430 12:42:32.534005 2306 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.99.82.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.82.124:6443: connect: connection refused Apr 30 12:42:32.534171 kubelet[2306]: E0430 12:42:32.534123 2306 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.99.82.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.82.124:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:42:32.578418 containerd[1498]: time="2025-04-30T12:42:32.577741264Z" level=info msg="StartContainer for \"034c8bd0c126281d2bfa9aaa8a85c1f94a428118342aa7691e7f7a8b61eb3f5e\" returns successfully" Apr 30 12:42:32.584085 kubelet[2306]: E0430 12:42:32.584024 2306 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.82.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-7-cef124738e?timeout=10s\": dial tcp 91.99.82.124:6443: connect: connection refused" interval="1.6s" Apr 30 12:42:32.601225 containerd[1498]: time="2025-04-30T12:42:32.601150746Z" level=info msg="StartContainer for \"9ec5ad2c2b2b6a60cc0a87074d61adbe7e8ab6bad3612f89dd1e0f2e02bcc783\" returns successfully" Apr 30 12:42:32.605833 containerd[1498]: time="2025-04-30T12:42:32.605658055Z" level=info msg="StartContainer for \"cc2569cd29291a9b2e5370812690176c48451c90e96bbf6992db987fc2e0b16e\" returns successfully" Apr 30 12:42:32.759370 kubelet[2306]: I0430 12:42:32.759336 2306 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-1-1-7-cef124738e" Apr 30 12:42:33.229636 kubelet[2306]: E0430 12:42:33.229262 2306 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-7-cef124738e\" not found" node="ci-4230-1-1-7-cef124738e" Apr 30 12:42:33.234170 kubelet[2306]: E0430 12:42:33.233967 2306 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-7-cef124738e\" not found" node="ci-4230-1-1-7-cef124738e" Apr 30 
12:42:33.238348 kubelet[2306]: E0430 12:42:33.238255 2306 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-7-cef124738e\" not found" node="ci-4230-1-1-7-cef124738e" Apr 30 12:42:34.242290 kubelet[2306]: E0430 12:42:34.242115 2306 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-7-cef124738e\" not found" node="ci-4230-1-1-7-cef124738e" Apr 30 12:42:34.242290 kubelet[2306]: E0430 12:42:34.242155 2306 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-7-cef124738e\" not found" node="ci-4230-1-1-7-cef124738e" Apr 30 12:42:34.609610 kubelet[2306]: E0430 12:42:34.609468 2306 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-1-1-7-cef124738e\" not found" node="ci-4230-1-1-7-cef124738e" Apr 30 12:42:34.683639 kubelet[2306]: I0430 12:42:34.683596 2306 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230-1-1-7-cef124738e" Apr 30 12:42:34.683639 kubelet[2306]: E0430 12:42:34.683637 2306 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4230-1-1-7-cef124738e\": node \"ci-4230-1-1-7-cef124738e\" not found" Apr 30 12:42:34.775461 kubelet[2306]: I0430 12:42:34.775203 2306 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-1-1-7-cef124738e" Apr 30 12:42:34.787714 kubelet[2306]: E0430 12:42:34.787675 2306 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-1-1-7-cef124738e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230-1-1-7-cef124738e" Apr 30 12:42:34.789606 kubelet[2306]: I0430 12:42:34.787902 2306 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-1-1-7-cef124738e" Apr 30 12:42:34.791997 kubelet[2306]: E0430 12:42:34.791957 2306 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-1-1-7-cef124738e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230-1-1-7-cef124738e" Apr 30 12:42:34.791997 kubelet[2306]: I0430 12:42:34.791993 2306 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-1-1-7-cef124738e" Apr 30 12:42:34.797835 kubelet[2306]: E0430 12:42:34.797737 2306 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-1-1-7-cef124738e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230-1-1-7-cef124738e" Apr 30 12:42:35.153118 kubelet[2306]: I0430 12:42:35.153064 2306 apiserver.go:52] "Watching apiserver" Apr 30 12:42:35.175164 kubelet[2306]: I0430 12:42:35.175117 2306 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 12:42:36.708824 kubelet[2306]: I0430 12:42:36.708749 2306 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-1-1-7-cef124738e" Apr 30 12:42:36.998422 systemd[1]: Reload requested from client PID 2581 ('systemctl') (unit session-7.scope)... Apr 30 12:42:36.998887 systemd[1]: Reloading... Apr 30 12:42:37.104617 zram_generator::config[2626]: No configuration found. 
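
Aside (illustrative, not part of the captured log): the kubelet errors above, "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found", are transient during control-plane bootstrap. The apiserver's bootstrap controller creates the built-in system-node-critical PriorityClass (value 2000001000) shortly after it starts serving, after which the static-pod mirror pods are admitted (the later "already exists" messages show exactly that). As a minimal sketch of the object those errors refer to — assuming client-go and an admin kubeconfig at /etc/kubernetes/admin.conf, neither of which this node actually ran by hand:

package main

import (
	"context"
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption for the sketch; adjust to your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// system-node-critical is normally created automatically by the apiserver;
	// this only illustrates the object the kubelet was waiting for.
	pc := &schedulingv1.PriorityClass{
		ObjectMeta:    metav1.ObjectMeta{Name: "system-node-critical"},
		Value:         2000001000,
		GlobalDefault: false,
		Description:   "Used for system critical pods that must not be moved from their current node.",
	}
	created, err := cs.SchedulingV1().PriorityClasses().Create(context.TODO(), pc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created PriorityClass:", created.Name)
}
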
Apr 30 12:42:37.211177 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:42:37.327495 systemd[1]: Reloading finished in 328 ms. Apr 30 12:42:37.352165 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:42:37.368208 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 12:42:37.368691 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:42:37.368806 systemd[1]: kubelet.service: Consumed 2.025s CPU time, 122M memory peak. Apr 30 12:42:37.377964 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:42:37.527109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:42:37.538139 (kubelet)[2671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 12:42:37.594543 kubelet[2671]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 12:42:37.594543 kubelet[2671]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 30 12:42:37.594543 kubelet[2671]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 12:42:37.594543 kubelet[2671]: I0430 12:42:37.593228 2671 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 12:42:37.604791 kubelet[2671]: I0430 12:42:37.604714 2671 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Apr 30 12:42:37.604791 kubelet[2671]: I0430 12:42:37.604752 2671 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 12:42:37.605087 kubelet[2671]: I0430 12:42:37.605047 2671 server.go:954] "Client rotation is on, will bootstrap in background" Apr 30 12:42:37.606671 kubelet[2671]: I0430 12:42:37.606553 2671 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 12:42:37.610036 kubelet[2671]: I0430 12:42:37.609242 2671 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 12:42:37.615954 kubelet[2671]: E0430 12:42:37.615886 2671 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 12:42:37.615954 kubelet[2671]: I0430 12:42:37.615929 2671 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 12:42:37.619340 kubelet[2671]: I0430 12:42:37.619280 2671 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 12:42:37.619530 kubelet[2671]: I0430 12:42:37.619473 2671 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 12:42:37.620808 kubelet[2671]: I0430 12:42:37.619508 2671 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-1-1-7-cef124738e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 12:42:37.620808 kubelet[2671]: I0430 12:42:37.620799 2671 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 12:42:37.620808 kubelet[2671]: I0430 12:42:37.620813 2671 container_manager_linux.go:304] "Creating device plugin manager" Apr 30 12:42:37.621002 kubelet[2671]: I0430 12:42:37.620870 2671 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:42:37.621038 kubelet[2671]: I0430 12:42:37.621032 2671 kubelet.go:446] "Attempting to sync node with API server" Apr 30 12:42:37.621063 kubelet[2671]: I0430 12:42:37.621044 2671 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 12:42:37.621084 kubelet[2671]: I0430 12:42:37.621064 2671 kubelet.go:352] "Adding apiserver pod source" Apr 30 12:42:37.621084 kubelet[2671]: I0430 12:42:37.621074 2671 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 12:42:37.623596 kubelet[2671]: I0430 12:42:37.623425 2671 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 12:42:37.624382 kubelet[2671]: I0430 12:42:37.624350 2671 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 12:42:37.633555 kubelet[2671]: I0430 12:42:37.630621 2671 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 30 12:42:37.633555 kubelet[2671]: I0430 12:42:37.630668 2671 server.go:1287] "Started kubelet" Apr 30 12:42:37.635480 kubelet[2671]: I0430 12:42:37.634859 2671 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 12:42:37.635977 kubelet[2671]: 
I0430 12:42:37.635859 2671 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 12:42:37.636478 kubelet[2671]: I0430 12:42:37.636289 2671 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 12:42:37.646193 kubelet[2671]: I0430 12:42:37.646141 2671 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 12:42:37.647861 kubelet[2671]: I0430 12:42:37.647749 2671 server.go:490] "Adding debug handlers to kubelet server" Apr 30 12:42:37.649492 kubelet[2671]: I0430 12:42:37.649466 2671 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 12:42:37.651468 kubelet[2671]: I0430 12:42:37.651447 2671 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 30 12:42:37.651862 kubelet[2671]: E0430 12:42:37.651837 2671 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-1-1-7-cef124738e\" not found" Apr 30 12:42:37.653859 kubelet[2671]: I0430 12:42:37.653837 2671 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 12:42:37.654071 kubelet[2671]: I0430 12:42:37.654060 2671 reconciler.go:26] "Reconciler: start to sync state" Apr 30 12:42:37.656091 kubelet[2671]: I0430 12:42:37.656053 2671 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 12:42:37.657276 kubelet[2671]: I0430 12:42:37.657251 2671 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 12:42:37.657378 kubelet[2671]: I0430 12:42:37.657365 2671 status_manager.go:227] "Starting to sync pod status with apiserver" Apr 30 12:42:37.657445 kubelet[2671]: I0430 12:42:37.657436 2671 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 30 12:42:37.657492 kubelet[2671]: I0430 12:42:37.657484 2671 kubelet.go:2388] "Starting kubelet main sync loop" Apr 30 12:42:37.657596 kubelet[2671]: E0430 12:42:37.657561 2671 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 12:42:37.672648 kubelet[2671]: I0430 12:42:37.672622 2671 factory.go:221] Registration of the containerd container factory successfully Apr 30 12:42:37.674275 kubelet[2671]: I0430 12:42:37.672829 2671 factory.go:221] Registration of the systemd container factory successfully Apr 30 12:42:37.674275 kubelet[2671]: I0430 12:42:37.672913 2671 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 12:42:37.695204 kubelet[2671]: E0430 12:42:37.695148 2671 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 12:42:37.737765 kubelet[2671]: I0430 12:42:37.737732 2671 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 30 12:42:37.737765 kubelet[2671]: I0430 12:42:37.737760 2671 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 30 12:42:37.737941 kubelet[2671]: I0430 12:42:37.737810 2671 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:42:37.738093 kubelet[2671]: I0430 12:42:37.738072 2671 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 12:42:37.738132 kubelet[2671]: I0430 12:42:37.738099 2671 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 12:42:37.738132 kubelet[2671]: I0430 12:42:37.738129 2671 policy_none.go:49] "None policy: Start" Apr 30 12:42:37.738177 kubelet[2671]: I0430 12:42:37.738145 2671 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 30 12:42:37.738177 kubelet[2671]: I0430 12:42:37.738171 2671 state_mem.go:35] "Initializing new in-memory state store" Apr 30 12:42:37.738347 kubelet[2671]: I0430 12:42:37.738333 2671 state_mem.go:75] "Updated machine memory state" Apr 30 12:42:37.744657 kubelet[2671]: I0430 12:42:37.744564 2671 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 12:42:37.745097 kubelet[2671]: I0430 12:42:37.745054 2671 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 12:42:37.745150 kubelet[2671]: I0430 12:42:37.745093 2671 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 12:42:37.747644 kubelet[2671]: E0430 12:42:37.747613 2671 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 30 12:42:37.748951 kubelet[2671]: I0430 12:42:37.748880 2671 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 12:42:37.759487 kubelet[2671]: I0430 12:42:37.758559 2671 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-1-1-7-cef124738e" Apr 30 12:42:37.759487 kubelet[2671]: I0430 12:42:37.758916 2671 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-1-1-7-cef124738e" Apr 30 12:42:37.759487 kubelet[2671]: I0430 12:42:37.759117 2671 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-1-1-7-cef124738e" Apr 30 12:42:37.774146 kubelet[2671]: E0430 12:42:37.773673 2671 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-1-1-7-cef124738e\" already exists" pod="kube-system/kube-controller-manager-ci-4230-1-1-7-cef124738e" Apr 30 12:42:37.850378 kubelet[2671]: I0430 12:42:37.849137 2671 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-1-1-7-cef124738e" Apr 30 12:42:37.862559 kubelet[2671]: I0430 12:42:37.861873 2671 kubelet_node_status.go:125] "Node was previously registered" node="ci-4230-1-1-7-cef124738e" Apr 30 12:42:37.862559 kubelet[2671]: I0430 12:42:37.861966 2671 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230-1-1-7-cef124738e" Apr 30 12:42:37.955614 kubelet[2671]: I0430 12:42:37.955528 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d81e906803847ef2e55b70d87a69caa1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-1-7-cef124738e\" (UID: \"d81e906803847ef2e55b70d87a69caa1\") " pod="kube-system/kube-apiserver-ci-4230-1-1-7-cef124738e" Apr 30 12:42:37.955806 kubelet[2671]: I0430 12:42:37.955622 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bc3579d0a1d99d11ff29aa04a3ee990e-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-1-1-7-cef124738e\" (UID: \"bc3579d0a1d99d11ff29aa04a3ee990e\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-7-cef124738e" Apr 30 12:42:37.955806 kubelet[2671]: I0430 12:42:37.955658 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc3579d0a1d99d11ff29aa04a3ee990e-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-1-7-cef124738e\" (UID: \"bc3579d0a1d99d11ff29aa04a3ee990e\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-7-cef124738e" Apr 30 12:42:37.955806 kubelet[2671]: I0430 12:42:37.955717 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c42c4048d6a34e0a250f51b3901677e4-kubeconfig\") pod \"kube-scheduler-ci-4230-1-1-7-cef124738e\" (UID: \"c42c4048d6a34e0a250f51b3901677e4\") " pod="kube-system/kube-scheduler-ci-4230-1-1-7-cef124738e" Apr 30 12:42:37.955806 kubelet[2671]: I0430 12:42:37.955753 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d81e906803847ef2e55b70d87a69caa1-ca-certs\") pod \"kube-apiserver-ci-4230-1-1-7-cef124738e\" (UID: \"d81e906803847ef2e55b70d87a69caa1\") " 
pod="kube-system/kube-apiserver-ci-4230-1-1-7-cef124738e" Apr 30 12:42:37.955953 kubelet[2671]: I0430 12:42:37.955810 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d81e906803847ef2e55b70d87a69caa1-k8s-certs\") pod \"kube-apiserver-ci-4230-1-1-7-cef124738e\" (UID: \"d81e906803847ef2e55b70d87a69caa1\") " pod="kube-system/kube-apiserver-ci-4230-1-1-7-cef124738e" Apr 30 12:42:37.955953 kubelet[2671]: I0430 12:42:37.955847 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc3579d0a1d99d11ff29aa04a3ee990e-ca-certs\") pod \"kube-controller-manager-ci-4230-1-1-7-cef124738e\" (UID: \"bc3579d0a1d99d11ff29aa04a3ee990e\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-7-cef124738e" Apr 30 12:42:37.955953 kubelet[2671]: I0430 12:42:37.955888 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bc3579d0a1d99d11ff29aa04a3ee990e-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-1-7-cef124738e\" (UID: \"bc3579d0a1d99d11ff29aa04a3ee990e\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-7-cef124738e" Apr 30 12:42:37.955953 kubelet[2671]: I0430 12:42:37.955925 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc3579d0a1d99d11ff29aa04a3ee990e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-1-1-7-cef124738e\" (UID: \"bc3579d0a1d99d11ff29aa04a3ee990e\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-7-cef124738e" Apr 30 12:42:38.001422 sudo[2705]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 30 12:42:38.001912 sudo[2705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 30 12:42:38.469446 sudo[2705]: pam_unix(sudo:session): session closed for user root Apr 30 12:42:38.622096 kubelet[2671]: I0430 12:42:38.622041 2671 apiserver.go:52] "Watching apiserver" Apr 30 12:42:38.655177 kubelet[2671]: I0430 12:42:38.655057 2671 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 12:42:38.719607 kubelet[2671]: I0430 12:42:38.718305 2671 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-1-1-7-cef124738e" Apr 30 12:42:38.720469 kubelet[2671]: I0430 12:42:38.720358 2671 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-1-1-7-cef124738e" Apr 30 12:42:38.731645 kubelet[2671]: E0430 12:42:38.731610 2671 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-1-1-7-cef124738e\" already exists" pod="kube-system/kube-scheduler-ci-4230-1-1-7-cef124738e" Apr 30 12:42:38.733925 kubelet[2671]: E0430 12:42:38.733787 2671 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-1-1-7-cef124738e\" already exists" pod="kube-system/kube-apiserver-ci-4230-1-1-7-cef124738e" Apr 30 12:42:38.773846 kubelet[2671]: I0430 12:42:38.773767 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-1-1-7-cef124738e" podStartSLOduration=2.773722753 podStartE2EDuration="2.773722753s" podCreationTimestamp="2025-04-30 12:42:36 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:42:38.75619893 +0000 UTC m=+1.213065667" watchObservedRunningTime="2025-04-30 12:42:38.773722753 +0000 UTC m=+1.230589450" Apr 30 12:42:38.774239 kubelet[2671]: I0430 12:42:38.774124 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-1-1-7-cef124738e" podStartSLOduration=1.774115837 podStartE2EDuration="1.774115837s" podCreationTimestamp="2025-04-30 12:42:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:42:38.77249138 +0000 UTC m=+1.229358117" watchObservedRunningTime="2025-04-30 12:42:38.774115837 +0000 UTC m=+1.230982574" Apr 30 12:42:38.800705 kubelet[2671]: I0430 12:42:38.800239 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-1-1-7-cef124738e" podStartSLOduration=1.800215751 podStartE2EDuration="1.800215751s" podCreationTimestamp="2025-04-30 12:42:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:42:38.785620158 +0000 UTC m=+1.242486895" watchObservedRunningTime="2025-04-30 12:42:38.800215751 +0000 UTC m=+1.257082488" Apr 30 12:42:40.534374 sudo[1769]: pam_unix(sudo:session): session closed for user root Apr 30 12:42:40.695227 sshd[1768]: Connection closed by 139.178.89.65 port 35850 Apr 30 12:42:40.694952 sshd-session[1766]: pam_unix(sshd:session): session closed for user core Apr 30 12:42:40.701272 systemd[1]: sshd@6-91.99.82.124:22-139.178.89.65:35850.service: Deactivated successfully. Apr 30 12:42:40.705400 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 12:42:40.706388 systemd[1]: session-7.scope: Consumed 6.473s CPU time, 262.9M memory peak. Apr 30 12:42:40.708752 systemd-logind[1472]: Session 7 logged out. Waiting for processes to exit. Apr 30 12:42:40.711860 systemd-logind[1472]: Removed session 7. Apr 30 12:42:41.967725 kubelet[2671]: I0430 12:42:41.967632 2671 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 12:42:41.968540 containerd[1498]: time="2025-04-30T12:42:41.968467113Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 12:42:41.968969 kubelet[2671]: I0430 12:42:41.968760 2671 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 12:42:42.718039 systemd[1]: Created slice kubepods-besteffort-pod3c8174a6_eda3_43b2_a8f4_e8d270964ad4.slice - libcontainer container kubepods-besteffort-pod3c8174a6_eda3_43b2_a8f4_e8d270964ad4.slice. Apr 30 12:42:42.739388 systemd[1]: Created slice kubepods-burstable-podd441d00c_2f62_43b9_abb7_3adc5519c894.slice - libcontainer container kubepods-burstable-podd441d00c_2f62_43b9_abb7_3adc5519c894.slice. 
Apr 30 12:42:42.790286 kubelet[2671]: I0430 12:42:42.789622 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d441d00c-2f62-43b9-abb7-3adc5519c894-cilium-config-path\") pod \"cilium-fqh44\" (UID: \"d441d00c-2f62-43b9-abb7-3adc5519c894\") " pod="kube-system/cilium-fqh44" Apr 30 12:42:42.790286 kubelet[2671]: I0430 12:42:42.789685 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-etc-cni-netd\") pod \"cilium-fqh44\" (UID: \"d441d00c-2f62-43b9-abb7-3adc5519c894\") " pod="kube-system/cilium-fqh44" Apr 30 12:42:42.790286 kubelet[2671]: I0430 12:42:42.789714 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-host-proc-sys-net\") pod \"cilium-fqh44\" (UID: \"d441d00c-2f62-43b9-abb7-3adc5519c894\") " pod="kube-system/cilium-fqh44" Apr 30 12:42:42.790286 kubelet[2671]: I0430 12:42:42.789739 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-bpf-maps\") pod \"cilium-fqh44\" (UID: \"d441d00c-2f62-43b9-abb7-3adc5519c894\") " pod="kube-system/cilium-fqh44" Apr 30 12:42:42.790286 kubelet[2671]: I0430 12:42:42.789778 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d441d00c-2f62-43b9-abb7-3adc5519c894-clustermesh-secrets\") pod \"cilium-fqh44\" (UID: \"d441d00c-2f62-43b9-abb7-3adc5519c894\") " pod="kube-system/cilium-fqh44" Apr 30 12:42:42.790286 kubelet[2671]: I0430 12:42:42.789805 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3c8174a6-eda3-43b2-a8f4-e8d270964ad4-kube-proxy\") pod \"kube-proxy-qhdkm\" (UID: \"3c8174a6-eda3-43b2-a8f4-e8d270964ad4\") " pod="kube-system/kube-proxy-qhdkm" Apr 30 12:42:42.790730 kubelet[2671]: I0430 12:42:42.789829 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-cilium-run\") pod \"cilium-fqh44\" (UID: \"d441d00c-2f62-43b9-abb7-3adc5519c894\") " pod="kube-system/cilium-fqh44" Apr 30 12:42:42.790730 kubelet[2671]: I0430 12:42:42.789854 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d441d00c-2f62-43b9-abb7-3adc5519c894-hubble-tls\") pod \"cilium-fqh44\" (UID: \"d441d00c-2f62-43b9-abb7-3adc5519c894\") " pod="kube-system/cilium-fqh44" Apr 30 12:42:42.790730 kubelet[2671]: I0430 12:42:42.789877 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7wxg\" (UniqueName: \"kubernetes.io/projected/3c8174a6-eda3-43b2-a8f4-e8d270964ad4-kube-api-access-v7wxg\") pod \"kube-proxy-qhdkm\" (UID: \"3c8174a6-eda3-43b2-a8f4-e8d270964ad4\") " pod="kube-system/kube-proxy-qhdkm" Apr 30 12:42:42.790730 kubelet[2671]: I0430 12:42:42.789906 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-lib-modules\") pod \"cilium-fqh44\" (UID: \"d441d00c-2f62-43b9-abb7-3adc5519c894\") " pod="kube-system/cilium-fqh44" Apr 30 12:42:42.790730 kubelet[2671]: I0430 12:42:42.789931 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-xtables-lock\") pod \"cilium-fqh44\" (UID: \"d441d00c-2f62-43b9-abb7-3adc5519c894\") " pod="kube-system/cilium-fqh44" Apr 30 12:42:42.790730 kubelet[2671]: I0430 12:42:42.789955 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c8174a6-eda3-43b2-a8f4-e8d270964ad4-lib-modules\") pod \"kube-proxy-qhdkm\" (UID: \"3c8174a6-eda3-43b2-a8f4-e8d270964ad4\") " pod="kube-system/kube-proxy-qhdkm" Apr 30 12:42:42.791165 kubelet[2671]: I0430 12:42:42.789983 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-hostproc\") pod \"cilium-fqh44\" (UID: \"d441d00c-2f62-43b9-abb7-3adc5519c894\") " pod="kube-system/cilium-fqh44" Apr 30 12:42:42.791165 kubelet[2671]: I0430 12:42:42.790011 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-cni-path\") pod \"cilium-fqh44\" (UID: \"d441d00c-2f62-43b9-abb7-3adc5519c894\") " pod="kube-system/cilium-fqh44" Apr 30 12:42:42.791165 kubelet[2671]: I0430 12:42:42.790041 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-host-proc-sys-kernel\") pod \"cilium-fqh44\" (UID: \"d441d00c-2f62-43b9-abb7-3adc5519c894\") " pod="kube-system/cilium-fqh44" Apr 30 12:42:42.791165 kubelet[2671]: I0430 12:42:42.790074 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khf6f\" (UniqueName: \"kubernetes.io/projected/d441d00c-2f62-43b9-abb7-3adc5519c894-kube-api-access-khf6f\") pod \"cilium-fqh44\" (UID: \"d441d00c-2f62-43b9-abb7-3adc5519c894\") " pod="kube-system/cilium-fqh44" Apr 30 12:42:42.791165 kubelet[2671]: I0430 12:42:42.790104 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c8174a6-eda3-43b2-a8f4-e8d270964ad4-xtables-lock\") pod \"kube-proxy-qhdkm\" (UID: \"3c8174a6-eda3-43b2-a8f4-e8d270964ad4\") " pod="kube-system/kube-proxy-qhdkm" Apr 30 12:42:42.791165 kubelet[2671]: I0430 12:42:42.790131 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-cilium-cgroup\") pod \"cilium-fqh44\" (UID: \"d441d00c-2f62-43b9-abb7-3adc5519c894\") " pod="kube-system/cilium-fqh44" Apr 30 12:42:43.034501 containerd[1498]: time="2025-04-30T12:42:43.033405290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qhdkm,Uid:3c8174a6-eda3-43b2-a8f4-e8d270964ad4,Namespace:kube-system,Attempt:0,}" Apr 30 12:42:43.050341 containerd[1498]: time="2025-04-30T12:42:43.050299859Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-fqh44,Uid:d441d00c-2f62-43b9-abb7-3adc5519c894,Namespace:kube-system,Attempt:0,}" Apr 30 12:42:43.077555 containerd[1498]: time="2025-04-30T12:42:43.077205943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:42:43.077555 containerd[1498]: time="2025-04-30T12:42:43.077349424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:42:43.077555 containerd[1498]: time="2025-04-30T12:42:43.077379584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:42:43.081109 containerd[1498]: time="2025-04-30T12:42:43.077730027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:42:43.114617 containerd[1498]: time="2025-04-30T12:42:43.113022014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:42:43.114617 containerd[1498]: time="2025-04-30T12:42:43.113152495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:42:43.114617 containerd[1498]: time="2025-04-30T12:42:43.113169416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:42:43.114818 systemd[1]: Started cri-containerd-021a62a5155ad88d6ddb96cb172e8da4fcb1a23e181f230cd389f72bc039d498.scope - libcontainer container 021a62a5155ad88d6ddb96cb172e8da4fcb1a23e181f230cd389f72bc039d498. Apr 30 12:42:43.117306 containerd[1498]: time="2025-04-30T12:42:43.115616634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:42:43.130788 systemd[1]: Created slice kubepods-besteffort-poda0c6fd2c_a39d_43ba_a1c7_2795e8fdb80f.slice - libcontainer container kubepods-besteffort-poda0c6fd2c_a39d_43ba_a1c7_2795e8fdb80f.slice. Apr 30 12:42:43.157987 systemd[1]: Started cri-containerd-3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33.scope - libcontainer container 3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33. 
Apr 30 12:42:43.179856 containerd[1498]: time="2025-04-30T12:42:43.179759561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qhdkm,Uid:3c8174a6-eda3-43b2-a8f4-e8d270964ad4,Namespace:kube-system,Attempt:0,} returns sandbox id \"021a62a5155ad88d6ddb96cb172e8da4fcb1a23e181f230cd389f72bc039d498\"" Apr 30 12:42:43.187695 containerd[1498]: time="2025-04-30T12:42:43.187517460Z" level=info msg="CreateContainer within sandbox \"021a62a5155ad88d6ddb96cb172e8da4fcb1a23e181f230cd389f72bc039d498\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 12:42:43.192727 kubelet[2671]: I0430 12:42:43.192507 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a0c6fd2c-a39d-43ba-a1c7-2795e8fdb80f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-rzs7l\" (UID: \"a0c6fd2c-a39d-43ba-a1c7-2795e8fdb80f\") " pod="kube-system/cilium-operator-6c4d7847fc-rzs7l" Apr 30 12:42:43.192727 kubelet[2671]: I0430 12:42:43.192561 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wz4m\" (UniqueName: \"kubernetes.io/projected/a0c6fd2c-a39d-43ba-a1c7-2795e8fdb80f-kube-api-access-5wz4m\") pod \"cilium-operator-6c4d7847fc-rzs7l\" (UID: \"a0c6fd2c-a39d-43ba-a1c7-2795e8fdb80f\") " pod="kube-system/cilium-operator-6c4d7847fc-rzs7l" Apr 30 12:42:43.194140 containerd[1498]: time="2025-04-30T12:42:43.194104990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fqh44,Uid:d441d00c-2f62-43b9-abb7-3adc5519c894,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33\"" Apr 30 12:42:43.196202 containerd[1498]: time="2025-04-30T12:42:43.196167445Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 30 12:42:43.208419 containerd[1498]: time="2025-04-30T12:42:43.208347978Z" level=info msg="CreateContainer within sandbox \"021a62a5155ad88d6ddb96cb172e8da4fcb1a23e181f230cd389f72bc039d498\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"54256ee78611b85bc1c4df460c3133921fdb8db94c49606f138dec645d49369d\"" Apr 30 12:42:43.209415 containerd[1498]: time="2025-04-30T12:42:43.209357985Z" level=info msg="StartContainer for \"54256ee78611b85bc1c4df460c3133921fdb8db94c49606f138dec645d49369d\"" Apr 30 12:42:43.236811 systemd[1]: Started cri-containerd-54256ee78611b85bc1c4df460c3133921fdb8db94c49606f138dec645d49369d.scope - libcontainer container 54256ee78611b85bc1c4df460c3133921fdb8db94c49606f138dec645d49369d. Apr 30 12:42:43.271856 containerd[1498]: time="2025-04-30T12:42:43.271809459Z" level=info msg="StartContainer for \"54256ee78611b85bc1c4df460c3133921fdb8db94c49606f138dec645d49369d\" returns successfully" Apr 30 12:42:43.436274 containerd[1498]: time="2025-04-30T12:42:43.435785023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rzs7l,Uid:a0c6fd2c-a39d-43ba-a1c7-2795e8fdb80f,Namespace:kube-system,Attempt:0,}" Apr 30 12:42:43.471736 containerd[1498]: time="2025-04-30T12:42:43.470943130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:42:43.471736 containerd[1498]: time="2025-04-30T12:42:43.471052170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:42:43.471736 containerd[1498]: time="2025-04-30T12:42:43.471081171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:42:43.471736 containerd[1498]: time="2025-04-30T12:42:43.471370973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:42:43.493829 systemd[1]: Started cri-containerd-5423e54cd891e71190a89378635624c96bff6ff4ab180f8c4217b69cfe5e5e2a.scope - libcontainer container 5423e54cd891e71190a89378635624c96bff6ff4ab180f8c4217b69cfe5e5e2a. Apr 30 12:42:43.526885 containerd[1498]: time="2025-04-30T12:42:43.526700552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rzs7l,Uid:a0c6fd2c-a39d-43ba-a1c7-2795e8fdb80f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5423e54cd891e71190a89378635624c96bff6ff4ab180f8c4217b69cfe5e5e2a\"" Apr 30 12:42:43.780472 kubelet[2671]: I0430 12:42:43.780183 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qhdkm" podStartSLOduration=1.780163515 podStartE2EDuration="1.780163515s" podCreationTimestamp="2025-04-30 12:42:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:42:43.765542004 +0000 UTC m=+6.222408741" watchObservedRunningTime="2025-04-30 12:42:43.780163515 +0000 UTC m=+6.237030252" Apr 30 12:42:49.075489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4232152066.mount: Deactivated successfully. Apr 30 12:42:52.467616 containerd[1498]: time="2025-04-30T12:42:52.466452339Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:52.468695 containerd[1498]: time="2025-04-30T12:42:52.468624468Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Apr 30 12:42:52.469103 containerd[1498]: time="2025-04-30T12:42:52.469065910Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:52.471521 containerd[1498]: time="2025-04-30T12:42:52.471481880Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.275273155s" Apr 30 12:42:52.471701 containerd[1498]: time="2025-04-30T12:42:52.471523200Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Apr 30 12:42:52.474789 containerd[1498]: time="2025-04-30T12:42:52.474739174Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 30 12:42:52.476141 containerd[1498]: 
time="2025-04-30T12:42:52.476106820Z" level=info msg="CreateContainer within sandbox \"3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 12:42:52.490687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2101344683.mount: Deactivated successfully. Apr 30 12:42:52.493876 containerd[1498]: time="2025-04-30T12:42:52.493757015Z" level=info msg="CreateContainer within sandbox \"3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f0a461a49903e0bb1cd032514decabd42bd4fe7757ede9ed94ff0c2927f03631\"" Apr 30 12:42:52.495802 containerd[1498]: time="2025-04-30T12:42:52.495761823Z" level=info msg="StartContainer for \"f0a461a49903e0bb1cd032514decabd42bd4fe7757ede9ed94ff0c2927f03631\"" Apr 30 12:42:52.529838 systemd[1]: Started cri-containerd-f0a461a49903e0bb1cd032514decabd42bd4fe7757ede9ed94ff0c2927f03631.scope - libcontainer container f0a461a49903e0bb1cd032514decabd42bd4fe7757ede9ed94ff0c2927f03631. Apr 30 12:42:52.556699 containerd[1498]: time="2025-04-30T12:42:52.556638921Z" level=info msg="StartContainer for \"f0a461a49903e0bb1cd032514decabd42bd4fe7757ede9ed94ff0c2927f03631\" returns successfully" Apr 30 12:42:52.574744 systemd[1]: cri-containerd-f0a461a49903e0bb1cd032514decabd42bd4fe7757ede9ed94ff0c2927f03631.scope: Deactivated successfully. Apr 30 12:42:52.759557 containerd[1498]: time="2025-04-30T12:42:52.759378022Z" level=info msg="shim disconnected" id=f0a461a49903e0bb1cd032514decabd42bd4fe7757ede9ed94ff0c2927f03631 namespace=k8s.io Apr 30 12:42:52.759557 containerd[1498]: time="2025-04-30T12:42:52.759447902Z" level=warning msg="cleaning up after shim disconnected" id=f0a461a49903e0bb1cd032514decabd42bd4fe7757ede9ed94ff0c2927f03631 namespace=k8s.io Apr 30 12:42:52.759557 containerd[1498]: time="2025-04-30T12:42:52.759459142Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:42:53.487570 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0a461a49903e0bb1cd032514decabd42bd4fe7757ede9ed94ff0c2927f03631-rootfs.mount: Deactivated successfully. Apr 30 12:42:53.782215 containerd[1498]: time="2025-04-30T12:42:53.782080835Z" level=info msg="CreateContainer within sandbox \"3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 12:42:53.802966 containerd[1498]: time="2025-04-30T12:42:53.802876317Z" level=info msg="CreateContainer within sandbox \"3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d7880e471ce52aaa4fc3b07fa630890d2aea0b0986caf0682a2bd2764a1b27a5\"" Apr 30 12:42:53.804071 containerd[1498]: time="2025-04-30T12:42:53.803961322Z" level=info msg="StartContainer for \"d7880e471ce52aaa4fc3b07fa630890d2aea0b0986caf0682a2bd2764a1b27a5\"" Apr 30 12:42:53.835950 systemd[1]: Started cri-containerd-d7880e471ce52aaa4fc3b07fa630890d2aea0b0986caf0682a2bd2764a1b27a5.scope - libcontainer container d7880e471ce52aaa4fc3b07fa630890d2aea0b0986caf0682a2bd2764a1b27a5. Apr 30 12:42:53.867312 containerd[1498]: time="2025-04-30T12:42:53.866341770Z" level=info msg="StartContainer for \"d7880e471ce52aaa4fc3b07fa630890d2aea0b0986caf0682a2bd2764a1b27a5\" returns successfully" Apr 30 12:42:53.880774 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Apr 30 12:42:53.881813 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 12:42:53.882120 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 30 12:42:53.889208 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 12:42:53.889613 systemd[1]: cri-containerd-d7880e471ce52aaa4fc3b07fa630890d2aea0b0986caf0682a2bd2764a1b27a5.scope: Deactivated successfully. Apr 30 12:42:53.912653 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 12:42:53.927794 containerd[1498]: time="2025-04-30T12:42:53.927680774Z" level=info msg="shim disconnected" id=d7880e471ce52aaa4fc3b07fa630890d2aea0b0986caf0682a2bd2764a1b27a5 namespace=k8s.io Apr 30 12:42:53.927794 containerd[1498]: time="2025-04-30T12:42:53.927781494Z" level=warning msg="cleaning up after shim disconnected" id=d7880e471ce52aaa4fc3b07fa630890d2aea0b0986caf0682a2bd2764a1b27a5 namespace=k8s.io Apr 30 12:42:53.927794 containerd[1498]: time="2025-04-30T12:42:53.927798454Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:42:54.487793 systemd[1]: run-containerd-runc-k8s.io-d7880e471ce52aaa4fc3b07fa630890d2aea0b0986caf0682a2bd2764a1b27a5-runc.PwhyY5.mount: Deactivated successfully. Apr 30 12:42:54.488387 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7880e471ce52aaa4fc3b07fa630890d2aea0b0986caf0682a2bd2764a1b27a5-rootfs.mount: Deactivated successfully. Apr 30 12:42:54.779255 containerd[1498]: time="2025-04-30T12:42:54.778702887Z" level=info msg="CreateContainer within sandbox \"3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 12:42:54.802010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1444527379.mount: Deactivated successfully. Apr 30 12:42:54.808173 containerd[1498]: time="2025-04-30T12:42:54.808111036Z" level=info msg="CreateContainer within sandbox \"3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"88ea973aac6420ae6f8437f071d2984afe0f3cf393af9aa30e66612b41109982\"" Apr 30 12:42:54.809696 containerd[1498]: time="2025-04-30T12:42:54.809661082Z" level=info msg="StartContainer for \"88ea973aac6420ae6f8437f071d2984afe0f3cf393af9aa30e66612b41109982\"" Apr 30 12:42:54.846960 systemd[1]: Started cri-containerd-88ea973aac6420ae6f8437f071d2984afe0f3cf393af9aa30e66612b41109982.scope - libcontainer container 88ea973aac6420ae6f8437f071d2984afe0f3cf393af9aa30e66612b41109982. Apr 30 12:42:54.880085 containerd[1498]: time="2025-04-30T12:42:54.879964104Z" level=info msg="StartContainer for \"88ea973aac6420ae6f8437f071d2984afe0f3cf393af9aa30e66612b41109982\" returns successfully" Apr 30 12:42:54.884006 systemd[1]: cri-containerd-88ea973aac6420ae6f8437f071d2984afe0f3cf393af9aa30e66612b41109982.scope: Deactivated successfully. 
Apr 30 12:42:54.911325 containerd[1498]: time="2025-04-30T12:42:54.911207621Z" level=info msg="shim disconnected" id=88ea973aac6420ae6f8437f071d2984afe0f3cf393af9aa30e66612b41109982 namespace=k8s.io Apr 30 12:42:54.911325 containerd[1498]: time="2025-04-30T12:42:54.911321541Z" level=warning msg="cleaning up after shim disconnected" id=88ea973aac6420ae6f8437f071d2984afe0f3cf393af9aa30e66612b41109982 namespace=k8s.io Apr 30 12:42:54.911325 containerd[1498]: time="2025-04-30T12:42:54.911329261Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:42:55.490220 systemd[1]: run-containerd-runc-k8s.io-88ea973aac6420ae6f8437f071d2984afe0f3cf393af9aa30e66612b41109982-runc.GhgA3Q.mount: Deactivated successfully. Apr 30 12:42:55.490336 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88ea973aac6420ae6f8437f071d2984afe0f3cf393af9aa30e66612b41109982-rootfs.mount: Deactivated successfully. Apr 30 12:42:55.784777 containerd[1498]: time="2025-04-30T12:42:55.784623737Z" level=info msg="CreateContainer within sandbox \"3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 12:42:55.801546 containerd[1498]: time="2025-04-30T12:42:55.801444156Z" level=info msg="CreateContainer within sandbox \"3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ad7a0163cdf085efdc65d6de857959f8403038a720341f80717edee71e24bcee\"" Apr 30 12:42:55.803170 containerd[1498]: time="2025-04-30T12:42:55.803110681Z" level=info msg="StartContainer for \"ad7a0163cdf085efdc65d6de857959f8403038a720341f80717edee71e24bcee\"" Apr 30 12:42:55.833917 systemd[1]: Started cri-containerd-ad7a0163cdf085efdc65d6de857959f8403038a720341f80717edee71e24bcee.scope - libcontainer container ad7a0163cdf085efdc65d6de857959f8403038a720341f80717edee71e24bcee. Apr 30 12:42:55.859846 systemd[1]: cri-containerd-ad7a0163cdf085efdc65d6de857959f8403038a720341f80717edee71e24bcee.scope: Deactivated successfully. Apr 30 12:42:55.863938 containerd[1498]: time="2025-04-30T12:42:55.863493132Z" level=info msg="StartContainer for \"ad7a0163cdf085efdc65d6de857959f8403038a720341f80717edee71e24bcee\" returns successfully" Apr 30 12:42:55.893816 containerd[1498]: time="2025-04-30T12:42:55.893756718Z" level=info msg="shim disconnected" id=ad7a0163cdf085efdc65d6de857959f8403038a720341f80717edee71e24bcee namespace=k8s.io Apr 30 12:42:55.894195 containerd[1498]: time="2025-04-30T12:42:55.894022319Z" level=warning msg="cleaning up after shim disconnected" id=ad7a0163cdf085efdc65d6de857959f8403038a720341f80717edee71e24bcee namespace=k8s.io Apr 30 12:42:55.894195 containerd[1498]: time="2025-04-30T12:42:55.894038559Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:42:56.491189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad7a0163cdf085efdc65d6de857959f8403038a720341f80717edee71e24bcee-rootfs.mount: Deactivated successfully. 
Apr 30 12:42:56.791564 containerd[1498]: time="2025-04-30T12:42:56.791317405Z" level=info msg="CreateContainer within sandbox \"3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 12:42:56.815138 containerd[1498]: time="2025-04-30T12:42:56.815045563Z" level=info msg="CreateContainer within sandbox \"3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"aa4c2e7d99b929eca3e304c760696d2749a6e189fd090142ebe2b186f29cfb1e\"" Apr 30 12:42:56.816347 containerd[1498]: time="2025-04-30T12:42:56.815675885Z" level=info msg="StartContainer for \"aa4c2e7d99b929eca3e304c760696d2749a6e189fd090142ebe2b186f29cfb1e\"" Apr 30 12:42:56.846951 systemd[1]: Started cri-containerd-aa4c2e7d99b929eca3e304c760696d2749a6e189fd090142ebe2b186f29cfb1e.scope - libcontainer container aa4c2e7d99b929eca3e304c760696d2749a6e189fd090142ebe2b186f29cfb1e. Apr 30 12:42:56.886203 containerd[1498]: time="2025-04-30T12:42:56.885598914Z" level=info msg="StartContainer for \"aa4c2e7d99b929eca3e304c760696d2749a6e189fd090142ebe2b186f29cfb1e\" returns successfully" Apr 30 12:42:56.993716 kubelet[2671]: I0430 12:42:56.992057 2671 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Apr 30 12:42:57.037308 systemd[1]: Created slice kubepods-burstable-pod665314ca_fdfb_4b8f_a987_f4d4e4ca9f74.slice - libcontainer container kubepods-burstable-pod665314ca_fdfb_4b8f_a987_f4d4e4ca9f74.slice. Apr 30 12:42:57.046491 systemd[1]: Created slice kubepods-burstable-pod3073d321_b54a_4218_987f_1791f6c84e49.slice - libcontainer container kubepods-burstable-pod3073d321_b54a_4218_987f_1791f6c84e49.slice. Apr 30 12:42:57.090610 kubelet[2671]: I0430 12:42:57.090560 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3073d321-b54a-4218-987f-1791f6c84e49-config-volume\") pod \"coredns-668d6bf9bc-9746k\" (UID: \"3073d321-b54a-4218-987f-1791f6c84e49\") " pod="kube-system/coredns-668d6bf9bc-9746k" Apr 30 12:42:57.090896 kubelet[2671]: I0430 12:42:57.090881 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/665314ca-fdfb-4b8f-a987-f4d4e4ca9f74-config-volume\") pod \"coredns-668d6bf9bc-ml5pz\" (UID: \"665314ca-fdfb-4b8f-a987-f4d4e4ca9f74\") " pod="kube-system/coredns-668d6bf9bc-ml5pz" Apr 30 12:42:57.091034 kubelet[2671]: I0430 12:42:57.091012 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6862\" (UniqueName: \"kubernetes.io/projected/665314ca-fdfb-4b8f-a987-f4d4e4ca9f74-kube-api-access-x6862\") pod \"coredns-668d6bf9bc-ml5pz\" (UID: \"665314ca-fdfb-4b8f-a987-f4d4e4ca9f74\") " pod="kube-system/coredns-668d6bf9bc-ml5pz" Apr 30 12:42:57.091141 kubelet[2671]: I0430 12:42:57.091130 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5ktf\" (UniqueName: \"kubernetes.io/projected/3073d321-b54a-4218-987f-1791f6c84e49-kube-api-access-f5ktf\") pod \"coredns-668d6bf9bc-9746k\" (UID: \"3073d321-b54a-4218-987f-1791f6c84e49\") " pod="kube-system/coredns-668d6bf9bc-9746k" Apr 30 12:42:57.343505 containerd[1498]: time="2025-04-30T12:42:57.343381065Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-ml5pz,Uid:665314ca-fdfb-4b8f-a987-f4d4e4ca9f74,Namespace:kube-system,Attempt:0,}" Apr 30 12:42:57.354931 containerd[1498]: time="2025-04-30T12:42:57.354882220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9746k,Uid:3073d321-b54a-4218-987f-1791f6c84e49,Namespace:kube-system,Attempt:0,}" Apr 30 12:42:58.782440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2669245533.mount: Deactivated successfully. Apr 30 12:42:59.202427 containerd[1498]: time="2025-04-30T12:42:59.202354631Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:59.204114 containerd[1498]: time="2025-04-30T12:42:59.204035235Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Apr 30 12:42:59.204840 containerd[1498]: time="2025-04-30T12:42:59.204434956Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:59.206367 containerd[1498]: time="2025-04-30T12:42:59.206217801Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 6.731434707s" Apr 30 12:42:59.206367 containerd[1498]: time="2025-04-30T12:42:59.206260841Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Apr 30 12:42:59.210880 containerd[1498]: time="2025-04-30T12:42:59.210737773Z" level=info msg="CreateContainer within sandbox \"5423e54cd891e71190a89378635624c96bff6ff4ab180f8c4217b69cfe5e5e2a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 30 12:42:59.228346 containerd[1498]: time="2025-04-30T12:42:59.228294221Z" level=info msg="CreateContainer within sandbox \"5423e54cd891e71190a89378635624c96bff6ff4ab180f8c4217b69cfe5e5e2a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d170a89c167e549d5ff5eb8f4cf997e5faa0c12ab1f8574d6931af19b7ecb945\"" Apr 30 12:42:59.229831 containerd[1498]: time="2025-04-30T12:42:59.229793505Z" level=info msg="StartContainer for \"d170a89c167e549d5ff5eb8f4cf997e5faa0c12ab1f8574d6931af19b7ecb945\"" Apr 30 12:42:59.257809 systemd[1]: Started cri-containerd-d170a89c167e549d5ff5eb8f4cf997e5faa0c12ab1f8574d6931af19b7ecb945.scope - libcontainer container d170a89c167e549d5ff5eb8f4cf997e5faa0c12ab1f8574d6931af19b7ecb945. 
Apr 30 12:42:59.285846 containerd[1498]: time="2025-04-30T12:42:59.285602816Z" level=info msg="StartContainer for \"d170a89c167e549d5ff5eb8f4cf997e5faa0c12ab1f8574d6931af19b7ecb945\" returns successfully" Apr 30 12:42:59.841604 kubelet[2671]: I0430 12:42:59.841531 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fqh44" podStartSLOduration=8.563538909 podStartE2EDuration="17.841511077s" podCreationTimestamp="2025-04-30 12:42:42 +0000 UTC" firstStartedPulling="2025-04-30 12:42:43.19542608 +0000 UTC m=+5.652292777" lastFinishedPulling="2025-04-30 12:42:52.473398208 +0000 UTC m=+14.930264945" observedRunningTime="2025-04-30 12:42:57.823046579 +0000 UTC m=+20.279913356" watchObservedRunningTime="2025-04-30 12:42:59.841511077 +0000 UTC m=+22.298377774" Apr 30 12:43:03.097684 systemd-networkd[1375]: cilium_host: Link UP Apr 30 12:43:03.099056 systemd-networkd[1375]: cilium_net: Link UP Apr 30 12:43:03.099354 systemd-networkd[1375]: cilium_net: Gained carrier Apr 30 12:43:03.099510 systemd-networkd[1375]: cilium_host: Gained carrier Apr 30 12:43:03.123849 systemd-networkd[1375]: cilium_host: Gained IPv6LL Apr 30 12:43:03.217171 systemd-networkd[1375]: cilium_vxlan: Link UP Apr 30 12:43:03.217180 systemd-networkd[1375]: cilium_vxlan: Gained carrier Apr 30 12:43:03.512864 kernel: NET: Registered PF_ALG protocol family Apr 30 12:43:04.039277 systemd-networkd[1375]: cilium_net: Gained IPv6LL Apr 30 12:43:04.230480 systemd-networkd[1375]: lxc_health: Link UP Apr 30 12:43:04.232057 systemd-networkd[1375]: lxc_health: Gained carrier Apr 30 12:43:04.358811 systemd-networkd[1375]: cilium_vxlan: Gained IPv6LL Apr 30 12:43:04.411914 kernel: eth0: renamed from tmpb498e Apr 30 12:43:04.410414 systemd-networkd[1375]: lxc95566374b0f7: Link UP Apr 30 12:43:04.419119 systemd-networkd[1375]: lxc95566374b0f7: Gained carrier Apr 30 12:43:04.448012 kernel: eth0: renamed from tmp9eac8 Apr 30 12:43:04.453334 systemd-networkd[1375]: lxc5114537c4b75: Link UP Apr 30 12:43:04.459814 systemd-networkd[1375]: lxc5114537c4b75: Gained carrier Apr 30 12:43:05.083291 kubelet[2671]: I0430 12:43:05.083190 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-rzs7l" podStartSLOduration=6.403935555 podStartE2EDuration="22.083172753s" podCreationTimestamp="2025-04-30 12:42:43 +0000 UTC" firstStartedPulling="2025-04-30 12:42:43.528704688 +0000 UTC m=+5.985571425" lastFinishedPulling="2025-04-30 12:42:59.207941886 +0000 UTC m=+21.664808623" observedRunningTime="2025-04-30 12:42:59.842854921 +0000 UTC m=+22.299721658" watchObservedRunningTime="2025-04-30 12:43:05.083172753 +0000 UTC m=+27.540039450" Apr 30 12:43:05.254821 systemd-networkd[1375]: lxc_health: Gained IPv6LL Apr 30 12:43:05.446844 systemd-networkd[1375]: lxc95566374b0f7: Gained IPv6LL Apr 30 12:43:06.343778 systemd-networkd[1375]: lxc5114537c4b75: Gained IPv6LL Apr 30 12:43:06.787295 kubelet[2671]: I0430 12:43:06.787237 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 12:43:08.459645 containerd[1498]: time="2025-04-30T12:43:08.457342177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:43:08.460735 containerd[1498]: time="2025-04-30T12:43:08.457478497Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:43:08.460855 containerd[1498]: time="2025-04-30T12:43:08.460792902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:43:08.460949 containerd[1498]: time="2025-04-30T12:43:08.460915863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:43:08.498165 systemd[1]: Started cri-containerd-b498e8b8bacfd5c79b9d69c5acf843d7bdd6d9e7f8d0fb5f8aca897d0fbd6be8.scope - libcontainer container b498e8b8bacfd5c79b9d69c5acf843d7bdd6d9e7f8d0fb5f8aca897d0fbd6be8. Apr 30 12:43:08.521471 containerd[1498]: time="2025-04-30T12:43:08.520379592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:43:08.522770 containerd[1498]: time="2025-04-30T12:43:08.521146954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:43:08.522770 containerd[1498]: time="2025-04-30T12:43:08.521166994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:43:08.522770 containerd[1498]: time="2025-04-30T12:43:08.521273354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:43:08.551847 systemd[1]: Started cri-containerd-9eac81c04689a61db3f1fc593865999e7dfac7fed1e5f0825a0fde20bfdd7d2a.scope - libcontainer container 9eac81c04689a61db3f1fc593865999e7dfac7fed1e5f0825a0fde20bfdd7d2a. Apr 30 12:43:08.605176 containerd[1498]: time="2025-04-30T12:43:08.605135760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ml5pz,Uid:665314ca-fdfb-4b8f-a987-f4d4e4ca9f74,Namespace:kube-system,Attempt:0,} returns sandbox id \"b498e8b8bacfd5c79b9d69c5acf843d7bdd6d9e7f8d0fb5f8aca897d0fbd6be8\"" Apr 30 12:43:08.611309 containerd[1498]: time="2025-04-30T12:43:08.611261930Z" level=info msg="CreateContainer within sandbox \"b498e8b8bacfd5c79b9d69c5acf843d7bdd6d9e7f8d0fb5f8aca897d0fbd6be8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 12:43:08.616483 containerd[1498]: time="2025-04-30T12:43:08.616416257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9746k,Uid:3073d321-b54a-4218-987f-1791f6c84e49,Namespace:kube-system,Attempt:0,} returns sandbox id \"9eac81c04689a61db3f1fc593865999e7dfac7fed1e5f0825a0fde20bfdd7d2a\"" Apr 30 12:43:08.621019 containerd[1498]: time="2025-04-30T12:43:08.620970664Z" level=info msg="CreateContainer within sandbox \"9eac81c04689a61db3f1fc593865999e7dfac7fed1e5f0825a0fde20bfdd7d2a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 12:43:08.635656 containerd[1498]: time="2025-04-30T12:43:08.635532366Z" level=info msg="CreateContainer within sandbox \"b498e8b8bacfd5c79b9d69c5acf843d7bdd6d9e7f8d0fb5f8aca897d0fbd6be8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1f0aa65f5115ec3fe01c7ed53f6e12445467741121bb6cb69df3e031c889c76a\"" Apr 30 12:43:08.638640 containerd[1498]: time="2025-04-30T12:43:08.638414571Z" level=info msg="StartContainer for \"1f0aa65f5115ec3fe01c7ed53f6e12445467741121bb6cb69df3e031c889c76a\"" Apr 30 12:43:08.645411 containerd[1498]: time="2025-04-30T12:43:08.645295621Z" level=info msg="CreateContainer within sandbox 
\"9eac81c04689a61db3f1fc593865999e7dfac7fed1e5f0825a0fde20bfdd7d2a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"76efce01574f07891033f814ce9191346e793a17b6726e55f6ac1bce80041f32\"" Apr 30 12:43:08.647613 containerd[1498]: time="2025-04-30T12:43:08.646657223Z" level=info msg="StartContainer for \"76efce01574f07891033f814ce9191346e793a17b6726e55f6ac1bce80041f32\"" Apr 30 12:43:08.682734 systemd[1]: Started cri-containerd-1f0aa65f5115ec3fe01c7ed53f6e12445467741121bb6cb69df3e031c889c76a.scope - libcontainer container 1f0aa65f5115ec3fe01c7ed53f6e12445467741121bb6cb69df3e031c889c76a. Apr 30 12:43:08.688876 systemd[1]: Started cri-containerd-76efce01574f07891033f814ce9191346e793a17b6726e55f6ac1bce80041f32.scope - libcontainer container 76efce01574f07891033f814ce9191346e793a17b6726e55f6ac1bce80041f32. Apr 30 12:43:08.735542 containerd[1498]: time="2025-04-30T12:43:08.735438877Z" level=info msg="StartContainer for \"1f0aa65f5115ec3fe01c7ed53f6e12445467741121bb6cb69df3e031c889c76a\" returns successfully" Apr 30 12:43:08.740756 containerd[1498]: time="2025-04-30T12:43:08.740715645Z" level=info msg="StartContainer for \"76efce01574f07891033f814ce9191346e793a17b6726e55f6ac1bce80041f32\" returns successfully" Apr 30 12:43:08.842450 kubelet[2671]: I0430 12:43:08.842362 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9746k" podStartSLOduration=25.842343639 podStartE2EDuration="25.842343639s" podCreationTimestamp="2025-04-30 12:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:43:08.841100277 +0000 UTC m=+31.297966974" watchObservedRunningTime="2025-04-30 12:43:08.842343639 +0000 UTC m=+31.299210336" Apr 30 12:43:08.864192 kubelet[2671]: I0430 12:43:08.864121 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ml5pz" podStartSLOduration=25.864104032 podStartE2EDuration="25.864104032s" podCreationTimestamp="2025-04-30 12:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:43:08.863757271 +0000 UTC m=+31.320624008" watchObservedRunningTime="2025-04-30 12:43:08.864104032 +0000 UTC m=+31.320970769" Apr 30 12:45:09.078019 systemd[1]: Started sshd@7-91.99.82.124:22-139.178.89.65:48772.service - OpenSSH per-connection server daemon (139.178.89.65:48772). Apr 30 12:45:10.064567 sshd[4067]: Accepted publickey for core from 139.178.89.65 port 48772 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:45:10.066421 sshd-session[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:45:10.071451 systemd-logind[1472]: New session 8 of user core. Apr 30 12:45:10.080956 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 12:45:10.844325 sshd[4069]: Connection closed by 139.178.89.65 port 48772 Apr 30 12:45:10.845394 sshd-session[4067]: pam_unix(sshd:session): session closed for user core Apr 30 12:45:10.850452 systemd[1]: sshd@7-91.99.82.124:22-139.178.89.65:48772.service: Deactivated successfully. Apr 30 12:45:10.852824 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 12:45:10.854004 systemd-logind[1472]: Session 8 logged out. Waiting for processes to exit. Apr 30 12:45:10.855074 systemd-logind[1472]: Removed session 8. 
Apr 30 12:45:16.022052 systemd[1]: Started sshd@8-91.99.82.124:22-139.178.89.65:48780.service - OpenSSH per-connection server daemon (139.178.89.65:48780). Apr 30 12:45:17.005925 sshd[4085]: Accepted publickey for core from 139.178.89.65 port 48780 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:45:17.007956 sshd-session[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:45:17.013370 systemd-logind[1472]: New session 9 of user core. Apr 30 12:45:17.020855 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 12:45:17.762944 sshd[4087]: Connection closed by 139.178.89.65 port 48780 Apr 30 12:45:17.763976 sshd-session[4085]: pam_unix(sshd:session): session closed for user core Apr 30 12:45:17.769986 systemd[1]: sshd@8-91.99.82.124:22-139.178.89.65:48780.service: Deactivated successfully. Apr 30 12:45:17.774915 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 12:45:17.776240 systemd-logind[1472]: Session 9 logged out. Waiting for processes to exit. Apr 30 12:45:17.779358 systemd-logind[1472]: Removed session 9. Apr 30 12:45:22.943078 systemd[1]: Started sshd@9-91.99.82.124:22-139.178.89.65:54890.service - OpenSSH per-connection server daemon (139.178.89.65:54890). Apr 30 12:45:23.935127 sshd[4100]: Accepted publickey for core from 139.178.89.65 port 54890 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:45:23.937153 sshd-session[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:45:23.943554 systemd-logind[1472]: New session 10 of user core. Apr 30 12:45:23.946784 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 12:45:24.688044 sshd[4102]: Connection closed by 139.178.89.65 port 54890 Apr 30 12:45:24.688971 sshd-session[4100]: pam_unix(sshd:session): session closed for user core Apr 30 12:45:24.694087 systemd[1]: sshd@9-91.99.82.124:22-139.178.89.65:54890.service: Deactivated successfully. Apr 30 12:45:24.696096 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 12:45:24.697075 systemd-logind[1472]: Session 10 logged out. Waiting for processes to exit. Apr 30 12:45:24.698305 systemd-logind[1472]: Removed session 10. Apr 30 12:45:29.865106 systemd[1]: Started sshd@10-91.99.82.124:22-139.178.89.65:35762.service - OpenSSH per-connection server daemon (139.178.89.65:35762). Apr 30 12:45:30.848366 sshd[4115]: Accepted publickey for core from 139.178.89.65 port 35762 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:45:30.850370 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:45:30.856265 systemd-logind[1472]: New session 11 of user core. Apr 30 12:45:30.858776 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 12:45:31.596565 sshd[4117]: Connection closed by 139.178.89.65 port 35762 Apr 30 12:45:31.595402 sshd-session[4115]: pam_unix(sshd:session): session closed for user core Apr 30 12:45:31.600484 systemd-logind[1472]: Session 11 logged out. Waiting for processes to exit. Apr 30 12:45:31.602168 systemd[1]: sshd@10-91.99.82.124:22-139.178.89.65:35762.service: Deactivated successfully. Apr 30 12:45:31.605123 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 12:45:31.606767 systemd-logind[1472]: Removed session 11. Apr 30 12:45:31.773021 systemd[1]: Started sshd@11-91.99.82.124:22-139.178.89.65:35764.service - OpenSSH per-connection server daemon (139.178.89.65:35764). 
Apr 30 12:45:32.757630 sshd[4129]: Accepted publickey for core from 139.178.89.65 port 35764 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:45:32.760091 sshd-session[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:45:32.765725 systemd-logind[1472]: New session 12 of user core. Apr 30 12:45:32.772009 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 12:45:33.556685 sshd[4131]: Connection closed by 139.178.89.65 port 35764 Apr 30 12:45:33.557219 sshd-session[4129]: pam_unix(sshd:session): session closed for user core Apr 30 12:45:33.563466 systemd[1]: sshd@11-91.99.82.124:22-139.178.89.65:35764.service: Deactivated successfully. Apr 30 12:45:33.565937 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 12:45:33.566907 systemd-logind[1472]: Session 12 logged out. Waiting for processes to exit. Apr 30 12:45:33.569058 systemd-logind[1472]: Removed session 12. Apr 30 12:45:33.745979 systemd[1]: Started sshd@12-91.99.82.124:22-139.178.89.65:35774.service - OpenSSH per-connection server daemon (139.178.89.65:35774). Apr 30 12:45:34.742887 sshd[4141]: Accepted publickey for core from 139.178.89.65 port 35774 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:45:34.745170 sshd-session[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:45:34.750850 systemd-logind[1472]: New session 13 of user core. Apr 30 12:45:34.757894 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 12:45:35.506154 sshd[4143]: Connection closed by 139.178.89.65 port 35774 Apr 30 12:45:35.507375 sshd-session[4141]: pam_unix(sshd:session): session closed for user core Apr 30 12:45:35.512811 systemd[1]: sshd@12-91.99.82.124:22-139.178.89.65:35774.service: Deactivated successfully. Apr 30 12:45:35.515529 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 12:45:35.517347 systemd-logind[1472]: Session 13 logged out. Waiting for processes to exit. Apr 30 12:45:35.519003 systemd-logind[1472]: Removed session 13. Apr 30 12:45:40.687472 systemd[1]: Started sshd@13-91.99.82.124:22-139.178.89.65:39312.service - OpenSSH per-connection server daemon (139.178.89.65:39312). Apr 30 12:45:41.678401 sshd[4157]: Accepted publickey for core from 139.178.89.65 port 39312 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:45:41.680389 sshd-session[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:45:41.686719 systemd-logind[1472]: New session 14 of user core. Apr 30 12:45:41.693995 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 12:45:42.434787 sshd[4159]: Connection closed by 139.178.89.65 port 39312 Apr 30 12:45:42.434675 sshd-session[4157]: pam_unix(sshd:session): session closed for user core Apr 30 12:45:42.439964 systemd-logind[1472]: Session 14 logged out. Waiting for processes to exit. Apr 30 12:45:42.440090 systemd[1]: sshd@13-91.99.82.124:22-139.178.89.65:39312.service: Deactivated successfully. Apr 30 12:45:42.442810 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 12:45:42.444769 systemd-logind[1472]: Removed session 14. Apr 30 12:45:42.613060 systemd[1]: Started sshd@14-91.99.82.124:22-139.178.89.65:39328.service - OpenSSH per-connection server daemon (139.178.89.65:39328). 
Apr 30 12:45:43.604181 sshd[4170]: Accepted publickey for core from 139.178.89.65 port 39328 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:45:43.606152 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:45:43.612991 systemd-logind[1472]: New session 15 of user core. Apr 30 12:45:43.622918 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 12:45:44.411143 sshd[4174]: Connection closed by 139.178.89.65 port 39328 Apr 30 12:45:44.412158 sshd-session[4170]: pam_unix(sshd:session): session closed for user core Apr 30 12:45:44.416473 systemd[1]: sshd@14-91.99.82.124:22-139.178.89.65:39328.service: Deactivated successfully. Apr 30 12:45:44.419258 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 12:45:44.420835 systemd-logind[1472]: Session 15 logged out. Waiting for processes to exit. Apr 30 12:45:44.422034 systemd-logind[1472]: Removed session 15. Apr 30 12:45:44.592038 systemd[1]: Started sshd@15-91.99.82.124:22-139.178.89.65:39332.service - OpenSSH per-connection server daemon (139.178.89.65:39332). Apr 30 12:45:45.583897 sshd[4183]: Accepted publickey for core from 139.178.89.65 port 39332 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:45:45.585527 sshd-session[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:45:45.590756 systemd-logind[1472]: New session 16 of user core. Apr 30 12:45:45.599898 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 12:45:47.271863 sshd[4185]: Connection closed by 139.178.89.65 port 39332 Apr 30 12:45:47.272767 sshd-session[4183]: pam_unix(sshd:session): session closed for user core Apr 30 12:45:47.278118 systemd[1]: sshd@15-91.99.82.124:22-139.178.89.65:39332.service: Deactivated successfully. Apr 30 12:45:47.280614 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 12:45:47.284474 systemd-logind[1472]: Session 16 logged out. Waiting for processes to exit. Apr 30 12:45:47.285719 systemd-logind[1472]: Removed session 16. Apr 30 12:45:47.457149 systemd[1]: Started sshd@16-91.99.82.124:22-139.178.89.65:52208.service - OpenSSH per-connection server daemon (139.178.89.65:52208). Apr 30 12:45:48.444659 sshd[4202]: Accepted publickey for core from 139.178.89.65 port 52208 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:45:48.446881 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:45:48.454114 systemd-logind[1472]: New session 17 of user core. Apr 30 12:45:48.462935 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 12:45:49.324939 sshd[4204]: Connection closed by 139.178.89.65 port 52208 Apr 30 12:45:49.324038 sshd-session[4202]: pam_unix(sshd:session): session closed for user core Apr 30 12:45:49.331395 systemd[1]: sshd@16-91.99.82.124:22-139.178.89.65:52208.service: Deactivated successfully. Apr 30 12:45:49.334088 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 12:45:49.335106 systemd-logind[1472]: Session 17 logged out. Waiting for processes to exit. Apr 30 12:45:49.336316 systemd-logind[1472]: Removed session 17. Apr 30 12:45:49.502410 systemd[1]: Started sshd@17-91.99.82.124:22-139.178.89.65:52214.service - OpenSSH per-connection server daemon (139.178.89.65:52214). 
Apr 30 12:45:50.503546 sshd[4214]: Accepted publickey for core from 139.178.89.65 port 52214 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:45:50.506691 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:45:50.512537 systemd-logind[1472]: New session 18 of user core. Apr 30 12:45:50.515765 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 12:45:51.259662 sshd[4216]: Connection closed by 139.178.89.65 port 52214 Apr 30 12:45:51.260507 sshd-session[4214]: pam_unix(sshd:session): session closed for user core Apr 30 12:45:51.266201 systemd[1]: sshd@17-91.99.82.124:22-139.178.89.65:52214.service: Deactivated successfully. Apr 30 12:45:51.269710 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 12:45:51.271156 systemd-logind[1472]: Session 18 logged out. Waiting for processes to exit. Apr 30 12:45:51.274510 systemd-logind[1472]: Removed session 18. Apr 30 12:45:56.437080 systemd[1]: Started sshd@18-91.99.82.124:22-139.178.89.65:52222.service - OpenSSH per-connection server daemon (139.178.89.65:52222). Apr 30 12:45:57.442778 sshd[4229]: Accepted publickey for core from 139.178.89.65 port 52222 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:45:57.445021 sshd-session[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:45:57.452559 systemd-logind[1472]: New session 19 of user core. Apr 30 12:45:57.459174 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 12:45:58.199508 sshd[4231]: Connection closed by 139.178.89.65 port 52222 Apr 30 12:45:58.199641 sshd-session[4229]: pam_unix(sshd:session): session closed for user core Apr 30 12:45:58.204261 systemd[1]: sshd@18-91.99.82.124:22-139.178.89.65:52222.service: Deactivated successfully. Apr 30 12:45:58.206230 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 12:45:58.207479 systemd-logind[1472]: Session 19 logged out. Waiting for processes to exit. Apr 30 12:45:58.209050 systemd-logind[1472]: Removed session 19. Apr 30 12:46:03.383175 systemd[1]: Started sshd@19-91.99.82.124:22-139.178.89.65:34892.service - OpenSSH per-connection server daemon (139.178.89.65:34892). Apr 30 12:46:04.384307 sshd[4242]: Accepted publickey for core from 139.178.89.65 port 34892 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:46:04.386248 sshd-session[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:46:04.391891 systemd-logind[1472]: New session 20 of user core. Apr 30 12:46:04.397781 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 30 12:46:05.140057 sshd[4244]: Connection closed by 139.178.89.65 port 34892 Apr 30 12:46:05.140844 sshd-session[4242]: pam_unix(sshd:session): session closed for user core Apr 30 12:46:05.144917 systemd-logind[1472]: Session 20 logged out. Waiting for processes to exit. Apr 30 12:46:05.145187 systemd[1]: sshd@19-91.99.82.124:22-139.178.89.65:34892.service: Deactivated successfully. Apr 30 12:46:05.148028 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 12:46:05.151414 systemd-logind[1472]: Removed session 20. Apr 30 12:46:05.319663 systemd[1]: Started sshd@20-91.99.82.124:22-139.178.89.65:34908.service - OpenSSH per-connection server daemon (139.178.89.65:34908). 
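The long stretch above is a repeating SSH pattern: sshd accepts the publickey for core, pam_unix opens the session, systemd-logind assigns a numbered session backed by a session-N.scope unit, and the same pieces are torn down when the client disconnects. If one wanted rough per-session durations from a saved copy of this journal, pairing the logind "New session" and "Removed session" lines is enough. The sketch below assumes a plain-text export at the hypothetical path journal.txt, one entry per line, all within the same year.

    import re
    from datetime import datetime

    STAMP = r"([A-Z][a-z]{2} \d{2} \d{2}:\d{2}:\d{2}\.\d+)"
    FMT = "%b %d %H:%M:%S.%f"
    opened, removed = {}, {}

    with open("journal.txt") as fh:
        for line in fh:
            if m := re.search(STAMP + r".*New session (\d+) of user core", line):
                opened[m.group(2)] = datetime.strptime(m.group(1), FMT)
            if m := re.search(STAMP + r".*Removed session (\d+)\.", line):
                removed[m.group(2)] = datetime.strptime(m.group(1), FMT)

    for sid, start in sorted(opened.items(), key=lambda kv: int(kv[0])):
        if sid in removed:
            print(f"session {sid}: {(removed[sid] - start).total_seconds():.1f}s")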
Apr 30 12:46:06.310106 sshd[4256]: Accepted publickey for core from 139.178.89.65 port 34908 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:46:06.312128 sshd-session[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:46:06.322249 systemd-logind[1472]: New session 21 of user core. Apr 30 12:46:06.324935 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 30 12:46:08.667355 containerd[1498]: time="2025-04-30T12:46:08.667135631Z" level=info msg="StopContainer for \"d170a89c167e549d5ff5eb8f4cf997e5faa0c12ab1f8574d6931af19b7ecb945\" with timeout 30 (s)" Apr 30 12:46:08.670490 containerd[1498]: time="2025-04-30T12:46:08.669855940Z" level=info msg="Stop container \"d170a89c167e549d5ff5eb8f4cf997e5faa0c12ab1f8574d6931af19b7ecb945\" with signal terminated" Apr 30 12:46:08.690356 systemd[1]: cri-containerd-d170a89c167e549d5ff5eb8f4cf997e5faa0c12ab1f8574d6931af19b7ecb945.scope: Deactivated successfully. Apr 30 12:46:08.707895 containerd[1498]: time="2025-04-30T12:46:08.707792020Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 12:46:08.727429 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d170a89c167e549d5ff5eb8f4cf997e5faa0c12ab1f8574d6931af19b7ecb945-rootfs.mount: Deactivated successfully. Apr 30 12:46:08.729909 containerd[1498]: time="2025-04-30T12:46:08.729843453Z" level=info msg="StopContainer for \"aa4c2e7d99b929eca3e304c760696d2749a6e189fd090142ebe2b186f29cfb1e\" with timeout 2 (s)" Apr 30 12:46:08.730334 containerd[1498]: time="2025-04-30T12:46:08.730309218Z" level=info msg="Stop container \"aa4c2e7d99b929eca3e304c760696d2749a6e189fd090142ebe2b186f29cfb1e\" with signal terminated" Apr 30 12:46:08.737382 containerd[1498]: time="2025-04-30T12:46:08.736486643Z" level=info msg="shim disconnected" id=d170a89c167e549d5ff5eb8f4cf997e5faa0c12ab1f8574d6931af19b7ecb945 namespace=k8s.io Apr 30 12:46:08.737382 containerd[1498]: time="2025-04-30T12:46:08.737328452Z" level=warning msg="cleaning up after shim disconnected" id=d170a89c167e549d5ff5eb8f4cf997e5faa0c12ab1f8574d6931af19b7ecb945 namespace=k8s.io Apr 30 12:46:08.737382 containerd[1498]: time="2025-04-30T12:46:08.737343092Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:46:08.741454 systemd-networkd[1375]: lxc_health: Link DOWN Apr 30 12:46:08.741462 systemd-networkd[1375]: lxc_health: Lost carrier Apr 30 12:46:08.760720 systemd[1]: cri-containerd-aa4c2e7d99b929eca3e304c760696d2749a6e189fd090142ebe2b186f29cfb1e.scope: Deactivated successfully. Apr 30 12:46:08.761122 systemd[1]: cri-containerd-aa4c2e7d99b929eca3e304c760696d2749a6e189fd090142ebe2b186f29cfb1e.scope: Consumed 7.412s CPU time, 124.6M memory peak, 136K read from disk, 12.9M written to disk. 
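The two StopContainer calls above carry different grace periods: 30 seconds for the cilium-operator container and 2 seconds for the cilium-agent. In each case the runtime first delivers SIGTERM ("with signal terminated") and would only escalate to SIGKILL if the process outlived its timeout; here both scopes deactivate well inside their windows. The snippet below sketches that generic TERM-then-KILL pattern in plain Python; it is not containerd's implementation.

    import os
    import signal
    import time

    def stop_with_grace(pid: int, timeout_s: float) -> None:
        """Send SIGTERM, wait up to timeout_s for exit, then SIGKILL (sketch only)."""
        os.kill(pid, signal.SIGTERM)           # "Stop container ... with signal terminated"
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            try:
                os.kill(pid, 0)                # probe whether the process still exists
            except ProcessLookupError:
                return                         # exited within the grace period
            time.sleep(0.1)
        try:
            os.kill(pid, signal.SIGKILL)       # grace period expired; force-kill
        except ProcessLookupError:
            pass                               # raced with a late exit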
Apr 30 12:46:08.773490 containerd[1498]: time="2025-04-30T12:46:08.772991148Z" level=info msg="StopContainer for \"d170a89c167e549d5ff5eb8f4cf997e5faa0c12ab1f8574d6931af19b7ecb945\" returns successfully" Apr 30 12:46:08.774384 containerd[1498]: time="2025-04-30T12:46:08.774208721Z" level=info msg="StopPodSandbox for \"5423e54cd891e71190a89378635624c96bff6ff4ab180f8c4217b69cfe5e5e2a\"" Apr 30 12:46:08.774384 containerd[1498]: time="2025-04-30T12:46:08.774253281Z" level=info msg="Container to stop \"d170a89c167e549d5ff5eb8f4cf997e5faa0c12ab1f8574d6931af19b7ecb945\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 12:46:08.776125 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5423e54cd891e71190a89378635624c96bff6ff4ab180f8c4217b69cfe5e5e2a-shm.mount: Deactivated successfully. Apr 30 12:46:08.787771 systemd[1]: cri-containerd-5423e54cd891e71190a89378635624c96bff6ff4ab180f8c4217b69cfe5e5e2a.scope: Deactivated successfully. Apr 30 12:46:08.802220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa4c2e7d99b929eca3e304c760696d2749a6e189fd090142ebe2b186f29cfb1e-rootfs.mount: Deactivated successfully. Apr 30 12:46:08.811505 containerd[1498]: time="2025-04-30T12:46:08.811362713Z" level=info msg="shim disconnected" id=aa4c2e7d99b929eca3e304c760696d2749a6e189fd090142ebe2b186f29cfb1e namespace=k8s.io Apr 30 12:46:08.811505 containerd[1498]: time="2025-04-30T12:46:08.811432074Z" level=warning msg="cleaning up after shim disconnected" id=aa4c2e7d99b929eca3e304c760696d2749a6e189fd090142ebe2b186f29cfb1e namespace=k8s.io Apr 30 12:46:08.811505 containerd[1498]: time="2025-04-30T12:46:08.811440874Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:46:08.816760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5423e54cd891e71190a89378635624c96bff6ff4ab180f8c4217b69cfe5e5e2a-rootfs.mount: Deactivated successfully. 
Apr 30 12:46:08.821324 containerd[1498]: time="2025-04-30T12:46:08.821267337Z" level=info msg="shim disconnected" id=5423e54cd891e71190a89378635624c96bff6ff4ab180f8c4217b69cfe5e5e2a namespace=k8s.io Apr 30 12:46:08.822665 containerd[1498]: time="2025-04-30T12:46:08.822497150Z" level=warning msg="cleaning up after shim disconnected" id=5423e54cd891e71190a89378635624c96bff6ff4ab180f8c4217b69cfe5e5e2a namespace=k8s.io Apr 30 12:46:08.822665 containerd[1498]: time="2025-04-30T12:46:08.822532551Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:46:08.842467 containerd[1498]: time="2025-04-30T12:46:08.842386120Z" level=info msg="StopContainer for \"aa4c2e7d99b929eca3e304c760696d2749a6e189fd090142ebe2b186f29cfb1e\" returns successfully" Apr 30 12:46:08.843116 containerd[1498]: time="2025-04-30T12:46:08.842985846Z" level=info msg="StopPodSandbox for \"3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33\"" Apr 30 12:46:08.843116 containerd[1498]: time="2025-04-30T12:46:08.843033487Z" level=info msg="Container to stop \"ad7a0163cdf085efdc65d6de857959f8403038a720341f80717edee71e24bcee\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 12:46:08.843116 containerd[1498]: time="2025-04-30T12:46:08.843045207Z" level=info msg="Container to stop \"aa4c2e7d99b929eca3e304c760696d2749a6e189fd090142ebe2b186f29cfb1e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 12:46:08.843116 containerd[1498]: time="2025-04-30T12:46:08.843053767Z" level=info msg="Container to stop \"f0a461a49903e0bb1cd032514decabd42bd4fe7757ede9ed94ff0c2927f03631\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 12:46:08.843116 containerd[1498]: time="2025-04-30T12:46:08.843063647Z" level=info msg="Container to stop \"d7880e471ce52aaa4fc3b07fa630890d2aea0b0986caf0682a2bd2764a1b27a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 12:46:08.843116 containerd[1498]: time="2025-04-30T12:46:08.843071887Z" level=info msg="Container to stop \"88ea973aac6420ae6f8437f071d2984afe0f3cf393af9aa30e66612b41109982\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 12:46:08.852151 containerd[1498]: time="2025-04-30T12:46:08.851992142Z" level=info msg="TearDown network for sandbox \"5423e54cd891e71190a89378635624c96bff6ff4ab180f8c4217b69cfe5e5e2a\" successfully" Apr 30 12:46:08.852151 containerd[1498]: time="2025-04-30T12:46:08.852031182Z" level=info msg="StopPodSandbox for \"5423e54cd891e71190a89378635624c96bff6ff4ab180f8c4217b69cfe5e5e2a\" returns successfully" Apr 30 12:46:08.853488 systemd[1]: cri-containerd-3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33.scope: Deactivated successfully. 
Apr 30 12:46:08.894994 containerd[1498]: time="2025-04-30T12:46:08.894342748Z" level=info msg="shim disconnected" id=3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33 namespace=k8s.io Apr 30 12:46:08.894994 containerd[1498]: time="2025-04-30T12:46:08.895063956Z" level=warning msg="cleaning up after shim disconnected" id=3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33 namespace=k8s.io Apr 30 12:46:08.895921 containerd[1498]: time="2025-04-30T12:46:08.895091796Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:46:08.909868 containerd[1498]: time="2025-04-30T12:46:08.909798312Z" level=info msg="TearDown network for sandbox \"3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33\" successfully" Apr 30 12:46:08.909868 containerd[1498]: time="2025-04-30T12:46:08.909847632Z" level=info msg="StopPodSandbox for \"3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33\" returns successfully" Apr 30 12:46:08.959915 kubelet[2671]: I0430 12:46:08.959028 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d441d00c-2f62-43b9-abb7-3adc5519c894-cilium-config-path\") pod \"d441d00c-2f62-43b9-abb7-3adc5519c894\" (UID: \"d441d00c-2f62-43b9-abb7-3adc5519c894\") " Apr 30 12:46:08.959915 kubelet[2671]: I0430 12:46:08.959081 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-lib-modules\") pod \"d441d00c-2f62-43b9-abb7-3adc5519c894\" (UID: \"d441d00c-2f62-43b9-abb7-3adc5519c894\") " Apr 30 12:46:08.959915 kubelet[2671]: I0430 12:46:08.959136 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-hostproc\") pod \"d441d00c-2f62-43b9-abb7-3adc5519c894\" (UID: \"d441d00c-2f62-43b9-abb7-3adc5519c894\") " Apr 30 12:46:08.959915 kubelet[2671]: I0430 12:46:08.959155 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-cni-path\") pod \"d441d00c-2f62-43b9-abb7-3adc5519c894\" (UID: \"d441d00c-2f62-43b9-abb7-3adc5519c894\") " Apr 30 12:46:08.959915 kubelet[2671]: I0430 12:46:08.959179 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d441d00c-2f62-43b9-abb7-3adc5519c894-hubble-tls\") pod \"d441d00c-2f62-43b9-abb7-3adc5519c894\" (UID: \"d441d00c-2f62-43b9-abb7-3adc5519c894\") " Apr 30 12:46:08.959915 kubelet[2671]: I0430 12:46:08.959198 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-xtables-lock\") pod \"d441d00c-2f62-43b9-abb7-3adc5519c894\" (UID: \"d441d00c-2f62-43b9-abb7-3adc5519c894\") " Apr 30 12:46:08.960477 kubelet[2671]: I0430 12:46:08.959217 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-host-proc-sys-kernel\") pod \"d441d00c-2f62-43b9-abb7-3adc5519c894\" (UID: \"d441d00c-2f62-43b9-abb7-3adc5519c894\") " Apr 30 12:46:08.960477 kubelet[2671]: I0430 12:46:08.959238 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a0c6fd2c-a39d-43ba-a1c7-2795e8fdb80f-cilium-config-path\") pod \"a0c6fd2c-a39d-43ba-a1c7-2795e8fdb80f\" (UID: \"a0c6fd2c-a39d-43ba-a1c7-2795e8fdb80f\") " Apr 30 12:46:08.960477 kubelet[2671]: I0430 12:46:08.959257 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-host-proc-sys-net\") pod \"d441d00c-2f62-43b9-abb7-3adc5519c894\" (UID: \"d441d00c-2f62-43b9-abb7-3adc5519c894\") " Apr 30 12:46:08.960477 kubelet[2671]: I0430 12:46:08.959280 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d441d00c-2f62-43b9-abb7-3adc5519c894-clustermesh-secrets\") pod \"d441d00c-2f62-43b9-abb7-3adc5519c894\" (UID: \"d441d00c-2f62-43b9-abb7-3adc5519c894\") " Apr 30 12:46:08.960477 kubelet[2671]: I0430 12:46:08.959299 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-cilium-cgroup\") pod \"d441d00c-2f62-43b9-abb7-3adc5519c894\" (UID: \"d441d00c-2f62-43b9-abb7-3adc5519c894\") " Apr 30 12:46:08.960477 kubelet[2671]: I0430 12:46:08.959321 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wz4m\" (UniqueName: \"kubernetes.io/projected/a0c6fd2c-a39d-43ba-a1c7-2795e8fdb80f-kube-api-access-5wz4m\") pod \"a0c6fd2c-a39d-43ba-a1c7-2795e8fdb80f\" (UID: \"a0c6fd2c-a39d-43ba-a1c7-2795e8fdb80f\") " Apr 30 12:46:08.960726 kubelet[2671]: I0430 12:46:08.959341 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-etc-cni-netd\") pod \"d441d00c-2f62-43b9-abb7-3adc5519c894\" (UID: \"d441d00c-2f62-43b9-abb7-3adc5519c894\") " Apr 30 12:46:08.960726 kubelet[2671]: I0430 12:46:08.959357 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-bpf-maps\") pod \"d441d00c-2f62-43b9-abb7-3adc5519c894\" (UID: \"d441d00c-2f62-43b9-abb7-3adc5519c894\") " Apr 30 12:46:08.960726 kubelet[2671]: I0430 12:46:08.959377 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-cilium-run\") pod \"d441d00c-2f62-43b9-abb7-3adc5519c894\" (UID: \"d441d00c-2f62-43b9-abb7-3adc5519c894\") " Apr 30 12:46:08.960726 kubelet[2671]: I0430 12:46:08.959398 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khf6f\" (UniqueName: \"kubernetes.io/projected/d441d00c-2f62-43b9-abb7-3adc5519c894-kube-api-access-khf6f\") pod \"d441d00c-2f62-43b9-abb7-3adc5519c894\" (UID: \"d441d00c-2f62-43b9-abb7-3adc5519c894\") " Apr 30 12:46:08.962607 kubelet[2671]: I0430 12:46:08.962041 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0c6fd2c-a39d-43ba-a1c7-2795e8fdb80f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a0c6fd2c-a39d-43ba-a1c7-2795e8fdb80f" (UID: "a0c6fd2c-a39d-43ba-a1c7-2795e8fdb80f"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 30 12:46:08.962817 kubelet[2671]: I0430 12:46:08.962790 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d441d00c-2f62-43b9-abb7-3adc5519c894" (UID: "d441d00c-2f62-43b9-abb7-3adc5519c894"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 12:46:08.967425 kubelet[2671]: I0430 12:46:08.967356 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d441d00c-2f62-43b9-abb7-3adc5519c894-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d441d00c-2f62-43b9-abb7-3adc5519c894" (UID: "d441d00c-2f62-43b9-abb7-3adc5519c894"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 30 12:46:08.968064 kubelet[2671]: I0430 12:46:08.967758 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d441d00c-2f62-43b9-abb7-3adc5519c894" (UID: "d441d00c-2f62-43b9-abb7-3adc5519c894"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 12:46:08.968064 kubelet[2671]: I0430 12:46:08.967796 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-hostproc" (OuterVolumeSpecName: "hostproc") pod "d441d00c-2f62-43b9-abb7-3adc5519c894" (UID: "d441d00c-2f62-43b9-abb7-3adc5519c894"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 12:46:08.968064 kubelet[2671]: I0430 12:46:08.967840 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-cni-path" (OuterVolumeSpecName: "cni-path") pod "d441d00c-2f62-43b9-abb7-3adc5519c894" (UID: "d441d00c-2f62-43b9-abb7-3adc5519c894"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 12:46:08.968491 kubelet[2671]: I0430 12:46:08.968449 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d441d00c-2f62-43b9-abb7-3adc5519c894-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d441d00c-2f62-43b9-abb7-3adc5519c894" (UID: "d441d00c-2f62-43b9-abb7-3adc5519c894"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 30 12:46:08.968601 kubelet[2671]: I0430 12:46:08.968511 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d441d00c-2f62-43b9-abb7-3adc5519c894" (UID: "d441d00c-2f62-43b9-abb7-3adc5519c894"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 12:46:08.971273 kubelet[2671]: I0430 12:46:08.971215 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0c6fd2c-a39d-43ba-a1c7-2795e8fdb80f-kube-api-access-5wz4m" (OuterVolumeSpecName: "kube-api-access-5wz4m") pod "a0c6fd2c-a39d-43ba-a1c7-2795e8fdb80f" (UID: "a0c6fd2c-a39d-43ba-a1c7-2795e8fdb80f"). InnerVolumeSpecName "kube-api-access-5wz4m". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 30 12:46:08.971493 kubelet[2671]: I0430 12:46:08.971294 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d441d00c-2f62-43b9-abb7-3adc5519c894" (UID: "d441d00c-2f62-43b9-abb7-3adc5519c894"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 12:46:08.971493 kubelet[2671]: I0430 12:46:08.971317 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d441d00c-2f62-43b9-abb7-3adc5519c894" (UID: "d441d00c-2f62-43b9-abb7-3adc5519c894"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 12:46:08.971493 kubelet[2671]: I0430 12:46:08.971332 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d441d00c-2f62-43b9-abb7-3adc5519c894" (UID: "d441d00c-2f62-43b9-abb7-3adc5519c894"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 12:46:08.971493 kubelet[2671]: I0430 12:46:08.971358 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d441d00c-2f62-43b9-abb7-3adc5519c894-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d441d00c-2f62-43b9-abb7-3adc5519c894" (UID: "d441d00c-2f62-43b9-abb7-3adc5519c894"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 30 12:46:08.971493 kubelet[2671]: I0430 12:46:08.971401 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d441d00c-2f62-43b9-abb7-3adc5519c894" (UID: "d441d00c-2f62-43b9-abb7-3adc5519c894"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 12:46:08.971793 kubelet[2671]: I0430 12:46:08.971407 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d441d00c-2f62-43b9-abb7-3adc5519c894-kube-api-access-khf6f" (OuterVolumeSpecName: "kube-api-access-khf6f") pod "d441d00c-2f62-43b9-abb7-3adc5519c894" (UID: "d441d00c-2f62-43b9-abb7-3adc5519c894"). InnerVolumeSpecName "kube-api-access-khf6f". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 30 12:46:08.971793 kubelet[2671]: I0430 12:46:08.971670 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d441d00c-2f62-43b9-abb7-3adc5519c894" (UID: "d441d00c-2f62-43b9-abb7-3adc5519c894"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 12:46:09.060405 kubelet[2671]: I0430 12:46:09.060138 2671 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-host-proc-sys-kernel\") on node \"ci-4230-1-1-7-cef124738e\" DevicePath \"\"" Apr 30 12:46:09.060405 kubelet[2671]: I0430 12:46:09.060192 2671 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a0c6fd2c-a39d-43ba-a1c7-2795e8fdb80f-cilium-config-path\") on node \"ci-4230-1-1-7-cef124738e\" DevicePath \"\"" Apr 30 12:46:09.060405 kubelet[2671]: I0430 12:46:09.060207 2671 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-host-proc-sys-net\") on node \"ci-4230-1-1-7-cef124738e\" DevicePath \"\"" Apr 30 12:46:09.060405 kubelet[2671]: I0430 12:46:09.060222 2671 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d441d00c-2f62-43b9-abb7-3adc5519c894-clustermesh-secrets\") on node \"ci-4230-1-1-7-cef124738e\" DevicePath \"\"" Apr 30 12:46:09.060405 kubelet[2671]: I0430 12:46:09.060237 2671 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-cilium-cgroup\") on node \"ci-4230-1-1-7-cef124738e\" DevicePath \"\"" Apr 30 12:46:09.060405 kubelet[2671]: I0430 12:46:09.060250 2671 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-etc-cni-netd\") on node \"ci-4230-1-1-7-cef124738e\" DevicePath \"\"" Apr 30 12:46:09.060405 kubelet[2671]: I0430 12:46:09.060261 2671 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-bpf-maps\") on node \"ci-4230-1-1-7-cef124738e\" DevicePath \"\"" Apr 30 12:46:09.060405 kubelet[2671]: I0430 12:46:09.060274 2671 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-cilium-run\") on node \"ci-4230-1-1-7-cef124738e\" DevicePath \"\"" Apr 30 12:46:09.060897 kubelet[2671]: I0430 12:46:09.060288 2671 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-khf6f\" (UniqueName: \"kubernetes.io/projected/d441d00c-2f62-43b9-abb7-3adc5519c894-kube-api-access-khf6f\") on node \"ci-4230-1-1-7-cef124738e\" DevicePath \"\"" Apr 30 12:46:09.060897 kubelet[2671]: I0430 12:46:09.060302 2671 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5wz4m\" (UniqueName: \"kubernetes.io/projected/a0c6fd2c-a39d-43ba-a1c7-2795e8fdb80f-kube-api-access-5wz4m\") on node \"ci-4230-1-1-7-cef124738e\" DevicePath \"\"" Apr 30 12:46:09.060897 kubelet[2671]: I0430 12:46:09.060316 2671 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d441d00c-2f62-43b9-abb7-3adc5519c894-cilium-config-path\") on node \"ci-4230-1-1-7-cef124738e\" DevicePath \"\"" Apr 30 12:46:09.060897 kubelet[2671]: I0430 12:46:09.060328 2671 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-lib-modules\") on node \"ci-4230-1-1-7-cef124738e\" DevicePath \"\"" Apr 30 12:46:09.060897 
kubelet[2671]: I0430 12:46:09.060341 2671 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-hostproc\") on node \"ci-4230-1-1-7-cef124738e\" DevicePath \"\"" Apr 30 12:46:09.060897 kubelet[2671]: I0430 12:46:09.060352 2671 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-cni-path\") on node \"ci-4230-1-1-7-cef124738e\" DevicePath \"\"" Apr 30 12:46:09.060897 kubelet[2671]: I0430 12:46:09.060365 2671 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d441d00c-2f62-43b9-abb7-3adc5519c894-hubble-tls\") on node \"ci-4230-1-1-7-cef124738e\" DevicePath \"\"" Apr 30 12:46:09.060897 kubelet[2671]: I0430 12:46:09.060377 2671 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d441d00c-2f62-43b9-abb7-3adc5519c894-xtables-lock\") on node \"ci-4230-1-1-7-cef124738e\" DevicePath \"\"" Apr 30 12:46:09.262702 kubelet[2671]: I0430 12:46:09.262012 2671 scope.go:117] "RemoveContainer" containerID="d170a89c167e549d5ff5eb8f4cf997e5faa0c12ab1f8574d6931af19b7ecb945" Apr 30 12:46:09.268209 containerd[1498]: time="2025-04-30T12:46:09.268146321Z" level=info msg="RemoveContainer for \"d170a89c167e549d5ff5eb8f4cf997e5faa0c12ab1f8574d6931af19b7ecb945\"" Apr 30 12:46:09.273657 systemd[1]: Removed slice kubepods-besteffort-poda0c6fd2c_a39d_43ba_a1c7_2795e8fdb80f.slice - libcontainer container kubepods-besteffort-poda0c6fd2c_a39d_43ba_a1c7_2795e8fdb80f.slice. Apr 30 12:46:09.275930 containerd[1498]: time="2025-04-30T12:46:09.275892082Z" level=info msg="RemoveContainer for \"d170a89c167e549d5ff5eb8f4cf997e5faa0c12ab1f8574d6931af19b7ecb945\" returns successfully" Apr 30 12:46:09.282205 kubelet[2671]: I0430 12:46:09.281031 2671 scope.go:117] "RemoveContainer" containerID="d170a89c167e549d5ff5eb8f4cf997e5faa0c12ab1f8574d6931af19b7ecb945" Apr 30 12:46:09.281684 systemd[1]: Removed slice kubepods-burstable-podd441d00c_2f62_43b9_abb7_3adc5519c894.slice - libcontainer container kubepods-burstable-podd441d00c_2f62_43b9_abb7_3adc5519c894.slice. Apr 30 12:46:09.281876 systemd[1]: kubepods-burstable-podd441d00c_2f62_43b9_abb7_3adc5519c894.slice: Consumed 7.502s CPU time, 125.1M memory peak, 136K read from disk, 12.9M written to disk. 
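The unmount block above is kubelet's reconciler working through every volume of the two deleted pods (the cilium agent pod d441d00c-2f62-43b9-abb7-3adc5519c894 and the operator pod a0c6fd2c-a39d-43ba-a1c7-2795e8fdb80f): an "UnmountVolume started" entry per volume, a matching "UnmountVolume.TearDown succeeded" entry, and a final "Volume detached" record once the node is clean. A quick way to confirm nothing was left behind in an exported copy of this journal is to compare the started and detached UniqueNames; the sketch below again assumes a hypothetical plain-text export at journal.txt and accounts for the escaped quotes inside the quoted kubelet messages.

    import re

    started_re  = re.compile(r'UnmountVolume started for volume .*?UniqueName: \\"([^\\"]+)\\"')
    detached_re = re.compile(r'Volume detached for volume .*?UniqueName: \\"([^\\"]+)\\"')

    started, detached = set(), set()
    with open("journal.txt") as fh:
        for line in fh:
            started.update(started_re.findall(line))
            detached.update(detached_re.findall(line))

    print("still mounted:", started - detached)   # an empty set means a clean teardown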
Apr 30 12:46:09.283252 containerd[1498]: time="2025-04-30T12:46:09.282733794Z" level=error msg="ContainerStatus for \"d170a89c167e549d5ff5eb8f4cf997e5faa0c12ab1f8574d6931af19b7ecb945\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d170a89c167e549d5ff5eb8f4cf997e5faa0c12ab1f8574d6931af19b7ecb945\": not found" Apr 30 12:46:09.283715 kubelet[2671]: E0430 12:46:09.283668 2671 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d170a89c167e549d5ff5eb8f4cf997e5faa0c12ab1f8574d6931af19b7ecb945\": not found" containerID="d170a89c167e549d5ff5eb8f4cf997e5faa0c12ab1f8574d6931af19b7ecb945" Apr 30 12:46:09.283810 kubelet[2671]: I0430 12:46:09.283709 2671 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d170a89c167e549d5ff5eb8f4cf997e5faa0c12ab1f8574d6931af19b7ecb945"} err="failed to get container status \"d170a89c167e549d5ff5eb8f4cf997e5faa0c12ab1f8574d6931af19b7ecb945\": rpc error: code = NotFound desc = an error occurred when try to find container \"d170a89c167e549d5ff5eb8f4cf997e5faa0c12ab1f8574d6931af19b7ecb945\": not found" Apr 30 12:46:09.283810 kubelet[2671]: I0430 12:46:09.283790 2671 scope.go:117] "RemoveContainer" containerID="aa4c2e7d99b929eca3e304c760696d2749a6e189fd090142ebe2b186f29cfb1e" Apr 30 12:46:09.287310 containerd[1498]: time="2025-04-30T12:46:09.287223361Z" level=info msg="RemoveContainer for \"aa4c2e7d99b929eca3e304c760696d2749a6e189fd090142ebe2b186f29cfb1e\"" Apr 30 12:46:09.294386 containerd[1498]: time="2025-04-30T12:46:09.294331556Z" level=info msg="RemoveContainer for \"aa4c2e7d99b929eca3e304c760696d2749a6e189fd090142ebe2b186f29cfb1e\" returns successfully" Apr 30 12:46:09.295201 kubelet[2671]: I0430 12:46:09.295164 2671 scope.go:117] "RemoveContainer" containerID="ad7a0163cdf085efdc65d6de857959f8403038a720341f80717edee71e24bcee" Apr 30 12:46:09.297429 containerd[1498]: time="2025-04-30T12:46:09.297231947Z" level=info msg="RemoveContainer for \"ad7a0163cdf085efdc65d6de857959f8403038a720341f80717edee71e24bcee\"" Apr 30 12:46:09.301613 containerd[1498]: time="2025-04-30T12:46:09.301523232Z" level=info msg="RemoveContainer for \"ad7a0163cdf085efdc65d6de857959f8403038a720341f80717edee71e24bcee\" returns successfully" Apr 30 12:46:09.301980 kubelet[2671]: I0430 12:46:09.301876 2671 scope.go:117] "RemoveContainer" containerID="88ea973aac6420ae6f8437f071d2984afe0f3cf393af9aa30e66612b41109982" Apr 30 12:46:09.303764 containerd[1498]: time="2025-04-30T12:46:09.303658814Z" level=info msg="RemoveContainer for \"88ea973aac6420ae6f8437f071d2984afe0f3cf393af9aa30e66612b41109982\"" Apr 30 12:46:09.308466 containerd[1498]: time="2025-04-30T12:46:09.307623296Z" level=info msg="RemoveContainer for \"88ea973aac6420ae6f8437f071d2984afe0f3cf393af9aa30e66612b41109982\" returns successfully" Apr 30 12:46:09.308963 kubelet[2671]: I0430 12:46:09.307860 2671 scope.go:117] "RemoveContainer" containerID="d7880e471ce52aaa4fc3b07fa630890d2aea0b0986caf0682a2bd2764a1b27a5" Apr 30 12:46:09.312220 containerd[1498]: time="2025-04-30T12:46:09.311796060Z" level=info msg="RemoveContainer for \"d7880e471ce52aaa4fc3b07fa630890d2aea0b0986caf0682a2bd2764a1b27a5\"" Apr 30 12:46:09.316229 containerd[1498]: time="2025-04-30T12:46:09.316090185Z" level=info msg="RemoveContainer for \"d7880e471ce52aaa4fc3b07fa630890d2aea0b0986caf0682a2bd2764a1b27a5\" returns successfully" Apr 30 12:46:09.316640 kubelet[2671]: I0430 12:46:09.316608 
2671 scope.go:117] "RemoveContainer" containerID="f0a461a49903e0bb1cd032514decabd42bd4fe7757ede9ed94ff0c2927f03631" Apr 30 12:46:09.318148 containerd[1498]: time="2025-04-30T12:46:09.318073726Z" level=info msg="RemoveContainer for \"f0a461a49903e0bb1cd032514decabd42bd4fe7757ede9ed94ff0c2927f03631\"" Apr 30 12:46:09.325667 containerd[1498]: time="2025-04-30T12:46:09.323833106Z" level=info msg="RemoveContainer for \"f0a461a49903e0bb1cd032514decabd42bd4fe7757ede9ed94ff0c2927f03631\" returns successfully" Apr 30 12:46:09.326421 kubelet[2671]: I0430 12:46:09.326391 2671 scope.go:117] "RemoveContainer" containerID="aa4c2e7d99b929eca3e304c760696d2749a6e189fd090142ebe2b186f29cfb1e" Apr 30 12:46:09.326767 containerd[1498]: time="2025-04-30T12:46:09.326732537Z" level=error msg="ContainerStatus for \"aa4c2e7d99b929eca3e304c760696d2749a6e189fd090142ebe2b186f29cfb1e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aa4c2e7d99b929eca3e304c760696d2749a6e189fd090142ebe2b186f29cfb1e\": not found" Apr 30 12:46:09.327137 kubelet[2671]: E0430 12:46:09.327108 2671 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aa4c2e7d99b929eca3e304c760696d2749a6e189fd090142ebe2b186f29cfb1e\": not found" containerID="aa4c2e7d99b929eca3e304c760696d2749a6e189fd090142ebe2b186f29cfb1e" Apr 30 12:46:09.327188 kubelet[2671]: I0430 12:46:09.327143 2671 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aa4c2e7d99b929eca3e304c760696d2749a6e189fd090142ebe2b186f29cfb1e"} err="failed to get container status \"aa4c2e7d99b929eca3e304c760696d2749a6e189fd090142ebe2b186f29cfb1e\": rpc error: code = NotFound desc = an error occurred when try to find container \"aa4c2e7d99b929eca3e304c760696d2749a6e189fd090142ebe2b186f29cfb1e\": not found" Apr 30 12:46:09.327188 kubelet[2671]: I0430 12:46:09.327167 2671 scope.go:117] "RemoveContainer" containerID="ad7a0163cdf085efdc65d6de857959f8403038a720341f80717edee71e24bcee" Apr 30 12:46:09.327413 containerd[1498]: time="2025-04-30T12:46:09.327353463Z" level=error msg="ContainerStatus for \"ad7a0163cdf085efdc65d6de857959f8403038a720341f80717edee71e24bcee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ad7a0163cdf085efdc65d6de857959f8403038a720341f80717edee71e24bcee\": not found" Apr 30 12:46:09.327605 kubelet[2671]: E0430 12:46:09.327561 2671 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ad7a0163cdf085efdc65d6de857959f8403038a720341f80717edee71e24bcee\": not found" containerID="ad7a0163cdf085efdc65d6de857959f8403038a720341f80717edee71e24bcee" Apr 30 12:46:09.327702 kubelet[2671]: I0430 12:46:09.327633 2671 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ad7a0163cdf085efdc65d6de857959f8403038a720341f80717edee71e24bcee"} err="failed to get container status \"ad7a0163cdf085efdc65d6de857959f8403038a720341f80717edee71e24bcee\": rpc error: code = NotFound desc = an error occurred when try to find container \"ad7a0163cdf085efdc65d6de857959f8403038a720341f80717edee71e24bcee\": not found" Apr 30 12:46:09.327702 kubelet[2671]: I0430 12:46:09.327652 2671 scope.go:117] "RemoveContainer" containerID="88ea973aac6420ae6f8437f071d2984afe0f3cf393af9aa30e66612b41109982" Apr 30 12:46:09.327930 containerd[1498]: 
time="2025-04-30T12:46:09.327890429Z" level=error msg="ContainerStatus for \"88ea973aac6420ae6f8437f071d2984afe0f3cf393af9aa30e66612b41109982\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"88ea973aac6420ae6f8437f071d2984afe0f3cf393af9aa30e66612b41109982\": not found" Apr 30 12:46:09.328089 kubelet[2671]: E0430 12:46:09.328066 2671 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"88ea973aac6420ae6f8437f071d2984afe0f3cf393af9aa30e66612b41109982\": not found" containerID="88ea973aac6420ae6f8437f071d2984afe0f3cf393af9aa30e66612b41109982" Apr 30 12:46:09.328431 kubelet[2671]: I0430 12:46:09.328400 2671 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"88ea973aac6420ae6f8437f071d2984afe0f3cf393af9aa30e66612b41109982"} err="failed to get container status \"88ea973aac6420ae6f8437f071d2984afe0f3cf393af9aa30e66612b41109982\": rpc error: code = NotFound desc = an error occurred when try to find container \"88ea973aac6420ae6f8437f071d2984afe0f3cf393af9aa30e66612b41109982\": not found" Apr 30 12:46:09.328431 kubelet[2671]: I0430 12:46:09.328429 2671 scope.go:117] "RemoveContainer" containerID="d7880e471ce52aaa4fc3b07fa630890d2aea0b0986caf0682a2bd2764a1b27a5" Apr 30 12:46:09.328934 containerd[1498]: time="2025-04-30T12:46:09.328899079Z" level=error msg="ContainerStatus for \"d7880e471ce52aaa4fc3b07fa630890d2aea0b0986caf0682a2bd2764a1b27a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d7880e471ce52aaa4fc3b07fa630890d2aea0b0986caf0682a2bd2764a1b27a5\": not found" Apr 30 12:46:09.330014 kubelet[2671]: E0430 12:46:09.329187 2671 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d7880e471ce52aaa4fc3b07fa630890d2aea0b0986caf0682a2bd2764a1b27a5\": not found" containerID="d7880e471ce52aaa4fc3b07fa630890d2aea0b0986caf0682a2bd2764a1b27a5" Apr 30 12:46:09.330014 kubelet[2671]: I0430 12:46:09.329220 2671 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d7880e471ce52aaa4fc3b07fa630890d2aea0b0986caf0682a2bd2764a1b27a5"} err="failed to get container status \"d7880e471ce52aaa4fc3b07fa630890d2aea0b0986caf0682a2bd2764a1b27a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"d7880e471ce52aaa4fc3b07fa630890d2aea0b0986caf0682a2bd2764a1b27a5\": not found" Apr 30 12:46:09.330014 kubelet[2671]: I0430 12:46:09.329237 2671 scope.go:117] "RemoveContainer" containerID="f0a461a49903e0bb1cd032514decabd42bd4fe7757ede9ed94ff0c2927f03631" Apr 30 12:46:09.330123 containerd[1498]: time="2025-04-30T12:46:09.329390325Z" level=error msg="ContainerStatus for \"f0a461a49903e0bb1cd032514decabd42bd4fe7757ede9ed94ff0c2927f03631\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f0a461a49903e0bb1cd032514decabd42bd4fe7757ede9ed94ff0c2927f03631\": not found" Apr 30 12:46:09.330308 kubelet[2671]: E0430 12:46:09.330284 2671 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f0a461a49903e0bb1cd032514decabd42bd4fe7757ede9ed94ff0c2927f03631\": not found" containerID="f0a461a49903e0bb1cd032514decabd42bd4fe7757ede9ed94ff0c2927f03631" Apr 30 12:46:09.330352 kubelet[2671]: I0430 12:46:09.330312 2671 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f0a461a49903e0bb1cd032514decabd42bd4fe7757ede9ed94ff0c2927f03631"} err="failed to get container status \"f0a461a49903e0bb1cd032514decabd42bd4fe7757ede9ed94ff0c2927f03631\": rpc error: code = NotFound desc = an error occurred when try to find container \"f0a461a49903e0bb1cd032514decabd42bd4fe7757ede9ed94ff0c2927f03631\": not found" Apr 30 12:46:09.662844 kubelet[2671]: I0430 12:46:09.662031 2671 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0c6fd2c-a39d-43ba-a1c7-2795e8fdb80f" path="/var/lib/kubelet/pods/a0c6fd2c-a39d-43ba-a1c7-2795e8fdb80f/volumes" Apr 30 12:46:09.662844 kubelet[2671]: I0430 12:46:09.662679 2671 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d441d00c-2f62-43b9-abb7-3adc5519c894" path="/var/lib/kubelet/pods/d441d00c-2f62-43b9-abb7-3adc5519c894/volumes" Apr 30 12:46:09.685112 systemd[1]: var-lib-kubelet-pods-a0c6fd2c\x2da39d\x2d43ba\x2da1c7\x2d2795e8fdb80f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5wz4m.mount: Deactivated successfully. Apr 30 12:46:09.685245 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33-rootfs.mount: Deactivated successfully. Apr 30 12:46:09.685314 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33-shm.mount: Deactivated successfully. Apr 30 12:46:09.685379 systemd[1]: var-lib-kubelet-pods-d441d00c\x2d2f62\x2d43b9\x2dabb7\x2d3adc5519c894-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkhf6f.mount: Deactivated successfully. Apr 30 12:46:09.685452 systemd[1]: var-lib-kubelet-pods-d441d00c\x2d2f62\x2d43b9\x2dabb7\x2d3adc5519c894-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 30 12:46:09.685512 systemd[1]: var-lib-kubelet-pods-d441d00c\x2d2f62\x2d43b9\x2dabb7\x2d3adc5519c894-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 30 12:46:10.745035 sshd[4258]: Connection closed by 139.178.89.65 port 34908 Apr 30 12:46:10.746184 sshd-session[4256]: pam_unix(sshd:session): session closed for user core Apr 30 12:46:10.752756 systemd[1]: sshd@20-91.99.82.124:22-139.178.89.65:34908.service: Deactivated successfully. Apr 30 12:46:10.755150 systemd[1]: session-21.scope: Deactivated successfully. Apr 30 12:46:10.755556 systemd[1]: session-21.scope: Consumed 1.188s CPU time, 23.6M memory peak. Apr 30 12:46:10.756559 systemd-logind[1472]: Session 21 logged out. Waiting for processes to exit. Apr 30 12:46:10.758165 systemd-logind[1472]: Removed session 21. Apr 30 12:46:10.931668 systemd[1]: Started sshd@21-91.99.82.124:22-139.178.89.65:53280.service - OpenSSH per-connection server daemon (139.178.89.65:53280). Apr 30 12:46:11.930541 sshd[4423]: Accepted publickey for core from 139.178.89.65 port 53280 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:46:11.933078 sshd-session[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:46:11.937910 systemd-logind[1472]: New session 22 of user core. Apr 30 12:46:11.944930 systemd[1]: Started session-22.scope - Session 22 of User core. 
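The kubelet entries above show DeleteContainer/ContainerStatus round trips against the CRI runtime service: the containers were already removed from containerd, so the follow-up status queries come back as gRPC NotFound. Below is a minimal sketch of the same ContainerStatus call; it assumes the containerd CRI socket at /run/containerd/containerd.sock plus the google.golang.org/grpc and k8s.io/cri-api modules, and the container ID is copied from the log purely for illustration.

```go
// Sketch only: query container status over the CRI gRPC API and handle the
// codes.NotFound result that produces the "rpc error: code = NotFound" lines
// in the kubelet log above. Socket path and container ID are assumptions.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Container ID copied from the log for illustration only.
	const id = "d170a89c167e549d5ff5eb8f4cf997e5faa0c12ab1f8574d6931af19b7ecb945"

	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
	switch {
	case status.Code(err) == codes.NotFound:
		// Same condition the kubelet logs as "DeleteContainer returned error".
		fmt.Println("container already removed:", id)
	case err != nil:
		log.Fatal(err)
	default:
		fmt.Println("container state:", resp.GetStatus().GetState())
	}
}
```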
Apr 30 12:46:12.807030 kubelet[2671]: E0430 12:46:12.806952 2671 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 12:46:13.785078 kubelet[2671]: I0430 12:46:13.785028 2671 memory_manager.go:355] "RemoveStaleState removing state" podUID="d441d00c-2f62-43b9-abb7-3adc5519c894" containerName="cilium-agent" Apr 30 12:46:13.785078 kubelet[2671]: I0430 12:46:13.785058 2671 memory_manager.go:355] "RemoveStaleState removing state" podUID="a0c6fd2c-a39d-43ba-a1c7-2795e8fdb80f" containerName="cilium-operator" Apr 30 12:46:13.789914 kubelet[2671]: I0430 12:46:13.789880 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c23f77c2-cbe7-49a4-b782-896502a23d34-xtables-lock\") pod \"cilium-rwpmx\" (UID: \"c23f77c2-cbe7-49a4-b782-896502a23d34\") " pod="kube-system/cilium-rwpmx" Apr 30 12:46:13.790205 kubelet[2671]: I0430 12:46:13.790072 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c23f77c2-cbe7-49a4-b782-896502a23d34-lib-modules\") pod \"cilium-rwpmx\" (UID: \"c23f77c2-cbe7-49a4-b782-896502a23d34\") " pod="kube-system/cilium-rwpmx" Apr 30 12:46:13.790205 kubelet[2671]: I0430 12:46:13.790100 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c23f77c2-cbe7-49a4-b782-896502a23d34-etc-cni-netd\") pod \"cilium-rwpmx\" (UID: \"c23f77c2-cbe7-49a4-b782-896502a23d34\") " pod="kube-system/cilium-rwpmx" Apr 30 12:46:13.790364 kubelet[2671]: I0430 12:46:13.790289 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c23f77c2-cbe7-49a4-b782-896502a23d34-cilium-ipsec-secrets\") pod \"cilium-rwpmx\" (UID: \"c23f77c2-cbe7-49a4-b782-896502a23d34\") " pod="kube-system/cilium-rwpmx" Apr 30 12:46:13.790612 kubelet[2671]: I0430 12:46:13.790351 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c23f77c2-cbe7-49a4-b782-896502a23d34-cilium-cgroup\") pod \"cilium-rwpmx\" (UID: \"c23f77c2-cbe7-49a4-b782-896502a23d34\") " pod="kube-system/cilium-rwpmx" Apr 30 12:46:13.790612 kubelet[2671]: I0430 12:46:13.790547 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c23f77c2-cbe7-49a4-b782-896502a23d34-host-proc-sys-net\") pod \"cilium-rwpmx\" (UID: \"c23f77c2-cbe7-49a4-b782-896502a23d34\") " pod="kube-system/cilium-rwpmx" Apr 30 12:46:13.790612 kubelet[2671]: I0430 12:46:13.790570 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c23f77c2-cbe7-49a4-b782-896502a23d34-host-proc-sys-kernel\") pod \"cilium-rwpmx\" (UID: \"c23f77c2-cbe7-49a4-b782-896502a23d34\") " pod="kube-system/cilium-rwpmx" Apr 30 12:46:13.791155 kubelet[2671]: I0430 12:46:13.790734 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c23f77c2-cbe7-49a4-b782-896502a23d34-hostproc\") pod 
\"cilium-rwpmx\" (UID: \"c23f77c2-cbe7-49a4-b782-896502a23d34\") " pod="kube-system/cilium-rwpmx" Apr 30 12:46:13.791155 kubelet[2671]: I0430 12:46:13.790760 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c23f77c2-cbe7-49a4-b782-896502a23d34-cilium-run\") pod \"cilium-rwpmx\" (UID: \"c23f77c2-cbe7-49a4-b782-896502a23d34\") " pod="kube-system/cilium-rwpmx" Apr 30 12:46:13.791155 kubelet[2671]: I0430 12:46:13.790775 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c23f77c2-cbe7-49a4-b782-896502a23d34-bpf-maps\") pod \"cilium-rwpmx\" (UID: \"c23f77c2-cbe7-49a4-b782-896502a23d34\") " pod="kube-system/cilium-rwpmx" Apr 30 12:46:13.791155 kubelet[2671]: I0430 12:46:13.791098 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c23f77c2-cbe7-49a4-b782-896502a23d34-cilium-config-path\") pod \"cilium-rwpmx\" (UID: \"c23f77c2-cbe7-49a4-b782-896502a23d34\") " pod="kube-system/cilium-rwpmx" Apr 30 12:46:13.791155 kubelet[2671]: I0430 12:46:13.791125 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c23f77c2-cbe7-49a4-b782-896502a23d34-hubble-tls\") pod \"cilium-rwpmx\" (UID: \"c23f77c2-cbe7-49a4-b782-896502a23d34\") " pod="kube-system/cilium-rwpmx" Apr 30 12:46:13.791436 kubelet[2671]: I0430 12:46:13.791142 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c23f77c2-cbe7-49a4-b782-896502a23d34-cni-path\") pod \"cilium-rwpmx\" (UID: \"c23f77c2-cbe7-49a4-b782-896502a23d34\") " pod="kube-system/cilium-rwpmx" Apr 30 12:46:13.791436 kubelet[2671]: I0430 12:46:13.791336 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww2gj\" (UniqueName: \"kubernetes.io/projected/c23f77c2-cbe7-49a4-b782-896502a23d34-kube-api-access-ww2gj\") pod \"cilium-rwpmx\" (UID: \"c23f77c2-cbe7-49a4-b782-896502a23d34\") " pod="kube-system/cilium-rwpmx" Apr 30 12:46:13.791436 kubelet[2671]: I0430 12:46:13.791354 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c23f77c2-cbe7-49a4-b782-896502a23d34-clustermesh-secrets\") pod \"cilium-rwpmx\" (UID: \"c23f77c2-cbe7-49a4-b782-896502a23d34\") " pod="kube-system/cilium-rwpmx" Apr 30 12:46:13.794543 systemd[1]: Created slice kubepods-burstable-podc23f77c2_cbe7_49a4_b782_896502a23d34.slice - libcontainer container kubepods-burstable-podc23f77c2_cbe7_49a4_b782_896502a23d34.slice. Apr 30 12:46:13.967701 sshd[4425]: Connection closed by 139.178.89.65 port 53280 Apr 30 12:46:13.968662 sshd-session[4423]: pam_unix(sshd:session): session closed for user core Apr 30 12:46:13.973568 systemd[1]: sshd@21-91.99.82.124:22-139.178.89.65:53280.service: Deactivated successfully. Apr 30 12:46:13.978194 systemd[1]: session-22.scope: Deactivated successfully. Apr 30 12:46:13.978415 systemd[1]: session-22.scope: Consumed 1.220s CPU time, 26.6M memory peak. Apr 30 12:46:13.979335 systemd-logind[1472]: Session 22 logged out. Waiting for processes to exit. Apr 30 12:46:13.980280 systemd-logind[1472]: Removed session 22. 
Apr 30 12:46:14.100631 containerd[1498]: time="2025-04-30T12:46:14.099164031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rwpmx,Uid:c23f77c2-cbe7-49a4-b782-896502a23d34,Namespace:kube-system,Attempt:0,}" Apr 30 12:46:14.122986 containerd[1498]: time="2025-04-30T12:46:14.122876915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:46:14.123171 containerd[1498]: time="2025-04-30T12:46:14.122998716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:46:14.123171 containerd[1498]: time="2025-04-30T12:46:14.123026837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:46:14.123171 containerd[1498]: time="2025-04-30T12:46:14.123148718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:46:14.151935 systemd[1]: Started cri-containerd-9c4d0c9718a2dd4bcb393384be8f240bfa0fa053817d0d95da70998e96687b69.scope - libcontainer container 9c4d0c9718a2dd4bcb393384be8f240bfa0fa053817d0d95da70998e96687b69. Apr 30 12:46:14.156003 systemd[1]: Started sshd@22-91.99.82.124:22-139.178.89.65:53286.service - OpenSSH per-connection server daemon (139.178.89.65:53286). Apr 30 12:46:14.187029 containerd[1498]: time="2025-04-30T12:46:14.186943255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rwpmx,Uid:c23f77c2-cbe7-49a4-b782-896502a23d34,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c4d0c9718a2dd4bcb393384be8f240bfa0fa053817d0d95da70998e96687b69\"" Apr 30 12:46:14.192863 containerd[1498]: time="2025-04-30T12:46:14.191594543Z" level=info msg="CreateContainer within sandbox \"9c4d0c9718a2dd4bcb393384be8f240bfa0fa053817d0d95da70998e96687b69\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 12:46:14.204436 containerd[1498]: time="2025-04-30T12:46:14.204354954Z" level=info msg="CreateContainer within sandbox \"9c4d0c9718a2dd4bcb393384be8f240bfa0fa053817d0d95da70998e96687b69\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"91d0da5237b8fb083ec7f338af1eb22e0ddcc100235561dae06230402ab62d26\"" Apr 30 12:46:14.204917 containerd[1498]: time="2025-04-30T12:46:14.204890040Z" level=info msg="StartContainer for \"91d0da5237b8fb083ec7f338af1eb22e0ddcc100235561dae06230402ab62d26\"" Apr 30 12:46:14.231952 systemd[1]: Started cri-containerd-91d0da5237b8fb083ec7f338af1eb22e0ddcc100235561dae06230402ab62d26.scope - libcontainer container 91d0da5237b8fb083ec7f338af1eb22e0ddcc100235561dae06230402ab62d26. Apr 30 12:46:14.263149 containerd[1498]: time="2025-04-30T12:46:14.263108359Z" level=info msg="StartContainer for \"91d0da5237b8fb083ec7f338af1eb22e0ddcc100235561dae06230402ab62d26\" returns successfully" Apr 30 12:46:14.272471 systemd[1]: cri-containerd-91d0da5237b8fb083ec7f338af1eb22e0ddcc100235561dae06230402ab62d26.scope: Deactivated successfully. 
Apr 30 12:46:14.314024 containerd[1498]: time="2025-04-30T12:46:14.312628269Z" level=info msg="shim disconnected" id=91d0da5237b8fb083ec7f338af1eb22e0ddcc100235561dae06230402ab62d26 namespace=k8s.io Apr 30 12:46:14.314024 containerd[1498]: time="2025-04-30T12:46:14.312699989Z" level=warning msg="cleaning up after shim disconnected" id=91d0da5237b8fb083ec7f338af1eb22e0ddcc100235561dae06230402ab62d26 namespace=k8s.io Apr 30 12:46:14.314024 containerd[1498]: time="2025-04-30T12:46:14.312714310Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:46:15.164517 sshd[4469]: Accepted publickey for core from 139.178.89.65 port 53286 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:46:15.166418 sshd-session[4469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:46:15.172635 systemd-logind[1472]: New session 23 of user core. Apr 30 12:46:15.181979 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 30 12:46:15.302133 containerd[1498]: time="2025-04-30T12:46:15.302093243Z" level=info msg="CreateContainer within sandbox \"9c4d0c9718a2dd4bcb393384be8f240bfa0fa053817d0d95da70998e96687b69\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 12:46:15.319960 containerd[1498]: time="2025-04-30T12:46:15.319891106Z" level=info msg="CreateContainer within sandbox \"9c4d0c9718a2dd4bcb393384be8f240bfa0fa053817d0d95da70998e96687b69\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"da7bb7036dcaff6a8939f437390f22960485982478ce834251a3558430305784\"" Apr 30 12:46:15.321171 containerd[1498]: time="2025-04-30T12:46:15.321000397Z" level=info msg="StartContainer for \"da7bb7036dcaff6a8939f437390f22960485982478ce834251a3558430305784\"" Apr 30 12:46:15.361783 systemd[1]: Started cri-containerd-da7bb7036dcaff6a8939f437390f22960485982478ce834251a3558430305784.scope - libcontainer container da7bb7036dcaff6a8939f437390f22960485982478ce834251a3558430305784. Apr 30 12:46:15.390366 containerd[1498]: time="2025-04-30T12:46:15.390229187Z" level=info msg="StartContainer for \"da7bb7036dcaff6a8939f437390f22960485982478ce834251a3558430305784\" returns successfully" Apr 30 12:46:15.397135 systemd[1]: cri-containerd-da7bb7036dcaff6a8939f437390f22960485982478ce834251a3558430305784.scope: Deactivated successfully. Apr 30 12:46:15.427797 containerd[1498]: time="2025-04-30T12:46:15.426886683Z" level=info msg="shim disconnected" id=da7bb7036dcaff6a8939f437390f22960485982478ce834251a3558430305784 namespace=k8s.io Apr 30 12:46:15.427797 containerd[1498]: time="2025-04-30T12:46:15.426967804Z" level=warning msg="cleaning up after shim disconnected" id=da7bb7036dcaff6a8939f437390f22960485982478ce834251a3558430305784 namespace=k8s.io Apr 30 12:46:15.427797 containerd[1498]: time="2025-04-30T12:46:15.426984844Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:46:15.846643 sshd[4549]: Connection closed by 139.178.89.65 port 53286 Apr 30 12:46:15.845753 sshd-session[4469]: pam_unix(sshd:session): session closed for user core Apr 30 12:46:15.851213 systemd[1]: sshd@22-91.99.82.124:22-139.178.89.65:53286.service: Deactivated successfully. Apr 30 12:46:15.854715 systemd[1]: session-23.scope: Deactivated successfully. Apr 30 12:46:15.856292 systemd-logind[1472]: Session 23 logged out. Waiting for processes to exit. Apr 30 12:46:15.857301 systemd-logind[1472]: Removed session 23. 
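The "shim disconnected ... namespace=k8s.io" messages above are containerd cleaning up the per-container runc shim once the short-lived mount-cgroup and apply-sysctl-overwrites init containers exit. Below is a minimal sketch with the containerd v1 Go client, assuming the default socket and the k8s.io namespace shown in the log; it waits for a task to exit and deletes it, which is the point at which the shim is torn down. The container ID is taken from the log for illustration only.

```go
// Sketch only: wait for a containerd task and delete it; deleting the task is
// what tears down the runc shim behind the "shim disconnected" lines above.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" namespace seen in the log.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// ID copied from the mount-cgroup entries above, for illustration only.
	container, err := client.LoadContainer(ctx, "91d0da5237b8fb083ec7f338af1eb22e0ddcc100235561dae06230402ab62d26")
	if err != nil {
		log.Fatal(err)
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	statusC, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	st := <-statusC
	// Deleting the exited task releases the shim for this container.
	if _, err := task.Delete(ctx); err != nil {
		log.Fatal(err)
	}
	fmt.Println("exit code:", st.ExitCode())
}
```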
Apr 30 12:46:15.903639 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da7bb7036dcaff6a8939f437390f22960485982478ce834251a3558430305784-rootfs.mount: Deactivated successfully. Apr 30 12:46:16.024158 systemd[1]: Started sshd@23-91.99.82.124:22-139.178.89.65:53302.service - OpenSSH per-connection server daemon (139.178.89.65:53302). Apr 30 12:46:16.309155 containerd[1498]: time="2025-04-30T12:46:16.308970076Z" level=info msg="CreateContainer within sandbox \"9c4d0c9718a2dd4bcb393384be8f240bfa0fa053817d0d95da70998e96687b69\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 12:46:16.334963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3078353858.mount: Deactivated successfully. Apr 30 12:46:16.342354 containerd[1498]: time="2025-04-30T12:46:16.342313737Z" level=info msg="CreateContainer within sandbox \"9c4d0c9718a2dd4bcb393384be8f240bfa0fa053817d0d95da70998e96687b69\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"50c66a76ede066150592c562ea85ff486c4051f9c449acf2bb67026e5473f9ee\"" Apr 30 12:46:16.344492 containerd[1498]: time="2025-04-30T12:46:16.344450238Z" level=info msg="StartContainer for \"50c66a76ede066150592c562ea85ff486c4051f9c449acf2bb67026e5473f9ee\"" Apr 30 12:46:16.374975 systemd[1]: Started cri-containerd-50c66a76ede066150592c562ea85ff486c4051f9c449acf2bb67026e5473f9ee.scope - libcontainer container 50c66a76ede066150592c562ea85ff486c4051f9c449acf2bb67026e5473f9ee. Apr 30 12:46:16.412673 containerd[1498]: time="2025-04-30T12:46:16.412628055Z" level=info msg="StartContainer for \"50c66a76ede066150592c562ea85ff486c4051f9c449acf2bb67026e5473f9ee\" returns successfully" Apr 30 12:46:16.415251 systemd[1]: cri-containerd-50c66a76ede066150592c562ea85ff486c4051f9c449acf2bb67026e5473f9ee.scope: Deactivated successfully. Apr 30 12:46:16.441359 containerd[1498]: time="2025-04-30T12:46:16.441281067Z" level=info msg="shim disconnected" id=50c66a76ede066150592c562ea85ff486c4051f9c449acf2bb67026e5473f9ee namespace=k8s.io Apr 30 12:46:16.441604 containerd[1498]: time="2025-04-30T12:46:16.441359668Z" level=warning msg="cleaning up after shim disconnected" id=50c66a76ede066150592c562ea85ff486c4051f9c449acf2bb67026e5473f9ee namespace=k8s.io Apr 30 12:46:16.441604 containerd[1498]: time="2025-04-30T12:46:16.441376428Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:46:17.001294 sshd[4616]: Accepted publickey for core from 139.178.89.65 port 53302 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:46:17.003278 sshd-session[4616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:46:17.008986 systemd-logind[1472]: New session 24 of user core. Apr 30 12:46:17.016967 systemd[1]: Started session-24.scope - Session 24 of User core. 
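The escaped mount-unit names above (for example var-lib-containerd-tmpmounts-containerd\x2dmount3078353858.mount, and the kubelet volume units ending in kubernetes.io\x7eprojected-... earlier) follow systemd's unit-name escaping, where "/" becomes "-" and other special bytes become \xNN. The sketch below reverses that convention as observed in these entries; it is written from the pattern in this log rather than from systemd's own implementation, and the systemd-escape utility remains the authoritative tool.

```go
// Sketch only: undo systemd mount-unit escaping as seen in the log above
// ("-" separates path components, "\xNN" encodes the original byte).
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func unescapeUnitPath(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	b.WriteByte('/')
	for i := 0; i < len(name); i++ {
		switch {
		case strings.HasPrefix(name[i:], `\x`) && i+3 < len(name):
			// \xNN -> the escaped byte (e.g. \x2d -> '-', \x7e -> '~').
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 3
				continue
			}
			b.WriteByte(name[i])
		case name[i] == '-':
			// Unescaped dashes are path separators.
			b.WriteByte('/')
		default:
			b.WriteByte(name[i])
		}
	}
	return b.String()
}

func main() {
	// Unit name copied from the kubelet volume cleanup entries in this log.
	fmt.Println(unescapeUnitPath(
		`var-lib-kubelet-pods-d441d00c\x2d2f62\x2d43b9\x2dabb7\x2d3adc5519c894-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount`))
	// -> /var/lib/kubelet/pods/d441d00c-2f62-43b9-abb7-3adc5519c894/volumes/kubernetes.io~projected/hubble-tls
}
```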
Apr 30 12:46:17.315519 containerd[1498]: time="2025-04-30T12:46:17.315162341Z" level=info msg="CreateContainer within sandbox \"9c4d0c9718a2dd4bcb393384be8f240bfa0fa053817d0d95da70998e96687b69\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 12:46:17.342979 containerd[1498]: time="2025-04-30T12:46:17.342926544Z" level=info msg="CreateContainer within sandbox \"9c4d0c9718a2dd4bcb393384be8f240bfa0fa053817d0d95da70998e96687b69\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"05a142f5fc490ef4764465e042542ebaf4b66bafc8358509da2bd507e407644b\"" Apr 30 12:46:17.344655 containerd[1498]: time="2025-04-30T12:46:17.343937314Z" level=info msg="StartContainer for \"05a142f5fc490ef4764465e042542ebaf4b66bafc8358509da2bd507e407644b\"" Apr 30 12:46:17.381883 systemd[1]: Started cri-containerd-05a142f5fc490ef4764465e042542ebaf4b66bafc8358509da2bd507e407644b.scope - libcontainer container 05a142f5fc490ef4764465e042542ebaf4b66bafc8358509da2bd507e407644b. Apr 30 12:46:17.409114 systemd[1]: cri-containerd-05a142f5fc490ef4764465e042542ebaf4b66bafc8358509da2bd507e407644b.scope: Deactivated successfully. Apr 30 12:46:17.411533 containerd[1498]: time="2025-04-30T12:46:17.410887996Z" level=info msg="StartContainer for \"05a142f5fc490ef4764465e042542ebaf4b66bafc8358509da2bd507e407644b\" returns successfully" Apr 30 12:46:17.435503 containerd[1498]: time="2025-04-30T12:46:17.435415125Z" level=info msg="shim disconnected" id=05a142f5fc490ef4764465e042542ebaf4b66bafc8358509da2bd507e407644b namespace=k8s.io Apr 30 12:46:17.436158 containerd[1498]: time="2025-04-30T12:46:17.435902050Z" level=warning msg="cleaning up after shim disconnected" id=05a142f5fc490ef4764465e042542ebaf4b66bafc8358509da2bd507e407644b namespace=k8s.io Apr 30 12:46:17.436158 containerd[1498]: time="2025-04-30T12:46:17.435940770Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:46:17.808636 kubelet[2671]: E0430 12:46:17.808458 2671 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 12:46:17.903760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05a142f5fc490ef4764465e042542ebaf4b66bafc8358509da2bd507e407644b-rootfs.mount: Deactivated successfully. Apr 30 12:46:18.322599 containerd[1498]: time="2025-04-30T12:46:18.320288037Z" level=info msg="CreateContainer within sandbox \"9c4d0c9718a2dd4bcb393384be8f240bfa0fa053817d0d95da70998e96687b69\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 12:46:18.350552 containerd[1498]: time="2025-04-30T12:46:18.349043368Z" level=info msg="CreateContainer within sandbox \"9c4d0c9718a2dd4bcb393384be8f240bfa0fa053817d0d95da70998e96687b69\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"442e24dce30c7add4d6c2596ff8cfe37e28ed18858f60732772b3db4a566c8dc\"" Apr 30 12:46:18.357501 containerd[1498]: time="2025-04-30T12:46:18.357441613Z" level=info msg="StartContainer for \"442e24dce30c7add4d6c2596ff8cfe37e28ed18858f60732772b3db4a566c8dc\"" Apr 30 12:46:18.393986 systemd[1]: Started cri-containerd-442e24dce30c7add4d6c2596ff8cfe37e28ed18858f60732772b3db4a566c8dc.scope - libcontainer container 442e24dce30c7add4d6c2596ff8cfe37e28ed18858f60732772b3db4a566c8dc. 
Apr 30 12:46:18.427073 containerd[1498]: time="2025-04-30T12:46:18.427011519Z" level=info msg="StartContainer for \"442e24dce30c7add4d6c2596ff8cfe37e28ed18858f60732772b3db4a566c8dc\" returns successfully" Apr 30 12:46:18.748479 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Apr 30 12:46:19.351275 kubelet[2671]: I0430 12:46:19.351205 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rwpmx" podStartSLOduration=6.3511814730000005 podStartE2EDuration="6.351181473s" podCreationTimestamp="2025-04-30 12:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:46:19.349233054 +0000 UTC m=+221.806099791" watchObservedRunningTime="2025-04-30 12:46:19.351181473 +0000 UTC m=+221.808048210" Apr 30 12:46:21.644940 systemd-networkd[1375]: lxc_health: Link UP Apr 30 12:46:21.651055 systemd-networkd[1375]: lxc_health: Gained carrier Apr 30 12:46:21.920163 systemd[1]: run-containerd-runc-k8s.io-442e24dce30c7add4d6c2596ff8cfe37e28ed18858f60732772b3db4a566c8dc-runc.hN79PQ.mount: Deactivated successfully. Apr 30 12:46:22.549371 kubelet[2671]: I0430 12:46:22.549090 2671 setters.go:602] "Node became not ready" node="ci-4230-1-1-7-cef124738e" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-04-30T12:46:22Z","lastTransitionTime":"2025-04-30T12:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 30 12:46:23.462883 systemd-networkd[1375]: lxc_health: Gained IPv6LL Apr 30 12:46:24.152239 systemd[1]: run-containerd-runc-k8s.io-442e24dce30c7add4d6c2596ff8cfe37e28ed18858f60732772b3db4a566c8dc-runc.aU6oAH.mount: Deactivated successfully. Apr 30 12:46:30.830961 sshd[4673]: Connection closed by 139.178.89.65 port 53302 Apr 30 12:46:30.830813 sshd-session[4616]: pam_unix(sshd:session): session closed for user core Apr 30 12:46:30.837210 systemd[1]: sshd@23-91.99.82.124:22-139.178.89.65:53302.service: Deactivated successfully. Apr 30 12:46:30.840254 systemd[1]: session-24.scope: Deactivated successfully. Apr 30 12:46:30.841903 systemd-logind[1472]: Session 24 logged out. Waiting for processes to exit. Apr 30 12:46:30.843057 systemd-logind[1472]: Removed session 24. 
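The "Node became not ready" entry above embeds the node's Ready condition as JSON. Below is a minimal sketch that decodes that exact payload into the corresponding k8s.io/api/core/v1 type, assuming only the standard Kubernetes API modules; the JSON string is copied verbatim from the log.

```go
// Sketch only: decode the NodeCondition JSON logged by the kubelet above
// into the corresponding Kubernetes API type.
package main

import (
	"encoding/json"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Copied verbatim from the "Node became not ready" entry above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-04-30T12:46:22Z","lastTransitionTime":"2025-04-30T12:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}`

	var cond corev1.NodeCondition
	if err := json.Unmarshal([]byte(raw), &cond); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s=%s since %s: %s\n",
		cond.Type, cond.Status, cond.LastTransitionTime.Time, cond.Reason)
}
```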
Apr 30 12:46:37.687409 containerd[1498]: time="2025-04-30T12:46:37.687341554Z" level=info msg="StopPodSandbox for \"5423e54cd891e71190a89378635624c96bff6ff4ab180f8c4217b69cfe5e5e2a\"" Apr 30 12:46:37.687917 containerd[1498]: time="2025-04-30T12:46:37.687622437Z" level=info msg="TearDown network for sandbox \"5423e54cd891e71190a89378635624c96bff6ff4ab180f8c4217b69cfe5e5e2a\" successfully" Apr 30 12:46:37.687917 containerd[1498]: time="2025-04-30T12:46:37.687642877Z" level=info msg="StopPodSandbox for \"5423e54cd891e71190a89378635624c96bff6ff4ab180f8c4217b69cfe5e5e2a\" returns successfully" Apr 30 12:46:37.688575 containerd[1498]: time="2025-04-30T12:46:37.688550166Z" level=info msg="RemovePodSandbox for \"5423e54cd891e71190a89378635624c96bff6ff4ab180f8c4217b69cfe5e5e2a\"" Apr 30 12:46:37.688687 containerd[1498]: time="2025-04-30T12:46:37.688601926Z" level=info msg="Forcibly stopping sandbox \"5423e54cd891e71190a89378635624c96bff6ff4ab180f8c4217b69cfe5e5e2a\"" Apr 30 12:46:37.688687 containerd[1498]: time="2025-04-30T12:46:37.688673727Z" level=info msg="TearDown network for sandbox \"5423e54cd891e71190a89378635624c96bff6ff4ab180f8c4217b69cfe5e5e2a\" successfully" Apr 30 12:46:37.692725 containerd[1498]: time="2025-04-30T12:46:37.692612044Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5423e54cd891e71190a89378635624c96bff6ff4ab180f8c4217b69cfe5e5e2a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 12:46:37.692967 containerd[1498]: time="2025-04-30T12:46:37.692737405Z" level=info msg="RemovePodSandbox \"5423e54cd891e71190a89378635624c96bff6ff4ab180f8c4217b69cfe5e5e2a\" returns successfully" Apr 30 12:46:37.693872 containerd[1498]: time="2025-04-30T12:46:37.693477573Z" level=info msg="StopPodSandbox for \"3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33\"" Apr 30 12:46:37.693872 containerd[1498]: time="2025-04-30T12:46:37.693592934Z" level=info msg="TearDown network for sandbox \"3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33\" successfully" Apr 30 12:46:37.693872 containerd[1498]: time="2025-04-30T12:46:37.693610454Z" level=info msg="StopPodSandbox for \"3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33\" returns successfully" Apr 30 12:46:37.694046 containerd[1498]: time="2025-04-30T12:46:37.693930217Z" level=info msg="RemovePodSandbox for \"3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33\"" Apr 30 12:46:37.694046 containerd[1498]: time="2025-04-30T12:46:37.693953297Z" level=info msg="Forcibly stopping sandbox \"3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33\"" Apr 30 12:46:37.694046 containerd[1498]: time="2025-04-30T12:46:37.694011338Z" level=info msg="TearDown network for sandbox \"3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33\" successfully" Apr 30 12:46:37.698862 containerd[1498]: time="2025-04-30T12:46:37.698715822Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
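The StopPodSandbox / RemovePodSandbox entries around this point are the kubelet's periodic cleanup of exited pod sandboxes; the "an error occurred when try to find sandbox: not found" warnings are benign, since the sandbox contents are already gone and only containerd's record remains to be dropped. Below is a minimal sketch of the same two CRI calls, under the same socket-path assumption as the earlier ContainerStatus example, with a sandbox ID copied from the log for illustration.

```go
// Sketch only: stop and remove an exited pod sandbox over the CRI API,
// mirroring the StopPodSandbox/RemovePodSandbox entries in this log.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Sandbox ID copied from the log for illustration only.
	const sandboxID = "3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33"

	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Stopping tears down the sandbox (including its network namespace);
	// removal then drops the runtime's record of it.
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
		log.Fatal(err)
	}
	if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("sandbox removed:", sandboxID)
}
```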
Apr 30 12:46:37.698998 containerd[1498]: time="2025-04-30T12:46:37.698894024Z" level=info msg="RemovePodSandbox \"3d1a304ae25b172e47a24f30a1f496e1545663bf60c9c0834a9b6aac9432fe33\" returns successfully" Apr 30 12:46:46.111751 systemd[1]: cri-containerd-034c8bd0c126281d2bfa9aaa8a85c1f94a428118342aa7691e7f7a8b61eb3f5e.scope: Deactivated successfully. Apr 30 12:46:46.112204 systemd[1]: cri-containerd-034c8bd0c126281d2bfa9aaa8a85c1f94a428118342aa7691e7f7a8b61eb3f5e.scope: Consumed 4.815s CPU time, 54M memory peak. Apr 30 12:46:46.137595 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-034c8bd0c126281d2bfa9aaa8a85c1f94a428118342aa7691e7f7a8b61eb3f5e-rootfs.mount: Deactivated successfully. Apr 30 12:46:46.147874 containerd[1498]: time="2025-04-30T12:46:46.147758729Z" level=info msg="shim disconnected" id=034c8bd0c126281d2bfa9aaa8a85c1f94a428118342aa7691e7f7a8b61eb3f5e namespace=k8s.io Apr 30 12:46:46.147874 containerd[1498]: time="2025-04-30T12:46:46.147865650Z" level=warning msg="cleaning up after shim disconnected" id=034c8bd0c126281d2bfa9aaa8a85c1f94a428118342aa7691e7f7a8b61eb3f5e namespace=k8s.io Apr 30 12:46:46.147874 containerd[1498]: time="2025-04-30T12:46:46.147875090Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:46:46.398763 kubelet[2671]: I0430 12:46:46.397910 2671 scope.go:117] "RemoveContainer" containerID="034c8bd0c126281d2bfa9aaa8a85c1f94a428118342aa7691e7f7a8b61eb3f5e" Apr 30 12:46:46.400597 containerd[1498]: time="2025-04-30T12:46:46.400397191Z" level=info msg="CreateContainer within sandbox \"45e5535c512dafc916240b5ea5a88ce4eacd3928475ba3b56c07f2e610c3807d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 30 12:46:46.417125 containerd[1498]: time="2025-04-30T12:46:46.417063465Z" level=info msg="CreateContainer within sandbox \"45e5535c512dafc916240b5ea5a88ce4eacd3928475ba3b56c07f2e610c3807d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"8437133f30d65dc57811d398f42fc8fccca64933e26bbb9ff4333042ffe9796b\"" Apr 30 12:46:46.417701 containerd[1498]: time="2025-04-30T12:46:46.417679311Z" level=info msg="StartContainer for \"8437133f30d65dc57811d398f42fc8fccca64933e26bbb9ff4333042ffe9796b\"" Apr 30 12:46:46.448788 systemd[1]: Started cri-containerd-8437133f30d65dc57811d398f42fc8fccca64933e26bbb9ff4333042ffe9796b.scope - libcontainer container 8437133f30d65dc57811d398f42fc8fccca64933e26bbb9ff4333042ffe9796b. 
Apr 30 12:46:46.485451 containerd[1498]: time="2025-04-30T12:46:46.485377418Z" level=info msg="StartContainer for \"8437133f30d65dc57811d398f42fc8fccca64933e26bbb9ff4333042ffe9796b\" returns successfully" Apr 30 12:46:46.570217 kubelet[2671]: E0430 12:46:46.570146 2671 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:60542->10.0.0.2:2379: read: connection timed out" Apr 30 12:46:48.394625 update_engine[1473]: I20250430 12:46:48.392760 1473 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 30 12:46:48.394625 update_engine[1473]: I20250430 12:46:48.392849 1473 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 30 12:46:48.394625 update_engine[1473]: I20250430 12:46:48.393162 1473 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 30 12:46:48.394625 update_engine[1473]: I20250430 12:46:48.393872 1473 omaha_request_params.cc:62] Current group set to beta Apr 30 12:46:48.394625 update_engine[1473]: I20250430 12:46:48.394017 1473 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 30 12:46:48.394625 update_engine[1473]: I20250430 12:46:48.394032 1473 update_attempter.cc:643] Scheduling an action processor start. Apr 30 12:46:48.394625 update_engine[1473]: I20250430 12:46:48.394058 1473 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 30 12:46:48.394625 update_engine[1473]: I20250430 12:46:48.394106 1473 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 30 12:46:48.394625 update_engine[1473]: I20250430 12:46:48.394189 1473 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 30 12:46:48.394625 update_engine[1473]: I20250430 12:46:48.394203 1473 omaha_request_action.cc:272] Request: Apr 30 12:46:48.394625 update_engine[1473]: Apr 30 12:46:48.394625 update_engine[1473]: Apr 30 12:46:48.394625 update_engine[1473]: Apr 30 12:46:48.394625 update_engine[1473]: Apr 30 12:46:48.394625 update_engine[1473]: Apr 30 12:46:48.394625 update_engine[1473]: Apr 30 12:46:48.394625 update_engine[1473]: Apr 30 12:46:48.394625 update_engine[1473]: Apr 30 12:46:48.394625 update_engine[1473]: I20250430 12:46:48.394214 1473 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 12:46:48.396221 locksmithd[1513]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 30 12:46:48.397902 update_engine[1473]: I20250430 12:46:48.397825 1473 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 12:46:48.398402 update_engine[1473]: I20250430 12:46:48.398360 1473 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 30 12:46:48.401668 update_engine[1473]: E20250430 12:46:48.401613 1473 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 12:46:48.401779 update_engine[1473]: I20250430 12:46:48.401701 1473 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 30 12:46:50.921611 kubelet[2671]: E0430 12:46:50.921327 2671 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:60366->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4230-1-1-7-cef124738e.183b19649f122dd8 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4230-1-1-7-cef124738e,UID:d81e906803847ef2e55b70d87a69caa1,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4230-1-1-7-cef124738e,},FirstTimestamp:2025-04-30 12:46:40.492391896 +0000 UTC m=+242.949258673,LastTimestamp:2025-04-30 12:46:40.492391896 +0000 UTC m=+242.949258673,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-1-1-7-cef124738e,}" Apr 30 12:46:52.022021 systemd[1]: cri-containerd-cc2569cd29291a9b2e5370812690176c48451c90e96bbf6992db987fc2e0b16e.scope: Deactivated successfully. Apr 30 12:46:52.022327 systemd[1]: cri-containerd-cc2569cd29291a9b2e5370812690176c48451c90e96bbf6992db987fc2e0b16e.scope: Consumed 4.793s CPU time, 22.6M memory peak. Apr 30 12:46:52.045937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc2569cd29291a9b2e5370812690176c48451c90e96bbf6992db987fc2e0b16e-rootfs.mount: Deactivated successfully. 
Apr 30 12:46:52.054123 containerd[1498]: time="2025-04-30T12:46:52.054044262Z" level=info msg="shim disconnected" id=cc2569cd29291a9b2e5370812690176c48451c90e96bbf6992db987fc2e0b16e namespace=k8s.io Apr 30 12:46:52.054123 containerd[1498]: time="2025-04-30T12:46:52.054111102Z" level=warning msg="cleaning up after shim disconnected" id=cc2569cd29291a9b2e5370812690176c48451c90e96bbf6992db987fc2e0b16e namespace=k8s.io Apr 30 12:46:52.054123 containerd[1498]: time="2025-04-30T12:46:52.054124062Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:46:52.417374 kubelet[2671]: I0430 12:46:52.417264 2671 scope.go:117] "RemoveContainer" containerID="cc2569cd29291a9b2e5370812690176c48451c90e96bbf6992db987fc2e0b16e" Apr 30 12:46:52.419970 containerd[1498]: time="2025-04-30T12:46:52.419766399Z" level=info msg="CreateContainer within sandbox \"227f19d8fa3d5aca668652abc1acfe07e44d8246e9b809cd1972f8d27e188bfa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 30 12:46:52.439402 containerd[1498]: time="2025-04-30T12:46:52.439349498Z" level=info msg="CreateContainer within sandbox \"227f19d8fa3d5aca668652abc1acfe07e44d8246e9b809cd1972f8d27e188bfa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"1fff23d7bb7c7626b74b42e525a05062789dd2e72262409c062339809443e031\"" Apr 30 12:46:52.440093 containerd[1498]: time="2025-04-30T12:46:52.439934463Z" level=info msg="StartContainer for \"1fff23d7bb7c7626b74b42e525a05062789dd2e72262409c062339809443e031\"" Apr 30 12:46:52.477952 systemd[1]: Started cri-containerd-1fff23d7bb7c7626b74b42e525a05062789dd2e72262409c062339809443e031.scope - libcontainer container 1fff23d7bb7c7626b74b42e525a05062789dd2e72262409c062339809443e031. Apr 30 12:46:52.516053 containerd[1498]: time="2025-04-30T12:46:52.515989237Z" level=info msg="StartContainer for \"1fff23d7bb7c7626b74b42e525a05062789dd2e72262409c062339809443e031\" returns successfully"
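The kube-controller-manager and kube-scheduler entries above show the kubelet replacing crashed static-pod containers with new ones created in the same sandbox and carrying Attempt:1 in their metadata. The sketch below, under the same CRI socket assumption as the earlier examples, lists running containers and prints the name/attempt pairs in which such restarts are visible.

```go
// Sketch only: list running containers over the CRI API and print the
// name/attempt metadata in which the kube-controller-manager and
// kube-scheduler restarts above (Attempt:1) would show up.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{
			State: &runtimeapi.ContainerStateValue{State: runtimeapi.ContainerState_CONTAINER_RUNNING},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.GetContainers() {
		m := c.GetMetadata()
		// A restarted static-pod container appears here with Attempt > 0.
		fmt.Printf("%s attempt=%d id=%s\n", m.GetName(), m.GetAttempt(), c.GetId())
	}
}
```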