Apr 30 12:41:44.900708 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Apr 30 12:41:44.900742 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Tue Apr 29 22:28:35 -00 2025 Apr 30 12:41:44.900754 kernel: KASLR enabled Apr 30 12:41:44.900761 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Apr 30 12:41:44.900767 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218 Apr 30 12:41:44.900773 kernel: random: crng init done Apr 30 12:41:44.900781 kernel: secureboot: Secure boot disabled Apr 30 12:41:44.900787 kernel: ACPI: Early table checksum verification disabled Apr 30 12:41:44.900794 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Apr 30 12:41:44.900802 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Apr 30 12:41:44.900809 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 12:41:44.900815 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 12:41:44.900826 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 12:41:44.900833 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 12:41:44.900841 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 12:41:44.900849 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 12:41:44.900856 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 12:41:44.900863 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 12:41:44.900870 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 12:41:44.900877 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Apr 30 12:41:44.900884 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Apr 30 12:41:44.900890 kernel: NUMA: Failed to initialise from firmware Apr 30 12:41:44.900897 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Apr 30 12:41:44.900904 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff] Apr 30 12:41:44.900910 kernel: Zone ranges: Apr 30 12:41:44.900919 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Apr 30 12:41:44.900926 kernel: DMA32 empty Apr 30 12:41:44.900933 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Apr 30 12:41:44.900940 kernel: Movable zone start for each node Apr 30 12:41:44.900947 kernel: Early memory node ranges Apr 30 12:41:44.900953 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff] Apr 30 12:41:44.900960 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff] Apr 30 12:41:44.900967 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff] Apr 30 12:41:44.900974 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Apr 30 12:41:44.900981 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Apr 30 12:41:44.900988 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Apr 30 12:41:44.900995 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Apr 30 12:41:44.901004 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Apr 30 12:41:44.901011 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Apr 30 12:41:44.901018 kernel: Initmem setup node 0 
[mem 0x0000000040000000-0x0000000139ffffff] Apr 30 12:41:44.901028 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Apr 30 12:41:44.901035 kernel: psci: probing for conduit method from ACPI. Apr 30 12:41:44.901043 kernel: psci: PSCIv1.1 detected in firmware. Apr 30 12:41:44.901052 kernel: psci: Using standard PSCI v0.2 function IDs Apr 30 12:41:44.901059 kernel: psci: Trusted OS migration not required Apr 30 12:41:44.901066 kernel: psci: SMC Calling Convention v1.1 Apr 30 12:41:44.901073 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Apr 30 12:41:44.901080 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Apr 30 12:41:44.901088 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Apr 30 12:41:44.901096 kernel: pcpu-alloc: [0] 0 [0] 1 Apr 30 12:41:44.901103 kernel: Detected PIPT I-cache on CPU0 Apr 30 12:41:44.901110 kernel: CPU features: detected: GIC system register CPU interface Apr 30 12:41:44.901117 kernel: CPU features: detected: Hardware dirty bit management Apr 30 12:41:44.901126 kernel: CPU features: detected: Spectre-v4 Apr 30 12:41:44.901134 kernel: CPU features: detected: Spectre-BHB Apr 30 12:41:44.901141 kernel: CPU features: kernel page table isolation forced ON by KASLR Apr 30 12:41:44.901148 kernel: CPU features: detected: Kernel page table isolation (KPTI) Apr 30 12:41:44.901155 kernel: CPU features: detected: ARM erratum 1418040 Apr 30 12:41:44.901162 kernel: CPU features: detected: SSBS not fully self-synchronizing Apr 30 12:41:44.901169 kernel: alternatives: applying boot alternatives Apr 30 12:41:44.901178 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=984055eb0c340c9cf0fb51b368030ed72e75b7f2e065edc13766888ef0b42074 Apr 30 12:41:44.901185 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 30 12:41:44.901193 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 30 12:41:44.901200 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 12:41:44.901209 kernel: Fallback order for Node 0: 0 Apr 30 12:41:44.901216 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Apr 30 12:41:44.901224 kernel: Policy zone: Normal Apr 30 12:41:44.901231 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 30 12:41:44.901238 kernel: software IO TLB: area num 2. Apr 30 12:41:44.901245 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Apr 30 12:41:44.901253 kernel: Memory: 3883832K/4096000K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38336K init, 897K bss, 212168K reserved, 0K cma-reserved) Apr 30 12:41:44.901260 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 30 12:41:44.901268 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 30 12:41:44.901276 kernel: rcu: RCU event tracing is enabled. Apr 30 12:41:44.901283 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 30 12:41:44.901291 kernel: Trampoline variant of Tasks RCU enabled. Apr 30 12:41:44.901300 kernel: Tracing variant of Tasks RCU enabled. 
Apr 30 12:41:44.901308 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 30 12:41:44.901316 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 30 12:41:44.901323 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Apr 30 12:41:44.901330 kernel: GICv3: 256 SPIs implemented Apr 30 12:41:44.901337 kernel: GICv3: 0 Extended SPIs implemented Apr 30 12:41:44.901394 kernel: Root IRQ handler: gic_handle_irq Apr 30 12:41:44.901403 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Apr 30 12:41:44.901411 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Apr 30 12:41:44.901417 kernel: ITS [mem 0x08080000-0x0809ffff] Apr 30 12:41:44.901425 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Apr 30 12:41:44.901435 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Apr 30 12:41:44.901675 kernel: GICv3: using LPI property table @0x00000001000e0000 Apr 30 12:41:44.901684 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Apr 30 12:41:44.901691 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 30 12:41:44.901698 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 30 12:41:44.901706 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Apr 30 12:41:44.901713 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Apr 30 12:41:44.901720 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Apr 30 12:41:44.901728 kernel: Console: colour dummy device 80x25 Apr 30 12:41:44.901736 kernel: ACPI: Core revision 20230628 Apr 30 12:41:44.901746 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Apr 30 12:41:44.901756 kernel: pid_max: default: 32768 minimum: 301 Apr 30 12:41:44.901764 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 30 12:41:44.901771 kernel: landlock: Up and running. Apr 30 12:41:44.901778 kernel: SELinux: Initializing. Apr 30 12:41:44.901786 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 30 12:41:44.901794 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 30 12:41:44.901801 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 12:41:44.901809 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 30 12:41:44.901816 kernel: rcu: Hierarchical SRCU implementation. Apr 30 12:41:44.901826 kernel: rcu: Max phase no-delay instances is 400. Apr 30 12:41:44.901833 kernel: Platform MSI: ITS@0x8080000 domain created Apr 30 12:41:44.901840 kernel: PCI/MSI: ITS@0x8080000 domain created Apr 30 12:41:44.901848 kernel: Remapping and enabling EFI services. Apr 30 12:41:44.901855 kernel: smp: Bringing up secondary CPUs ... 
Apr 30 12:41:44.901863 kernel: Detected PIPT I-cache on CPU1 Apr 30 12:41:44.901870 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Apr 30 12:41:44.901878 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Apr 30 12:41:44.901886 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 30 12:41:44.901895 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Apr 30 12:41:44.901902 kernel: smp: Brought up 1 node, 2 CPUs Apr 30 12:41:44.901916 kernel: SMP: Total of 2 processors activated. Apr 30 12:41:44.901926 kernel: CPU features: detected: 32-bit EL0 Support Apr 30 12:41:44.901934 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Apr 30 12:41:44.901942 kernel: CPU features: detected: Common not Private translations Apr 30 12:41:44.901949 kernel: CPU features: detected: CRC32 instructions Apr 30 12:41:44.901957 kernel: CPU features: detected: Enhanced Virtualization Traps Apr 30 12:41:44.901965 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Apr 30 12:41:44.901975 kernel: CPU features: detected: LSE atomic instructions Apr 30 12:41:44.901983 kernel: CPU features: detected: Privileged Access Never Apr 30 12:41:44.901990 kernel: CPU features: detected: RAS Extension Support Apr 30 12:41:44.901998 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Apr 30 12:41:44.902006 kernel: CPU: All CPU(s) started at EL1 Apr 30 12:41:44.902014 kernel: alternatives: applying system-wide alternatives Apr 30 12:41:44.902022 kernel: devtmpfs: initialized Apr 30 12:41:44.902030 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 30 12:41:44.902040 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 30 12:41:44.902048 kernel: pinctrl core: initialized pinctrl subsystem Apr 30 12:41:44.902056 kernel: SMBIOS 3.0.0 present. Apr 30 12:41:44.902064 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Apr 30 12:41:44.902072 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 30 12:41:44.902079 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Apr 30 12:41:44.902087 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Apr 30 12:41:44.902096 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Apr 30 12:41:44.902103 kernel: audit: initializing netlink subsys (disabled) Apr 30 12:41:44.902113 kernel: audit: type=2000 audit(0.011:1): state=initialized audit_enabled=0 res=1 Apr 30 12:41:44.902121 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 30 12:41:44.902130 kernel: cpuidle: using governor menu Apr 30 12:41:44.902138 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Apr 30 12:41:44.902146 kernel: ASID allocator initialised with 32768 entries Apr 30 12:41:44.902156 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 30 12:41:44.902164 kernel: Serial: AMBA PL011 UART driver Apr 30 12:41:44.902172 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Apr 30 12:41:44.902180 kernel: Modules: 0 pages in range for non-PLT usage Apr 30 12:41:44.902190 kernel: Modules: 509264 pages in range for PLT usage Apr 30 12:41:44.902199 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 30 12:41:44.902208 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Apr 30 12:41:44.902216 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Apr 30 12:41:44.902224 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Apr 30 12:41:44.902232 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 30 12:41:44.902240 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Apr 30 12:41:44.902248 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Apr 30 12:41:44.902256 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Apr 30 12:41:44.902266 kernel: ACPI: Added _OSI(Module Device) Apr 30 12:41:44.902273 kernel: ACPI: Added _OSI(Processor Device) Apr 30 12:41:44.902281 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 30 12:41:44.902289 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 30 12:41:44.902297 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 30 12:41:44.902305 kernel: ACPI: Interpreter enabled Apr 30 12:41:44.902312 kernel: ACPI: Using GIC for interrupt routing Apr 30 12:41:44.902320 kernel: ACPI: MCFG table detected, 1 entries Apr 30 12:41:44.902328 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Apr 30 12:41:44.902338 kernel: printk: console [ttyAMA0] enabled Apr 30 12:41:44.902354 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 30 12:41:44.902514 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 30 12:41:44.904881 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Apr 30 12:41:44.904986 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Apr 30 12:41:44.905059 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Apr 30 12:41:44.905130 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Apr 30 12:41:44.905145 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Apr 30 12:41:44.905153 kernel: PCI host bridge to bus 0000:00 Apr 30 12:41:44.905235 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Apr 30 12:41:44.905301 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Apr 30 12:41:44.905390 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Apr 30 12:41:44.905457 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 30 12:41:44.905545 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Apr 30 12:41:44.906714 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Apr 30 12:41:44.906798 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Apr 30 12:41:44.906874 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Apr 30 12:41:44.906957 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Apr 30 12:41:44.907031 kernel: pci 
0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Apr 30 12:41:44.907111 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Apr 30 12:41:44.907189 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Apr 30 12:41:44.907269 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Apr 30 12:41:44.907341 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Apr 30 12:41:44.907539 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Apr 30 12:41:44.908710 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Apr 30 12:41:44.908806 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Apr 30 12:41:44.908897 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Apr 30 12:41:44.908987 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Apr 30 12:41:44.909061 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Apr 30 12:41:44.909141 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Apr 30 12:41:44.909211 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Apr 30 12:41:44.909289 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Apr 30 12:41:44.909414 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Apr 30 12:41:44.909508 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Apr 30 12:41:44.910626 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Apr 30 12:41:44.910731 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Apr 30 12:41:44.910807 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] Apr 30 12:41:44.910889 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Apr 30 12:41:44.910964 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Apr 30 12:41:44.911041 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Apr 30 12:41:44.911114 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Apr 30 12:41:44.911196 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Apr 30 12:41:44.911270 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Apr 30 12:41:44.911397 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Apr 30 12:41:44.911484 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Apr 30 12:41:44.912643 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Apr 30 12:41:44.912768 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Apr 30 12:41:44.912846 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Apr 30 12:41:44.912934 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Apr 30 12:41:44.913014 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff] Apr 30 12:41:44.913095 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Apr 30 12:41:44.913179 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Apr 30 12:41:44.913263 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Apr 30 12:41:44.913338 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Apr 30 12:41:44.913499 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Apr 30 12:41:44.913958 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Apr 30 12:41:44.914078 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Apr 30 12:41:44.914159 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Apr 30 12:41:44.914253 kernel: pci 0000:00:02.0: bridge 
window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Apr 30 12:41:44.914327 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Apr 30 12:41:44.914432 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Apr 30 12:41:44.914514 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Apr 30 12:41:44.915340 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Apr 30 12:41:44.915454 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Apr 30 12:41:44.915531 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Apr 30 12:41:44.915635 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Apr 30 12:41:44.915784 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Apr 30 12:41:44.915879 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Apr 30 12:41:44.915950 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Apr 30 12:41:44.916022 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Apr 30 12:41:44.916101 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Apr 30 12:41:44.916170 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Apr 30 12:41:44.916243 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Apr 30 12:41:44.916322 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Apr 30 12:41:44.916459 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Apr 30 12:41:44.916538 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Apr 30 12:41:44.917718 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Apr 30 12:41:44.917807 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Apr 30 12:41:44.917881 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Apr 30 12:41:44.917956 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Apr 30 12:41:44.918035 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Apr 30 12:41:44.918108 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Apr 30 12:41:44.918183 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Apr 30 12:41:44.918259 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Apr 30 12:41:44.918339 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Apr 30 12:41:44.918447 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 
0x10000000-0x101fffff] Apr 30 12:41:44.918522 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] Apr 30 12:41:44.919726 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Apr 30 12:41:44.919815 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Apr 30 12:41:44.919891 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Apr 30 12:41:44.919962 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Apr 30 12:41:44.920038 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Apr 30 12:41:44.920109 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Apr 30 12:41:44.920186 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Apr 30 12:41:44.920263 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Apr 30 12:41:44.920338 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Apr 30 12:41:44.920438 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Apr 30 12:41:44.920516 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Apr 30 12:41:44.920616 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Apr 30 12:41:44.920694 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Apr 30 12:41:44.920766 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Apr 30 12:41:44.920847 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Apr 30 12:41:44.920919 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Apr 30 12:41:44.920995 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Apr 30 12:41:44.921066 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Apr 30 12:41:44.921141 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Apr 30 12:41:44.921211 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Apr 30 12:41:44.921280 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Apr 30 12:41:44.921415 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Apr 30 12:41:44.921507 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Apr 30 12:41:44.923651 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Apr 30 12:41:44.923757 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Apr 30 12:41:44.923830 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Apr 30 12:41:44.923904 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Apr 30 12:41:44.923977 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Apr 30 12:41:44.924050 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Apr 30 12:41:44.924127 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Apr 30 12:41:44.924201 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Apr 30 12:41:44.924273 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Apr 30 12:41:44.924386 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Apr 30 12:41:44.924471 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Apr 30 12:41:44.924549 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Apr 30 12:41:44.926702 kernel: pci 0000:00:03.0: BAR 13: assigned [io 
0x9000-0x9fff] Apr 30 12:41:44.926790 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Apr 30 12:41:44.926879 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Apr 30 12:41:44.926956 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Apr 30 12:41:44.927028 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Apr 30 12:41:44.927103 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Apr 30 12:41:44.927173 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Apr 30 12:41:44.927242 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Apr 30 12:41:44.927311 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Apr 30 12:41:44.927435 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Apr 30 12:41:44.927522 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Apr 30 12:41:44.927646 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Apr 30 12:41:44.927720 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Apr 30 12:41:44.927790 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Apr 30 12:41:44.927871 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Apr 30 12:41:44.927945 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Apr 30 12:41:44.928016 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Apr 30 12:41:44.928087 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Apr 30 12:41:44.928156 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Apr 30 12:41:44.928230 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Apr 30 12:41:44.928321 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Apr 30 12:41:44.928424 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Apr 30 12:41:44.928518 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Apr 30 12:41:44.928619 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Apr 30 12:41:44.928708 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Apr 30 12:41:44.928788 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Apr 30 12:41:44.928863 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff] Apr 30 12:41:44.928938 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Apr 30 12:41:44.929010 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Apr 30 12:41:44.929079 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Apr 30 12:41:44.929151 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Apr 30 12:41:44.929230 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Apr 30 12:41:44.929303 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Apr 30 12:41:44.929391 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Apr 30 12:41:44.929465 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Apr 30 12:41:44.929536 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Apr 30 12:41:44.931750 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Apr 30 12:41:44.931845 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Apr 30 12:41:44.931926 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Apr 30 12:41:44.932000 kernel: pci 
0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Apr 30 12:41:44.932074 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Apr 30 12:41:44.932144 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Apr 30 12:41:44.932216 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Apr 30 12:41:44.932286 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Apr 30 12:41:44.932411 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Apr 30 12:41:44.932495 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Apr 30 12:41:44.932590 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Apr 30 12:41:44.932669 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Apr 30 12:41:44.932745 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Apr 30 12:41:44.932815 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Apr 30 12:41:44.932886 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Apr 30 12:41:44.932956 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Apr 30 12:41:44.933030 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Apr 30 12:41:44.933094 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Apr 30 12:41:44.933162 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Apr 30 12:41:44.933241 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Apr 30 12:41:44.933306 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Apr 30 12:41:44.933391 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Apr 30 12:41:44.933469 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Apr 30 12:41:44.933535 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Apr 30 12:41:44.937842 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Apr 30 12:41:44.937972 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Apr 30 12:41:44.938046 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Apr 30 12:41:44.938113 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Apr 30 12:41:44.938190 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Apr 30 12:41:44.938258 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Apr 30 12:41:44.938325 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Apr 30 12:41:44.938425 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Apr 30 12:41:44.938496 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Apr 30 12:41:44.938577 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Apr 30 12:41:44.938657 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Apr 30 12:41:44.938736 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Apr 30 12:41:44.938806 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Apr 30 12:41:44.938881 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Apr 30 12:41:44.938947 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Apr 30 12:41:44.939013 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Apr 30 12:41:44.939087 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Apr 30 12:41:44.939155 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Apr 30 12:41:44.939222 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Apr 30 12:41:44.939299 kernel: 
pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Apr 30 12:41:44.939410 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Apr 30 12:41:44.939482 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Apr 30 12:41:44.939493 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Apr 30 12:41:44.939502 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Apr 30 12:41:44.939510 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Apr 30 12:41:44.939523 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Apr 30 12:41:44.939533 kernel: iommu: Default domain type: Translated Apr 30 12:41:44.939542 kernel: iommu: DMA domain TLB invalidation policy: strict mode Apr 30 12:41:44.939550 kernel: efivars: Registered efivars operations Apr 30 12:41:44.939558 kernel: vgaarb: loaded Apr 30 12:41:44.940617 kernel: clocksource: Switched to clocksource arch_sys_counter Apr 30 12:41:44.940629 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 12:41:44.940638 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 12:41:44.940647 kernel: pnp: PnP ACPI init Apr 30 12:41:44.940790 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Apr 30 12:41:44.940804 kernel: pnp: PnP ACPI: found 1 devices Apr 30 12:41:44.940813 kernel: NET: Registered PF_INET protocol family Apr 30 12:41:44.940822 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 30 12:41:44.940830 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 30 12:41:44.940839 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 12:41:44.940848 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 30 12:41:44.940856 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 30 12:41:44.940864 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 30 12:41:44.940876 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 30 12:41:44.940884 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 30 12:41:44.940892 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 12:41:44.940982 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Apr 30 12:41:44.940995 kernel: PCI: CLS 0 bytes, default 64 Apr 30 12:41:44.941003 kernel: kvm [1]: HYP mode not available Apr 30 12:41:44.941011 kernel: Initialise system trusted keyrings Apr 30 12:41:44.941020 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 30 12:41:44.941028 kernel: Key type asymmetric registered Apr 30 12:41:44.941038 kernel: Asymmetric key parser 'x509' registered Apr 30 12:41:44.942668 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 30 12:41:44.942689 kernel: io scheduler mq-deadline registered Apr 30 12:41:44.942698 kernel: io scheduler kyber registered Apr 30 12:41:44.942707 kernel: io scheduler bfq registered Apr 30 12:41:44.942716 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Apr 30 12:41:44.942850 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Apr 30 12:41:44.942931 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Apr 30 12:41:44.943013 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 12:41:44.943094 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Apr 30 12:41:44.943167 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 51 Apr 30 12:41:44.943240 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 12:41:44.943318 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Apr 30 12:41:44.943411 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Apr 30 12:41:44.943492 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 12:41:44.943594 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Apr 30 12:41:44.943669 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Apr 30 12:41:44.943747 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 12:41:44.943833 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Apr 30 12:41:44.943916 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Apr 30 12:41:44.943999 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 12:41:44.944087 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Apr 30 12:41:44.944170 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Apr 30 12:41:44.944255 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 12:41:44.944340 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Apr 30 12:41:44.944486 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Apr 30 12:41:44.946038 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 12:41:44.946152 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Apr 30 12:41:44.946229 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Apr 30 12:41:44.946302 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 12:41:44.946314 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Apr 30 12:41:44.946412 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Apr 30 12:41:44.946497 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Apr 30 12:41:44.946593 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 12:41:44.946606 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Apr 30 12:41:44.946615 kernel: ACPI: button: Power Button [PWRB] Apr 30 12:41:44.946623 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Apr 30 12:41:44.946703 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Apr 30 12:41:44.946783 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Apr 30 12:41:44.946795 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 12:41:44.946807 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Apr 30 12:41:44.946884 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Apr 30 12:41:44.946896 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Apr 30 12:41:44.946905 kernel: thunder_xcv, ver 1.0 Apr 30 12:41:44.946913 kernel: thunder_bgx, ver 1.0 Apr 30 12:41:44.946921 kernel: nicpf, ver 1.0 Apr 30 12:41:44.946930 kernel: nicvf, ver 
1.0 Apr 30 12:41:44.947023 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 30 12:41:44.947099 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-04-30T12:41:44 UTC (1746016904) Apr 30 12:41:44.947110 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 30 12:41:44.947119 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Apr 30 12:41:44.947127 kernel: watchdog: Delayed init of the lockup detector failed: -19 Apr 30 12:41:44.947136 kernel: watchdog: Hard watchdog permanently disabled Apr 30 12:41:44.947144 kernel: NET: Registered PF_INET6 protocol family Apr 30 12:41:44.947153 kernel: Segment Routing with IPv6 Apr 30 12:41:44.947161 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 12:41:44.947169 kernel: NET: Registered PF_PACKET protocol family Apr 30 12:41:44.947180 kernel: Key type dns_resolver registered Apr 30 12:41:44.947188 kernel: registered taskstats version 1 Apr 30 12:41:44.947196 kernel: Loading compiled-in X.509 certificates Apr 30 12:41:44.947204 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4e3d8be893bce81adbd52ab54fa98214a1a14a2e' Apr 30 12:41:44.947213 kernel: Key type .fscrypt registered Apr 30 12:41:44.947221 kernel: Key type fscrypt-provisioning registered Apr 30 12:41:44.947229 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 30 12:41:44.947237 kernel: ima: Allocated hash algorithm: sha1 Apr 30 12:41:44.947246 kernel: ima: No architecture policies found Apr 30 12:41:44.947255 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 30 12:41:44.947265 kernel: clk: Disabling unused clocks Apr 30 12:41:44.947274 kernel: Freeing unused kernel memory: 38336K Apr 30 12:41:44.947281 kernel: Run /init as init process Apr 30 12:41:44.947290 kernel: with arguments: Apr 30 12:41:44.947299 kernel: /init Apr 30 12:41:44.947306 kernel: with environment: Apr 30 12:41:44.947314 kernel: HOME=/ Apr 30 12:41:44.947322 kernel: TERM=linux Apr 30 12:41:44.947332 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 12:41:44.947341 systemd[1]: Successfully made /usr/ read-only. Apr 30 12:41:44.947369 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 30 12:41:44.947379 systemd[1]: Detected virtualization kvm. Apr 30 12:41:44.947387 systemd[1]: Detected architecture arm64. Apr 30 12:41:44.947396 systemd[1]: Running in initrd. Apr 30 12:41:44.947405 systemd[1]: No hostname configured, using default hostname. Apr 30 12:41:44.947417 systemd[1]: Hostname set to . Apr 30 12:41:44.947426 systemd[1]: Initializing machine ID from VM UUID. Apr 30 12:41:44.947434 systemd[1]: Queued start job for default target initrd.target. Apr 30 12:41:44.947443 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 12:41:44.947452 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 12:41:44.947463 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 30 12:41:44.947472 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Apr 30 12:41:44.947481 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 12:41:44.947494 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 12:41:44.947505 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 12:41:44.947514 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 12:41:44.947522 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 12:41:44.947532 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 12:41:44.947541 systemd[1]: Reached target paths.target - Path Units. Apr 30 12:41:44.947549 systemd[1]: Reached target slices.target - Slice Units. Apr 30 12:41:44.950091 systemd[1]: Reached target swap.target - Swaps. Apr 30 12:41:44.950117 systemd[1]: Reached target timers.target - Timer Units. Apr 30 12:41:44.950128 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 12:41:44.950137 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 12:41:44.950146 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 12:41:44.950156 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 30 12:41:44.950165 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 12:41:44.950174 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 12:41:44.950183 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 12:41:44.950199 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 12:41:44.950208 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 12:41:44.950217 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 12:41:44.950226 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 12:41:44.950235 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 12:41:44.950244 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 12:41:44.950252 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 12:41:44.950261 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:41:44.950272 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 12:41:44.950320 systemd-journald[237]: Collecting audit messages is disabled. Apr 30 12:41:44.950354 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 12:41:44.950367 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 12:41:44.950377 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 12:41:44.950386 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:41:44.950395 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 12:41:44.950405 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 12:41:44.950415 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Apr 30 12:41:44.950424 kernel: Bridge firewalling registered Apr 30 12:41:44.950434 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 12:41:44.950443 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 12:41:44.950452 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 12:41:44.950462 systemd-journald[237]: Journal started Apr 30 12:41:44.950482 systemd-journald[237]: Runtime Journal (/run/log/journal/b6b65da6779246da94df4fc48f58ce4a) is 8M, max 76.6M, 68.6M free. Apr 30 12:41:44.916819 systemd-modules-load[238]: Inserted module 'overlay' Apr 30 12:41:44.941257 systemd-modules-load[238]: Inserted module 'br_netfilter' Apr 30 12:41:44.964613 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 12:41:44.968263 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 12:41:44.972585 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 12:41:44.972781 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 12:41:44.979963 dracut-cmdline[259]: dracut-dracut-053 Apr 30 12:41:44.983090 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 12:41:44.986189 dracut-cmdline[259]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=984055eb0c340c9cf0fb51b368030ed72e75b7f2e065edc13766888ef0b42074 Apr 30 12:41:44.997617 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 12:41:45.008581 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 12:41:45.014222 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 12:41:45.055117 systemd-resolved[299]: Positive Trust Anchors: Apr 30 12:41:45.055944 systemd-resolved[299]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 12:41:45.055980 systemd-resolved[299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 12:41:45.066005 systemd-resolved[299]: Defaulting to hostname 'linux'. Apr 30 12:41:45.067881 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 12:41:45.069443 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 12:41:45.079626 kernel: SCSI subsystem initialized Apr 30 12:41:45.084599 kernel: Loading iSCSI transport class v2.0-870. Apr 30 12:41:45.094588 kernel: iscsi: registered transport (tcp) Apr 30 12:41:45.106593 kernel: iscsi: registered transport (qla4xxx) Apr 30 12:41:45.106651 kernel: QLogic iSCSI HBA Driver Apr 30 12:41:45.148867 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Apr 30 12:41:45.156822 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 12:41:45.172679 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 30 12:41:45.172741 kernel: device-mapper: uevent: version 1.0.3 Apr 30 12:41:45.173633 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 12:41:45.222618 kernel: raid6: neonx8 gen() 15663 MB/s Apr 30 12:41:45.239630 kernel: raid6: neonx4 gen() 15750 MB/s Apr 30 12:41:45.256605 kernel: raid6: neonx2 gen() 13208 MB/s Apr 30 12:41:45.273627 kernel: raid6: neonx1 gen() 10489 MB/s Apr 30 12:41:45.290652 kernel: raid6: int64x8 gen() 6758 MB/s Apr 30 12:41:45.307636 kernel: raid6: int64x4 gen() 7327 MB/s Apr 30 12:41:45.324623 kernel: raid6: int64x2 gen() 6083 MB/s Apr 30 12:41:45.341623 kernel: raid6: int64x1 gen() 5037 MB/s Apr 30 12:41:45.341697 kernel: raid6: using algorithm neonx4 gen() 15750 MB/s Apr 30 12:41:45.358614 kernel: raid6: .... xor() 12364 MB/s, rmw enabled Apr 30 12:41:45.358692 kernel: raid6: using neon recovery algorithm Apr 30 12:41:45.362634 kernel: xor: measuring software checksum speed Apr 30 12:41:45.363907 kernel: 8regs : 21533 MB/sec Apr 30 12:41:45.363977 kernel: 32regs : 21710 MB/sec Apr 30 12:41:45.364009 kernel: arm64_neon : 27917 MB/sec Apr 30 12:41:45.364037 kernel: xor: using function: arm64_neon (27917 MB/sec) Apr 30 12:41:45.414644 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 12:41:45.426958 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 12:41:45.434871 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 12:41:45.450066 systemd-udevd[457]: Using default interface naming scheme 'v255'. Apr 30 12:41:45.454190 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 12:41:45.464002 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 12:41:45.477137 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation Apr 30 12:41:45.511205 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 12:41:45.516775 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 12:41:45.569589 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 12:41:45.578771 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 12:41:45.598328 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 12:41:45.599527 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 12:41:45.602024 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 12:41:45.603131 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 12:41:45.609857 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 12:41:45.630156 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Apr 30 12:41:45.682636 kernel: scsi host0: Virtio SCSI HBA Apr 30 12:41:45.685086 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 30 12:41:45.685170 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 30 12:41:45.709788 kernel: ACPI: bus type USB registered Apr 30 12:41:45.709850 kernel: usbcore: registered new interface driver usbfs Apr 30 12:41:45.709865 kernel: usbcore: registered new interface driver hub Apr 30 12:41:45.710575 kernel: usbcore: registered new device driver usb Apr 30 12:41:45.724129 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 30 12:41:45.730091 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Apr 30 12:41:45.730239 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 30 12:41:45.730433 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 30 12:41:45.730765 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Apr 30 12:41:45.730869 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Apr 30 12:41:45.730954 kernel: hub 1-0:1.0: USB hub found Apr 30 12:41:45.731058 kernel: hub 1-0:1.0: 4 ports detected Apr 30 12:41:45.731138 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Apr 30 12:41:45.731231 kernel: hub 2-0:1.0: USB hub found Apr 30 12:41:45.731718 kernel: hub 2-0:1.0: 4 ports detected Apr 30 12:41:45.724912 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 12:41:45.725026 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 12:41:45.728613 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 12:41:45.730544 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 12:41:45.730736 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:41:45.733447 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:41:45.741917 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:41:45.750584 kernel: sr 0:0:0:0: Power-on or device reset occurred Apr 30 12:41:45.752857 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Apr 30 12:41:45.752991 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 30 12:41:45.753002 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Apr 30 12:41:45.758217 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:41:45.763837 kernel: sd 0:0:0:1: Power-on or device reset occurred Apr 30 12:41:45.772122 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Apr 30 12:41:45.772238 kernel: sd 0:0:0:1: [sda] Write Protect is off Apr 30 12:41:45.772323 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Apr 30 12:41:45.772429 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 30 12:41:45.772521 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 30 12:41:45.772532 kernel: GPT:17805311 != 80003071 Apr 30 12:41:45.772549 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 30 12:41:45.772582 kernel: GPT:17805311 != 80003071 Apr 30 12:41:45.772594 kernel: GPT: Use GNU Parted to correct GPT errors. 
Apr 30 12:41:45.772604 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 12:41:45.772614 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Apr 30 12:41:45.764189 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 12:41:45.804667 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 12:41:45.823687 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (504) Apr 30 12:41:45.828596 kernel: BTRFS: device fsid 8f86a166-b3d6-49f7-a49d-597eaeb9f5e5 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (509) Apr 30 12:41:45.838400 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 30 12:41:45.846993 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 30 12:41:45.862676 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 30 12:41:45.863392 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 30 12:41:45.876572 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 30 12:41:45.884845 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 12:41:45.898617 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 12:41:45.898689 disk-uuid[577]: Primary Header is updated. Apr 30 12:41:45.898689 disk-uuid[577]: Secondary Entries is updated. Apr 30 12:41:45.898689 disk-uuid[577]: Secondary Header is updated. Apr 30 12:41:45.969590 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 30 12:41:46.211682 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Apr 30 12:41:46.346859 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Apr 30 12:41:46.346922 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 30 12:41:46.348269 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Apr 30 12:41:46.402789 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Apr 30 12:41:46.403199 kernel: usbcore: registered new interface driver usbhid Apr 30 12:41:46.403605 kernel: usbhid: USB HID core driver Apr 30 12:41:46.915706 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 12:41:46.917704 disk-uuid[578]: The operation has completed successfully. Apr 30 12:41:46.975613 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 12:41:46.975715 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 12:41:47.005023 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 12:41:47.011079 sh[593]: Success Apr 30 12:41:47.028837 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 30 12:41:47.093869 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 12:41:47.095623 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 30 12:41:47.107844 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
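The GPT messages above indicate the disk image was written for a 17805311-sector device and then grown to 80003072 sectors, so the backup (alternate) GPT header is no longer at the last sector; disk-uuid[577] subsequently rewrites the primary and secondary headers. For illustration, a small sketch (assuming read access to the block device and the 512-byte logical sectors reported for sda) that parses the primary GPT header at LBA 1 and checks where it expects the backup header to be:

```python
import struct

SECTOR = 512  # assumption: 512-byte logical sectors, as reported for sda above

def check_backup_gpt(dev_path: str) -> None:
    with open(dev_path, "rb") as dev:
        dev.seek(0, 2)
        last_lba = dev.tell() // SECTOR - 1   # LBA of the final sector
        dev.seek(1 * SECTOR)                  # primary GPT header lives at LBA 1
        header = dev.read(92)
    if header[:8] != b"EFI PART":
        raise ValueError("no GPT signature at LBA 1")
    # Offsets per the UEFI spec: current header LBA at byte 24, alternate header LBA at byte 32.
    current_lba, alternate_lba = struct.unpack_from("<QQ", header, 24)
    if alternate_lba != last_lba:
        print(f"backup header expected at LBA {alternate_lba}, disk ends at LBA {last_lba}")
    else:
        print("backup GPT header is at the end of the disk")

# Usage (requires read access to the device):
#   check_backup_gpt("/dev/sda")
```

Relocating the backup structures to the true end of the disk is what GNU Parted (as the kernel message suggests) or `sgdisk -e` does.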
Apr 30 12:41:47.121943 kernel: BTRFS info (device dm-0): first mount of filesystem 8f86a166-b3d6-49f7-a49d-597eaeb9f5e5 Apr 30 12:41:47.122024 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 30 12:41:47.122083 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 12:41:47.122986 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 12:41:47.123060 kernel: BTRFS info (device dm-0): using free space tree Apr 30 12:41:47.129640 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 30 12:41:47.132444 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 12:41:47.133177 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 12:41:47.147022 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 12:41:47.151745 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 12:41:47.171666 kernel: BTRFS info (device sda6): first mount of filesystem 8d8cccbd-965f-4336-afa9-06a510e76633 Apr 30 12:41:47.171728 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 12:41:47.171743 kernel: BTRFS info (device sda6): using free space tree Apr 30 12:41:47.176608 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 12:41:47.176681 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 12:41:47.182617 kernel: BTRFS info (device sda6): last unmount of filesystem 8d8cccbd-965f-4336-afa9-06a510e76633 Apr 30 12:41:47.186639 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 12:41:47.192808 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 12:41:47.288444 ignition[673]: Ignition 2.20.0 Apr 30 12:41:47.288455 ignition[673]: Stage: fetch-offline Apr 30 12:41:47.288491 ignition[673]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:41:47.288499 ignition[673]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 12:41:47.288681 ignition[673]: parsed url from cmdline: "" Apr 30 12:41:47.291604 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 12:41:47.288684 ignition[673]: no config URL provided Apr 30 12:41:47.288689 ignition[673]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 12:41:47.288696 ignition[673]: no config at "/usr/lib/ignition/user.ign" Apr 30 12:41:47.288701 ignition[673]: failed to fetch config: resource requires networking Apr 30 12:41:47.289171 ignition[673]: Ignition finished successfully Apr 30 12:41:47.308539 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 12:41:47.313769 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 12:41:47.350107 systemd-networkd[778]: lo: Link UP Apr 30 12:41:47.350120 systemd-networkd[778]: lo: Gained carrier Apr 30 12:41:47.351890 systemd-networkd[778]: Enumeration completed Apr 30 12:41:47.352076 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 12:41:47.352730 systemd[1]: Reached target network.target - Network. Apr 30 12:41:47.354039 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 30 12:41:47.354043 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 12:41:47.354720 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:41:47.354723 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 12:41:47.355220 systemd-networkd[778]: eth0: Link UP Apr 30 12:41:47.355223 systemd-networkd[778]: eth0: Gained carrier Apr 30 12:41:47.355231 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:41:47.358097 systemd-networkd[778]: eth1: Link UP Apr 30 12:41:47.358100 systemd-networkd[778]: eth1: Gained carrier Apr 30 12:41:47.358110 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:41:47.359774 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 30 12:41:47.375092 ignition[781]: Ignition 2.20.0 Apr 30 12:41:47.375103 ignition[781]: Stage: fetch Apr 30 12:41:47.375281 ignition[781]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:41:47.375291 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 12:41:47.375464 ignition[781]: parsed url from cmdline: "" Apr 30 12:41:47.375468 ignition[781]: no config URL provided Apr 30 12:41:47.375473 ignition[781]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 12:41:47.375482 ignition[781]: no config at "/usr/lib/ignition/user.ign" Apr 30 12:41:47.375588 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Apr 30 12:41:47.376446 ignition[781]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 30 12:41:47.391686 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 12:41:47.424689 systemd-networkd[778]: eth0: DHCPv4 address 91.99.0.103/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 30 12:41:47.577280 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Apr 30 12:41:47.585175 ignition[781]: GET result: OK Apr 30 12:41:47.585305 ignition[781]: parsing config with SHA512: d663d29b1adcc45671c14f5a170e06bf5b7ca2a9611896593db8896f4abb7a3f7e056901f44eaa39686eb88b10d8263c84803ba0e8a5ed405f0017bbf1321364 Apr 30 12:41:47.594370 unknown[781]: fetched base config from "system" Apr 30 12:41:47.594384 unknown[781]: fetched base config from "system" Apr 30 12:41:47.594818 ignition[781]: fetch: fetch complete Apr 30 12:41:47.594390 unknown[781]: fetched user config from "hetzner" Apr 30 12:41:47.594823 ignition[781]: fetch: fetch passed Apr 30 12:41:47.597190 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 30 12:41:47.594873 ignition[781]: Ignition finished successfully Apr 30 12:41:47.603840 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 30 12:41:47.618234 ignition[788]: Ignition 2.20.0 Apr 30 12:41:47.618247 ignition[788]: Stage: kargs Apr 30 12:41:47.618458 ignition[788]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:41:47.618470 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 12:41:47.624298 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
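The fetch stage above shows Ignition's behaviour on this platform: the first GET of the Hetzner userdata endpoint fails because the interfaces have no addresses yet, and the retry after DHCP completes succeeds, after which the config is hashed and parsed. A minimal sketch of that fetch-and-retry loop, using only the URL shown in the log; the retry count and delay are illustrative, not Ignition's actual backoff:

```python
import hashlib
import time
import urllib.error
import urllib.request

USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"  # as logged above

def fetch_userdata(retries: int = 5, delay: float = 2.0) -> bytes:
    """GET the userdata, retrying while the network is still coming up."""
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(USERDATA_URL, timeout=10) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as err:
            print(f"GET {USERDATA_URL}: attempt #{attempt} failed: {err}")
            time.sleep(delay)
    raise RuntimeError("giving up: userdata endpoint unreachable")

if __name__ == "__main__":
    data = fetch_userdata()
    print(f"parsing config with SHA512: {hashlib.sha512(data).hexdigest()}")
```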
Apr 30 12:41:47.619458 ignition[788]: kargs: kargs passed Apr 30 12:41:47.619514 ignition[788]: Ignition finished successfully Apr 30 12:41:47.630806 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 12:41:47.642767 ignition[795]: Ignition 2.20.0 Apr 30 12:41:47.643468 ignition[795]: Stage: disks Apr 30 12:41:47.644089 ignition[795]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:41:47.644105 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 12:41:47.645093 ignition[795]: disks: disks passed Apr 30 12:41:47.646771 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 12:41:47.645146 ignition[795]: Ignition finished successfully Apr 30 12:41:47.647905 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 12:41:47.648777 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 12:41:47.649427 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 12:41:47.650251 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 12:41:47.651274 systemd[1]: Reached target basic.target - Basic System. Apr 30 12:41:47.658042 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 12:41:47.676304 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Apr 30 12:41:47.682800 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 12:41:47.690819 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 12:41:47.734600 kernel: EXT4-fs (sda9): mounted filesystem 597557b0-8ae6-4a5a-8e98-f3f884fcfe65 r/w with ordered data mode. Quota mode: none. Apr 30 12:41:47.735865 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 12:41:47.738109 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 12:41:47.745738 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 12:41:47.748844 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 12:41:47.751366 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 30 12:41:47.754408 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 12:41:47.754451 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 12:41:47.764392 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (812) Apr 30 12:41:47.764447 kernel: BTRFS info (device sda6): first mount of filesystem 8d8cccbd-965f-4336-afa9-06a510e76633 Apr 30 12:41:47.764463 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 12:41:47.764478 kernel: BTRFS info (device sda6): using free space tree Apr 30 12:41:47.764856 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 12:41:47.766484 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 30 12:41:47.774599 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 12:41:47.774650 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 12:41:47.780018 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 12:41:47.819793 coreos-metadata[814]: Apr 30 12:41:47.819 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Apr 30 12:41:47.820946 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 12:41:47.823823 coreos-metadata[814]: Apr 30 12:41:47.823 INFO Fetch successful Apr 30 12:41:47.824841 coreos-metadata[814]: Apr 30 12:41:47.824 INFO wrote hostname ci-4230-1-1-9-a0dc1fa777 to /sysroot/etc/hostname Apr 30 12:41:47.827118 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 12:41:47.829290 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Apr 30 12:41:47.835618 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 12:41:47.840725 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 12:41:47.952081 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 12:41:47.958809 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 12:41:47.963821 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 12:41:47.969591 kernel: BTRFS info (device sda6): last unmount of filesystem 8d8cccbd-965f-4336-afa9-06a510e76633 Apr 30 12:41:47.991303 ignition[930]: INFO : Ignition 2.20.0 Apr 30 12:41:47.991303 ignition[930]: INFO : Stage: mount Apr 30 12:41:47.992957 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 12:41:47.992957 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 12:41:47.991970 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 12:41:47.996278 ignition[930]: INFO : mount: mount passed Apr 30 12:41:47.996278 ignition[930]: INFO : Ignition finished successfully Apr 30 12:41:47.996377 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 12:41:48.002773 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 12:41:48.122551 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 12:41:48.127870 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 12:41:48.140603 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (942) Apr 30 12:41:48.141987 kernel: BTRFS info (device sda6): first mount of filesystem 8d8cccbd-965f-4336-afa9-06a510e76633 Apr 30 12:41:48.142045 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 12:41:48.142065 kernel: BTRFS info (device sda6): using free space tree Apr 30 12:41:48.146626 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 12:41:48.146695 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 12:41:48.149500 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
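Similarly, flatcar-metadata-hostname.service above fetches the platform-assigned hostname from the metadata service and writes it into the not-yet-switched-to root filesystem. A rough sketch of that step; the URL and target path come from the log, everything else is illustrative:

```python
import urllib.request

METADATA_HOSTNAME = "http://169.254.169.254/hetzner/v1/metadata/hostname"  # from the log

def write_hostname(sysroot: str = "/sysroot") -> str:
    # Fetch the hostname assigned by the platform and persist it for the real root.
    with urllib.request.urlopen(METADATA_HOSTNAME, timeout=10) as resp:
        hostname = resp.read().decode().strip()
    with open(f"{sysroot}/etc/hostname", "w") as f:
        f.write(hostname + "\n")
    return hostname
```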
Apr 30 12:41:48.177748 ignition[959]: INFO : Ignition 2.20.0 Apr 30 12:41:48.178493 ignition[959]: INFO : Stage: files Apr 30 12:41:48.179556 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 12:41:48.179556 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 12:41:48.181577 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Apr 30 12:41:48.181577 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 12:41:48.181577 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 12:41:48.185442 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 12:41:48.186942 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 12:41:48.186942 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 12:41:48.185893 unknown[959]: wrote ssh authorized keys file for user: core Apr 30 12:41:48.189869 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 30 12:41:48.189869 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Apr 30 12:41:48.507916 systemd-networkd[778]: eth0: Gained IPv6LL Apr 30 12:41:49.147799 systemd-networkd[778]: eth1: Gained IPv6LL Apr 30 12:41:50.233666 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 12:41:53.379914 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 30 12:41:53.379914 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 30 12:41:53.379914 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Apr 30 12:41:53.955067 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 30 12:41:54.059151 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 30 12:41:54.059151 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 30 12:41:54.061877 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 12:41:54.061877 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 12:41:54.061877 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 12:41:54.061877 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 12:41:54.061877 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 12:41:54.061877 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 12:41:54.061877 ignition[959]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 12:41:54.061877 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 12:41:54.061877 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 12:41:54.061877 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 12:41:54.061877 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 12:41:54.061877 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 12:41:54.061877 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Apr 30 12:41:54.453627 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 30 12:41:54.755079 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 12:41:54.755079 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Apr 30 12:41:54.757412 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 12:41:54.757412 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 12:41:54.757412 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Apr 30 12:41:54.757412 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Apr 30 12:41:54.757412 ignition[959]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 30 12:41:54.757412 ignition[959]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 30 12:41:54.757412 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Apr 30 12:41:54.757412 ignition[959]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Apr 30 12:41:54.757412 ignition[959]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 12:41:54.757412 ignition[959]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 12:41:54.757412 ignition[959]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 12:41:54.757412 ignition[959]: INFO : files: files passed Apr 30 12:41:54.757412 ignition[959]: INFO : Ignition finished successfully Apr 30 12:41:54.759333 systemd[1]: Finished ignition-files.service - Ignition (files). 
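All of the file, link and unit operations logged by the files stage above are driven by the fetched Ignition config. As an illustration of the shape such a config takes, a Python snippet that emits a minimal config covering just the helm download and the prepare-helm.service unit seen above; the spec version, file mode and unit contents are assumptions, while the path and URL come from the log:

```python
import json

config = {
    "ignition": {"version": "3.4.0"},  # assumed spec version
    "storage": {
        "files": [
            {
                # Ignition writes this under the new root at boot, which is why the
                # journal shows it as /sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz.
                "path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
                "mode": 420,  # 0644, illustrative
                "contents": {
                    "source": "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"
                },
            }
        ]
    },
    "systemd": {
        "units": [
            {
                "name": "prepare-helm.service",
                "enabled": True,  # matches the "setting preset to enabled" step above
                "contents": "[Unit]\nDescription=Unpack helm (placeholder)\n",
            }
        ]
    },
}

print(json.dumps(config, indent=2))
```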
Apr 30 12:41:54.768294 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 12:41:54.771372 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 30 12:41:54.775754 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 12:41:54.776617 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 30 12:41:54.787717 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 12:41:54.787717 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 12:41:54.790372 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 12:41:54.792488 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 12:41:54.793483 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 12:41:54.799895 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 12:41:54.825530 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 12:41:54.825806 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 12:41:54.828436 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 12:41:54.829153 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 12:41:54.830313 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 12:41:54.840796 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 12:41:54.858223 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 12:41:54.864837 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 12:41:54.874776 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 12:41:54.876348 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 12:41:54.877678 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 12:41:54.878235 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 12:41:54.878380 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 12:41:54.880004 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 12:41:54.880732 systemd[1]: Stopped target basic.target - Basic System. Apr 30 12:41:54.882645 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 12:41:54.884372 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 12:41:54.885353 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 12:41:54.886416 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 12:41:54.887464 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 12:41:54.888603 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 12:41:54.889584 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 12:41:54.890682 systemd[1]: Stopped target swap.target - Swaps. Apr 30 12:41:54.891534 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 12:41:54.891679 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Apr 30 12:41:54.892921 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 12:41:54.893573 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 12:41:54.894615 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 12:41:54.895604 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 12:41:54.896286 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 12:41:54.896402 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 12:41:54.897903 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 12:41:54.898017 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 12:41:54.899334 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 12:41:54.899423 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 12:41:54.900322 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 30 12:41:54.900419 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 12:41:54.912003 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 12:41:54.916993 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 12:41:54.918022 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 12:41:54.918203 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 12:41:54.919876 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 12:41:54.920550 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 12:41:54.930839 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 12:41:54.931321 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 12:41:54.934629 ignition[1011]: INFO : Ignition 2.20.0 Apr 30 12:41:54.934629 ignition[1011]: INFO : Stage: umount Apr 30 12:41:54.937297 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 12:41:54.937297 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 30 12:41:54.937297 ignition[1011]: INFO : umount: umount passed Apr 30 12:41:54.937297 ignition[1011]: INFO : Ignition finished successfully Apr 30 12:41:54.938577 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 12:41:54.938704 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 12:41:54.942631 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 12:41:54.942712 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 12:41:54.943694 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 12:41:54.943733 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 12:41:54.944584 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 30 12:41:54.944627 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 30 12:41:54.945520 systemd[1]: Stopped target network.target - Network. Apr 30 12:41:54.947264 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 12:41:54.947337 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 12:41:54.948004 systemd[1]: Stopped target paths.target - Path Units. Apr 30 12:41:54.950713 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Apr 30 12:41:54.950774 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 12:41:54.951904 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 12:41:54.953333 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 12:41:54.956708 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 12:41:54.956773 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 12:41:54.957364 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 12:41:54.957399 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 12:41:54.958512 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 12:41:54.958590 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 12:41:54.960716 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 12:41:54.960764 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 12:41:54.962919 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 12:41:54.963660 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 12:41:54.966081 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 12:41:54.968578 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 12:41:54.969308 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 12:41:54.970863 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 12:41:54.970956 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 12:41:54.975390 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Apr 30 12:41:54.975665 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 12:41:54.975763 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 12:41:54.978295 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Apr 30 12:41:54.979701 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 12:41:54.979939 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 12:41:54.981313 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 12:41:54.981393 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 12:41:54.985761 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 12:41:54.986292 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 12:41:54.986350 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 12:41:54.989846 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 12:41:54.989899 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 12:41:54.991165 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 12:41:54.991205 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 12:41:54.992221 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 12:41:54.992277 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 12:41:54.993531 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 12:41:54.995233 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Apr 30 12:41:54.995304 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Apr 30 12:41:55.004875 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 12:41:55.005102 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 12:41:55.006772 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 12:41:55.006826 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 12:41:55.007843 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 12:41:55.007886 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 12:41:55.009905 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 12:41:55.010000 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 12:41:55.013588 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 12:41:55.013656 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 12:41:55.014991 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 12:41:55.015047 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 12:41:55.027595 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 12:41:55.031786 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 12:41:55.031896 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 12:41:55.034610 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 30 12:41:55.034671 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 12:41:55.036129 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 12:41:55.036180 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 12:41:55.037672 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 12:41:55.037722 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:41:55.040762 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Apr 30 12:41:55.040824 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Apr 30 12:41:55.041153 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 12:41:55.041293 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 12:41:55.042729 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 12:41:55.042817 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 12:41:55.045260 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 12:41:55.051848 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 12:41:55.058968 systemd[1]: Switching root. Apr 30 12:41:55.094711 systemd-journald[237]: Journal stopped Apr 30 12:41:56.071278 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
Apr 30 12:41:56.071342 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 12:41:56.071356 kernel: SELinux: policy capability open_perms=1 Apr 30 12:41:56.071366 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 12:41:56.071374 kernel: SELinux: policy capability always_check_network=0 Apr 30 12:41:56.071383 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 12:41:56.071392 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 12:41:56.071401 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 12:41:56.071410 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 12:41:56.071427 kernel: audit: type=1403 audit(1746016915.251:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 12:41:56.071438 systemd[1]: Successfully loaded SELinux policy in 36.003ms. Apr 30 12:41:56.071460 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.899ms. Apr 30 12:41:56.071471 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 30 12:41:56.071481 systemd[1]: Detected virtualization kvm. Apr 30 12:41:56.071492 systemd[1]: Detected architecture arm64. Apr 30 12:41:56.071502 systemd[1]: Detected first boot. Apr 30 12:41:56.071512 systemd[1]: Hostname set to . Apr 30 12:41:56.071522 systemd[1]: Initializing machine ID from VM UUID. Apr 30 12:41:56.071531 kernel: NET: Registered PF_VSOCK protocol family Apr 30 12:41:56.071542 zram_generator::config[1055]: No configuration found. Apr 30 12:41:56.071553 systemd[1]: Populated /etc with preset unit settings. Apr 30 12:41:56.071575 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Apr 30 12:41:56.071587 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 30 12:41:56.071596 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 30 12:41:56.071606 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 30 12:41:56.071616 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 12:41:56.071627 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 12:41:56.071639 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 12:41:56.071650 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 12:41:56.071660 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 12:41:56.071670 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 12:41:56.071680 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 12:41:56.071690 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 12:41:56.071700 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 12:41:56.071710 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 12:41:56.071720 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 12:41:56.071731 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Apr 30 12:41:56.071742 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 30 12:41:56.071752 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 12:41:56.071762 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Apr 30 12:41:56.071772 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 12:41:56.071782 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 30 12:41:56.071793 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 30 12:41:56.071803 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 30 12:41:56.071813 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 12:41:56.071823 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 12:41:56.071833 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 12:41:56.071847 systemd[1]: Reached target slices.target - Slice Units. Apr 30 12:41:56.071857 systemd[1]: Reached target swap.target - Swaps. Apr 30 12:41:56.071867 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 12:41:56.071877 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 30 12:41:56.071891 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Apr 30 12:41:56.071903 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 12:41:56.071913 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 12:41:56.071923 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 12:41:56.071933 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 12:41:56.071944 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 12:41:56.071956 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 12:41:56.071966 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 12:41:56.071976 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 12:41:56.071986 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 12:41:56.071996 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 12:41:56.072006 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 12:41:56.072016 systemd[1]: Reached target machines.target - Containers. Apr 30 12:41:56.072026 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 12:41:56.072036 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 12:41:56.072047 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 12:41:56.072057 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 12:41:56.072067 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 12:41:56.072078 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 12:41:56.072088 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Apr 30 12:41:56.072098 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 12:41:56.072108 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 12:41:56.072118 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 12:41:56.072130 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 30 12:41:56.072140 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 30 12:41:56.072150 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 30 12:41:56.072161 systemd[1]: Stopped systemd-fsck-usr.service. Apr 30 12:41:56.072171 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 30 12:41:56.072182 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 12:41:56.072193 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 12:41:56.072204 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 12:41:56.072214 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 12:41:56.072224 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Apr 30 12:41:56.072267 kernel: fuse: init (API version 7.39) Apr 30 12:41:56.072282 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 12:41:56.072293 kernel: loop: module loaded Apr 30 12:41:56.072307 systemd[1]: verity-setup.service: Deactivated successfully. Apr 30 12:41:56.072318 systemd[1]: Stopped verity-setup.service. Apr 30 12:41:56.072328 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 30 12:41:56.072338 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 12:41:56.072348 systemd[1]: Mounted media.mount - External Media Directory. Apr 30 12:41:56.072360 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 12:41:56.072372 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 30 12:41:56.072382 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 12:41:56.072397 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 12:41:56.072408 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 12:41:56.072418 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 12:41:56.072429 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 12:41:56.072439 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 12:41:56.072453 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 12:41:56.072464 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 12:41:56.072475 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 12:41:56.072486 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 12:41:56.072496 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 12:41:56.072508 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 12:41:56.072519 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Apr 30 12:41:56.072529 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 12:41:56.072539 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 12:41:56.072550 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 12:41:56.076608 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 12:41:56.076676 systemd-journald[1126]: Collecting audit messages is disabled. Apr 30 12:41:56.076700 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 12:41:56.076711 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Apr 30 12:41:56.076721 kernel: ACPI: bus type drm_connector registered Apr 30 12:41:56.076733 systemd-journald[1126]: Journal started Apr 30 12:41:56.076765 systemd-journald[1126]: Runtime Journal (/run/log/journal/b6b65da6779246da94df4fc48f58ce4a) is 8M, max 76.6M, 68.6M free. Apr 30 12:41:55.796968 systemd[1]: Queued start job for default target multi-user.target. Apr 30 12:41:55.813137 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 30 12:41:55.813771 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 30 12:41:56.081654 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 12:41:56.085984 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 12:41:56.086059 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 12:41:56.097585 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 12:41:56.102341 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 12:41:56.102450 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 12:41:56.104692 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 12:41:56.114914 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 12:41:56.124749 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 12:41:56.130897 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 12:41:56.130967 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 12:41:56.134638 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 12:41:56.136136 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 12:41:56.136627 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 12:41:56.138957 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 12:41:56.139983 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Apr 30 12:41:56.141959 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 12:41:56.144525 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 12:41:56.146730 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Apr 30 12:41:56.158866 kernel: loop0: detected capacity change from 0 to 8 Apr 30 12:41:56.161738 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 12:41:56.174931 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 12:41:56.184365 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 12:41:56.187459 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 12:41:56.195157 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Apr 30 12:41:56.195198 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Apr 30 12:41:56.198984 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 12:41:56.201823 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 30 12:41:56.208636 kernel: loop1: detected capacity change from 0 to 113512 Apr 30 12:41:56.219110 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 12:41:56.220549 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 12:41:56.235884 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 12:41:56.241093 systemd-journald[1126]: Time spent on flushing to /var/log/journal/b6b65da6779246da94df4fc48f58ce4a is 38.259ms for 1155 entries. Apr 30 12:41:56.241093 systemd-journald[1126]: System Journal (/var/log/journal/b6b65da6779246da94df4fc48f58ce4a) is 8M, max 584.8M, 576.8M free. Apr 30 12:41:56.301280 systemd-journald[1126]: Received client request to flush runtime journal. Apr 30 12:41:56.301335 kernel: loop2: detected capacity change from 0 to 123192 Apr 30 12:41:56.259859 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 12:41:56.274987 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 12:41:56.279065 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Apr 30 12:41:56.302494 udevadm[1194]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 30 12:41:56.305635 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 12:41:56.311009 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 12:41:56.318481 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 12:41:56.328939 kernel: loop3: detected capacity change from 0 to 194096 Apr 30 12:41:56.341444 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Apr 30 12:41:56.341463 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Apr 30 12:41:56.348912 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 12:41:56.375755 kernel: loop4: detected capacity change from 0 to 8 Apr 30 12:41:56.377598 kernel: loop5: detected capacity change from 0 to 113512 Apr 30 12:41:56.389615 kernel: loop6: detected capacity change from 0 to 123192 Apr 30 12:41:56.402602 kernel: loop7: detected capacity change from 0 to 194096 Apr 30 12:41:56.417429 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Apr 30 12:41:56.417966 (sd-merge)[1203]: Merged extensions into '/usr'. Apr 30 12:41:56.424171 systemd[1]: Reload requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)... 
Apr 30 12:41:56.424598 systemd[1]: Reloading... Apr 30 12:41:56.555592 zram_generator::config[1232]: No configuration found. Apr 30 12:41:56.638188 ldconfig[1152]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 12:41:56.717593 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:41:56.779873 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 12:41:56.780435 systemd[1]: Reloading finished in 355 ms. Apr 30 12:41:56.796094 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 12:41:56.798597 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 12:41:56.809786 systemd[1]: Starting ensure-sysext.service... Apr 30 12:41:56.820941 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 12:41:56.843703 systemd[1]: Reload requested from client PID 1268 ('systemctl') (unit ensure-sysext.service)... Apr 30 12:41:56.843727 systemd[1]: Reloading... Apr 30 12:41:56.845240 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 12:41:56.845850 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 12:41:56.846687 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 12:41:56.846923 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Apr 30 12:41:56.846976 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Apr 30 12:41:56.851126 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 12:41:56.851137 systemd-tmpfiles[1269]: Skipping /boot Apr 30 12:41:56.862029 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 12:41:56.862157 systemd-tmpfiles[1269]: Skipping /boot Apr 30 12:41:56.945587 zram_generator::config[1298]: No configuration found. Apr 30 12:41:57.030077 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:41:57.090730 systemd[1]: Reloading finished in 246 ms. Apr 30 12:41:57.106984 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 12:41:57.121977 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 12:41:57.133871 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 30 12:41:57.139944 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 12:41:57.148920 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 12:41:57.151833 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 12:41:57.155909 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 12:41:57.163061 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 12:41:57.169956 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
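The (sd-merge) entries further up record systemd-sysext combining the containerd-flatcar, docker-flatcar, kubernetes and oem-hetzner extension images with the base /usr tree, after which the daemon reload above picks up the merged units. Conceptually this is a read-only overlay stacked on top of /usr; the sketch below uses plain overlayfs semantics and hypothetical extension paths, not systemd-sysext's exact invocation:

```python
import subprocess

# Hypothetical, already-unpacked extension hierarchies; the literal paths are
# illustrative only.
extension_usr_trees = [
    "/run/extensions/kubernetes/usr",
    "/run/extensions/docker-flatcar/usr",
]

def merge_into_usr(extensions, dry_run=True):
    # overlayfs: the leftmost lowerdir is the topmost layer, so extensions are
    # listed before the base /usr; with no upperdir the result is read-only.
    lowerdir = ":".join(extensions + ["/usr"])
    cmd = ["mount", "-t", "overlay", "overlay", "-o", f"lowerdir={lowerdir}", "/usr"]
    if dry_run:
        print(" ".join(cmd))
    else:
        subprocess.run(cmd, check=True)

merge_into_usr(extension_usr_trees)
```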
Apr 30 12:41:57.178878 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 12:41:57.183972 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 12:41:57.188885 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 12:41:57.191152 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 12:41:57.191350 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 30 12:41:57.195425 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 12:41:57.195666 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 12:41:57.199142 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 12:41:57.201205 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 12:41:57.203021 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 12:41:57.203159 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 30 12:41:57.212057 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 12:41:57.214531 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 12:41:57.223311 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 12:41:57.223507 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 12:41:57.234327 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 12:41:57.234521 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 12:41:57.240500 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 12:41:57.243812 systemd[1]: Finished ensure-sysext.service. Apr 30 12:41:57.245837 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 12:41:57.246109 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 12:41:57.253415 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 12:41:57.260483 systemd-udevd[1347]: Using default interface naming scheme 'v255'. Apr 30 12:41:57.261467 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 12:41:57.266200 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 12:41:57.268477 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 12:41:57.268527 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 30 12:41:57.268702 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Apr 30 12:41:57.273879 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 30 12:41:57.278323 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 12:41:57.280632 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 12:41:57.282066 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 12:41:57.282625 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 12:41:57.285920 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 12:41:57.289914 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 12:41:57.291759 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 12:41:57.293427 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 12:41:57.298622 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 12:41:57.315322 augenrules[1385]: No rules Apr 30 12:41:57.320844 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 12:41:57.321066 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 30 12:41:57.325784 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 12:41:57.338822 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 12:41:57.350057 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 12:41:57.423630 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 12:41:57.424620 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 12:41:57.449764 systemd-resolved[1346]: Positive Trust Anchors: Apr 30 12:41:57.450082 systemd-resolved[1346]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 12:41:57.450203 systemd-resolved[1346]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 12:41:57.459073 systemd-resolved[1346]: Using system hostname 'ci-4230-1-1-9-a0dc1fa777'. Apr 30 12:41:57.461183 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 12:41:57.462100 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 12:41:57.471442 systemd-networkd[1398]: lo: Link UP Apr 30 12:41:57.471455 systemd-networkd[1398]: lo: Gained carrier Apr 30 12:41:57.487311 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Apr 30 12:41:57.494645 systemd-networkd[1398]: Enumeration completed Apr 30 12:41:57.494992 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 12:41:57.495774 systemd[1]: Reached target network.target - Network. 
Apr 30 12:41:57.504808 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 30 12:41:57.507922 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 12:41:57.527370 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 30 12:41:57.567776 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 12:41:57.593992 systemd-networkd[1398]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:41:57.594133 systemd-networkd[1398]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 12:41:57.596283 systemd-networkd[1398]: eth1: Link UP Apr 30 12:41:57.596427 systemd-networkd[1398]: eth1: Gained carrier Apr 30 12:41:57.596495 systemd-networkd[1398]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:41:57.607977 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:41:57.607988 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 12:41:57.610478 systemd-networkd[1398]: eth0: Link UP Apr 30 12:41:57.610488 systemd-networkd[1398]: eth0: Gained carrier Apr 30 12:41:57.610509 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:41:57.623588 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1397) Apr 30 12:41:57.647467 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Apr 30 12:41:57.648091 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 12:41:57.649837 systemd-networkd[1398]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 12:41:57.650640 systemd-timesyncd[1375]: Network configuration changed, trying to establish connection. Apr 30 12:41:57.654863 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 12:41:57.659343 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 12:41:57.665126 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 12:41:57.666653 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 12:41:57.666697 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 30 12:41:57.666719 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 12:41:57.669099 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 12:41:57.669345 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
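Both eth1 (above) and eth0 (below) are matched by the catch-all /usr/lib/systemd/network/zz-default.network shipped with the OS and are configured via DHCPv4. The unit's contents are not reproduced in this log; a minimal .network file of that catch-all DHCP shape would look roughly like:

    # illustrative catch-all DHCP configuration in systemd.network syntax
    [Match]
    Name=*

    [Network]
    DHCP=yes

The 'potentially unpredictable interface name' notes reflect that the kernel-assigned eth0/eth1 names are not guaranteed to be stable across boots.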
Apr 30 12:41:57.673094 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 12:41:57.673294 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 12:41:57.675975 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 12:41:57.685587 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Apr 30 12:41:57.685667 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Apr 30 12:41:57.685681 kernel: [drm] features: -context_init Apr 30 12:41:57.691497 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 12:41:57.691811 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 12:41:57.692675 systemd-networkd[1398]: eth0: DHCPv4 address 91.99.0.103/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 30 12:41:57.693156 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 12:41:57.694009 systemd-timesyncd[1375]: Network configuration changed, trying to establish connection. Apr 30 12:41:57.695709 kernel: [drm] number of scanouts: 1 Apr 30 12:41:57.695752 kernel: [drm] number of cap sets: 0 Apr 30 12:41:57.702601 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Apr 30 12:41:57.708315 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 30 12:41:57.710664 kernel: Console: switching to colour frame buffer device 160x50 Apr 30 12:41:57.719193 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 12:41:57.722070 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Apr 30 12:41:57.750595 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 12:41:57.764783 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:41:57.784454 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 12:41:57.785965 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:41:57.798794 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:41:57.855013 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:41:57.919172 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 12:41:57.929416 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 12:41:57.941749 lvm[1462]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 12:41:57.970127 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 12:41:57.971926 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 12:41:57.972936 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 12:41:57.973843 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 12:41:57.974521 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 12:41:57.975636 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Apr 30 12:41:57.976303 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 12:41:57.976990 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 12:41:57.977659 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 12:41:57.977738 systemd[1]: Reached target paths.target - Path Units. Apr 30 12:41:57.978229 systemd[1]: Reached target timers.target - Timer Units. Apr 30 12:41:57.980273 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 12:41:57.982762 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 12:41:57.986395 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 30 12:41:57.987350 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Apr 30 12:41:57.988067 systemd[1]: Reached target ssh-access.target - SSH Access Available. Apr 30 12:41:57.997165 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 12:41:58.000343 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 30 12:41:58.010898 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 12:41:58.013330 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 12:41:58.014195 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 12:41:58.014903 systemd[1]: Reached target basic.target - Basic System. Apr 30 12:41:58.015627 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 12:41:58.015670 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 12:41:58.016640 lvm[1466]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 12:41:58.018765 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 12:41:58.024113 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 12:41:58.027430 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 12:41:58.031754 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 12:41:58.035172 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 12:41:58.035784 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 12:41:58.042812 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 12:41:58.047062 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 12:41:58.052680 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Apr 30 12:41:58.059004 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 12:41:58.068304 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 12:41:58.076629 jq[1470]: false Apr 30 12:41:58.084733 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 12:41:58.086963 dbus-daemon[1469]: [system] SELinux support is enabled Apr 30 12:41:58.087695 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Apr 30 12:41:58.088607 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 12:41:58.089961 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 12:41:58.099636 coreos-metadata[1468]: Apr 30 12:41:58.099 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Apr 30 12:41:58.107730 coreos-metadata[1468]: Apr 30 12:41:58.106 INFO Fetch successful Apr 30 12:41:58.107730 coreos-metadata[1468]: Apr 30 12:41:58.106 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Apr 30 12:41:58.102755 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 12:41:58.105778 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 12:41:58.109724 coreos-metadata[1468]: Apr 30 12:41:58.108 INFO Fetch successful Apr 30 12:41:58.111607 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 12:41:58.118115 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 12:41:58.119949 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 12:41:58.136517 jq[1483]: true Apr 30 12:41:58.146351 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 12:41:58.146600 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 12:41:58.148114 extend-filesystems[1471]: Found loop4 Apr 30 12:41:58.149101 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 12:41:58.149318 extend-filesystems[1471]: Found loop5 Apr 30 12:41:58.151704 extend-filesystems[1471]: Found loop6 Apr 30 12:41:58.151704 extend-filesystems[1471]: Found loop7 Apr 30 12:41:58.151704 extend-filesystems[1471]: Found sda Apr 30 12:41:58.151704 extend-filesystems[1471]: Found sda1 Apr 30 12:41:58.151704 extend-filesystems[1471]: Found sda2 Apr 30 12:41:58.151704 extend-filesystems[1471]: Found sda3 Apr 30 12:41:58.151704 extend-filesystems[1471]: Found usr Apr 30 12:41:58.151704 extend-filesystems[1471]: Found sda4 Apr 30 12:41:58.151704 extend-filesystems[1471]: Found sda6 Apr 30 12:41:58.151704 extend-filesystems[1471]: Found sda7 Apr 30 12:41:58.151704 extend-filesystems[1471]: Found sda9 Apr 30 12:41:58.151704 extend-filesystems[1471]: Checking size of /dev/sda9 Apr 30 12:41:58.149718 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 12:41:58.152930 (ntainerd)[1499]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 12:41:58.165119 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 12:41:58.180101 tar[1489]: linux-arm64/helm Apr 30 12:41:58.165166 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 12:41:58.169361 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 12:41:58.169384 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Apr 30 12:41:58.202161 extend-filesystems[1471]: Resized partition /dev/sda9 Apr 30 12:41:58.211586 extend-filesystems[1516]: resize2fs 1.47.1 (20-May-2024) Apr 30 12:41:58.222424 jq[1501]: true Apr 30 12:41:58.235053 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Apr 30 12:41:58.265123 update_engine[1482]: I20250430 12:41:58.264034 1482 main.cc:92] Flatcar Update Engine starting Apr 30 12:41:58.278671 systemd[1]: Started update-engine.service - Update Engine. Apr 30 12:41:58.284427 update_engine[1482]: I20250430 12:41:58.283274 1482 update_check_scheduler.cc:74] Next update check in 9m25s Apr 30 12:41:58.304665 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1415) Apr 30 12:41:58.317096 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 12:41:58.338593 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 12:41:58.342474 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 12:41:58.390841 bash[1541]: Updated "/home/core/.ssh/authorized_keys" Apr 30 12:41:58.392104 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 12:41:58.393146 systemd-logind[1479]: New seat seat0. Apr 30 12:41:58.399332 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Apr 30 12:41:58.397854 systemd[1]: Starting sshkeys.service... Apr 30 12:41:58.419144 systemd-logind[1479]: Watching system buttons on /dev/input/event0 (Power Button) Apr 30 12:41:58.419167 systemd-logind[1479]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Apr 30 12:41:58.419491 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 12:41:58.422889 extend-filesystems[1516]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 30 12:41:58.422889 extend-filesystems[1516]: old_desc_blocks = 1, new_desc_blocks = 5 Apr 30 12:41:58.422889 extend-filesystems[1516]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Apr 30 12:41:58.438553 extend-filesystems[1471]: Resized filesystem in /dev/sda9 Apr 30 12:41:58.438553 extend-filesystems[1471]: Found sr0 Apr 30 12:41:58.423797 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 12:41:58.478259 containerd[1499]: time="2025-04-30T12:41:58.430150240Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Apr 30 12:41:58.424030 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 12:41:58.453147 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 30 12:41:58.482015 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 30 12:41:58.514432 containerd[1499]: time="2025-04-30T12:41:58.512587400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 12:41:58.516116 containerd[1499]: time="2025-04-30T12:41:58.516072280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:41:58.516492 containerd[1499]: time="2025-04-30T12:41:58.516472480Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 12:41:58.516758 containerd[1499]: time="2025-04-30T12:41:58.516742640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 12:41:58.516976 containerd[1499]: time="2025-04-30T12:41:58.516956960Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 12:41:58.517600 containerd[1499]: time="2025-04-30T12:41:58.517300880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 12:41:58.517600 containerd[1499]: time="2025-04-30T12:41:58.517388880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:41:58.517600 containerd[1499]: time="2025-04-30T12:41:58.517405160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 12:41:58.518004 containerd[1499]: time="2025-04-30T12:41:58.517982680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:41:58.518061 containerd[1499]: time="2025-04-30T12:41:58.518048560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 12:41:58.518128 containerd[1499]: time="2025-04-30T12:41:58.518113760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:41:58.518171 containerd[1499]: time="2025-04-30T12:41:58.518160320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 12:41:58.518351 containerd[1499]: time="2025-04-30T12:41:58.518332280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 12:41:58.518709 containerd[1499]: time="2025-04-30T12:41:58.518688720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 12:41:58.519952 containerd[1499]: time="2025-04-30T12:41:58.519758840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:41:58.519952 containerd[1499]: time="2025-04-30T12:41:58.519782360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 12:41:58.519952 containerd[1499]: time="2025-04-30T12:41:58.519877960Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Apr 30 12:41:58.519952 containerd[1499]: time="2025-04-30T12:41:58.519926240Z" level=info msg="metadata content store policy set" policy=shared Apr 30 12:41:58.527148 containerd[1499]: time="2025-04-30T12:41:58.526263920Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 12:41:58.527148 containerd[1499]: time="2025-04-30T12:41:58.526385600Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 12:41:58.527148 containerd[1499]: time="2025-04-30T12:41:58.526423400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 12:41:58.527148 containerd[1499]: time="2025-04-30T12:41:58.526441400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 12:41:58.527148 containerd[1499]: time="2025-04-30T12:41:58.526456680Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 12:41:58.527148 containerd[1499]: time="2025-04-30T12:41:58.526687160Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 12:41:58.527148 containerd[1499]: time="2025-04-30T12:41:58.526970800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 12:41:58.527148 containerd[1499]: time="2025-04-30T12:41:58.527093200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 12:41:58.527148 containerd[1499]: time="2025-04-30T12:41:58.527111200Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 12:41:58.527577 containerd[1499]: time="2025-04-30T12:41:58.527125600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 12:41:58.527577 containerd[1499]: time="2025-04-30T12:41:58.527445360Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 12:41:58.527577 containerd[1499]: time="2025-04-30T12:41:58.527464040Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 12:41:58.527577 containerd[1499]: time="2025-04-30T12:41:58.527477560Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 12:41:58.527577 containerd[1499]: time="2025-04-30T12:41:58.527504560Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 12:41:58.527577 containerd[1499]: time="2025-04-30T12:41:58.527523160Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 12:41:58.527577 containerd[1499]: time="2025-04-30T12:41:58.527535880Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 12:41:58.527577 containerd[1499]: time="2025-04-30T12:41:58.527547440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 12:41:58.528455 containerd[1499]: time="2025-04-30T12:41:58.527558800Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Apr 30 12:41:58.528455 containerd[1499]: time="2025-04-30T12:41:58.527758560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 12:41:58.528455 containerd[1499]: time="2025-04-30T12:41:58.527772680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 12:41:58.528455 containerd[1499]: time="2025-04-30T12:41:58.527784840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 12:41:58.528455 containerd[1499]: time="2025-04-30T12:41:58.528303960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 12:41:58.528455 containerd[1499]: time="2025-04-30T12:41:58.528322080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 12:41:58.528455 containerd[1499]: time="2025-04-30T12:41:58.528336080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 12:41:58.528455 containerd[1499]: time="2025-04-30T12:41:58.528347760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 12:41:58.528455 containerd[1499]: time="2025-04-30T12:41:58.528360960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 12:41:58.528455 containerd[1499]: time="2025-04-30T12:41:58.528386120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 12:41:58.528455 containerd[1499]: time="2025-04-30T12:41:58.528401600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 12:41:58.528455 containerd[1499]: time="2025-04-30T12:41:58.528419160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 12:41:58.528455 containerd[1499]: time="2025-04-30T12:41:58.528433480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 12:41:58.529350 containerd[1499]: time="2025-04-30T12:41:58.528445120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 12:41:58.529350 containerd[1499]: time="2025-04-30T12:41:58.528737080Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 12:41:58.529350 containerd[1499]: time="2025-04-30T12:41:58.528770920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 12:41:58.529350 containerd[1499]: time="2025-04-30T12:41:58.528921400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 12:41:58.529350 containerd[1499]: time="2025-04-30T12:41:58.528935720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 12:41:58.529897 containerd[1499]: time="2025-04-30T12:41:58.529557960Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 12:41:58.529897 containerd[1499]: time="2025-04-30T12:41:58.529846640Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 12:41:58.529897 containerd[1499]: time="2025-04-30T12:41:58.529858960Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 12:41:58.529897 containerd[1499]: time="2025-04-30T12:41:58.529870800Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 12:41:58.530130 containerd[1499]: time="2025-04-30T12:41:58.529879600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 12:41:58.530130 containerd[1499]: time="2025-04-30T12:41:58.530068960Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 12:41:58.530130 containerd[1499]: time="2025-04-30T12:41:58.530082200Z" level=info msg="NRI interface is disabled by configuration." Apr 30 12:41:58.530130 containerd[1499]: time="2025-04-30T12:41:58.530094000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 12:41:58.531346 containerd[1499]: time="2025-04-30T12:41:58.530959640Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 12:41:58.531635 containerd[1499]: time="2025-04-30T12:41:58.531615160Z" level=info msg="Connect containerd service" Apr 30 12:41:58.534367 containerd[1499]: time="2025-04-30T12:41:58.531916320Z" level=info msg="using legacy CRI server" Apr 30 12:41:58.534547 containerd[1499]: time="2025-04-30T12:41:58.534528520Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 12:41:58.538268 containerd[1499]: time="2025-04-30T12:41:58.537020720Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 12:41:58.538484 coreos-metadata[1550]: Apr 30 12:41:58.538 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Apr 30 12:41:58.540903 containerd[1499]: time="2025-04-30T12:41:58.540748560Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 12:41:58.542666 containerd[1499]: time="2025-04-30T12:41:58.541536000Z" level=info msg="Start subscribing containerd event" Apr 30 12:41:58.542666 containerd[1499]: time="2025-04-30T12:41:58.541607720Z" level=info msg="Start recovering state" Apr 30 12:41:58.542666 containerd[1499]: time="2025-04-30T12:41:58.541681720Z" level=info msg="Start event monitor" Apr 30 12:41:58.542666 containerd[1499]: time="2025-04-30T12:41:58.541694960Z" level=info msg="Start snapshots syncer" Apr 30 12:41:58.542666 containerd[1499]: time="2025-04-30T12:41:58.541705080Z" level=info msg="Start cni network conf syncer for default" Apr 30 12:41:58.542666 containerd[1499]: time="2025-04-30T12:41:58.541713520Z" level=info msg="Start streaming server" Apr 30 12:41:58.542666 containerd[1499]: time="2025-04-30T12:41:58.542190600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 12:41:58.542666 containerd[1499]: time="2025-04-30T12:41:58.542312680Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 12:41:58.542864 coreos-metadata[1550]: Apr 30 12:41:58.542 INFO Fetch successful Apr 30 12:41:58.542737 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 12:41:58.543041 containerd[1499]: time="2025-04-30T12:41:58.542646320Z" level=info msg="containerd successfully booted in 0.117866s" Apr 30 12:41:58.548455 unknown[1550]: wrote ssh authorized keys file for user: core Apr 30 12:41:58.578833 update-ssh-keys[1556]: Updated "/home/core/.ssh/authorized_keys" Apr 30 12:41:58.580040 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 30 12:41:58.582874 systemd[1]: Finished sshkeys.service. Apr 30 12:41:58.632523 sshd_keygen[1511]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 12:41:58.659441 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 12:41:58.663981 locksmithd[1540]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 12:41:58.666974 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 12:41:58.677904 systemd[1]: issuegen.service: Deactivated successfully. 
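The long 'Start cri plugin with config' dump above is containerd's view of its effective CRI settings: the overlayfs snapshotter, runc via io.containerd.runc.v2 with the systemd cgroup driver, registry.k8s.io/pause:3.8 as the sandbox image, and CNI plugins under /opt/cni/bin with configuration in /etc/cni/net.d. The on-disk file behind it is not included in this log; expressed in containerd's TOML, the same settings would look roughly like:

    # illustrative /etc/containerd/config.toml excerpt matching the logged CRI settings
    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"

    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      default_runtime_name = "runc"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"

The 'failed to load cni during init' error above is expected at this stage: /etc/cni/net.d is still empty and is populated later by a CNI plugin.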
Apr 30 12:41:58.678435 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 12:41:58.687796 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 12:41:58.702696 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 12:41:58.713111 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 12:41:58.722981 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Apr 30 12:41:58.724207 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 12:41:58.864492 tar[1489]: linux-arm64/LICENSE Apr 30 12:41:58.864492 tar[1489]: linux-arm64/README.md Apr 30 12:41:58.884004 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 12:41:59.451826 systemd-networkd[1398]: eth1: Gained IPv6LL Apr 30 12:41:59.454750 systemd-timesyncd[1375]: Network configuration changed, trying to establish connection. Apr 30 12:41:59.454917 systemd-networkd[1398]: eth0: Gained IPv6LL Apr 30 12:41:59.456475 systemd-timesyncd[1375]: Network configuration changed, trying to establish connection. Apr 30 12:41:59.457851 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 12:41:59.459958 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 12:41:59.474118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:41:59.478012 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 12:41:59.510340 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 12:42:00.162728 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:42:00.164402 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 12:42:00.170254 systemd[1]: Startup finished in 779ms (kernel) + 10.568s (initrd) + 4.954s (userspace) = 16.302s. Apr 30 12:42:00.174120 (kubelet)[1600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:42:00.750713 kubelet[1600]: E0430 12:42:00.750615 1600 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:42:00.754106 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:42:00.754343 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:42:00.755709 systemd[1]: kubelet.service: Consumed 860ms CPU time, 237.7M memory peak. Apr 30 12:42:11.005110 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 12:42:11.020966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:42:11.139588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 12:42:11.144366 (kubelet)[1620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:42:11.200670 kubelet[1620]: E0430 12:42:11.200602 1620 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:42:11.205344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:42:11.205536 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:42:11.206174 systemd[1]: kubelet.service: Consumed 158ms CPU time, 96.8M memory peak. Apr 30 12:42:21.447498 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 12:42:21.467924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:42:21.570011 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:42:21.574333 (kubelet)[1636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:42:21.623796 kubelet[1636]: E0430 12:42:21.623725 1636 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:42:21.626459 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:42:21.626651 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:42:21.627366 systemd[1]: kubelet.service: Consumed 141ms CPU time, 96.6M memory peak. Apr 30 12:42:22.289978 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 12:42:22.296106 systemd[1]: Started sshd@0-91.99.0.103:22-139.178.89.65:57706.service - OpenSSH per-connection server daemon (139.178.89.65:57706). Apr 30 12:42:23.312389 sshd[1645]: Accepted publickey for core from 139.178.89.65 port 57706 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:42:23.313277 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:42:23.323118 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 12:42:23.327960 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 12:42:23.340365 systemd-logind[1479]: New session 1 of user core. Apr 30 12:42:23.346690 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 12:42:23.355056 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 12:42:23.359262 (systemd)[1649]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 12:42:23.362014 systemd-logind[1479]: New session c1 of user core. Apr 30 12:42:23.497802 systemd[1649]: Queued start job for default target default.target. Apr 30 12:42:23.506533 systemd[1649]: Created slice app.slice - User Application Slice. Apr 30 12:42:23.506618 systemd[1649]: Reached target paths.target - Paths. Apr 30 12:42:23.506695 systemd[1649]: Reached target timers.target - Timers. 
Apr 30 12:42:23.509217 systemd[1649]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 12:42:23.523428 systemd[1649]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 12:42:23.524150 systemd[1649]: Reached target sockets.target - Sockets. Apr 30 12:42:23.524487 systemd[1649]: Reached target basic.target - Basic System. Apr 30 12:42:23.524852 systemd[1649]: Reached target default.target - Main User Target. Apr 30 12:42:23.524947 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 12:42:23.525394 systemd[1649]: Startup finished in 155ms. Apr 30 12:42:23.537882 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 12:42:24.250338 systemd[1]: Started sshd@1-91.99.0.103:22-139.178.89.65:57712.service - OpenSSH per-connection server daemon (139.178.89.65:57712). Apr 30 12:42:25.246371 sshd[1660]: Accepted publickey for core from 139.178.89.65 port 57712 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:42:25.247981 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:42:25.254284 systemd-logind[1479]: New session 2 of user core. Apr 30 12:42:25.264886 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 12:42:25.932617 sshd[1662]: Connection closed by 139.178.89.65 port 57712 Apr 30 12:42:25.933192 sshd-session[1660]: pam_unix(sshd:session): session closed for user core Apr 30 12:42:25.937740 systemd[1]: sshd@1-91.99.0.103:22-139.178.89.65:57712.service: Deactivated successfully. Apr 30 12:42:25.939938 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 12:42:25.940698 systemd-logind[1479]: Session 2 logged out. Waiting for processes to exit. Apr 30 12:42:25.941699 systemd-logind[1479]: Removed session 2. Apr 30 12:42:26.111143 systemd[1]: Started sshd@2-91.99.0.103:22-139.178.89.65:57722.service - OpenSSH per-connection server daemon (139.178.89.65:57722). Apr 30 12:42:27.101795 sshd[1668]: Accepted publickey for core from 139.178.89.65 port 57722 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:42:27.103730 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:42:27.110031 systemd-logind[1479]: New session 3 of user core. Apr 30 12:42:27.116983 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 12:42:27.780943 sshd[1670]: Connection closed by 139.178.89.65 port 57722 Apr 30 12:42:27.781908 sshd-session[1668]: pam_unix(sshd:session): session closed for user core Apr 30 12:42:27.788182 systemd[1]: sshd@2-91.99.0.103:22-139.178.89.65:57722.service: Deactivated successfully. Apr 30 12:42:27.792040 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 12:42:27.793198 systemd-logind[1479]: Session 3 logged out. Waiting for processes to exit. Apr 30 12:42:27.794508 systemd-logind[1479]: Removed session 3. Apr 30 12:42:27.958888 systemd[1]: Started sshd@3-91.99.0.103:22-139.178.89.65:44014.service - OpenSSH per-connection server daemon (139.178.89.65:44014). Apr 30 12:42:28.941741 sshd[1676]: Accepted publickey for core from 139.178.89.65 port 44014 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:42:28.943553 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:42:28.950827 systemd-logind[1479]: New session 4 of user core. Apr 30 12:42:28.956829 systemd[1]: Started session-4.scope - Session 4 of User core. 
Apr 30 12:42:29.622695 sshd[1678]: Connection closed by 139.178.89.65 port 44014 Apr 30 12:42:29.623680 sshd-session[1676]: pam_unix(sshd:session): session closed for user core Apr 30 12:42:29.628644 systemd[1]: sshd@3-91.99.0.103:22-139.178.89.65:44014.service: Deactivated successfully. Apr 30 12:42:29.631200 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 12:42:29.633988 systemd-logind[1479]: Session 4 logged out. Waiting for processes to exit. Apr 30 12:42:29.635194 systemd-logind[1479]: Removed session 4. Apr 30 12:42:29.805031 systemd[1]: Started sshd@4-91.99.0.103:22-139.178.89.65:44022.service - OpenSSH per-connection server daemon (139.178.89.65:44022). Apr 30 12:42:29.822408 systemd-timesyncd[1375]: Contacted time server 185.228.138.224:123 (2.flatcar.pool.ntp.org). Apr 30 12:42:29.822491 systemd-timesyncd[1375]: Initial clock synchronization to Wed 2025-04-30 12:42:29.849501 UTC. Apr 30 12:42:30.804673 sshd[1684]: Accepted publickey for core from 139.178.89.65 port 44022 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:42:30.807030 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:42:30.814054 systemd-logind[1479]: New session 5 of user core. Apr 30 12:42:30.825890 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 12:42:31.340260 sudo[1687]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 12:42:31.340533 sudo[1687]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 12:42:31.358035 sudo[1687]: pam_unix(sudo:session): session closed for user root Apr 30 12:42:31.519938 sshd[1686]: Connection closed by 139.178.89.65 port 44022 Apr 30 12:42:31.521333 sshd-session[1684]: pam_unix(sshd:session): session closed for user core Apr 30 12:42:31.526580 systemd-logind[1479]: Session 5 logged out. Waiting for processes to exit. Apr 30 12:42:31.527250 systemd[1]: sshd@4-91.99.0.103:22-139.178.89.65:44022.service: Deactivated successfully. Apr 30 12:42:31.529872 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 12:42:31.531165 systemd-logind[1479]: Removed session 5. Apr 30 12:42:31.693949 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 30 12:42:31.709969 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:42:31.713116 systemd[1]: Started sshd@5-91.99.0.103:22-139.178.89.65:44032.service - OpenSSH per-connection server daemon (139.178.89.65:44032). Apr 30 12:42:31.814800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:42:31.828402 (kubelet)[1702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:42:31.878812 kubelet[1702]: E0430 12:42:31.878744 1702 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:42:31.881455 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:42:31.882010 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:42:31.882501 systemd[1]: kubelet.service: Consumed 140ms CPU time, 94.8M memory peak. 
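Every kubelet start attempt interleaved through the entries above fails the same way: /var/lib/kubelet/config.yaml does not exist yet, so the unit exits and systemd schedules another restart. On a node like this the file typically appears only once the node is bootstrapped into a cluster (kubeadm writes it during init/join). Purely to illustrate what the kubelet is looking for, not the configuration this node will eventually receive, a minimal KubeletConfiguration has this shape:

    # illustrative /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    authentication:
      anonymous:
        enabled: false

Until such a file exists, the restart loop seen here continues harmlessly in the background.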
Apr 30 12:42:32.707420 sshd[1694]: Accepted publickey for core from 139.178.89.65 port 44032 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:42:32.709406 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:42:32.715342 systemd-logind[1479]: New session 6 of user core. Apr 30 12:42:32.726948 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 12:42:33.231788 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 12:42:33.232173 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 12:42:33.237405 sudo[1712]: pam_unix(sudo:session): session closed for user root Apr 30 12:42:33.242900 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 30 12:42:33.243258 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 12:42:33.256949 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 30 12:42:33.288833 augenrules[1734]: No rules Apr 30 12:42:33.290503 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 12:42:33.290733 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 30 12:42:33.292822 sudo[1711]: pam_unix(sudo:session): session closed for user root Apr 30 12:42:33.453695 sshd[1710]: Connection closed by 139.178.89.65 port 44032 Apr 30 12:42:33.453536 sshd-session[1694]: pam_unix(sshd:session): session closed for user core Apr 30 12:42:33.458257 systemd[1]: sshd@5-91.99.0.103:22-139.178.89.65:44032.service: Deactivated successfully. Apr 30 12:42:33.459927 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 12:42:33.461627 systemd-logind[1479]: Session 6 logged out. Waiting for processes to exit. Apr 30 12:42:33.462787 systemd-logind[1479]: Removed session 6. Apr 30 12:42:33.624847 systemd[1]: Started sshd@6-91.99.0.103:22-139.178.89.65:44034.service - OpenSSH per-connection server daemon (139.178.89.65:44034). Apr 30 12:42:34.603896 sshd[1743]: Accepted publickey for core from 139.178.89.65 port 44034 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:42:34.605733 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:42:34.610709 systemd-logind[1479]: New session 7 of user core. Apr 30 12:42:34.618897 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 12:42:35.122413 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 12:42:35.123146 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 12:42:35.443171 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 12:42:35.443357 (dockerd)[1762]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 12:42:35.677519 dockerd[1762]: time="2025-04-30T12:42:35.677440515Z" level=info msg="Starting up" Apr 30 12:42:35.794591 dockerd[1762]: time="2025-04-30T12:42:35.794299994Z" level=info msg="Loading containers: start." Apr 30 12:42:35.951584 kernel: Initializing XFRM netlink socket Apr 30 12:42:36.036099 systemd-networkd[1398]: docker0: Link UP Apr 30 12:42:36.074206 dockerd[1762]: time="2025-04-30T12:42:36.074064691Z" level=info msg="Loading containers: done." 
Apr 30 12:42:36.091166 dockerd[1762]: time="2025-04-30T12:42:36.091094892Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 12:42:36.091334 dockerd[1762]: time="2025-04-30T12:42:36.091241461Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Apr 30 12:42:36.091541 dockerd[1762]: time="2025-04-30T12:42:36.091493953Z" level=info msg="Daemon has completed initialization" Apr 30 12:42:36.128680 dockerd[1762]: time="2025-04-30T12:42:36.128595349Z" level=info msg="API listen on /run/docker.sock" Apr 30 12:42:36.129009 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 12:42:37.268128 containerd[1499]: time="2025-04-30T12:42:37.268008844Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" Apr 30 12:42:37.909860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4079799225.mount: Deactivated successfully. Apr 30 12:42:40.204953 containerd[1499]: time="2025-04-30T12:42:40.203677123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:40.206594 containerd[1499]: time="2025-04-30T12:42:40.206522064Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794242" Apr 30 12:42:40.208706 containerd[1499]: time="2025-04-30T12:42:40.208670863Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:40.212484 containerd[1499]: time="2025-04-30T12:42:40.212414445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:40.213674 containerd[1499]: time="2025-04-30T12:42:40.213640620Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.945581003s" Apr 30 12:42:40.213794 containerd[1499]: time="2025-04-30T12:42:40.213777623Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" Apr 30 12:42:40.242101 containerd[1499]: time="2025-04-30T12:42:40.242048267Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" Apr 30 12:42:41.947469 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 30 12:42:41.956869 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:42:42.053268 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
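containerd reports both the image size and the elapsed time for that pull, which makes the effective transfer rate easy to estimate; whether the reported size is compressed or unpacked bytes is not stated in the log, so treat the result as a rough figure:

    # Figures from the "Pulled image registry.k8s.io/kube-apiserver:v1.30.12" line above.
    size_bytes = 29_790_950   # reported image size
    elapsed_s = 2.945581003   # reported pull duration
    print(f"effective pull rate: {size_bytes / elapsed_s / (1024 * 1024):.1f} MiB/s")  # ~9.6 MiB/s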
Apr 30 12:42:42.063917 (kubelet)[2020]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:42:42.112884 kubelet[2020]: E0430 12:42:42.112808 2020 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:42:42.114997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:42:42.115313 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:42:42.115656 systemd[1]: kubelet.service: Consumed 143ms CPU time, 96.7M memory peak. Apr 30 12:42:42.807709 containerd[1499]: time="2025-04-30T12:42:42.807609435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:42.809375 containerd[1499]: time="2025-04-30T12:42:42.808958854Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855570" Apr 30 12:42:42.810590 containerd[1499]: time="2025-04-30T12:42:42.810543858Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:42.813832 containerd[1499]: time="2025-04-30T12:42:42.813774634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:42.815133 containerd[1499]: time="2025-04-30T12:42:42.814867011Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 2.57246587s" Apr 30 12:42:42.815133 containerd[1499]: time="2025-04-30T12:42:42.814907563Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" Apr 30 12:42:42.838467 containerd[1499]: time="2025-04-30T12:42:42.838396239Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" Apr 30 12:42:43.876248 update_engine[1482]: I20250430 12:42:43.876098 1482 update_attempter.cc:509] Updating boot flags... 
Apr 30 12:42:43.925698 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2047) Apr 30 12:42:43.982597 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2051) Apr 30 12:42:44.512599 containerd[1499]: time="2025-04-30T12:42:44.510792413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:44.512599 containerd[1499]: time="2025-04-30T12:42:44.512167122Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263965" Apr 30 12:42:44.512599 containerd[1499]: time="2025-04-30T12:42:44.512513801Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:44.515763 containerd[1499]: time="2025-04-30T12:42:44.515726618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:44.516971 containerd[1499]: time="2025-04-30T12:42:44.516933210Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.678473362s" Apr 30 12:42:44.517083 containerd[1499]: time="2025-04-30T12:42:44.517065942Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" Apr 30 12:42:44.542689 containerd[1499]: time="2025-04-30T12:42:44.542591072Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" Apr 30 12:42:45.490139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount597707698.mount: Deactivated successfully. 
Apr 30 12:42:45.791914 containerd[1499]: time="2025-04-30T12:42:45.791717502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:45.793541 containerd[1499]: time="2025-04-30T12:42:45.793415321Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775731" Apr 30 12:42:45.795700 containerd[1499]: time="2025-04-30T12:42:45.795649166Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:45.800646 containerd[1499]: time="2025-04-30T12:42:45.800539929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:45.801369 containerd[1499]: time="2025-04-30T12:42:45.801330961Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.258693778s" Apr 30 12:42:45.801437 containerd[1499]: time="2025-04-30T12:42:45.801371387Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" Apr 30 12:42:45.823815 containerd[1499]: time="2025-04-30T12:42:45.823766953Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 30 12:42:46.445031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount114971588.mount: Deactivated successfully. 
Apr 30 12:42:47.152596 containerd[1499]: time="2025-04-30T12:42:47.150831049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:47.152596 containerd[1499]: time="2025-04-30T12:42:47.152288238Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Apr 30 12:42:47.153615 containerd[1499]: time="2025-04-30T12:42:47.153038304Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:47.156465 containerd[1499]: time="2025-04-30T12:42:47.156404938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:47.157822 containerd[1499]: time="2025-04-30T12:42:47.157687027Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.333869802s" Apr 30 12:42:47.157822 containerd[1499]: time="2025-04-30T12:42:47.157721807Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Apr 30 12:42:47.181365 containerd[1499]: time="2025-04-30T12:42:47.180779196Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 30 12:42:47.681524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1249783.mount: Deactivated successfully. 
Apr 30 12:42:47.688247 containerd[1499]: time="2025-04-30T12:42:47.688167027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:47.688802 containerd[1499]: time="2025-04-30T12:42:47.688756402Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841" Apr 30 12:42:47.690079 containerd[1499]: time="2025-04-30T12:42:47.690019561Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:47.693253 containerd[1499]: time="2025-04-30T12:42:47.692895916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:47.693877 containerd[1499]: time="2025-04-30T12:42:47.693839412Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 513.010869ms" Apr 30 12:42:47.693877 containerd[1499]: time="2025-04-30T12:42:47.693873872Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Apr 30 12:42:47.718964 containerd[1499]: time="2025-04-30T12:42:47.718921152Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Apr 30 12:42:48.300349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1433070458.mount: Deactivated successfully. Apr 30 12:42:52.196755 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 30 12:42:52.203844 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:42:52.339784 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:42:52.340519 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:42:52.392421 kubelet[2186]: E0430 12:42:52.391995 2186 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:42:52.394820 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:42:52.394990 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:42:52.395494 systemd[1]: kubelet.service: Consumed 140ms CPU time, 96.2M memory peak. 
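systemd is now on its fifth scheduled restart of kubelet.service (counters 3, 4 and 5 appear in this part of the log), still for the same missing config file. A throwaway sketch of tallying those attempts from journal text; the three sample lines are copied from this log:

    import re

    journal = [
        "systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.",
        "systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.",
        "systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.",
    ]
    counters = [int(m.group(1)) for line in journal
                if (m := re.search(r"restart counter is at (\d+)", line))]
    print("kubelet restart attempts seen so far:", counters)  # [3, 4, 5]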
Apr 30 12:42:52.710111 containerd[1499]: time="2025-04-30T12:42:52.710032364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:52.711464 containerd[1499]: time="2025-04-30T12:42:52.711419575Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552" Apr 30 12:42:52.712598 containerd[1499]: time="2025-04-30T12:42:52.712216423Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:52.717118 containerd[1499]: time="2025-04-30T12:42:52.717035608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:42:52.722621 containerd[1499]: time="2025-04-30T12:42:52.720185225Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 5.001174024s" Apr 30 12:42:52.722621 containerd[1499]: time="2025-04-30T12:42:52.720244809Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Apr 30 12:42:58.020019 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:42:58.020250 systemd[1]: kubelet.service: Consumed 140ms CPU time, 96.2M memory peak. Apr 30 12:42:58.030988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:42:58.057729 systemd[1]: Reload requested from client PID 2257 ('systemctl') (unit session-7.scope)... Apr 30 12:42:58.057977 systemd[1]: Reloading... Apr 30 12:42:58.237608 zram_generator::config[2311]: No configuration found. Apr 30 12:42:58.341157 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:42:58.442081 systemd[1]: Reloading finished in 383 ms. Apr 30 12:42:58.490601 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:42:58.496043 (kubelet)[2340]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 12:42:58.501234 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:42:58.502153 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 12:42:58.502407 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:42:58.502454 systemd[1]: kubelet.service: Consumed 91ms CPU time, 85.9M memory peak. Apr 30 12:42:58.507147 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:42:58.635860 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 12:42:58.644049 (kubelet)[2357]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 12:42:58.700636 kubelet[2357]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 12:42:58.700636 kubelet[2357]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 12:42:58.700636 kubelet[2357]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 12:42:58.701036 kubelet[2357]: I0430 12:42:58.700882 2357 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 12:42:59.080164 kubelet[2357]: I0430 12:42:59.080042 2357 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 12:42:59.080164 kubelet[2357]: I0430 12:42:59.080080 2357 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 12:42:59.080427 kubelet[2357]: I0430 12:42:59.080314 2357 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 12:42:59.096872 kubelet[2357]: E0430 12:42:59.096831 2357 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://91.99.0.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 91.99.0.103:6443: connect: connection refused Apr 30 12:42:59.098050 kubelet[2357]: I0430 12:42:59.097980 2357 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 12:42:59.112227 kubelet[2357]: I0430 12:42:59.112198 2357 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 12:42:59.113508 kubelet[2357]: I0430 12:42:59.112701 2357 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 12:42:59.113508 kubelet[2357]: I0430 12:42:59.112735 2357 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-1-1-9-a0dc1fa777","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 12:42:59.113508 kubelet[2357]: I0430 12:42:59.112964 2357 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 12:42:59.113508 kubelet[2357]: I0430 12:42:59.112973 2357 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 12:42:59.113783 kubelet[2357]: I0430 12:42:59.113235 2357 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:42:59.114688 kubelet[2357]: I0430 12:42:59.114666 2357 kubelet.go:400] "Attempting to sync node with API server" Apr 30 12:42:59.114782 kubelet[2357]: I0430 12:42:59.114771 2357 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 12:42:59.114923 kubelet[2357]: I0430 12:42:59.114912 2357 kubelet.go:312] "Adding apiserver pod source" Apr 30 12:42:59.115214 kubelet[2357]: I0430 12:42:59.115204 2357 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 12:42:59.115891 kubelet[2357]: W0430 12:42:59.115827 2357 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-9-a0dc1fa777&limit=500&resourceVersion=0": dial tcp 91.99.0.103:6443: connect: connection refused Apr 30 12:42:59.115949 kubelet[2357]: E0430 12:42:59.115903 2357 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://91.99.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-9-a0dc1fa777&limit=500&resourceVersion=0": dial tcp 91.99.0.103:6443: connect: connection refused Apr 30 12:42:59.116458 kubelet[2357]: W0430 12:42:59.116421 2357 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.99.0.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 91.99.0.103:6443: connect: connection refused Apr 30 12:42:59.116554 kubelet[2357]: E0430 12:42:59.116543 2357 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://91.99.0.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 91.99.0.103:6443: connect: connection refused Apr 30 12:42:59.116822 kubelet[2357]: I0430 12:42:59.116804 2357 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 12:42:59.117277 kubelet[2357]: I0430 12:42:59.117262 2357 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 12:42:59.117440 kubelet[2357]: W0430 12:42:59.117429 2357 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 30 12:42:59.118773 kubelet[2357]: I0430 12:42:59.118751 2357 server.go:1264] "Started kubelet" Apr 30 12:42:59.121296 kubelet[2357]: I0430 12:42:59.121268 2357 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 12:42:59.127156 kubelet[2357]: I0430 12:42:59.127101 2357 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 12:42:59.129612 kubelet[2357]: I0430 12:42:59.129299 2357 server.go:455] "Adding debug handlers to kubelet server" Apr 30 12:42:59.134667 kubelet[2357]: I0430 12:42:59.134618 2357 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 12:42:59.136508 kubelet[2357]: I0430 12:42:59.136478 2357 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 12:42:59.137849 kubelet[2357]: I0430 12:42:59.137827 2357 reconciler.go:26] "Reconciler: start to sync state" Apr 30 12:42:59.138881 kubelet[2357]: I0430 12:42:59.138750 2357 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 12:42:59.139437 kubelet[2357]: I0430 12:42:59.139183 2357 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 12:42:59.144613 kubelet[2357]: W0430 12:42:59.142998 2357 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.99.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.0.103:6443: connect: connection refused Apr 30 12:42:59.144613 kubelet[2357]: E0430 12:42:59.143057 2357 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://91.99.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.0.103:6443: connect: connection refused Apr 30 12:42:59.144613 kubelet[2357]: E0430 12:42:59.143114 2357 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-9-a0dc1fa777?timeout=10s\": dial tcp 91.99.0.103:6443: connect: connection refused" interval="200ms" Apr 30 12:42:59.144782 kubelet[2357]: I0430 12:42:59.144710 2357 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Apr 30 12:42:59.145429 kubelet[2357]: E0430 12:42:59.143437 2357 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.99.0.103:6443/api/v1/namespaces/default/events\": dial tcp 91.99.0.103:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-1-1-9-a0dc1fa777.183b1931142c042d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-1-1-9-a0dc1fa777,UID:ci-4230-1-1-9-a0dc1fa777,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-1-1-9-a0dc1fa777,},FirstTimestamp:2025-04-30 12:42:59.118720045 +0000 UTC m=+0.470645477,LastTimestamp:2025-04-30 12:42:59.118720045 +0000 UTC m=+0.470645477,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-1-1-9-a0dc1fa777,}" Apr 30 12:42:59.146243 kubelet[2357]: I0430 12:42:59.146200 2357 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 12:42:59.146243 kubelet[2357]: I0430 12:42:59.146244 2357 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 12:42:59.146328 kubelet[2357]: I0430 12:42:59.146268 2357 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 12:42:59.146328 kubelet[2357]: E0430 12:42:59.146309 2357 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 12:42:59.152900 kubelet[2357]: I0430 12:42:59.152858 2357 factory.go:221] Registration of the containerd container factory successfully Apr 30 12:42:59.152900 kubelet[2357]: I0430 12:42:59.152887 2357 factory.go:221] Registration of the systemd container factory successfully Apr 30 12:42:59.153062 kubelet[2357]: I0430 12:42:59.152988 2357 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 12:42:59.153338 kubelet[2357]: W0430 12:42:59.153294 2357 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.0.103:6443: connect: connection refused Apr 30 12:42:59.153601 kubelet[2357]: E0430 12:42:59.153417 2357 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://91.99.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.0.103:6443: connect: connection refused Apr 30 12:42:59.154378 kubelet[2357]: E0430 12:42:59.154263 2357 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 12:42:59.173891 kubelet[2357]: I0430 12:42:59.173813 2357 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 12:42:59.173891 kubelet[2357]: I0430 12:42:59.173868 2357 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 12:42:59.173891 kubelet[2357]: I0430 12:42:59.173901 2357 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:42:59.177543 kubelet[2357]: I0430 12:42:59.177487 2357 policy_none.go:49] "None policy: Start" Apr 30 12:42:59.179867 kubelet[2357]: I0430 12:42:59.179804 2357 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 12:42:59.179867 kubelet[2357]: I0430 12:42:59.179863 2357 state_mem.go:35] "Initializing new in-memory state store" Apr 30 12:42:59.188424 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 12:42:59.197475 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 30 12:42:59.211973 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 12:42:59.215310 kubelet[2357]: I0430 12:42:59.215230 2357 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 12:42:59.215696 kubelet[2357]: I0430 12:42:59.215548 2357 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 12:42:59.215796 kubelet[2357]: I0430 12:42:59.215780 2357 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 12:42:59.219149 kubelet[2357]: E0430 12:42:59.219077 2357 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-1-1-9-a0dc1fa777\" not found" Apr 30 12:42:59.237919 kubelet[2357]: I0430 12:42:59.237891 2357 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-1-9-a0dc1fa777" Apr 30 12:42:59.238510 kubelet[2357]: E0430 12:42:59.238480 2357 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://91.99.0.103:6443/api/v1/nodes\": dial tcp 91.99.0.103:6443: connect: connection refused" node="ci-4230-1-1-9-a0dc1fa777" Apr 30 12:42:59.246738 kubelet[2357]: I0430 12:42:59.246667 2357 topology_manager.go:215] "Topology Admit Handler" podUID="abda4258b8f1ce54c7adfde85ec4e227" podNamespace="kube-system" podName="kube-apiserver-ci-4230-1-1-9-a0dc1fa777" Apr 30 12:42:59.249748 kubelet[2357]: I0430 12:42:59.249583 2357 topology_manager.go:215] "Topology Admit Handler" podUID="557c1d4435baaa101e07af3730046257" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-1-1-9-a0dc1fa777" Apr 30 12:42:59.251830 kubelet[2357]: I0430 12:42:59.251783 2357 topology_manager.go:215] "Topology Admit Handler" podUID="a40a2e631a2bbab3f55a3137f7cbc8f1" podNamespace="kube-system" podName="kube-scheduler-ci-4230-1-1-9-a0dc1fa777" Apr 30 12:42:59.260186 systemd[1]: Created slice kubepods-burstable-podabda4258b8f1ce54c7adfde85ec4e227.slice - libcontainer container kubepods-burstable-podabda4258b8f1ce54c7adfde85ec4e227.slice. Apr 30 12:42:59.268245 systemd[1]: Created slice kubepods-burstable-pod557c1d4435baaa101e07af3730046257.slice - libcontainer container kubepods-burstable-pod557c1d4435baaa101e07af3730046257.slice. 
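The three "Topology Admit Handler" lines above pair each static pod with its UID, and those UIDs reappear verbatim in the kubepods-burstable-pod<uid>.slice units systemd creates right after. The mapping below is read straight off those lines:

    # Pod UIDs and names from the "Topology Admit Handler" lines above; the slice name
    # is the one systemd reports in the matching "Created slice" line.
    static_pods = {
        "abda4258b8f1ce54c7adfde85ec4e227": "kube-apiserver-ci-4230-1-1-9-a0dc1fa777",
        "557c1d4435baaa101e07af3730046257": "kube-controller-manager-ci-4230-1-1-9-a0dc1fa777",
        "a40a2e631a2bbab3f55a3137f7cbc8f1": "kube-scheduler-ci-4230-1-1-9-a0dc1fa777",
    }
    for uid, name in static_pods.items():
        print(f"{name} -> kubepods-burstable-pod{uid}.slice")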
Apr 30 12:42:59.289152 systemd[1]: Created slice kubepods-burstable-poda40a2e631a2bbab3f55a3137f7cbc8f1.slice - libcontainer container kubepods-burstable-poda40a2e631a2bbab3f55a3137f7cbc8f1.slice. Apr 30 12:42:59.345802 kubelet[2357]: E0430 12:42:59.344290 2357 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-9-a0dc1fa777?timeout=10s\": dial tcp 91.99.0.103:6443: connect: connection refused" interval="400ms" Apr 30 12:42:59.439233 kubelet[2357]: I0430 12:42:59.438794 2357 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/557c1d4435baaa101e07af3730046257-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-1-1-9-a0dc1fa777\" (UID: \"557c1d4435baaa101e07af3730046257\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-9-a0dc1fa777" Apr 30 12:42:59.439233 kubelet[2357]: I0430 12:42:59.438866 2357 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/557c1d4435baaa101e07af3730046257-ca-certs\") pod \"kube-controller-manager-ci-4230-1-1-9-a0dc1fa777\" (UID: \"557c1d4435baaa101e07af3730046257\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-9-a0dc1fa777" Apr 30 12:42:59.439233 kubelet[2357]: I0430 12:42:59.438906 2357 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/557c1d4435baaa101e07af3730046257-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-1-1-9-a0dc1fa777\" (UID: \"557c1d4435baaa101e07af3730046257\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-9-a0dc1fa777" Apr 30 12:42:59.439233 kubelet[2357]: I0430 12:42:59.438945 2357 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/557c1d4435baaa101e07af3730046257-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-1-9-a0dc1fa777\" (UID: \"557c1d4435baaa101e07af3730046257\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-9-a0dc1fa777" Apr 30 12:42:59.439233 kubelet[2357]: I0430 12:42:59.438976 2357 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/557c1d4435baaa101e07af3730046257-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-1-9-a0dc1fa777\" (UID: \"557c1d4435baaa101e07af3730046257\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-9-a0dc1fa777" Apr 30 12:42:59.439624 kubelet[2357]: I0430 12:42:59.439003 2357 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a40a2e631a2bbab3f55a3137f7cbc8f1-kubeconfig\") pod \"kube-scheduler-ci-4230-1-1-9-a0dc1fa777\" (UID: \"a40a2e631a2bbab3f55a3137f7cbc8f1\") " pod="kube-system/kube-scheduler-ci-4230-1-1-9-a0dc1fa777" Apr 30 12:42:59.439624 kubelet[2357]: I0430 12:42:59.439032 2357 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/abda4258b8f1ce54c7adfde85ec4e227-ca-certs\") pod \"kube-apiserver-ci-4230-1-1-9-a0dc1fa777\" (UID: \"abda4258b8f1ce54c7adfde85ec4e227\") " pod="kube-system/kube-apiserver-ci-4230-1-1-9-a0dc1fa777" Apr 30 
12:42:59.439624 kubelet[2357]: I0430 12:42:59.439061 2357 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/abda4258b8f1ce54c7adfde85ec4e227-k8s-certs\") pod \"kube-apiserver-ci-4230-1-1-9-a0dc1fa777\" (UID: \"abda4258b8f1ce54c7adfde85ec4e227\") " pod="kube-system/kube-apiserver-ci-4230-1-1-9-a0dc1fa777" Apr 30 12:42:59.439624 kubelet[2357]: I0430 12:42:59.439091 2357 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abda4258b8f1ce54c7adfde85ec4e227-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-1-9-a0dc1fa777\" (UID: \"abda4258b8f1ce54c7adfde85ec4e227\") " pod="kube-system/kube-apiserver-ci-4230-1-1-9-a0dc1fa777" Apr 30 12:42:59.440532 kubelet[2357]: I0430 12:42:59.440499 2357 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-1-9-a0dc1fa777" Apr 30 12:42:59.441062 kubelet[2357]: E0430 12:42:59.441002 2357 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://91.99.0.103:6443/api/v1/nodes\": dial tcp 91.99.0.103:6443: connect: connection refused" node="ci-4230-1-1-9-a0dc1fa777" Apr 30 12:42:59.566310 containerd[1499]: time="2025-04-30T12:42:59.566148219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-1-9-a0dc1fa777,Uid:abda4258b8f1ce54c7adfde85ec4e227,Namespace:kube-system,Attempt:0,}" Apr 30 12:42:59.588532 containerd[1499]: time="2025-04-30T12:42:59.588075687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-1-9-a0dc1fa777,Uid:557c1d4435baaa101e07af3730046257,Namespace:kube-system,Attempt:0,}" Apr 30 12:42:59.593634 containerd[1499]: time="2025-04-30T12:42:59.593416047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-1-9-a0dc1fa777,Uid:a40a2e631a2bbab3f55a3137f7cbc8f1,Namespace:kube-system,Attempt:0,}" Apr 30 12:42:59.744890 kubelet[2357]: E0430 12:42:59.744843 2357 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-9-a0dc1fa777?timeout=10s\": dial tcp 91.99.0.103:6443: connect: connection refused" interval="800ms" Apr 30 12:42:59.844002 kubelet[2357]: I0430 12:42:59.843919 2357 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-1-9-a0dc1fa777" Apr 30 12:42:59.844454 kubelet[2357]: E0430 12:42:59.844419 2357 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://91.99.0.103:6443/api/v1/nodes\": dial tcp 91.99.0.103:6443: connect: connection refused" node="ci-4230-1-1-9-a0dc1fa777" Apr 30 12:43:00.091383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1447945450.mount: Deactivated successfully. 
Apr 30 12:43:00.097553 containerd[1499]: time="2025-04-30T12:43:00.097277671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:43:00.099116 containerd[1499]: time="2025-04-30T12:43:00.099059389Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Apr 30 12:43:00.101375 containerd[1499]: time="2025-04-30T12:43:00.101264611Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:43:00.103764 containerd[1499]: time="2025-04-30T12:43:00.103709052Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:43:00.108050 containerd[1499]: time="2025-04-30T12:43:00.107983943Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 12:43:00.112350 containerd[1499]: time="2025-04-30T12:43:00.112304845Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:43:00.114327 containerd[1499]: time="2025-04-30T12:43:00.114266327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:43:00.116040 containerd[1499]: time="2025-04-30T12:43:00.115389003Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 12:43:00.116040 containerd[1499]: time="2025-04-30T12:43:00.115452538Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 549.107868ms" Apr 30 12:43:00.118835 kubelet[2357]: W0430 12:43:00.118778 2357 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-9-a0dc1fa777&limit=500&resourceVersion=0": dial tcp 91.99.0.103:6443: connect: connection refused Apr 30 12:43:00.119057 kubelet[2357]: E0430 12:43:00.118843 2357 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://91.99.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-9-a0dc1fa777&limit=500&resourceVersion=0": dial tcp 91.99.0.103:6443: connect: connection refused Apr 30 12:43:00.119663 containerd[1499]: time="2025-04-30T12:43:00.119623323Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 526.084484ms" Apr 30 12:43:00.123213 containerd[1499]: 
time="2025-04-30T12:43:00.122987390Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 534.795273ms" Apr 30 12:43:00.196360 kubelet[2357]: W0430 12:43:00.196305 2357 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.99.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.0.103:6443: connect: connection refused Apr 30 12:43:00.196666 kubelet[2357]: E0430 12:43:00.196632 2357 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://91.99.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.0.103:6443: connect: connection refused Apr 30 12:43:00.246706 containerd[1499]: time="2025-04-30T12:43:00.246495945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:43:00.246706 containerd[1499]: time="2025-04-30T12:43:00.246667867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:43:00.248131 containerd[1499]: time="2025-04-30T12:43:00.247887727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:43:00.250524 containerd[1499]: time="2025-04-30T12:43:00.248883692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:43:00.252366 containerd[1499]: time="2025-04-30T12:43:00.252232235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:43:00.252671 containerd[1499]: time="2025-04-30T12:43:00.252296570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:43:00.254737 containerd[1499]: time="2025-04-30T12:43:00.254659631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:43:00.254989 containerd[1499]: time="2025-04-30T12:43:00.254954224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:43:00.256670 containerd[1499]: time="2025-04-30T12:43:00.256484280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:43:00.256670 containerd[1499]: time="2025-04-30T12:43:00.256577262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:43:00.256670 containerd[1499]: time="2025-04-30T12:43:00.256596787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:43:00.256805 containerd[1499]: time="2025-04-30T12:43:00.256758227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:43:00.282633 systemd[1]: Started cri-containerd-e7bc5e199fa706a7686698669a1f2b85b5a348cf74cbfa929b8dd672f1a54166.scope - libcontainer container e7bc5e199fa706a7686698669a1f2b85b5a348cf74cbfa929b8dd672f1a54166. Apr 30 12:43:00.287121 systemd[1]: Started cri-containerd-0503a2cd03b27bba954256a43c4192a20a9b78e49f4d5842e8b43deee1604395.scope - libcontainer container 0503a2cd03b27bba954256a43c4192a20a9b78e49f4d5842e8b43deee1604395. Apr 30 12:43:00.289316 systemd[1]: Started cri-containerd-93081d4e0e8c101bb36c6f9a0555776e73af93014d183f846e56ac255f39f76b.scope - libcontainer container 93081d4e0e8c101bb36c6f9a0555776e73af93014d183f846e56ac255f39f76b. Apr 30 12:43:00.348687 containerd[1499]: time="2025-04-30T12:43:00.348131404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-1-9-a0dc1fa777,Uid:a40a2e631a2bbab3f55a3137f7cbc8f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7bc5e199fa706a7686698669a1f2b85b5a348cf74cbfa929b8dd672f1a54166\"" Apr 30 12:43:00.358667 containerd[1499]: time="2025-04-30T12:43:00.358480307Z" level=info msg="CreateContainer within sandbox \"e7bc5e199fa706a7686698669a1f2b85b5a348cf74cbfa929b8dd672f1a54166\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 12:43:00.369288 containerd[1499]: time="2025-04-30T12:43:00.368988410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-1-9-a0dc1fa777,Uid:557c1d4435baaa101e07af3730046257,Namespace:kube-system,Attempt:0,} returns sandbox id \"93081d4e0e8c101bb36c6f9a0555776e73af93014d183f846e56ac255f39f76b\"" Apr 30 12:43:00.373518 containerd[1499]: time="2025-04-30T12:43:00.373412057Z" level=info msg="CreateContainer within sandbox \"93081d4e0e8c101bb36c6f9a0555776e73af93014d183f846e56ac255f39f76b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 12:43:00.378801 containerd[1499]: time="2025-04-30T12:43:00.378759891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-1-9-a0dc1fa777,Uid:abda4258b8f1ce54c7adfde85ec4e227,Namespace:kube-system,Attempt:0,} returns sandbox id \"0503a2cd03b27bba954256a43c4192a20a9b78e49f4d5842e8b43deee1604395\"" Apr 30 12:43:00.383407 containerd[1499]: time="2025-04-30T12:43:00.383191861Z" level=info msg="CreateContainer within sandbox \"0503a2cd03b27bba954256a43c4192a20a9b78e49f4d5842e8b43deee1604395\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 12:43:00.389013 containerd[1499]: time="2025-04-30T12:43:00.388919588Z" level=info msg="CreateContainer within sandbox \"e7bc5e199fa706a7686698669a1f2b85b5a348cf74cbfa929b8dd672f1a54166\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f0b7650c4419b627291b39b667897108047c28c9b68ca82c249c26646f43383d\"" Apr 30 12:43:00.390413 containerd[1499]: time="2025-04-30T12:43:00.390164294Z" level=info msg="StartContainer for \"f0b7650c4419b627291b39b667897108047c28c9b68ca82c249c26646f43383d\"" Apr 30 12:43:00.402403 kubelet[2357]: W0430 12:43:00.402096 2357 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.0.103:6443: connect: connection refused Apr 30 12:43:00.402403 kubelet[2357]: E0430 12:43:00.402412 2357 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list 
*v1.RuntimeClass: Get "https://91.99.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.0.103:6443: connect: connection refused Apr 30 12:43:00.406801 containerd[1499]: time="2025-04-30T12:43:00.406693637Z" level=info msg="CreateContainer within sandbox \"93081d4e0e8c101bb36c6f9a0555776e73af93014d183f846e56ac255f39f76b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"71bb467caeb48b42cf436636432a42474ffca0f9ee4f354c58743c393e9ec0ee\"" Apr 30 12:43:00.408619 containerd[1499]: time="2025-04-30T12:43:00.407512318Z" level=info msg="StartContainer for \"71bb467caeb48b42cf436636432a42474ffca0f9ee4f354c58743c393e9ec0ee\"" Apr 30 12:43:00.412117 containerd[1499]: time="2025-04-30T12:43:00.412062716Z" level=info msg="CreateContainer within sandbox \"0503a2cd03b27bba954256a43c4192a20a9b78e49f4d5842e8b43deee1604395\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"44e646c8909e94823158d5f3622462f87b5a9669f27354254cd94bb330ec09a7\"" Apr 30 12:43:00.412753 containerd[1499]: time="2025-04-30T12:43:00.412695872Z" level=info msg="StartContainer for \"44e646c8909e94823158d5f3622462f87b5a9669f27354254cd94bb330ec09a7\"" Apr 30 12:43:00.435911 systemd[1]: Started cri-containerd-f0b7650c4419b627291b39b667897108047c28c9b68ca82c249c26646f43383d.scope - libcontainer container f0b7650c4419b627291b39b667897108047c28c9b68ca82c249c26646f43383d. Apr 30 12:43:00.447962 systemd[1]: Started cri-containerd-71bb467caeb48b42cf436636432a42474ffca0f9ee4f354c58743c393e9ec0ee.scope - libcontainer container 71bb467caeb48b42cf436636432a42474ffca0f9ee4f354c58743c393e9ec0ee. Apr 30 12:43:00.463822 systemd[1]: Started cri-containerd-44e646c8909e94823158d5f3622462f87b5a9669f27354254cd94bb330ec09a7.scope - libcontainer container 44e646c8909e94823158d5f3622462f87b5a9669f27354254cd94bb330ec09a7. 
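The sandbox and container IDs above line up one to one: each RunPodSandbox call returns a sandbox ID, CreateContainer then places the control-plane container inside that sandbox, and StartContainer launches it. The pairing below is taken from the IDs in those lines, abbreviated to their first 12 hex characters:

    # Sandbox/container IDs abbreviated from the RunPodSandbox / CreateContainer lines above.
    flow = {
        "kube-scheduler":          ("e7bc5e199fa7", "f0b7650c4419"),
        "kube-controller-manager": ("93081d4e0e8c", "71bb467caeb4"),
        "kube-apiserver":          ("0503a2cd03b2", "44e646c8909e"),
    }
    for pod, (sandbox, container) in flow.items():
        print(f"{pod}: sandbox {sandbox}... -> container {container}...")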
Apr 30 12:43:00.510821 containerd[1499]: time="2025-04-30T12:43:00.509728320Z" level=info msg="StartContainer for \"71bb467caeb48b42cf436636432a42474ffca0f9ee4f354c58743c393e9ec0ee\" returns successfully" Apr 30 12:43:00.520594 containerd[1499]: time="2025-04-30T12:43:00.518709887Z" level=info msg="StartContainer for \"f0b7650c4419b627291b39b667897108047c28c9b68ca82c249c26646f43383d\" returns successfully" Apr 30 12:43:00.526230 containerd[1499]: time="2025-04-30T12:43:00.525781505Z" level=info msg="StartContainer for \"44e646c8909e94823158d5f3622462f87b5a9669f27354254cd94bb330ec09a7\" returns successfully" Apr 30 12:43:00.546874 kubelet[2357]: E0430 12:43:00.546808 2357 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-9-a0dc1fa777?timeout=10s\": dial tcp 91.99.0.103:6443: connect: connection refused" interval="1.6s" Apr 30 12:43:00.641485 kubelet[2357]: W0430 12:43:00.641289 2357 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.99.0.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 91.99.0.103:6443: connect: connection refused Apr 30 12:43:00.641485 kubelet[2357]: E0430 12:43:00.641358 2357 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://91.99.0.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 91.99.0.103:6443: connect: connection refused Apr 30 12:43:00.648616 kubelet[2357]: I0430 12:43:00.647432 2357 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-1-9-a0dc1fa777" Apr 30 12:43:00.648616 kubelet[2357]: E0430 12:43:00.647766 2357 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://91.99.0.103:6443/api/v1/nodes\": dial tcp 91.99.0.103:6443: connect: connection refused" node="ci-4230-1-1-9-a0dc1fa777" Apr 30 12:43:02.252267 kubelet[2357]: I0430 12:43:02.252234 2357 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-1-9-a0dc1fa777" Apr 30 12:43:02.834037 kubelet[2357]: E0430 12:43:02.833973 2357 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-1-1-9-a0dc1fa777\" not found" node="ci-4230-1-1-9-a0dc1fa777" Apr 30 12:43:02.992520 kubelet[2357]: I0430 12:43:02.992302 2357 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230-1-1-9-a0dc1fa777" Apr 30 12:43:03.128656 kubelet[2357]: I0430 12:43:03.128108 2357 apiserver.go:52] "Watching apiserver" Apr 30 12:43:03.137329 kubelet[2357]: I0430 12:43:03.137275 2357 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 12:43:05.087813 systemd[1]: Reload requested from client PID 2638 ('systemctl') (unit session-7.scope)... Apr 30 12:43:05.087834 systemd[1]: Reloading... Apr 30 12:43:05.250603 zram_generator::config[2686]: No configuration found. Apr 30 12:43:05.379090 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:43:05.491530 systemd[1]: Reloading finished in 403 ms. Apr 30 12:43:05.528071 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:43:05.543864 systemd[1]: kubelet.service: Deactivated successfully. 
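The "Failed to ensure lease exists, will retry" lines report intervals of 200ms, 400ms, 800ms and finally 1.6s, i.e. the retry interval doubles while the API server is still refusing connections. The sketch below only mirrors that observed progression; it is not kubelet's actual backoff implementation:

    # Intervals reported by the four "Failed to ensure lease exists, will retry" lines above.
    observed = ["200ms", "400ms", "800ms", "1.6s"]

    interval_ms = 200  # doubling sketch of the apparent policy
    for expected in observed:
        print(f"observed {expected:>6}, doubling sketch says {interval_ms} ms")
        interval_ms *= 2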
Apr 30 12:43:05.544217 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:43:05.544284 systemd[1]: kubelet.service: Consumed 901ms CPU time, 113.8M memory peak. Apr 30 12:43:05.548929 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:43:05.676388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:43:05.687971 (kubelet)[2728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 12:43:05.742463 kubelet[2728]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 12:43:05.744432 kubelet[2728]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 12:43:05.744432 kubelet[2728]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 12:43:05.744432 kubelet[2728]: I0430 12:43:05.742576 2728 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 12:43:05.747787 kubelet[2728]: I0430 12:43:05.747758 2728 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 12:43:05.747942 kubelet[2728]: I0430 12:43:05.747931 2728 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 12:43:05.748193 kubelet[2728]: I0430 12:43:05.748179 2728 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 12:43:05.749982 kubelet[2728]: I0430 12:43:05.749955 2728 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 12:43:05.751544 kubelet[2728]: I0430 12:43:05.751523 2728 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 12:43:05.761608 kubelet[2728]: I0430 12:43:05.761582 2728 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 12:43:05.762887 kubelet[2728]: I0430 12:43:05.762851 2728 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 12:43:05.763139 kubelet[2728]: I0430 12:43:05.762886 2728 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-1-1-9-a0dc1fa777","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 12:43:05.763224 kubelet[2728]: I0430 12:43:05.763156 2728 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 12:43:05.763224 kubelet[2728]: I0430 12:43:05.763168 2728 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 12:43:05.763224 kubelet[2728]: I0430 12:43:05.763203 2728 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:43:05.763359 kubelet[2728]: I0430 12:43:05.763349 2728 kubelet.go:400] "Attempting to sync node with API server" Apr 30 12:43:05.763389 kubelet[2728]: I0430 12:43:05.763363 2728 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 12:43:05.763435 kubelet[2728]: I0430 12:43:05.763422 2728 kubelet.go:312] "Adding apiserver pod source" Apr 30 12:43:05.763461 kubelet[2728]: I0430 12:43:05.763439 2728 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 12:43:05.769579 kubelet[2728]: I0430 12:43:05.767755 2728 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 12:43:05.769579 kubelet[2728]: I0430 12:43:05.767954 2728 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 12:43:05.769579 kubelet[2728]: I0430 12:43:05.768683 2728 server.go:1264] "Started kubelet" Apr 30 12:43:05.770148 kubelet[2728]: I0430 12:43:05.769929 2728 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 12:43:05.770707 kubelet[2728]: I0430 12:43:05.770690 2728 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 12:43:05.771421 kubelet[2728]: I0430 12:43:05.771366 2728 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 12:43:05.771803 kubelet[2728]: I0430 12:43:05.771788 2728 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 12:43:05.772746 kubelet[2728]: I0430 12:43:05.772728 2728 server.go:455] "Adding debug handlers to kubelet server" Apr 30 12:43:05.779824 kubelet[2728]: I0430 12:43:05.779798 2728 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 12:43:05.783519 kubelet[2728]: I0430 12:43:05.779973 2728 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 12:43:05.791204 kubelet[2728]: I0430 12:43:05.791172 2728 reconciler.go:26] "Reconciler: start to sync state" Apr 30 12:43:05.796175 kubelet[2728]: I0430 12:43:05.796144 2728 factory.go:221] Registration of the containerd container factory successfully Apr 30 12:43:05.796674 kubelet[2728]: I0430 12:43:05.796657 2728 factory.go:221] Registration of the systemd container factory successfully Apr 30 12:43:05.796851 kubelet[2728]: I0430 12:43:05.796830 2728 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 12:43:05.812909 kubelet[2728]: E0430 12:43:05.812863 2728 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 12:43:05.814745 kubelet[2728]: I0430 12:43:05.814711 2728 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 12:43:05.818724 kubelet[2728]: I0430 12:43:05.818693 2728 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 12:43:05.820256 kubelet[2728]: I0430 12:43:05.818891 2728 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 12:43:05.820256 kubelet[2728]: I0430 12:43:05.818916 2728 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 12:43:05.820256 kubelet[2728]: E0430 12:43:05.818968 2728 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 12:43:05.867630 kubelet[2728]: I0430 12:43:05.867578 2728 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 12:43:05.867630 kubelet[2728]: I0430 12:43:05.867600 2728 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 12:43:05.867935 kubelet[2728]: I0430 12:43:05.867681 2728 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:43:05.867935 kubelet[2728]: I0430 12:43:05.867887 2728 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 12:43:05.867935 kubelet[2728]: I0430 12:43:05.867899 2728 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 12:43:05.867935 kubelet[2728]: I0430 12:43:05.867920 2728 policy_none.go:49] "None policy: Start" Apr 30 12:43:05.868670 kubelet[2728]: I0430 12:43:05.868653 2728 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 12:43:05.868741 kubelet[2728]: I0430 12:43:05.868677 2728 state_mem.go:35] "Initializing new in-memory state store" Apr 30 12:43:05.868873 kubelet[2728]: I0430 12:43:05.868857 2728 state_mem.go:75] "Updated machine memory state" Apr 30 12:43:05.873263 kubelet[2728]: I0430 12:43:05.873213 2728 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 
12:43:05.873445 kubelet[2728]: I0430 12:43:05.873406 2728 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 12:43:05.873547 kubelet[2728]: I0430 12:43:05.873534 2728 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 12:43:05.885707 kubelet[2728]: I0430 12:43:05.885679 2728 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-1-9-a0dc1fa777" Apr 30 12:43:05.903162 kubelet[2728]: I0430 12:43:05.902880 2728 kubelet_node_status.go:112] "Node was previously registered" node="ci-4230-1-1-9-a0dc1fa777" Apr 30 12:43:05.903162 kubelet[2728]: I0430 12:43:05.902979 2728 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230-1-1-9-a0dc1fa777" Apr 30 12:43:05.919923 kubelet[2728]: I0430 12:43:05.919881 2728 topology_manager.go:215] "Topology Admit Handler" podUID="abda4258b8f1ce54c7adfde85ec4e227" podNamespace="kube-system" podName="kube-apiserver-ci-4230-1-1-9-a0dc1fa777" Apr 30 12:43:05.920588 kubelet[2728]: I0430 12:43:05.920197 2728 topology_manager.go:215] "Topology Admit Handler" podUID="557c1d4435baaa101e07af3730046257" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-1-1-9-a0dc1fa777" Apr 30 12:43:05.920823 kubelet[2728]: I0430 12:43:05.920794 2728 topology_manager.go:215] "Topology Admit Handler" podUID="a40a2e631a2bbab3f55a3137f7cbc8f1" podNamespace="kube-system" podName="kube-scheduler-ci-4230-1-1-9-a0dc1fa777" Apr 30 12:43:05.930798 kubelet[2728]: E0430 12:43:05.930608 2728 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230-1-1-9-a0dc1fa777\" already exists" pod="kube-system/kube-apiserver-ci-4230-1-1-9-a0dc1fa777" Apr 30 12:43:05.933592 kubelet[2728]: E0430 12:43:05.933496 2728 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4230-1-1-9-a0dc1fa777\" already exists" pod="kube-system/kube-scheduler-ci-4230-1-1-9-a0dc1fa777" Apr 30 12:43:05.992388 kubelet[2728]: I0430 12:43:05.992242 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a40a2e631a2bbab3f55a3137f7cbc8f1-kubeconfig\") pod \"kube-scheduler-ci-4230-1-1-9-a0dc1fa777\" (UID: \"a40a2e631a2bbab3f55a3137f7cbc8f1\") " pod="kube-system/kube-scheduler-ci-4230-1-1-9-a0dc1fa777" Apr 30 12:43:05.992388 kubelet[2728]: I0430 12:43:05.992313 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/557c1d4435baaa101e07af3730046257-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-1-9-a0dc1fa777\" (UID: \"557c1d4435baaa101e07af3730046257\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-9-a0dc1fa777" Apr 30 12:43:05.992388 kubelet[2728]: I0430 12:43:05.992358 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/557c1d4435baaa101e07af3730046257-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-1-9-a0dc1fa777\" (UID: \"557c1d4435baaa101e07af3730046257\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-9-a0dc1fa777" Apr 30 12:43:05.992388 kubelet[2728]: I0430 12:43:05.992397 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/557c1d4435baaa101e07af3730046257-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4230-1-1-9-a0dc1fa777\" (UID: \"557c1d4435baaa101e07af3730046257\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-9-a0dc1fa777" Apr 30 12:43:05.992388 kubelet[2728]: I0430 12:43:05.992433 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/abda4258b8f1ce54c7adfde85ec4e227-ca-certs\") pod \"kube-apiserver-ci-4230-1-1-9-a0dc1fa777\" (UID: \"abda4258b8f1ce54c7adfde85ec4e227\") " pod="kube-system/kube-apiserver-ci-4230-1-1-9-a0dc1fa777" Apr 30 12:43:05.992961 kubelet[2728]: I0430 12:43:05.992490 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/abda4258b8f1ce54c7adfde85ec4e227-k8s-certs\") pod \"kube-apiserver-ci-4230-1-1-9-a0dc1fa777\" (UID: \"abda4258b8f1ce54c7adfde85ec4e227\") " pod="kube-system/kube-apiserver-ci-4230-1-1-9-a0dc1fa777" Apr 30 12:43:05.992961 kubelet[2728]: I0430 12:43:05.992546 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abda4258b8f1ce54c7adfde85ec4e227-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-1-9-a0dc1fa777\" (UID: \"abda4258b8f1ce54c7adfde85ec4e227\") " pod="kube-system/kube-apiserver-ci-4230-1-1-9-a0dc1fa777" Apr 30 12:43:05.992961 kubelet[2728]: I0430 12:43:05.992613 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/557c1d4435baaa101e07af3730046257-ca-certs\") pod \"kube-controller-manager-ci-4230-1-1-9-a0dc1fa777\" (UID: \"557c1d4435baaa101e07af3730046257\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-9-a0dc1fa777" Apr 30 12:43:05.992961 kubelet[2728]: I0430 12:43:05.992646 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/557c1d4435baaa101e07af3730046257-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-1-1-9-a0dc1fa777\" (UID: \"557c1d4435baaa101e07af3730046257\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-9-a0dc1fa777" Apr 30 12:43:06.082685 sudo[2762]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 30 12:43:06.083625 sudo[2762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 30 12:43:06.576067 sudo[2762]: pam_unix(sudo:session): session closed for user root Apr 30 12:43:06.765436 kubelet[2728]: I0430 12:43:06.764978 2728 apiserver.go:52] "Watching apiserver" Apr 30 12:43:06.785075 kubelet[2728]: I0430 12:43:06.784899 2728 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 12:43:06.857617 kubelet[2728]: E0430 12:43:06.857479 2728 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230-1-1-9-a0dc1fa777\" already exists" pod="kube-system/kube-apiserver-ci-4230-1-1-9-a0dc1fa777" Apr 30 12:43:06.885252 kubelet[2728]: I0430 12:43:06.885144 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-1-1-9-a0dc1fa777" podStartSLOduration=1.885106097 podStartE2EDuration="1.885106097s" podCreationTimestamp="2025-04-30 12:43:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-04-30 12:43:06.884031358 +0000 UTC m=+1.191479277" watchObservedRunningTime="2025-04-30 12:43:06.885106097 +0000 UTC m=+1.192554016" Apr 30 12:43:06.909489 kubelet[2728]: I0430 12:43:06.909424 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-1-1-9-a0dc1fa777" podStartSLOduration=2.90939475 podStartE2EDuration="2.90939475s" podCreationTimestamp="2025-04-30 12:43:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:43:06.897154868 +0000 UTC m=+1.204602787" watchObservedRunningTime="2025-04-30 12:43:06.90939475 +0000 UTC m=+1.216842669" Apr 30 12:43:08.618165 sudo[1746]: pam_unix(sudo:session): session closed for user root Apr 30 12:43:08.776547 sshd[1745]: Connection closed by 139.178.89.65 port 44034 Apr 30 12:43:08.777064 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Apr 30 12:43:08.782036 systemd[1]: sshd@6-91.99.0.103:22-139.178.89.65:44034.service: Deactivated successfully. Apr 30 12:43:08.784908 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 12:43:08.785375 systemd[1]: session-7.scope: Consumed 7.670s CPU time, 293.4M memory peak. Apr 30 12:43:08.787260 systemd-logind[1479]: Session 7 logged out. Waiting for processes to exit. Apr 30 12:43:08.789052 systemd-logind[1479]: Removed session 7. Apr 30 12:43:11.252070 kubelet[2728]: I0430 12:43:11.251912 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-1-1-9-a0dc1fa777" podStartSLOduration=6.251892131 podStartE2EDuration="6.251892131s" podCreationTimestamp="2025-04-30 12:43:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:43:06.910398438 +0000 UTC m=+1.217846357" watchObservedRunningTime="2025-04-30 12:43:11.251892131 +0000 UTC m=+5.559340050" Apr 30 12:43:19.784381 kubelet[2728]: I0430 12:43:19.784298 2728 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 12:43:19.785177 containerd[1499]: time="2025-04-30T12:43:19.784952867Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 30 12:43:19.786383 kubelet[2728]: I0430 12:43:19.785444 2728 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 12:43:20.573687 kubelet[2728]: I0430 12:43:20.573635 2728 topology_manager.go:215] "Topology Admit Handler" podUID="b9437fd8-87c6-478c-be3d-250ebd419b43" podNamespace="kube-system" podName="kube-proxy-g6xk7" Apr 30 12:43:20.584150 kubelet[2728]: I0430 12:43:20.583663 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b9437fd8-87c6-478c-be3d-250ebd419b43-kube-proxy\") pod \"kube-proxy-g6xk7\" (UID: \"b9437fd8-87c6-478c-be3d-250ebd419b43\") " pod="kube-system/kube-proxy-g6xk7" Apr 30 12:43:20.584150 kubelet[2728]: I0430 12:43:20.583698 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9437fd8-87c6-478c-be3d-250ebd419b43-lib-modules\") pod \"kube-proxy-g6xk7\" (UID: \"b9437fd8-87c6-478c-be3d-250ebd419b43\") " pod="kube-system/kube-proxy-g6xk7" Apr 30 12:43:20.584150 kubelet[2728]: I0430 12:43:20.583724 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9437fd8-87c6-478c-be3d-250ebd419b43-xtables-lock\") pod \"kube-proxy-g6xk7\" (UID: \"b9437fd8-87c6-478c-be3d-250ebd419b43\") " pod="kube-system/kube-proxy-g6xk7" Apr 30 12:43:20.584150 kubelet[2728]: I0430 12:43:20.583742 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f57c2\" (UniqueName: \"kubernetes.io/projected/b9437fd8-87c6-478c-be3d-250ebd419b43-kube-api-access-f57c2\") pod \"kube-proxy-g6xk7\" (UID: \"b9437fd8-87c6-478c-be3d-250ebd419b43\") " pod="kube-system/kube-proxy-g6xk7" Apr 30 12:43:20.586455 systemd[1]: Created slice kubepods-besteffort-podb9437fd8_87c6_478c_be3d_250ebd419b43.slice - libcontainer container kubepods-besteffort-podb9437fd8_87c6_478c_be3d_250ebd419b43.slice. Apr 30 12:43:20.592123 kubelet[2728]: I0430 12:43:20.590803 2728 topology_manager.go:215] "Topology Admit Handler" podUID="e68a2502-7ff6-4652-b551-2eb17624e6b6" podNamespace="kube-system" podName="cilium-c2jt9" Apr 30 12:43:20.604511 systemd[1]: Created slice kubepods-burstable-pode68a2502_7ff6_4652_b551_2eb17624e6b6.slice - libcontainer container kubepods-burstable-pode68a2502_7ff6_4652_b551_2eb17624e6b6.slice. 
Apr 30 12:43:20.685691 kubelet[2728]: I0430 12:43:20.684848 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-etc-cni-netd\") pod \"cilium-c2jt9\" (UID: \"e68a2502-7ff6-4652-b551-2eb17624e6b6\") " pod="kube-system/cilium-c2jt9" Apr 30 12:43:20.685691 kubelet[2728]: I0430 12:43:20.684916 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-host-proc-sys-net\") pod \"cilium-c2jt9\" (UID: \"e68a2502-7ff6-4652-b551-2eb17624e6b6\") " pod="kube-system/cilium-c2jt9" Apr 30 12:43:20.685691 kubelet[2728]: I0430 12:43:20.684947 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-host-proc-sys-kernel\") pod \"cilium-c2jt9\" (UID: \"e68a2502-7ff6-4652-b551-2eb17624e6b6\") " pod="kube-system/cilium-c2jt9" Apr 30 12:43:20.685691 kubelet[2728]: I0430 12:43:20.684986 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-hostproc\") pod \"cilium-c2jt9\" (UID: \"e68a2502-7ff6-4652-b551-2eb17624e6b6\") " pod="kube-system/cilium-c2jt9" Apr 30 12:43:20.685691 kubelet[2728]: I0430 12:43:20.685033 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e68a2502-7ff6-4652-b551-2eb17624e6b6-clustermesh-secrets\") pod \"cilium-c2jt9\" (UID: \"e68a2502-7ff6-4652-b551-2eb17624e6b6\") " pod="kube-system/cilium-c2jt9" Apr 30 12:43:20.685691 kubelet[2728]: I0430 12:43:20.685076 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-cilium-run\") pod \"cilium-c2jt9\" (UID: \"e68a2502-7ff6-4652-b551-2eb17624e6b6\") " pod="kube-system/cilium-c2jt9" Apr 30 12:43:20.686030 kubelet[2728]: I0430 12:43:20.685106 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-cni-path\") pod \"cilium-c2jt9\" (UID: \"e68a2502-7ff6-4652-b551-2eb17624e6b6\") " pod="kube-system/cilium-c2jt9" Apr 30 12:43:20.686030 kubelet[2728]: I0430 12:43:20.685149 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e68a2502-7ff6-4652-b551-2eb17624e6b6-cilium-config-path\") pod \"cilium-c2jt9\" (UID: \"e68a2502-7ff6-4652-b551-2eb17624e6b6\") " pod="kube-system/cilium-c2jt9" Apr 30 12:43:20.686030 kubelet[2728]: I0430 12:43:20.685176 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67kwz\" (UniqueName: \"kubernetes.io/projected/e68a2502-7ff6-4652-b551-2eb17624e6b6-kube-api-access-67kwz\") pod \"cilium-c2jt9\" (UID: \"e68a2502-7ff6-4652-b551-2eb17624e6b6\") " pod="kube-system/cilium-c2jt9" Apr 30 12:43:20.686030 kubelet[2728]: I0430 12:43:20.685205 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-bpf-maps\") pod \"cilium-c2jt9\" (UID: \"e68a2502-7ff6-4652-b551-2eb17624e6b6\") " pod="kube-system/cilium-c2jt9" Apr 30 12:43:20.686030 kubelet[2728]: I0430 12:43:20.685230 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-cilium-cgroup\") pod \"cilium-c2jt9\" (UID: \"e68a2502-7ff6-4652-b551-2eb17624e6b6\") " pod="kube-system/cilium-c2jt9" Apr 30 12:43:20.686030 kubelet[2728]: I0430 12:43:20.685259 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e68a2502-7ff6-4652-b551-2eb17624e6b6-hubble-tls\") pod \"cilium-c2jt9\" (UID: \"e68a2502-7ff6-4652-b551-2eb17624e6b6\") " pod="kube-system/cilium-c2jt9" Apr 30 12:43:20.686212 kubelet[2728]: I0430 12:43:20.685285 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-lib-modules\") pod \"cilium-c2jt9\" (UID: \"e68a2502-7ff6-4652-b551-2eb17624e6b6\") " pod="kube-system/cilium-c2jt9" Apr 30 12:43:20.686212 kubelet[2728]: I0430 12:43:20.685315 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-xtables-lock\") pod \"cilium-c2jt9\" (UID: \"e68a2502-7ff6-4652-b551-2eb17624e6b6\") " pod="kube-system/cilium-c2jt9" Apr 30 12:43:20.696052 kubelet[2728]: E0430 12:43:20.696013 2728 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 30 12:43:20.696052 kubelet[2728]: E0430 12:43:20.696050 2728 projected.go:200] Error preparing data for projected volume kube-api-access-f57c2 for pod kube-system/kube-proxy-g6xk7: configmap "kube-root-ca.crt" not found Apr 30 12:43:20.696227 kubelet[2728]: E0430 12:43:20.696118 2728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b9437fd8-87c6-478c-be3d-250ebd419b43-kube-api-access-f57c2 podName:b9437fd8-87c6-478c-be3d-250ebd419b43 nodeName:}" failed. No retries permitted until 2025-04-30 12:43:21.196096005 +0000 UTC m=+15.503543924 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-f57c2" (UniqueName: "kubernetes.io/projected/b9437fd8-87c6-478c-be3d-250ebd419b43-kube-api-access-f57c2") pod "kube-proxy-g6xk7" (UID: "b9437fd8-87c6-478c-be3d-250ebd419b43") : configmap "kube-root-ca.crt" not found Apr 30 12:43:20.866859 kubelet[2728]: I0430 12:43:20.866390 2728 topology_manager.go:215] "Topology Admit Handler" podUID="6222e9ac-ed41-49e6-96be-3f0efe6361d9" podNamespace="kube-system" podName="cilium-operator-599987898-q4m4q" Apr 30 12:43:20.882500 systemd[1]: Created slice kubepods-besteffort-pod6222e9ac_ed41_49e6_96be_3f0efe6361d9.slice - libcontainer container kubepods-besteffort-pod6222e9ac_ed41_49e6_96be_3f0efe6361d9.slice. 
Apr 30 12:43:20.888727 kubelet[2728]: I0430 12:43:20.887620 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbbnt\" (UniqueName: \"kubernetes.io/projected/6222e9ac-ed41-49e6-96be-3f0efe6361d9-kube-api-access-bbbnt\") pod \"cilium-operator-599987898-q4m4q\" (UID: \"6222e9ac-ed41-49e6-96be-3f0efe6361d9\") " pod="kube-system/cilium-operator-599987898-q4m4q" Apr 30 12:43:20.888727 kubelet[2728]: I0430 12:43:20.887665 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6222e9ac-ed41-49e6-96be-3f0efe6361d9-cilium-config-path\") pod \"cilium-operator-599987898-q4m4q\" (UID: \"6222e9ac-ed41-49e6-96be-3f0efe6361d9\") " pod="kube-system/cilium-operator-599987898-q4m4q" Apr 30 12:43:20.910090 containerd[1499]: time="2025-04-30T12:43:20.910037670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c2jt9,Uid:e68a2502-7ff6-4652-b551-2eb17624e6b6,Namespace:kube-system,Attempt:0,}" Apr 30 12:43:20.953348 containerd[1499]: time="2025-04-30T12:43:20.952774279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:43:20.953348 containerd[1499]: time="2025-04-30T12:43:20.952851645Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:43:20.953348 containerd[1499]: time="2025-04-30T12:43:20.952937571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:43:20.953348 containerd[1499]: time="2025-04-30T12:43:20.953058419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:43:20.982785 systemd[1]: Started cri-containerd-c95555e594a06ce1a1fbb12d92aa5d70b48a1f87ea41b35902a9bfb3e3abd8c8.scope - libcontainer container c95555e594a06ce1a1fbb12d92aa5d70b48a1f87ea41b35902a9bfb3e3abd8c8. Apr 30 12:43:21.017355 containerd[1499]: time="2025-04-30T12:43:21.017079878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c2jt9,Uid:e68a2502-7ff6-4652-b551-2eb17624e6b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"c95555e594a06ce1a1fbb12d92aa5d70b48a1f87ea41b35902a9bfb3e3abd8c8\"" Apr 30 12:43:21.021539 containerd[1499]: time="2025-04-30T12:43:21.021295185Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 30 12:43:21.190034 containerd[1499]: time="2025-04-30T12:43:21.189862550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-q4m4q,Uid:6222e9ac-ed41-49e6-96be-3f0efe6361d9,Namespace:kube-system,Attempt:0,}" Apr 30 12:43:21.214420 containerd[1499]: time="2025-04-30T12:43:21.214243295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:43:21.214420 containerd[1499]: time="2025-04-30T12:43:21.214309100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:43:21.214420 containerd[1499]: time="2025-04-30T12:43:21.214332301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:43:21.215055 containerd[1499]: time="2025-04-30T12:43:21.214981302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:43:21.235983 systemd[1]: Started cri-containerd-c5b8865868e54aab37a42135139854d3253b867543226e5a03ea3dd341f81e9f.scope - libcontainer container c5b8865868e54aab37a42135139854d3253b867543226e5a03ea3dd341f81e9f. Apr 30 12:43:21.274942 containerd[1499]: time="2025-04-30T12:43:21.274898140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-q4m4q,Uid:6222e9ac-ed41-49e6-96be-3f0efe6361d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5b8865868e54aab37a42135139854d3253b867543226e5a03ea3dd341f81e9f\"" Apr 30 12:43:21.503240 containerd[1499]: time="2025-04-30T12:43:21.503167650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g6xk7,Uid:b9437fd8-87c6-478c-be3d-250ebd419b43,Namespace:kube-system,Attempt:0,}" Apr 30 12:43:21.528282 containerd[1499]: time="2025-04-30T12:43:21.527948860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:43:21.528282 containerd[1499]: time="2025-04-30T12:43:21.528109191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:43:21.528282 containerd[1499]: time="2025-04-30T12:43:21.528125712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:43:21.528835 containerd[1499]: time="2025-04-30T12:43:21.528674466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:43:21.548826 systemd[1]: Started cri-containerd-4b74ad5324f2480e0642bffb44d3965c0807f814eddb20f010eab3f87e37d75d.scope - libcontainer container 4b74ad5324f2480e0642bffb44d3965c0807f814eddb20f010eab3f87e37d75d. Apr 30 12:43:21.575627 containerd[1499]: time="2025-04-30T12:43:21.575351665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g6xk7,Uid:b9437fd8-87c6-478c-be3d-250ebd419b43,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b74ad5324f2480e0642bffb44d3965c0807f814eddb20f010eab3f87e37d75d\"" Apr 30 12:43:21.579030 containerd[1499]: time="2025-04-30T12:43:21.578892890Z" level=info msg="CreateContainer within sandbox \"4b74ad5324f2480e0642bffb44d3965c0807f814eddb20f010eab3f87e37d75d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 12:43:21.594487 containerd[1499]: time="2025-04-30T12:43:21.594441915Z" level=info msg="CreateContainer within sandbox \"4b74ad5324f2480e0642bffb44d3965c0807f814eddb20f010eab3f87e37d75d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3c116fa873a7c561057b192195500327f36cd70765f7462c07d56f502e060b56\"" Apr 30 12:43:21.595874 containerd[1499]: time="2025-04-30T12:43:21.595832083Z" level=info msg="StartContainer for \"3c116fa873a7c561057b192195500327f36cd70765f7462c07d56f502e060b56\"" Apr 30 12:43:21.622864 systemd[1]: Started cri-containerd-3c116fa873a7c561057b192195500327f36cd70765f7462c07d56f502e060b56.scope - libcontainer container 3c116fa873a7c561057b192195500327f36cd70765f7462c07d56f502e060b56. 
Apr 30 12:43:21.658105 containerd[1499]: time="2025-04-30T12:43:21.657993064Z" level=info msg="StartContainer for \"3c116fa873a7c561057b192195500327f36cd70765f7462c07d56f502e060b56\" returns successfully" Apr 30 12:43:24.671066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount310942254.mount: Deactivated successfully. Apr 30 12:43:25.841521 kubelet[2728]: I0430 12:43:25.839277 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g6xk7" podStartSLOduration=5.837538414 podStartE2EDuration="5.837538414s" podCreationTimestamp="2025-04-30 12:43:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:43:21.899134349 +0000 UTC m=+16.206582228" watchObservedRunningTime="2025-04-30 12:43:25.837538414 +0000 UTC m=+20.144986373" Apr 30 12:43:27.195605 containerd[1499]: time="2025-04-30T12:43:27.194872265Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:43:27.196980 containerd[1499]: time="2025-04-30T12:43:27.196869551Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Apr 30 12:43:27.197739 containerd[1499]: time="2025-04-30T12:43:27.197686066Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:43:27.200490 containerd[1499]: time="2025-04-30T12:43:27.200426344Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.179082956s" Apr 30 12:43:27.201244 containerd[1499]: time="2025-04-30T12:43:27.200719396Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Apr 30 12:43:27.202678 containerd[1499]: time="2025-04-30T12:43:27.202469832Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 30 12:43:27.207973 containerd[1499]: time="2025-04-30T12:43:27.206869821Z" level=info msg="CreateContainer within sandbox \"c95555e594a06ce1a1fbb12d92aa5d70b48a1f87ea41b35902a9bfb3e3abd8c8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 12:43:27.223662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4041375947.mount: Deactivated successfully. 
Apr 30 12:43:27.233180 containerd[1499]: time="2025-04-30T12:43:27.233126351Z" level=info msg="CreateContainer within sandbox \"c95555e594a06ce1a1fbb12d92aa5d70b48a1f87ea41b35902a9bfb3e3abd8c8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"eb5da4f1201f7a480c606ebaf319b0549a724aae5b382cffac05654d872e2c85\"" Apr 30 12:43:27.234993 containerd[1499]: time="2025-04-30T12:43:27.234962190Z" level=info msg="StartContainer for \"eb5da4f1201f7a480c606ebaf319b0549a724aae5b382cffac05654d872e2c85\"" Apr 30 12:43:27.270928 systemd[1]: Started cri-containerd-eb5da4f1201f7a480c606ebaf319b0549a724aae5b382cffac05654d872e2c85.scope - libcontainer container eb5da4f1201f7a480c606ebaf319b0549a724aae5b382cffac05654d872e2c85. Apr 30 12:43:27.302206 containerd[1499]: time="2025-04-30T12:43:27.302154562Z" level=info msg="StartContainer for \"eb5da4f1201f7a480c606ebaf319b0549a724aae5b382cffac05654d872e2c85\" returns successfully" Apr 30 12:43:27.320254 systemd[1]: cri-containerd-eb5da4f1201f7a480c606ebaf319b0549a724aae5b382cffac05654d872e2c85.scope: Deactivated successfully. Apr 30 12:43:27.481275 containerd[1499]: time="2025-04-30T12:43:27.481199668Z" level=info msg="shim disconnected" id=eb5da4f1201f7a480c606ebaf319b0549a724aae5b382cffac05654d872e2c85 namespace=k8s.io Apr 30 12:43:27.481275 containerd[1499]: time="2025-04-30T12:43:27.481266190Z" level=warning msg="cleaning up after shim disconnected" id=eb5da4f1201f7a480c606ebaf319b0549a724aae5b382cffac05654d872e2c85 namespace=k8s.io Apr 30 12:43:27.481275 containerd[1499]: time="2025-04-30T12:43:27.481276591Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:43:27.912593 containerd[1499]: time="2025-04-30T12:43:27.911068608Z" level=info msg="CreateContainer within sandbox \"c95555e594a06ce1a1fbb12d92aa5d70b48a1f87ea41b35902a9bfb3e3abd8c8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 12:43:27.925598 containerd[1499]: time="2025-04-30T12:43:27.925539711Z" level=info msg="CreateContainer within sandbox \"c95555e594a06ce1a1fbb12d92aa5d70b48a1f87ea41b35902a9bfb3e3abd8c8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f319b3e770a7cd94f052eedd7fb6ca53ed8dd731117678b2a7fdebf3aa19061e\"" Apr 30 12:43:27.926626 containerd[1499]: time="2025-04-30T12:43:27.926344385Z" level=info msg="StartContainer for \"f319b3e770a7cd94f052eedd7fb6ca53ed8dd731117678b2a7fdebf3aa19061e\"" Apr 30 12:43:27.957831 systemd[1]: Started cri-containerd-f319b3e770a7cd94f052eedd7fb6ca53ed8dd731117678b2a7fdebf3aa19061e.scope - libcontainer container f319b3e770a7cd94f052eedd7fb6ca53ed8dd731117678b2a7fdebf3aa19061e. Apr 30 12:43:27.992702 containerd[1499]: time="2025-04-30T12:43:27.992514473Z" level=info msg="StartContainer for \"f319b3e770a7cd94f052eedd7fb6ca53ed8dd731117678b2a7fdebf3aa19061e\" returns successfully" Apr 30 12:43:28.009209 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 12:43:28.009434 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 12:43:28.010203 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 30 12:43:28.017802 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 12:43:28.018037 systemd[1]: cri-containerd-f319b3e770a7cd94f052eedd7fb6ca53ed8dd731117678b2a7fdebf3aa19061e.scope: Deactivated successfully. Apr 30 12:43:28.040808 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Apr 30 12:43:28.049640 containerd[1499]: time="2025-04-30T12:43:28.049350908Z" level=info msg="shim disconnected" id=f319b3e770a7cd94f052eedd7fb6ca53ed8dd731117678b2a7fdebf3aa19061e namespace=k8s.io Apr 30 12:43:28.049640 containerd[1499]: time="2025-04-30T12:43:28.049412750Z" level=warning msg="cleaning up after shim disconnected" id=f319b3e770a7cd94f052eedd7fb6ca53ed8dd731117678b2a7fdebf3aa19061e namespace=k8s.io Apr 30 12:43:28.049640 containerd[1499]: time="2025-04-30T12:43:28.049422071Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:43:28.218585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb5da4f1201f7a480c606ebaf319b0549a724aae5b382cffac05654d872e2c85-rootfs.mount: Deactivated successfully. Apr 30 12:43:28.909588 containerd[1499]: time="2025-04-30T12:43:28.909518333Z" level=info msg="CreateContainer within sandbox \"c95555e594a06ce1a1fbb12d92aa5d70b48a1f87ea41b35902a9bfb3e3abd8c8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 12:43:28.932626 containerd[1499]: time="2025-04-30T12:43:28.932549103Z" level=info msg="CreateContainer within sandbox \"c95555e594a06ce1a1fbb12d92aa5d70b48a1f87ea41b35902a9bfb3e3abd8c8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1e04bee12bda8af75214b74e8d4225f5fe2ff499d7a5803a189f5c6bb9fd1ffe\"" Apr 30 12:43:28.933483 containerd[1499]: time="2025-04-30T12:43:28.933431458Z" level=info msg="StartContainer for \"1e04bee12bda8af75214b74e8d4225f5fe2ff499d7a5803a189f5c6bb9fd1ffe\"" Apr 30 12:43:28.971818 systemd[1]: Started cri-containerd-1e04bee12bda8af75214b74e8d4225f5fe2ff499d7a5803a189f5c6bb9fd1ffe.scope - libcontainer container 1e04bee12bda8af75214b74e8d4225f5fe2ff499d7a5803a189f5c6bb9fd1ffe. Apr 30 12:43:29.009097 containerd[1499]: time="2025-04-30T12:43:29.009023967Z" level=info msg="StartContainer for \"1e04bee12bda8af75214b74e8d4225f5fe2ff499d7a5803a189f5c6bb9fd1ffe\" returns successfully" Apr 30 12:43:29.010140 systemd[1]: cri-containerd-1e04bee12bda8af75214b74e8d4225f5fe2ff499d7a5803a189f5c6bb9fd1ffe.scope: Deactivated successfully. Apr 30 12:43:29.038293 containerd[1499]: time="2025-04-30T12:43:29.038219431Z" level=info msg="shim disconnected" id=1e04bee12bda8af75214b74e8d4225f5fe2ff499d7a5803a189f5c6bb9fd1ffe namespace=k8s.io Apr 30 12:43:29.038293 containerd[1499]: time="2025-04-30T12:43:29.038286674Z" level=warning msg="cleaning up after shim disconnected" id=1e04bee12bda8af75214b74e8d4225f5fe2ff499d7a5803a189f5c6bb9fd1ffe namespace=k8s.io Apr 30 12:43:29.038293 containerd[1499]: time="2025-04-30T12:43:29.038295474Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:43:29.217824 systemd[1]: run-containerd-runc-k8s.io-1e04bee12bda8af75214b74e8d4225f5fe2ff499d7a5803a189f5c6bb9fd1ffe-runc.375vHf.mount: Deactivated successfully. Apr 30 12:43:29.217941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e04bee12bda8af75214b74e8d4225f5fe2ff499d7a5803a189f5c6bb9fd1ffe-rootfs.mount: Deactivated successfully. 
Apr 30 12:43:29.915925 containerd[1499]: time="2025-04-30T12:43:29.915684902Z" level=info msg="CreateContainer within sandbox \"c95555e594a06ce1a1fbb12d92aa5d70b48a1f87ea41b35902a9bfb3e3abd8c8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 12:43:29.940926 containerd[1499]: time="2025-04-30T12:43:29.939758572Z" level=info msg="CreateContainer within sandbox \"c95555e594a06ce1a1fbb12d92aa5d70b48a1f87ea41b35902a9bfb3e3abd8c8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"71686ad58fb6b8d0320483e6109caa1e1b37d48e0888486b69efb27024fbd804\"" Apr 30 12:43:29.941892 containerd[1499]: time="2025-04-30T12:43:29.941728767Z" level=info msg="StartContainer for \"71686ad58fb6b8d0320483e6109caa1e1b37d48e0888486b69efb27024fbd804\"" Apr 30 12:43:29.971755 systemd[1]: Started cri-containerd-71686ad58fb6b8d0320483e6109caa1e1b37d48e0888486b69efb27024fbd804.scope - libcontainer container 71686ad58fb6b8d0320483e6109caa1e1b37d48e0888486b69efb27024fbd804. Apr 30 12:43:29.996398 systemd[1]: cri-containerd-71686ad58fb6b8d0320483e6109caa1e1b37d48e0888486b69efb27024fbd804.scope: Deactivated successfully. Apr 30 12:43:30.000120 containerd[1499]: time="2025-04-30T12:43:29.999544594Z" level=info msg="StartContainer for \"71686ad58fb6b8d0320483e6109caa1e1b37d48e0888486b69efb27024fbd804\" returns successfully" Apr 30 12:43:30.027290 containerd[1499]: time="2025-04-30T12:43:30.027221460Z" level=info msg="shim disconnected" id=71686ad58fb6b8d0320483e6109caa1e1b37d48e0888486b69efb27024fbd804 namespace=k8s.io Apr 30 12:43:30.027290 containerd[1499]: time="2025-04-30T12:43:30.027280542Z" level=warning msg="cleaning up after shim disconnected" id=71686ad58fb6b8d0320483e6109caa1e1b37d48e0888486b69efb27024fbd804 namespace=k8s.io Apr 30 12:43:30.027290 containerd[1499]: time="2025-04-30T12:43:30.027290703Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:43:30.219777 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71686ad58fb6b8d0320483e6109caa1e1b37d48e0888486b69efb27024fbd804-rootfs.mount: Deactivated successfully. Apr 30 12:43:30.921323 containerd[1499]: time="2025-04-30T12:43:30.921034716Z" level=info msg="CreateContainer within sandbox \"c95555e594a06ce1a1fbb12d92aa5d70b48a1f87ea41b35902a9bfb3e3abd8c8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 12:43:30.942103 containerd[1499]: time="2025-04-30T12:43:30.942046301Z" level=info msg="CreateContainer within sandbox \"c95555e594a06ce1a1fbb12d92aa5d70b48a1f87ea41b35902a9bfb3e3abd8c8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bcde6cae0666906358f94f05e4b7b7bb187d947243d8114484ff01766695b5cb\"" Apr 30 12:43:30.943670 containerd[1499]: time="2025-04-30T12:43:30.943607397Z" level=info msg="StartContainer for \"bcde6cae0666906358f94f05e4b7b7bb187d947243d8114484ff01766695b5cb\"" Apr 30 12:43:30.981837 systemd[1]: Started cri-containerd-bcde6cae0666906358f94f05e4b7b7bb187d947243d8114484ff01766695b5cb.scope - libcontainer container bcde6cae0666906358f94f05e4b7b7bb187d947243d8114484ff01766695b5cb. 
Apr 30 12:43:31.010065 containerd[1499]: time="2025-04-30T12:43:31.010000532Z" level=info msg="StartContainer for \"bcde6cae0666906358f94f05e4b7b7bb187d947243d8114484ff01766695b5cb\" returns successfully" Apr 30 12:43:31.194683 kubelet[2728]: I0430 12:43:31.192258 2728 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 12:43:31.219426 systemd[1]: run-containerd-runc-k8s.io-bcde6cae0666906358f94f05e4b7b7bb187d947243d8114484ff01766695b5cb-runc.qyx1io.mount: Deactivated successfully. Apr 30 12:43:31.230163 kubelet[2728]: I0430 12:43:31.230111 2728 topology_manager.go:215] "Topology Admit Handler" podUID="df589d07-1ce3-4fe1-8d3d-27c4ae12aa9e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6g5r4" Apr 30 12:43:31.239588 kubelet[2728]: I0430 12:43:31.239527 2728 topology_manager.go:215] "Topology Admit Handler" podUID="3bb54f6f-ed63-49ce-9bda-40d4e847dc24" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kk5kz" Apr 30 12:43:31.241557 systemd[1]: Created slice kubepods-burstable-poddf589d07_1ce3_4fe1_8d3d_27c4ae12aa9e.slice - libcontainer container kubepods-burstable-poddf589d07_1ce3_4fe1_8d3d_27c4ae12aa9e.slice. Apr 30 12:43:31.251381 systemd[1]: Created slice kubepods-burstable-pod3bb54f6f_ed63_49ce_9bda_40d4e847dc24.slice - libcontainer container kubepods-burstable-pod3bb54f6f_ed63_49ce_9bda_40d4e847dc24.slice. Apr 30 12:43:31.259113 kubelet[2728]: I0430 12:43:31.259066 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df589d07-1ce3-4fe1-8d3d-27c4ae12aa9e-config-volume\") pod \"coredns-7db6d8ff4d-6g5r4\" (UID: \"df589d07-1ce3-4fe1-8d3d-27c4ae12aa9e\") " pod="kube-system/coredns-7db6d8ff4d-6g5r4" Apr 30 12:43:31.259113 kubelet[2728]: I0430 12:43:31.259114 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64s49\" (UniqueName: \"kubernetes.io/projected/3bb54f6f-ed63-49ce-9bda-40d4e847dc24-kube-api-access-64s49\") pod \"coredns-7db6d8ff4d-kk5kz\" (UID: \"3bb54f6f-ed63-49ce-9bda-40d4e847dc24\") " pod="kube-system/coredns-7db6d8ff4d-kk5kz" Apr 30 12:43:31.259279 kubelet[2728]: I0430 12:43:31.259137 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpzzp\" (UniqueName: \"kubernetes.io/projected/df589d07-1ce3-4fe1-8d3d-27c4ae12aa9e-kube-api-access-rpzzp\") pod \"coredns-7db6d8ff4d-6g5r4\" (UID: \"df589d07-1ce3-4fe1-8d3d-27c4ae12aa9e\") " pod="kube-system/coredns-7db6d8ff4d-6g5r4" Apr 30 12:43:31.259279 kubelet[2728]: I0430 12:43:31.259153 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bb54f6f-ed63-49ce-9bda-40d4e847dc24-config-volume\") pod \"coredns-7db6d8ff4d-kk5kz\" (UID: \"3bb54f6f-ed63-49ce-9bda-40d4e847dc24\") " pod="kube-system/coredns-7db6d8ff4d-kk5kz" Apr 30 12:43:31.546845 containerd[1499]: time="2025-04-30T12:43:31.546799019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6g5r4,Uid:df589d07-1ce3-4fe1-8d3d-27c4ae12aa9e,Namespace:kube-system,Attempt:0,}" Apr 30 12:43:31.556610 containerd[1499]: time="2025-04-30T12:43:31.556236932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kk5kz,Uid:3bb54f6f-ed63-49ce-9bda-40d4e847dc24,Namespace:kube-system,Attempt:0,}" Apr 30 12:43:31.944632 kubelet[2728]: I0430 12:43:31.944429 2728 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-c2jt9" podStartSLOduration=5.763142176 podStartE2EDuration="11.944406917s" podCreationTimestamp="2025-04-30 12:43:20 +0000 UTC" firstStartedPulling="2025-04-30 12:43:21.020810794 +0000 UTC m=+15.328258673" lastFinishedPulling="2025-04-30 12:43:27.202075415 +0000 UTC m=+21.509523414" observedRunningTime="2025-04-30 12:43:31.943736375 +0000 UTC m=+26.251184334" watchObservedRunningTime="2025-04-30 12:43:31.944406917 +0000 UTC m=+26.251854836" Apr 30 12:43:35.526418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount688939102.mount: Deactivated successfully. Apr 30 12:43:35.847689 containerd[1499]: time="2025-04-30T12:43:35.847212517Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:43:35.848761 containerd[1499]: time="2025-04-30T12:43:35.848695812Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Apr 30 12:43:35.849668 containerd[1499]: time="2025-04-30T12:43:35.849599326Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:43:35.851212 containerd[1499]: time="2025-04-30T12:43:35.850849052Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 8.648171852s" Apr 30 12:43:35.851212 containerd[1499]: time="2025-04-30T12:43:35.850885333Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Apr 30 12:43:35.855125 containerd[1499]: time="2025-04-30T12:43:35.855069209Z" level=info msg="CreateContainer within sandbox \"c5b8865868e54aab37a42135139854d3253b867543226e5a03ea3dd341f81e9f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 30 12:43:35.871429 containerd[1499]: time="2025-04-30T12:43:35.871303772Z" level=info msg="CreateContainer within sandbox \"c5b8865868e54aab37a42135139854d3253b867543226e5a03ea3dd341f81e9f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"927d1c83125940612c17a2d8e76511833d489d1b923683be1566f76149d0bd96\"" Apr 30 12:43:35.872828 containerd[1499]: time="2025-04-30T12:43:35.871828432Z" level=info msg="StartContainer for \"927d1c83125940612c17a2d8e76511833d489d1b923683be1566f76149d0bd96\"" Apr 30 12:43:35.901744 systemd[1]: Started cri-containerd-927d1c83125940612c17a2d8e76511833d489d1b923683be1566f76149d0bd96.scope - libcontainer container 927d1c83125940612c17a2d8e76511833d489d1b923683be1566f76149d0bd96. 
Apr 30 12:43:35.926902 containerd[1499]: time="2025-04-30T12:43:35.926862557Z" level=info msg="StartContainer for \"927d1c83125940612c17a2d8e76511833d489d1b923683be1566f76149d0bd96\" returns successfully" Apr 30 12:43:35.953611 kubelet[2728]: I0430 12:43:35.951696 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-q4m4q" podStartSLOduration=1.376254149 podStartE2EDuration="15.951675799s" podCreationTimestamp="2025-04-30 12:43:20 +0000 UTC" firstStartedPulling="2025-04-30 12:43:21.276519803 +0000 UTC m=+15.583967722" lastFinishedPulling="2025-04-30 12:43:35.851941453 +0000 UTC m=+30.159389372" observedRunningTime="2025-04-30 12:43:35.951506033 +0000 UTC m=+30.258953912" watchObservedRunningTime="2025-04-30 12:43:35.951675799 +0000 UTC m=+30.259123718" Apr 30 12:43:39.999356 systemd-networkd[1398]: cilium_host: Link UP Apr 30 12:43:40.000450 systemd-networkd[1398]: cilium_net: Link UP Apr 30 12:43:40.001130 systemd-networkd[1398]: cilium_net: Gained carrier Apr 30 12:43:40.001920 systemd-networkd[1398]: cilium_host: Gained carrier Apr 30 12:43:40.111943 systemd-networkd[1398]: cilium_vxlan: Link UP Apr 30 12:43:40.111950 systemd-networkd[1398]: cilium_vxlan: Gained carrier Apr 30 12:43:40.400724 kernel: NET: Registered PF_ALG protocol family Apr 30 12:43:40.451789 systemd-networkd[1398]: cilium_net: Gained IPv6LL Apr 30 12:43:40.475855 systemd-networkd[1398]: cilium_host: Gained IPv6LL Apr 30 12:43:41.099644 systemd-networkd[1398]: lxc_health: Link UP Apr 30 12:43:41.110251 systemd-networkd[1398]: lxc_health: Gained carrier Apr 30 12:43:41.625061 systemd-networkd[1398]: lxc588b1cf80f4d: Link UP Apr 30 12:43:41.628636 kernel: eth0: renamed from tmp246b1 Apr 30 12:43:41.635848 systemd-networkd[1398]: lxc588b1cf80f4d: Gained carrier Apr 30 12:43:41.670599 kernel: eth0: renamed from tmp850dd Apr 30 12:43:41.670033 systemd-networkd[1398]: lxca32a46112dad: Link UP Apr 30 12:43:41.677369 systemd-networkd[1398]: lxca32a46112dad: Gained carrier Apr 30 12:43:42.044185 systemd-networkd[1398]: cilium_vxlan: Gained IPv6LL Apr 30 12:43:42.619969 systemd-networkd[1398]: lxc_health: Gained IPv6LL Apr 30 12:43:43.131910 systemd-networkd[1398]: lxca32a46112dad: Gained IPv6LL Apr 30 12:43:43.579903 systemd-networkd[1398]: lxc588b1cf80f4d: Gained IPv6LL Apr 30 12:43:45.641343 containerd[1499]: time="2025-04-30T12:43:45.641209161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:43:45.641935 containerd[1499]: time="2025-04-30T12:43:45.641745976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:43:45.641935 containerd[1499]: time="2025-04-30T12:43:45.641813858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:43:45.642400 containerd[1499]: time="2025-04-30T12:43:45.642344233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:43:45.670341 systemd[1]: Started cri-containerd-850ddbb5531fa0d406da2eb3ce579fecd6d143d640580d7b078524664b670065.scope - libcontainer container 850ddbb5531fa0d406da2eb3ce579fecd6d143d640580d7b078524664b670065. Apr 30 12:43:45.674700 containerd[1499]: time="2025-04-30T12:43:45.674455139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:43:45.676132 containerd[1499]: time="2025-04-30T12:43:45.675628652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:43:45.676132 containerd[1499]: time="2025-04-30T12:43:45.675671333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:43:45.676132 containerd[1499]: time="2025-04-30T12:43:45.675779816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:43:45.715220 systemd[1]: Started cri-containerd-246b126bfe1d7a22b2ac3877934903ae27a03dc8921961718c53eb090ff7b32a.scope - libcontainer container 246b126bfe1d7a22b2ac3877934903ae27a03dc8921961718c53eb090ff7b32a. Apr 30 12:43:45.744313 containerd[1499]: time="2025-04-30T12:43:45.744160225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kk5kz,Uid:3bb54f6f-ed63-49ce-9bda-40d4e847dc24,Namespace:kube-system,Attempt:0,} returns sandbox id \"850ddbb5531fa0d406da2eb3ce579fecd6d143d640580d7b078524664b670065\"" Apr 30 12:43:45.748348 containerd[1499]: time="2025-04-30T12:43:45.748281341Z" level=info msg="CreateContainer within sandbox \"850ddbb5531fa0d406da2eb3ce579fecd6d143d640580d7b078524664b670065\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 12:43:45.776135 containerd[1499]: time="2025-04-30T12:43:45.776081326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6g5r4,Uid:df589d07-1ce3-4fe1-8d3d-27c4ae12aa9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"246b126bfe1d7a22b2ac3877934903ae27a03dc8921961718c53eb090ff7b32a\"" Apr 30 12:43:45.776639 containerd[1499]: time="2025-04-30T12:43:45.776608781Z" level=info msg="CreateContainer within sandbox \"850ddbb5531fa0d406da2eb3ce579fecd6d143d640580d7b078524664b670065\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bfafac0eedc34082a91a7bdac790c9b2cd34e36f6333a12070a36911d25baffe\"" Apr 30 12:43:45.778877 containerd[1499]: time="2025-04-30T12:43:45.778655838Z" level=info msg="StartContainer for \"bfafac0eedc34082a91a7bdac790c9b2cd34e36f6333a12070a36911d25baffe\"" Apr 30 12:43:45.782618 containerd[1499]: time="2025-04-30T12:43:45.781905730Z" level=info msg="CreateContainer within sandbox \"246b126bfe1d7a22b2ac3877934903ae27a03dc8921961718c53eb090ff7b32a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 12:43:45.800932 containerd[1499]: time="2025-04-30T12:43:45.800804543Z" level=info msg="CreateContainer within sandbox \"246b126bfe1d7a22b2ac3877934903ae27a03dc8921961718c53eb090ff7b32a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"73b7aeb4cfc92c0a61fddf20b737068c1f163a582b648d320c09a55b99c78dd2\"" Apr 30 12:43:45.801631 containerd[1499]: time="2025-04-30T12:43:45.801601886Z" level=info msg="StartContainer for \"73b7aeb4cfc92c0a61fddf20b737068c1f163a582b648d320c09a55b99c78dd2\"" Apr 30 12:43:45.838791 systemd[1]: Started cri-containerd-bfafac0eedc34082a91a7bdac790c9b2cd34e36f6333a12070a36911d25baffe.scope - libcontainer container bfafac0eedc34082a91a7bdac790c9b2cd34e36f6333a12070a36911d25baffe. Apr 30 12:43:45.854902 systemd[1]: Started cri-containerd-73b7aeb4cfc92c0a61fddf20b737068c1f163a582b648d320c09a55b99c78dd2.scope - libcontainer container 73b7aeb4cfc92c0a61fddf20b737068c1f163a582b648d320c09a55b99c78dd2. 
Apr 30 12:43:45.897810 containerd[1499]: time="2025-04-30T12:43:45.897539872Z" level=info msg="StartContainer for \"bfafac0eedc34082a91a7bdac790c9b2cd34e36f6333a12070a36911d25baffe\" returns successfully" Apr 30 12:43:45.908235 containerd[1499]: time="2025-04-30T12:43:45.907196105Z" level=info msg="StartContainer for \"73b7aeb4cfc92c0a61fddf20b737068c1f163a582b648d320c09a55b99c78dd2\" returns successfully" Apr 30 12:43:46.026759 kubelet[2728]: I0430 12:43:46.026692 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-kk5kz" podStartSLOduration=26.026675056 podStartE2EDuration="26.026675056s" podCreationTimestamp="2025-04-30 12:43:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:43:46.02210349 +0000 UTC m=+40.329551449" watchObservedRunningTime="2025-04-30 12:43:46.026675056 +0000 UTC m=+40.334122975" Apr 30 12:43:46.042556 kubelet[2728]: I0430 12:43:46.041210 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6g5r4" podStartSLOduration=26.041193175 podStartE2EDuration="26.041193175s" podCreationTimestamp="2025-04-30 12:43:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:43:46.040198587 +0000 UTC m=+40.347646466" watchObservedRunningTime="2025-04-30 12:43:46.041193175 +0000 UTC m=+40.348641054" Apr 30 12:43:46.650825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1125345887.mount: Deactivated successfully. Apr 30 12:45:38.532927 systemd[1]: Started sshd@7-91.99.0.103:22-139.178.89.65:35918.service - OpenSSH per-connection server daemon (139.178.89.65:35918). Apr 30 12:45:39.517840 sshd[4115]: Accepted publickey for core from 139.178.89.65 port 35918 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:45:39.520036 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:45:39.526100 systemd-logind[1479]: New session 8 of user core. Apr 30 12:45:39.532016 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 12:45:40.293973 sshd[4117]: Connection closed by 139.178.89.65 port 35918 Apr 30 12:45:40.294865 sshd-session[4115]: pam_unix(sshd:session): session closed for user core Apr 30 12:45:40.300293 systemd-logind[1479]: Session 8 logged out. Waiting for processes to exit. Apr 30 12:45:40.301153 systemd[1]: sshd@7-91.99.0.103:22-139.178.89.65:35918.service: Deactivated successfully. Apr 30 12:45:40.303994 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 12:45:40.306007 systemd-logind[1479]: Removed session 8. Apr 30 12:45:45.477968 systemd[1]: Started sshd@8-91.99.0.103:22-139.178.89.65:35934.service - OpenSSH per-connection server daemon (139.178.89.65:35934). Apr 30 12:45:46.477024 sshd[4130]: Accepted publickey for core from 139.178.89.65 port 35934 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:45:46.478110 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:45:46.483414 systemd-logind[1479]: New session 9 of user core. Apr 30 12:45:46.491932 systemd[1]: Started session-9.scope - Session 9 of User core. 
Apr 30 12:45:47.240873 sshd[4133]: Connection closed by 139.178.89.65 port 35934 Apr 30 12:45:47.241938 sshd-session[4130]: pam_unix(sshd:session): session closed for user core Apr 30 12:45:47.247679 systemd[1]: sshd@8-91.99.0.103:22-139.178.89.65:35934.service: Deactivated successfully. Apr 30 12:45:47.250397 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 12:45:47.251520 systemd-logind[1479]: Session 9 logged out. Waiting for processes to exit. Apr 30 12:45:47.253374 systemd-logind[1479]: Removed session 9. Apr 30 12:45:52.419074 systemd[1]: Started sshd@9-91.99.0.103:22-139.178.89.65:44306.service - OpenSSH per-connection server daemon (139.178.89.65:44306). Apr 30 12:45:53.407226 sshd[4149]: Accepted publickey for core from 139.178.89.65 port 44306 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:45:53.409739 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:45:53.421937 systemd-logind[1479]: New session 10 of user core. Apr 30 12:45:53.428900 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 12:45:54.164719 sshd[4151]: Connection closed by 139.178.89.65 port 44306 Apr 30 12:45:54.165443 sshd-session[4149]: pam_unix(sshd:session): session closed for user core Apr 30 12:45:54.171719 systemd[1]: sshd@9-91.99.0.103:22-139.178.89.65:44306.service: Deactivated successfully. Apr 30 12:45:54.175164 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 12:45:54.176184 systemd-logind[1479]: Session 10 logged out. Waiting for processes to exit. Apr 30 12:45:54.177539 systemd-logind[1479]: Removed session 10. Apr 30 12:45:59.350202 systemd[1]: Started sshd@10-91.99.0.103:22-139.178.89.65:47216.service - OpenSSH per-connection server daemon (139.178.89.65:47216). Apr 30 12:46:00.347821 sshd[4163]: Accepted publickey for core from 139.178.89.65 port 47216 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:46:00.349051 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:46:00.355586 systemd-logind[1479]: New session 11 of user core. Apr 30 12:46:00.362776 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 12:46:01.118645 sshd[4165]: Connection closed by 139.178.89.65 port 47216 Apr 30 12:46:01.118388 sshd-session[4163]: pam_unix(sshd:session): session closed for user core Apr 30 12:46:01.122960 systemd[1]: sshd@10-91.99.0.103:22-139.178.89.65:47216.service: Deactivated successfully. Apr 30 12:46:01.125030 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 12:46:01.127622 systemd-logind[1479]: Session 11 logged out. Waiting for processes to exit. Apr 30 12:46:01.129275 systemd-logind[1479]: Removed session 11. Apr 30 12:46:01.299404 systemd[1]: Started sshd@11-91.99.0.103:22-139.178.89.65:47224.service - OpenSSH per-connection server daemon (139.178.89.65:47224). Apr 30 12:46:02.294150 sshd[4177]: Accepted publickey for core from 139.178.89.65 port 47224 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:46:02.296217 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:46:02.303025 systemd-logind[1479]: New session 12 of user core. Apr 30 12:46:02.310884 systemd[1]: Started session-12.scope - Session 12 of User core. 
Apr 30 12:46:03.117341 sshd[4179]: Connection closed by 139.178.89.65 port 47224 Apr 30 12:46:03.118197 sshd-session[4177]: pam_unix(sshd:session): session closed for user core Apr 30 12:46:03.122436 systemd-logind[1479]: Session 12 logged out. Waiting for processes to exit. Apr 30 12:46:03.123139 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 12:46:03.124394 systemd[1]: sshd@11-91.99.0.103:22-139.178.89.65:47224.service: Deactivated successfully. Apr 30 12:46:03.127957 systemd-logind[1479]: Removed session 12. Apr 30 12:46:03.295035 systemd[1]: Started sshd@12-91.99.0.103:22-139.178.89.65:47238.service - OpenSSH per-connection server daemon (139.178.89.65:47238). Apr 30 12:46:04.292070 sshd[4188]: Accepted publickey for core from 139.178.89.65 port 47238 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:46:04.294476 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:46:04.302748 systemd-logind[1479]: New session 13 of user core. Apr 30 12:46:04.308851 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 12:46:05.054827 sshd[4190]: Connection closed by 139.178.89.65 port 47238 Apr 30 12:46:05.055818 sshd-session[4188]: pam_unix(sshd:session): session closed for user core Apr 30 12:46:05.062120 systemd-logind[1479]: Session 13 logged out. Waiting for processes to exit. Apr 30 12:46:05.062947 systemd[1]: sshd@12-91.99.0.103:22-139.178.89.65:47238.service: Deactivated successfully. Apr 30 12:46:05.068477 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 12:46:05.070349 systemd-logind[1479]: Removed session 13. Apr 30 12:46:10.237278 systemd[1]: Started sshd@13-91.99.0.103:22-139.178.89.65:55446.service - OpenSSH per-connection server daemon (139.178.89.65:55446). Apr 30 12:46:11.242132 sshd[4204]: Accepted publickey for core from 139.178.89.65 port 55446 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:46:11.244802 sshd-session[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:46:11.251347 systemd-logind[1479]: New session 14 of user core. Apr 30 12:46:11.261890 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 12:46:12.006703 sshd[4206]: Connection closed by 139.178.89.65 port 55446 Apr 30 12:46:12.007239 sshd-session[4204]: pam_unix(sshd:session): session closed for user core Apr 30 12:46:12.012021 systemd-logind[1479]: Session 14 logged out. Waiting for processes to exit. Apr 30 12:46:12.013035 systemd[1]: sshd@13-91.99.0.103:22-139.178.89.65:55446.service: Deactivated successfully. Apr 30 12:46:12.017144 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 12:46:12.019891 systemd-logind[1479]: Removed session 14. Apr 30 12:46:12.185945 systemd[1]: Started sshd@14-91.99.0.103:22-139.178.89.65:55450.service - OpenSSH per-connection server daemon (139.178.89.65:55450). Apr 30 12:46:13.159896 sshd[4218]: Accepted publickey for core from 139.178.89.65 port 55450 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:46:13.161860 sshd-session[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:46:13.169241 systemd-logind[1479]: New session 15 of user core. Apr 30 12:46:13.172777 systemd[1]: Started session-15.scope - Session 15 of User core. 
Apr 30 12:46:13.960089 sshd[4220]: Connection closed by 139.178.89.65 port 55450 Apr 30 12:46:13.959947 sshd-session[4218]: pam_unix(sshd:session): session closed for user core Apr 30 12:46:13.965377 systemd[1]: sshd@14-91.99.0.103:22-139.178.89.65:55450.service: Deactivated successfully. Apr 30 12:46:13.968232 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 12:46:13.969959 systemd-logind[1479]: Session 15 logged out. Waiting for processes to exit. Apr 30 12:46:13.971428 systemd-logind[1479]: Removed session 15. Apr 30 12:46:14.139059 systemd[1]: Started sshd@15-91.99.0.103:22-139.178.89.65:55464.service - OpenSSH per-connection server daemon (139.178.89.65:55464). Apr 30 12:46:15.128253 sshd[4230]: Accepted publickey for core from 139.178.89.65 port 55464 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:46:15.130717 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:46:15.136767 systemd-logind[1479]: New session 16 of user core. Apr 30 12:46:15.141852 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 12:46:16.212639 sshd[4232]: Connection closed by 139.178.89.65 port 55464 Apr 30 12:46:16.213681 sshd-session[4230]: pam_unix(sshd:session): session closed for user core Apr 30 12:46:16.219634 systemd[1]: sshd@15-91.99.0.103:22-139.178.89.65:55464.service: Deactivated successfully. Apr 30 12:46:16.223268 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 12:46:16.224659 systemd-logind[1479]: Session 16 logged out. Waiting for processes to exit. Apr 30 12:46:16.226323 systemd-logind[1479]: Removed session 16. Apr 30 12:46:16.393055 systemd[1]: Started sshd@16-91.99.0.103:22-139.178.89.65:55474.service - OpenSSH per-connection server daemon (139.178.89.65:55474). Apr 30 12:46:17.380875 sshd[4241]: Accepted publickey for core from 139.178.89.65 port 55474 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:46:17.384802 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:46:17.394115 systemd-logind[1479]: New session 17 of user core. Apr 30 12:46:17.397816 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 12:46:19.984313 systemd[1]: run-containerd-runc-k8s.io-bcde6cae0666906358f94f05e4b7b7bb187d947243d8114484ff01766695b5cb-runc.YxNsPG.mount: Deactivated successfully. Apr 30 12:46:19.989362 containerd[1499]: time="2025-04-30T12:46:19.989146566Z" level=info msg="StopContainer for \"927d1c83125940612c17a2d8e76511833d489d1b923683be1566f76149d0bd96\" with timeout 30 (s)" Apr 30 12:46:19.990543 containerd[1499]: time="2025-04-30T12:46:19.990510304Z" level=info msg="Stop container \"927d1c83125940612c17a2d8e76511833d489d1b923683be1566f76149d0bd96\" with signal terminated" Apr 30 12:46:20.011373 systemd[1]: cri-containerd-927d1c83125940612c17a2d8e76511833d489d1b923683be1566f76149d0bd96.scope: Deactivated successfully. 
Apr 30 12:46:20.016189 containerd[1499]: time="2025-04-30T12:46:20.016121226Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 12:46:20.026471 containerd[1499]: time="2025-04-30T12:46:20.026377035Z" level=info msg="StopContainer for \"bcde6cae0666906358f94f05e4b7b7bb187d947243d8114484ff01766695b5cb\" with timeout 2 (s)" Apr 30 12:46:20.029415 containerd[1499]: time="2025-04-30T12:46:20.029376232Z" level=info msg="Stop container \"bcde6cae0666906358f94f05e4b7b7bb187d947243d8114484ff01766695b5cb\" with signal terminated" Apr 30 12:46:20.041969 systemd-networkd[1398]: lxc_health: Link DOWN Apr 30 12:46:20.041975 systemd-networkd[1398]: lxc_health: Lost carrier Apr 30 12:46:20.056957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-927d1c83125940612c17a2d8e76511833d489d1b923683be1566f76149d0bd96-rootfs.mount: Deactivated successfully. Apr 30 12:46:20.063139 systemd[1]: cri-containerd-bcde6cae0666906358f94f05e4b7b7bb187d947243d8114484ff01766695b5cb.scope: Deactivated successfully. Apr 30 12:46:20.063735 systemd[1]: cri-containerd-bcde6cae0666906358f94f05e4b7b7bb187d947243d8114484ff01766695b5cb.scope: Consumed 7.365s CPU time, 124.2M memory peak, 136K read from disk, 12.9M written to disk. Apr 30 12:46:20.069060 containerd[1499]: time="2025-04-30T12:46:20.068878008Z" level=info msg="shim disconnected" id=927d1c83125940612c17a2d8e76511833d489d1b923683be1566f76149d0bd96 namespace=k8s.io Apr 30 12:46:20.069060 containerd[1499]: time="2025-04-30T12:46:20.068939769Z" level=warning msg="cleaning up after shim disconnected" id=927d1c83125940612c17a2d8e76511833d489d1b923683be1566f76149d0bd96 namespace=k8s.io Apr 30 12:46:20.069060 containerd[1499]: time="2025-04-30T12:46:20.068949649Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:46:20.089012 containerd[1499]: time="2025-04-30T12:46:20.088840139Z" level=info msg="StopContainer for \"927d1c83125940612c17a2d8e76511833d489d1b923683be1566f76149d0bd96\" returns successfully" Apr 30 12:46:20.094898 containerd[1499]: time="2025-04-30T12:46:20.091835536Z" level=info msg="StopPodSandbox for \"c5b8865868e54aab37a42135139854d3253b867543226e5a03ea3dd341f81e9f\"" Apr 30 12:46:20.094898 containerd[1499]: time="2025-04-30T12:46:20.091900217Z" level=info msg="Container to stop \"927d1c83125940612c17a2d8e76511833d489d1b923683be1566f76149d0bd96\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 12:46:20.092888 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcde6cae0666906358f94f05e4b7b7bb187d947243d8114484ff01766695b5cb-rootfs.mount: Deactivated successfully. Apr 30 12:46:20.098996 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c5b8865868e54aab37a42135139854d3253b867543226e5a03ea3dd341f81e9f-shm.mount: Deactivated successfully. 
Apr 30 12:46:20.107676 containerd[1499]: time="2025-04-30T12:46:20.106307318Z" level=info msg="shim disconnected" id=bcde6cae0666906358f94f05e4b7b7bb187d947243d8114484ff01766695b5cb namespace=k8s.io Apr 30 12:46:20.107676 containerd[1499]: time="2025-04-30T12:46:20.106370479Z" level=warning msg="cleaning up after shim disconnected" id=bcde6cae0666906358f94f05e4b7b7bb187d947243d8114484ff01766695b5cb namespace=k8s.io Apr 30 12:46:20.107676 containerd[1499]: time="2025-04-30T12:46:20.106379479Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:46:20.110941 systemd[1]: cri-containerd-c5b8865868e54aab37a42135139854d3253b867543226e5a03ea3dd341f81e9f.scope: Deactivated successfully. Apr 30 12:46:20.131044 containerd[1499]: time="2025-04-30T12:46:20.130990187Z" level=info msg="StopContainer for \"bcde6cae0666906358f94f05e4b7b7bb187d947243d8114484ff01766695b5cb\" returns successfully" Apr 30 12:46:20.131664 containerd[1499]: time="2025-04-30T12:46:20.131596835Z" level=info msg="StopPodSandbox for \"c95555e594a06ce1a1fbb12d92aa5d70b48a1f87ea41b35902a9bfb3e3abd8c8\"" Apr 30 12:46:20.131759 containerd[1499]: time="2025-04-30T12:46:20.131675316Z" level=info msg="Container to stop \"1e04bee12bda8af75214b74e8d4225f5fe2ff499d7a5803a189f5c6bb9fd1ffe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 12:46:20.131759 containerd[1499]: time="2025-04-30T12:46:20.131692876Z" level=info msg="Container to stop \"eb5da4f1201f7a480c606ebaf319b0549a724aae5b382cffac05654d872e2c85\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 12:46:20.131759 containerd[1499]: time="2025-04-30T12:46:20.131705156Z" level=info msg="Container to stop \"71686ad58fb6b8d0320483e6109caa1e1b37d48e0888486b69efb27024fbd804\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 12:46:20.131759 containerd[1499]: time="2025-04-30T12:46:20.131716197Z" level=info msg="Container to stop \"bcde6cae0666906358f94f05e4b7b7bb187d947243d8114484ff01766695b5cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 12:46:20.131759 containerd[1499]: time="2025-04-30T12:46:20.131726637Z" level=info msg="Container to stop \"f319b3e770a7cd94f052eedd7fb6ca53ed8dd731117678b2a7fdebf3aa19061e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 12:46:20.140318 systemd[1]: cri-containerd-c95555e594a06ce1a1fbb12d92aa5d70b48a1f87ea41b35902a9bfb3e3abd8c8.scope: Deactivated successfully. 
Apr 30 12:46:20.153388 containerd[1499]: time="2025-04-30T12:46:20.153164146Z" level=info msg="shim disconnected" id=c5b8865868e54aab37a42135139854d3253b867543226e5a03ea3dd341f81e9f namespace=k8s.io Apr 30 12:46:20.153388 containerd[1499]: time="2025-04-30T12:46:20.153241587Z" level=warning msg="cleaning up after shim disconnected" id=c5b8865868e54aab37a42135139854d3253b867543226e5a03ea3dd341f81e9f namespace=k8s.io Apr 30 12:46:20.153388 containerd[1499]: time="2025-04-30T12:46:20.153250387Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:46:20.174202 containerd[1499]: time="2025-04-30T12:46:20.174134009Z" level=warning msg="cleanup warnings time=\"2025-04-30T12:46:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 12:46:20.175698 containerd[1499]: time="2025-04-30T12:46:20.175656988Z" level=info msg="TearDown network for sandbox \"c5b8865868e54aab37a42135139854d3253b867543226e5a03ea3dd341f81e9f\" successfully" Apr 30 12:46:20.175698 containerd[1499]: time="2025-04-30T12:46:20.175693468Z" level=info msg="StopPodSandbox for \"c5b8865868e54aab37a42135139854d3253b867543226e5a03ea3dd341f81e9f\" returns successfully" Apr 30 12:46:20.176752 containerd[1499]: time="2025-04-30T12:46:20.176691921Z" level=info msg="shim disconnected" id=c95555e594a06ce1a1fbb12d92aa5d70b48a1f87ea41b35902a9bfb3e3abd8c8 namespace=k8s.io Apr 30 12:46:20.178346 containerd[1499]: time="2025-04-30T12:46:20.177655493Z" level=warning msg="cleaning up after shim disconnected" id=c95555e594a06ce1a1fbb12d92aa5d70b48a1f87ea41b35902a9bfb3e3abd8c8 namespace=k8s.io Apr 30 12:46:20.178346 containerd[1499]: time="2025-04-30T12:46:20.177689253Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:46:20.196261 containerd[1499]: time="2025-04-30T12:46:20.196209326Z" level=info msg="TearDown network for sandbox \"c95555e594a06ce1a1fbb12d92aa5d70b48a1f87ea41b35902a9bfb3e3abd8c8\" successfully" Apr 30 12:46:20.196261 containerd[1499]: time="2025-04-30T12:46:20.196249486Z" level=info msg="StopPodSandbox for \"c95555e594a06ce1a1fbb12d92aa5d70b48a1f87ea41b35902a9bfb3e3abd8c8\" returns successfully" Apr 30 12:46:20.235880 kubelet[2728]: I0430 12:46:20.234437 2728 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-hostproc\") pod \"e68a2502-7ff6-4652-b551-2eb17624e6b6\" (UID: \"e68a2502-7ff6-4652-b551-2eb17624e6b6\") " Apr 30 12:46:20.235880 kubelet[2728]: I0430 12:46:20.234508 2728 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e68a2502-7ff6-4652-b551-2eb17624e6b6-clustermesh-secrets\") pod \"e68a2502-7ff6-4652-b551-2eb17624e6b6\" (UID: \"e68a2502-7ff6-4652-b551-2eb17624e6b6\") " Apr 30 12:46:20.235880 kubelet[2728]: I0430 12:46:20.234537 2728 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-cilium-run\") pod \"e68a2502-7ff6-4652-b551-2eb17624e6b6\" (UID: \"e68a2502-7ff6-4652-b551-2eb17624e6b6\") " Apr 30 12:46:20.235880 kubelet[2728]: I0430 12:46:20.234653 2728 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e68a2502-7ff6-4652-b551-2eb17624e6b6-hubble-tls\") pod 
\"e68a2502-7ff6-4652-b551-2eb17624e6b6\" (UID: \"e68a2502-7ff6-4652-b551-2eb17624e6b6\") " Apr 30 12:46:20.235880 kubelet[2728]: I0430 12:46:20.234687 2728 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-host-proc-sys-kernel\") pod \"e68a2502-7ff6-4652-b551-2eb17624e6b6\" (UID: \"e68a2502-7ff6-4652-b551-2eb17624e6b6\") " Apr 30 12:46:20.235880 kubelet[2728]: I0430 12:46:20.234715 2728 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-cni-path\") pod \"e68a2502-7ff6-4652-b551-2eb17624e6b6\" (UID: \"e68a2502-7ff6-4652-b551-2eb17624e6b6\") " Apr 30 12:46:20.237929 kubelet[2728]: I0430 12:46:20.234747 2728 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbbnt\" (UniqueName: \"kubernetes.io/projected/6222e9ac-ed41-49e6-96be-3f0efe6361d9-kube-api-access-bbbnt\") pod \"6222e9ac-ed41-49e6-96be-3f0efe6361d9\" (UID: \"6222e9ac-ed41-49e6-96be-3f0efe6361d9\") " Apr 30 12:46:20.237929 kubelet[2728]: I0430 12:46:20.234779 2728 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-host-proc-sys-net\") pod \"e68a2502-7ff6-4652-b551-2eb17624e6b6\" (UID: \"e68a2502-7ff6-4652-b551-2eb17624e6b6\") " Apr 30 12:46:20.237929 kubelet[2728]: I0430 12:46:20.234813 2728 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e68a2502-7ff6-4652-b551-2eb17624e6b6-cilium-config-path\") pod \"e68a2502-7ff6-4652-b551-2eb17624e6b6\" (UID: \"e68a2502-7ff6-4652-b551-2eb17624e6b6\") " Apr 30 12:46:20.237929 kubelet[2728]: I0430 12:46:20.234841 2728 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-bpf-maps\") pod \"e68a2502-7ff6-4652-b551-2eb17624e6b6\" (UID: \"e68a2502-7ff6-4652-b551-2eb17624e6b6\") " Apr 30 12:46:20.237929 kubelet[2728]: I0430 12:46:20.234866 2728 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-cilium-cgroup\") pod \"e68a2502-7ff6-4652-b551-2eb17624e6b6\" (UID: \"e68a2502-7ff6-4652-b551-2eb17624e6b6\") " Apr 30 12:46:20.237929 kubelet[2728]: I0430 12:46:20.234895 2728 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-etc-cni-netd\") pod \"e68a2502-7ff6-4652-b551-2eb17624e6b6\" (UID: \"e68a2502-7ff6-4652-b551-2eb17624e6b6\") " Apr 30 12:46:20.238194 kubelet[2728]: I0430 12:46:20.234927 2728 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6222e9ac-ed41-49e6-96be-3f0efe6361d9-cilium-config-path\") pod \"6222e9ac-ed41-49e6-96be-3f0efe6361d9\" (UID: \"6222e9ac-ed41-49e6-96be-3f0efe6361d9\") " Apr 30 12:46:20.238194 kubelet[2728]: I0430 12:46:20.234964 2728 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67kwz\" (UniqueName: \"kubernetes.io/projected/e68a2502-7ff6-4652-b551-2eb17624e6b6-kube-api-access-67kwz\") pod 
\"e68a2502-7ff6-4652-b551-2eb17624e6b6\" (UID: \"e68a2502-7ff6-4652-b551-2eb17624e6b6\") " Apr 30 12:46:20.238194 kubelet[2728]: I0430 12:46:20.234991 2728 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-xtables-lock\") pod \"e68a2502-7ff6-4652-b551-2eb17624e6b6\" (UID: \"e68a2502-7ff6-4652-b551-2eb17624e6b6\") " Apr 30 12:46:20.238194 kubelet[2728]: I0430 12:46:20.235017 2728 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-lib-modules\") pod \"e68a2502-7ff6-4652-b551-2eb17624e6b6\" (UID: \"e68a2502-7ff6-4652-b551-2eb17624e6b6\") " Apr 30 12:46:20.238194 kubelet[2728]: I0430 12:46:20.235129 2728 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e68a2502-7ff6-4652-b551-2eb17624e6b6" (UID: "e68a2502-7ff6-4652-b551-2eb17624e6b6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:46:20.238475 kubelet[2728]: I0430 12:46:20.236740 2728 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e68a2502-7ff6-4652-b551-2eb17624e6b6" (UID: "e68a2502-7ff6-4652-b551-2eb17624e6b6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:46:20.240238 kubelet[2728]: I0430 12:46:20.238943 2728 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e68a2502-7ff6-4652-b551-2eb17624e6b6" (UID: "e68a2502-7ff6-4652-b551-2eb17624e6b6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:46:20.240238 kubelet[2728]: I0430 12:46:20.239012 2728 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e68a2502-7ff6-4652-b551-2eb17624e6b6" (UID: "e68a2502-7ff6-4652-b551-2eb17624e6b6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:46:20.240238 kubelet[2728]: I0430 12:46:20.239035 2728 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e68a2502-7ff6-4652-b551-2eb17624e6b6" (UID: "e68a2502-7ff6-4652-b551-2eb17624e6b6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:46:20.240238 kubelet[2728]: I0430 12:46:20.239828 2728 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-hostproc" (OuterVolumeSpecName: "hostproc") pod "e68a2502-7ff6-4652-b551-2eb17624e6b6" (UID: "e68a2502-7ff6-4652-b551-2eb17624e6b6"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:46:20.240238 kubelet[2728]: I0430 12:46:20.239876 2728 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e68a2502-7ff6-4652-b551-2eb17624e6b6" (UID: "e68a2502-7ff6-4652-b551-2eb17624e6b6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:46:20.241549 kubelet[2728]: I0430 12:46:20.241503 2728 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e68a2502-7ff6-4652-b551-2eb17624e6b6" (UID: "e68a2502-7ff6-4652-b551-2eb17624e6b6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:46:20.244381 kubelet[2728]: I0430 12:46:20.244265 2728 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e68a2502-7ff6-4652-b551-2eb17624e6b6" (UID: "e68a2502-7ff6-4652-b551-2eb17624e6b6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:46:20.244381 kubelet[2728]: I0430 12:46:20.244318 2728 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-cni-path" (OuterVolumeSpecName: "cni-path") pod "e68a2502-7ff6-4652-b551-2eb17624e6b6" (UID: "e68a2502-7ff6-4652-b551-2eb17624e6b6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 12:46:20.248934 kubelet[2728]: I0430 12:46:20.247249 2728 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6222e9ac-ed41-49e6-96be-3f0efe6361d9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6222e9ac-ed41-49e6-96be-3f0efe6361d9" (UID: "6222e9ac-ed41-49e6-96be-3f0efe6361d9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 12:46:20.248934 kubelet[2728]: I0430 12:46:20.247401 2728 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e68a2502-7ff6-4652-b551-2eb17624e6b6-kube-api-access-67kwz" (OuterVolumeSpecName: "kube-api-access-67kwz") pod "e68a2502-7ff6-4652-b551-2eb17624e6b6" (UID: "e68a2502-7ff6-4652-b551-2eb17624e6b6"). InnerVolumeSpecName "kube-api-access-67kwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 12:46:20.250774 kubelet[2728]: I0430 12:46:20.250727 2728 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e68a2502-7ff6-4652-b551-2eb17624e6b6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e68a2502-7ff6-4652-b551-2eb17624e6b6" (UID: "e68a2502-7ff6-4652-b551-2eb17624e6b6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 12:46:20.251087 kubelet[2728]: I0430 12:46:20.250955 2728 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6222e9ac-ed41-49e6-96be-3f0efe6361d9-kube-api-access-bbbnt" (OuterVolumeSpecName: "kube-api-access-bbbnt") pod "6222e9ac-ed41-49e6-96be-3f0efe6361d9" (UID: "6222e9ac-ed41-49e6-96be-3f0efe6361d9"). InnerVolumeSpecName "kube-api-access-bbbnt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 12:46:20.251506 kubelet[2728]: I0430 12:46:20.251476 2728 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68a2502-7ff6-4652-b551-2eb17624e6b6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e68a2502-7ff6-4652-b551-2eb17624e6b6" (UID: "e68a2502-7ff6-4652-b551-2eb17624e6b6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 30 12:46:20.255388 kubelet[2728]: I0430 12:46:20.255313 2728 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e68a2502-7ff6-4652-b551-2eb17624e6b6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e68a2502-7ff6-4652-b551-2eb17624e6b6" (UID: "e68a2502-7ff6-4652-b551-2eb17624e6b6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 12:46:20.335493 kubelet[2728]: I0430 12:46:20.335352 2728 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e68a2502-7ff6-4652-b551-2eb17624e6b6-cilium-config-path\") on node \"ci-4230-1-1-9-a0dc1fa777\" DevicePath \"\"" Apr 30 12:46:20.335493 kubelet[2728]: I0430 12:46:20.335458 2728 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-bpf-maps\") on node \"ci-4230-1-1-9-a0dc1fa777\" DevicePath \"\"" Apr 30 12:46:20.335493 kubelet[2728]: I0430 12:46:20.335488 2728 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-cilium-cgroup\") on node \"ci-4230-1-1-9-a0dc1fa777\" DevicePath \"\"" Apr 30 12:46:20.336061 kubelet[2728]: I0430 12:46:20.335533 2728 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-etc-cni-netd\") on node \"ci-4230-1-1-9-a0dc1fa777\" DevicePath \"\"" Apr 30 12:46:20.336061 kubelet[2728]: I0430 12:46:20.335639 2728 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6222e9ac-ed41-49e6-96be-3f0efe6361d9-cilium-config-path\") on node \"ci-4230-1-1-9-a0dc1fa777\" DevicePath \"\"" Apr 30 12:46:20.336061 kubelet[2728]: I0430 12:46:20.335667 2728 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-67kwz\" (UniqueName: \"kubernetes.io/projected/e68a2502-7ff6-4652-b551-2eb17624e6b6-kube-api-access-67kwz\") on node \"ci-4230-1-1-9-a0dc1fa777\" DevicePath \"\"" Apr 30 12:46:20.336061 kubelet[2728]: I0430 12:46:20.335684 2728 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-xtables-lock\") on node \"ci-4230-1-1-9-a0dc1fa777\" DevicePath \"\"" Apr 30 12:46:20.336061 kubelet[2728]: I0430 12:46:20.335714 2728 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-lib-modules\") on node \"ci-4230-1-1-9-a0dc1fa777\" DevicePath \"\"" Apr 30 12:46:20.336061 kubelet[2728]: I0430 12:46:20.335743 2728 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-hostproc\") on node \"ci-4230-1-1-9-a0dc1fa777\" DevicePath \"\"" Apr 30 12:46:20.336061 
kubelet[2728]: I0430 12:46:20.335763 2728 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e68a2502-7ff6-4652-b551-2eb17624e6b6-clustermesh-secrets\") on node \"ci-4230-1-1-9-a0dc1fa777\" DevicePath \"\"" Apr 30 12:46:20.336061 kubelet[2728]: I0430 12:46:20.335861 2728 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-cilium-run\") on node \"ci-4230-1-1-9-a0dc1fa777\" DevicePath \"\"" Apr 30 12:46:20.336594 kubelet[2728]: I0430 12:46:20.335891 2728 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e68a2502-7ff6-4652-b551-2eb17624e6b6-hubble-tls\") on node \"ci-4230-1-1-9-a0dc1fa777\" DevicePath \"\"" Apr 30 12:46:20.336594 kubelet[2728]: I0430 12:46:20.335936 2728 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-host-proc-sys-kernel\") on node \"ci-4230-1-1-9-a0dc1fa777\" DevicePath \"\"" Apr 30 12:46:20.336594 kubelet[2728]: I0430 12:46:20.335958 2728 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-cni-path\") on node \"ci-4230-1-1-9-a0dc1fa777\" DevicePath \"\"" Apr 30 12:46:20.336594 kubelet[2728]: I0430 12:46:20.335980 2728 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bbbnt\" (UniqueName: \"kubernetes.io/projected/6222e9ac-ed41-49e6-96be-3f0efe6361d9-kube-api-access-bbbnt\") on node \"ci-4230-1-1-9-a0dc1fa777\" DevicePath \"\"" Apr 30 12:46:20.336594 kubelet[2728]: I0430 12:46:20.336009 2728 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e68a2502-7ff6-4652-b551-2eb17624e6b6-host-proc-sys-net\") on node \"ci-4230-1-1-9-a0dc1fa777\" DevicePath \"\"" Apr 30 12:46:20.354422 kubelet[2728]: I0430 12:46:20.353686 2728 scope.go:117] "RemoveContainer" containerID="927d1c83125940612c17a2d8e76511833d489d1b923683be1566f76149d0bd96" Apr 30 12:46:20.357848 containerd[1499]: time="2025-04-30T12:46:20.357534190Z" level=info msg="RemoveContainer for \"927d1c83125940612c17a2d8e76511833d489d1b923683be1566f76149d0bd96\"" Apr 30 12:46:20.359193 systemd[1]: Removed slice kubepods-besteffort-pod6222e9ac_ed41_49e6_96be_3f0efe6361d9.slice - libcontainer container kubepods-besteffort-pod6222e9ac_ed41_49e6_96be_3f0efe6361d9.slice. 
Apr 30 12:46:20.367629 containerd[1499]: time="2025-04-30T12:46:20.367504075Z" level=info msg="RemoveContainer for \"927d1c83125940612c17a2d8e76511833d489d1b923683be1566f76149d0bd96\" returns successfully" Apr 30 12:46:20.368806 kubelet[2728]: I0430 12:46:20.368700 2728 scope.go:117] "RemoveContainer" containerID="927d1c83125940612c17a2d8e76511833d489d1b923683be1566f76149d0bd96" Apr 30 12:46:20.369882 containerd[1499]: time="2025-04-30T12:46:20.369233697Z" level=error msg="ContainerStatus for \"927d1c83125940612c17a2d8e76511833d489d1b923683be1566f76149d0bd96\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"927d1c83125940612c17a2d8e76511833d489d1b923683be1566f76149d0bd96\": not found" Apr 30 12:46:20.370205 kubelet[2728]: E0430 12:46:20.370172 2728 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"927d1c83125940612c17a2d8e76511833d489d1b923683be1566f76149d0bd96\": not found" containerID="927d1c83125940612c17a2d8e76511833d489d1b923683be1566f76149d0bd96" Apr 30 12:46:20.370317 kubelet[2728]: I0430 12:46:20.370217 2728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"927d1c83125940612c17a2d8e76511833d489d1b923683be1566f76149d0bd96"} err="failed to get container status \"927d1c83125940612c17a2d8e76511833d489d1b923683be1566f76149d0bd96\": rpc error: code = NotFound desc = an error occurred when try to find container \"927d1c83125940612c17a2d8e76511833d489d1b923683be1566f76149d0bd96\": not found" Apr 30 12:46:20.370371 kubelet[2728]: I0430 12:46:20.370322 2728 scope.go:117] "RemoveContainer" containerID="bcde6cae0666906358f94f05e4b7b7bb187d947243d8114484ff01766695b5cb" Apr 30 12:46:20.371282 systemd[1]: Removed slice kubepods-burstable-pode68a2502_7ff6_4652_b551_2eb17624e6b6.slice - libcontainer container kubepods-burstable-pode68a2502_7ff6_4652_b551_2eb17624e6b6.slice. Apr 30 12:46:20.371405 systemd[1]: kubepods-burstable-pode68a2502_7ff6_4652_b551_2eb17624e6b6.slice: Consumed 7.458s CPU time, 124.6M memory peak, 136K read from disk, 12.9M written to disk. 
Apr 30 12:46:20.374281 containerd[1499]: time="2025-04-30T12:46:20.374241040Z" level=info msg="RemoveContainer for \"bcde6cae0666906358f94f05e4b7b7bb187d947243d8114484ff01766695b5cb\"" Apr 30 12:46:20.382866 containerd[1499]: time="2025-04-30T12:46:20.382701786Z" level=info msg="RemoveContainer for \"bcde6cae0666906358f94f05e4b7b7bb187d947243d8114484ff01766695b5cb\" returns successfully" Apr 30 12:46:20.384535 kubelet[2728]: I0430 12:46:20.384499 2728 scope.go:117] "RemoveContainer" containerID="71686ad58fb6b8d0320483e6109caa1e1b37d48e0888486b69efb27024fbd804" Apr 30 12:46:20.387951 containerd[1499]: time="2025-04-30T12:46:20.387667408Z" level=info msg="RemoveContainer for \"71686ad58fb6b8d0320483e6109caa1e1b37d48e0888486b69efb27024fbd804\"" Apr 30 12:46:20.392850 containerd[1499]: time="2025-04-30T12:46:20.391732539Z" level=info msg="RemoveContainer for \"71686ad58fb6b8d0320483e6109caa1e1b37d48e0888486b69efb27024fbd804\" returns successfully" Apr 30 12:46:20.392993 kubelet[2728]: I0430 12:46:20.392003 2728 scope.go:117] "RemoveContainer" containerID="1e04bee12bda8af75214b74e8d4225f5fe2ff499d7a5803a189f5c6bb9fd1ffe" Apr 30 12:46:20.398678 containerd[1499]: time="2025-04-30T12:46:20.398255861Z" level=info msg="RemoveContainer for \"1e04bee12bda8af75214b74e8d4225f5fe2ff499d7a5803a189f5c6bb9fd1ffe\"" Apr 30 12:46:20.402480 containerd[1499]: time="2025-04-30T12:46:20.402434913Z" level=info msg="RemoveContainer for \"1e04bee12bda8af75214b74e8d4225f5fe2ff499d7a5803a189f5c6bb9fd1ffe\" returns successfully" Apr 30 12:46:20.404041 kubelet[2728]: I0430 12:46:20.404003 2728 scope.go:117] "RemoveContainer" containerID="f319b3e770a7cd94f052eedd7fb6ca53ed8dd731117678b2a7fdebf3aa19061e" Apr 30 12:46:20.406920 containerd[1499]: time="2025-04-30T12:46:20.406882929Z" level=info msg="RemoveContainer for \"f319b3e770a7cd94f052eedd7fb6ca53ed8dd731117678b2a7fdebf3aa19061e\"" Apr 30 12:46:20.411301 containerd[1499]: time="2025-04-30T12:46:20.411170063Z" level=info msg="RemoveContainer for \"f319b3e770a7cd94f052eedd7fb6ca53ed8dd731117678b2a7fdebf3aa19061e\" returns successfully" Apr 30 12:46:20.411962 kubelet[2728]: I0430 12:46:20.411781 2728 scope.go:117] "RemoveContainer" containerID="eb5da4f1201f7a480c606ebaf319b0549a724aae5b382cffac05654d872e2c85" Apr 30 12:46:20.413537 containerd[1499]: time="2025-04-30T12:46:20.413247209Z" level=info msg="RemoveContainer for \"eb5da4f1201f7a480c606ebaf319b0549a724aae5b382cffac05654d872e2c85\"" Apr 30 12:46:20.417113 containerd[1499]: time="2025-04-30T12:46:20.417068937Z" level=info msg="RemoveContainer for \"eb5da4f1201f7a480c606ebaf319b0549a724aae5b382cffac05654d872e2c85\" returns successfully" Apr 30 12:46:20.417763 kubelet[2728]: I0430 12:46:20.417631 2728 scope.go:117] "RemoveContainer" containerID="bcde6cae0666906358f94f05e4b7b7bb187d947243d8114484ff01766695b5cb" Apr 30 12:46:20.418114 containerd[1499]: time="2025-04-30T12:46:20.418077870Z" level=error msg="ContainerStatus for \"bcde6cae0666906358f94f05e4b7b7bb187d947243d8114484ff01766695b5cb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bcde6cae0666906358f94f05e4b7b7bb187d947243d8114484ff01766695b5cb\": not found" Apr 30 12:46:20.418422 kubelet[2728]: E0430 12:46:20.418294 2728 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bcde6cae0666906358f94f05e4b7b7bb187d947243d8114484ff01766695b5cb\": not found" 
containerID="bcde6cae0666906358f94f05e4b7b7bb187d947243d8114484ff01766695b5cb" Apr 30 12:46:20.418422 kubelet[2728]: I0430 12:46:20.418323 2728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bcde6cae0666906358f94f05e4b7b7bb187d947243d8114484ff01766695b5cb"} err="failed to get container status \"bcde6cae0666906358f94f05e4b7b7bb187d947243d8114484ff01766695b5cb\": rpc error: code = NotFound desc = an error occurred when try to find container \"bcde6cae0666906358f94f05e4b7b7bb187d947243d8114484ff01766695b5cb\": not found" Apr 30 12:46:20.418422 kubelet[2728]: I0430 12:46:20.418345 2728 scope.go:117] "RemoveContainer" containerID="71686ad58fb6b8d0320483e6109caa1e1b37d48e0888486b69efb27024fbd804" Apr 30 12:46:20.418967 containerd[1499]: time="2025-04-30T12:46:20.418724758Z" level=error msg="ContainerStatus for \"71686ad58fb6b8d0320483e6109caa1e1b37d48e0888486b69efb27024fbd804\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"71686ad58fb6b8d0320483e6109caa1e1b37d48e0888486b69efb27024fbd804\": not found" Apr 30 12:46:20.419050 kubelet[2728]: E0430 12:46:20.418850 2728 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"71686ad58fb6b8d0320483e6109caa1e1b37d48e0888486b69efb27024fbd804\": not found" containerID="71686ad58fb6b8d0320483e6109caa1e1b37d48e0888486b69efb27024fbd804" Apr 30 12:46:20.419050 kubelet[2728]: I0430 12:46:20.418874 2728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"71686ad58fb6b8d0320483e6109caa1e1b37d48e0888486b69efb27024fbd804"} err="failed to get container status \"71686ad58fb6b8d0320483e6109caa1e1b37d48e0888486b69efb27024fbd804\": rpc error: code = NotFound desc = an error occurred when try to find container \"71686ad58fb6b8d0320483e6109caa1e1b37d48e0888486b69efb27024fbd804\": not found" Apr 30 12:46:20.419050 kubelet[2728]: I0430 12:46:20.418896 2728 scope.go:117] "RemoveContainer" containerID="1e04bee12bda8af75214b74e8d4225f5fe2ff499d7a5803a189f5c6bb9fd1ffe" Apr 30 12:46:20.419293 containerd[1499]: time="2025-04-30T12:46:20.419235444Z" level=error msg="ContainerStatus for \"1e04bee12bda8af75214b74e8d4225f5fe2ff499d7a5803a189f5c6bb9fd1ffe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1e04bee12bda8af75214b74e8d4225f5fe2ff499d7a5803a189f5c6bb9fd1ffe\": not found" Apr 30 12:46:20.419540 kubelet[2728]: E0430 12:46:20.419443 2728 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1e04bee12bda8af75214b74e8d4225f5fe2ff499d7a5803a189f5c6bb9fd1ffe\": not found" containerID="1e04bee12bda8af75214b74e8d4225f5fe2ff499d7a5803a189f5c6bb9fd1ffe" Apr 30 12:46:20.419540 kubelet[2728]: I0430 12:46:20.419465 2728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1e04bee12bda8af75214b74e8d4225f5fe2ff499d7a5803a189f5c6bb9fd1ffe"} err="failed to get container status \"1e04bee12bda8af75214b74e8d4225f5fe2ff499d7a5803a189f5c6bb9fd1ffe\": rpc error: code = NotFound desc = an error occurred when try to find container \"1e04bee12bda8af75214b74e8d4225f5fe2ff499d7a5803a189f5c6bb9fd1ffe\": not found" Apr 30 12:46:20.419540 kubelet[2728]: I0430 12:46:20.419479 2728 scope.go:117] "RemoveContainer" 
containerID="f319b3e770a7cd94f052eedd7fb6ca53ed8dd731117678b2a7fdebf3aa19061e" Apr 30 12:46:20.420020 containerd[1499]: time="2025-04-30T12:46:20.419926533Z" level=error msg="ContainerStatus for \"f319b3e770a7cd94f052eedd7fb6ca53ed8dd731117678b2a7fdebf3aa19061e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f319b3e770a7cd94f052eedd7fb6ca53ed8dd731117678b2a7fdebf3aa19061e\": not found" Apr 30 12:46:20.420618 kubelet[2728]: E0430 12:46:20.420502 2728 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f319b3e770a7cd94f052eedd7fb6ca53ed8dd731117678b2a7fdebf3aa19061e\": not found" containerID="f319b3e770a7cd94f052eedd7fb6ca53ed8dd731117678b2a7fdebf3aa19061e" Apr 30 12:46:20.420618 kubelet[2728]: I0430 12:46:20.420527 2728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f319b3e770a7cd94f052eedd7fb6ca53ed8dd731117678b2a7fdebf3aa19061e"} err="failed to get container status \"f319b3e770a7cd94f052eedd7fb6ca53ed8dd731117678b2a7fdebf3aa19061e\": rpc error: code = NotFound desc = an error occurred when try to find container \"f319b3e770a7cd94f052eedd7fb6ca53ed8dd731117678b2a7fdebf3aa19061e\": not found" Apr 30 12:46:20.420618 kubelet[2728]: I0430 12:46:20.420542 2728 scope.go:117] "RemoveContainer" containerID="eb5da4f1201f7a480c606ebaf319b0549a724aae5b382cffac05654d872e2c85" Apr 30 12:46:20.421141 containerd[1499]: time="2025-04-30T12:46:20.420935305Z" level=error msg="ContainerStatus for \"eb5da4f1201f7a480c606ebaf319b0549a724aae5b382cffac05654d872e2c85\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eb5da4f1201f7a480c606ebaf319b0549a724aae5b382cffac05654d872e2c85\": not found" Apr 30 12:46:20.421215 kubelet[2728]: E0430 12:46:20.421059 2728 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eb5da4f1201f7a480c606ebaf319b0549a724aae5b382cffac05654d872e2c85\": not found" containerID="eb5da4f1201f7a480c606ebaf319b0549a724aae5b382cffac05654d872e2c85" Apr 30 12:46:20.421215 kubelet[2728]: I0430 12:46:20.421120 2728 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eb5da4f1201f7a480c606ebaf319b0549a724aae5b382cffac05654d872e2c85"} err="failed to get container status \"eb5da4f1201f7a480c606ebaf319b0549a724aae5b382cffac05654d872e2c85\": rpc error: code = NotFound desc = an error occurred when try to find container \"eb5da4f1201f7a480c606ebaf319b0549a724aae5b382cffac05654d872e2c85\": not found" Apr 30 12:46:20.929132 kubelet[2728]: E0430 12:46:20.929057 2728 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 12:46:20.980210 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5b8865868e54aab37a42135139854d3253b867543226e5a03ea3dd341f81e9f-rootfs.mount: Deactivated successfully. Apr 30 12:46:20.980394 systemd[1]: var-lib-kubelet-pods-6222e9ac\x2ded41\x2d49e6\x2d96be\x2d3f0efe6361d9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbbbnt.mount: Deactivated successfully. 
Apr 30 12:46:20.981039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c95555e594a06ce1a1fbb12d92aa5d70b48a1f87ea41b35902a9bfb3e3abd8c8-rootfs.mount: Deactivated successfully. Apr 30 12:46:20.981316 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c95555e594a06ce1a1fbb12d92aa5d70b48a1f87ea41b35902a9bfb3e3abd8c8-shm.mount: Deactivated successfully. Apr 30 12:46:20.981436 systemd[1]: var-lib-kubelet-pods-e68a2502\x2d7ff6\x2d4652\x2db551\x2d2eb17624e6b6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d67kwz.mount: Deactivated successfully. Apr 30 12:46:20.981540 systemd[1]: var-lib-kubelet-pods-e68a2502\x2d7ff6\x2d4652\x2db551\x2d2eb17624e6b6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 30 12:46:20.981747 systemd[1]: var-lib-kubelet-pods-e68a2502\x2d7ff6\x2d4652\x2db551\x2d2eb17624e6b6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 30 12:46:21.824905 kubelet[2728]: I0430 12:46:21.824836 2728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6222e9ac-ed41-49e6-96be-3f0efe6361d9" path="/var/lib/kubelet/pods/6222e9ac-ed41-49e6-96be-3f0efe6361d9/volumes" Apr 30 12:46:21.825705 kubelet[2728]: I0430 12:46:21.825676 2728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e68a2502-7ff6-4652-b551-2eb17624e6b6" path="/var/lib/kubelet/pods/e68a2502-7ff6-4652-b551-2eb17624e6b6/volumes" Apr 30 12:46:22.061659 sshd[4243]: Connection closed by 139.178.89.65 port 55474 Apr 30 12:46:22.062492 sshd-session[4241]: pam_unix(sshd:session): session closed for user core Apr 30 12:46:22.068135 systemd-logind[1479]: Session 17 logged out. Waiting for processes to exit. Apr 30 12:46:22.069858 systemd[1]: sshd@16-91.99.0.103:22-139.178.89.65:55474.service: Deactivated successfully. Apr 30 12:46:22.072649 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 12:46:22.073016 systemd[1]: session-17.scope: Consumed 1.417s CPU time, 23.7M memory peak. Apr 30 12:46:22.074019 systemd-logind[1479]: Removed session 17. Apr 30 12:46:22.231887 systemd[1]: Started sshd@17-91.99.0.103:22-139.178.89.65:56586.service - OpenSSH per-connection server daemon (139.178.89.65:56586). Apr 30 12:46:23.212161 sshd[4406]: Accepted publickey for core from 139.178.89.65 port 56586 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:46:23.214311 sshd-session[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:46:23.222418 systemd-logind[1479]: New session 18 of user core. Apr 30 12:46:23.227282 systemd[1]: Started session-18.scope - Session 18 of User core. 
Apr 30 12:46:25.156741 kubelet[2728]: I0430 12:46:25.156689 2728 topology_manager.go:215] "Topology Admit Handler" podUID="bb31ad15-f483-427b-80ba-de86b76f4a83" podNamespace="kube-system" podName="cilium-95s8l" Apr 30 12:46:25.156741 kubelet[2728]: E0430 12:46:25.156750 2728 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e68a2502-7ff6-4652-b551-2eb17624e6b6" containerName="apply-sysctl-overwrites" Apr 30 12:46:25.157095 kubelet[2728]: E0430 12:46:25.156760 2728 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e68a2502-7ff6-4652-b551-2eb17624e6b6" containerName="clean-cilium-state" Apr 30 12:46:25.157095 kubelet[2728]: E0430 12:46:25.156766 2728 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e68a2502-7ff6-4652-b551-2eb17624e6b6" containerName="cilium-agent" Apr 30 12:46:25.157095 kubelet[2728]: E0430 12:46:25.156772 2728 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6222e9ac-ed41-49e6-96be-3f0efe6361d9" containerName="cilium-operator" Apr 30 12:46:25.157095 kubelet[2728]: E0430 12:46:25.156778 2728 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e68a2502-7ff6-4652-b551-2eb17624e6b6" containerName="mount-cgroup" Apr 30 12:46:25.157095 kubelet[2728]: E0430 12:46:25.156783 2728 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e68a2502-7ff6-4652-b551-2eb17624e6b6" containerName="mount-bpf-fs" Apr 30 12:46:25.157095 kubelet[2728]: I0430 12:46:25.156805 2728 memory_manager.go:354] "RemoveStaleState removing state" podUID="e68a2502-7ff6-4652-b551-2eb17624e6b6" containerName="cilium-agent" Apr 30 12:46:25.157095 kubelet[2728]: I0430 12:46:25.156811 2728 memory_manager.go:354] "RemoveStaleState removing state" podUID="6222e9ac-ed41-49e6-96be-3f0efe6361d9" containerName="cilium-operator" Apr 30 12:46:25.167746 systemd[1]: Created slice kubepods-burstable-podbb31ad15_f483_427b_80ba_de86b76f4a83.slice - libcontainer container kubepods-burstable-podbb31ad15_f483_427b_80ba_de86b76f4a83.slice. 
Apr 30 12:46:25.270497 kubelet[2728]: I0430 12:46:25.270421 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gdp5\" (UniqueName: \"kubernetes.io/projected/bb31ad15-f483-427b-80ba-de86b76f4a83-kube-api-access-6gdp5\") pod \"cilium-95s8l\" (UID: \"bb31ad15-f483-427b-80ba-de86b76f4a83\") " pod="kube-system/cilium-95s8l" Apr 30 12:46:25.270497 kubelet[2728]: I0430 12:46:25.270489 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bb31ad15-f483-427b-80ba-de86b76f4a83-hostproc\") pod \"cilium-95s8l\" (UID: \"bb31ad15-f483-427b-80ba-de86b76f4a83\") " pod="kube-system/cilium-95s8l" Apr 30 12:46:25.270796 kubelet[2728]: I0430 12:46:25.270525 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bb31ad15-f483-427b-80ba-de86b76f4a83-cilium-cgroup\") pod \"cilium-95s8l\" (UID: \"bb31ad15-f483-427b-80ba-de86b76f4a83\") " pod="kube-system/cilium-95s8l" Apr 30 12:46:25.270796 kubelet[2728]: I0430 12:46:25.270558 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bb31ad15-f483-427b-80ba-de86b76f4a83-bpf-maps\") pod \"cilium-95s8l\" (UID: \"bb31ad15-f483-427b-80ba-de86b76f4a83\") " pod="kube-system/cilium-95s8l" Apr 30 12:46:25.270796 kubelet[2728]: I0430 12:46:25.270608 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bb31ad15-f483-427b-80ba-de86b76f4a83-cni-path\") pod \"cilium-95s8l\" (UID: \"bb31ad15-f483-427b-80ba-de86b76f4a83\") " pod="kube-system/cilium-95s8l" Apr 30 12:46:25.270796 kubelet[2728]: I0430 12:46:25.270655 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bb31ad15-f483-427b-80ba-de86b76f4a83-host-proc-sys-net\") pod \"cilium-95s8l\" (UID: \"bb31ad15-f483-427b-80ba-de86b76f4a83\") " pod="kube-system/cilium-95s8l" Apr 30 12:46:25.270796 kubelet[2728]: I0430 12:46:25.270682 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bb31ad15-f483-427b-80ba-de86b76f4a83-clustermesh-secrets\") pod \"cilium-95s8l\" (UID: \"bb31ad15-f483-427b-80ba-de86b76f4a83\") " pod="kube-system/cilium-95s8l" Apr 30 12:46:25.270796 kubelet[2728]: I0430 12:46:25.270707 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bb31ad15-f483-427b-80ba-de86b76f4a83-cilium-ipsec-secrets\") pod \"cilium-95s8l\" (UID: \"bb31ad15-f483-427b-80ba-de86b76f4a83\") " pod="kube-system/cilium-95s8l" Apr 30 12:46:25.271066 kubelet[2728]: I0430 12:46:25.270735 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb31ad15-f483-427b-80ba-de86b76f4a83-etc-cni-netd\") pod \"cilium-95s8l\" (UID: \"bb31ad15-f483-427b-80ba-de86b76f4a83\") " pod="kube-system/cilium-95s8l" Apr 30 12:46:25.271066 kubelet[2728]: I0430 12:46:25.270760 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/bb31ad15-f483-427b-80ba-de86b76f4a83-lib-modules\") pod \"cilium-95s8l\" (UID: \"bb31ad15-f483-427b-80ba-de86b76f4a83\") " pod="kube-system/cilium-95s8l" Apr 30 12:46:25.271066 kubelet[2728]: I0430 12:46:25.270785 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb31ad15-f483-427b-80ba-de86b76f4a83-xtables-lock\") pod \"cilium-95s8l\" (UID: \"bb31ad15-f483-427b-80ba-de86b76f4a83\") " pod="kube-system/cilium-95s8l" Apr 30 12:46:25.271066 kubelet[2728]: I0430 12:46:25.270809 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb31ad15-f483-427b-80ba-de86b76f4a83-cilium-config-path\") pod \"cilium-95s8l\" (UID: \"bb31ad15-f483-427b-80ba-de86b76f4a83\") " pod="kube-system/cilium-95s8l" Apr 30 12:46:25.271066 kubelet[2728]: I0430 12:46:25.270837 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bb31ad15-f483-427b-80ba-de86b76f4a83-cilium-run\") pod \"cilium-95s8l\" (UID: \"bb31ad15-f483-427b-80ba-de86b76f4a83\") " pod="kube-system/cilium-95s8l" Apr 30 12:46:25.271066 kubelet[2728]: I0430 12:46:25.270864 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bb31ad15-f483-427b-80ba-de86b76f4a83-hubble-tls\") pod \"cilium-95s8l\" (UID: \"bb31ad15-f483-427b-80ba-de86b76f4a83\") " pod="kube-system/cilium-95s8l" Apr 30 12:46:25.271317 kubelet[2728]: I0430 12:46:25.270894 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bb31ad15-f483-427b-80ba-de86b76f4a83-host-proc-sys-kernel\") pod \"cilium-95s8l\" (UID: \"bb31ad15-f483-427b-80ba-de86b76f4a83\") " pod="kube-system/cilium-95s8l" Apr 30 12:46:25.343972 sshd[4408]: Connection closed by 139.178.89.65 port 56586 Apr 30 12:46:25.344982 sshd-session[4406]: pam_unix(sshd:session): session closed for user core Apr 30 12:46:25.348839 systemd[1]: sshd@17-91.99.0.103:22-139.178.89.65:56586.service: Deactivated successfully. Apr 30 12:46:25.350705 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 12:46:25.351723 systemd[1]: session-18.scope: Consumed 1.333s CPU time, 23.6M memory peak. Apr 30 12:46:25.353698 systemd-logind[1479]: Session 18 logged out. Waiting for processes to exit. Apr 30 12:46:25.355757 systemd-logind[1479]: Removed session 18. Apr 30 12:46:25.473130 containerd[1499]: time="2025-04-30T12:46:25.473049089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-95s8l,Uid:bb31ad15-f483-427b-80ba-de86b76f4a83,Namespace:kube-system,Attempt:0,}" Apr 30 12:46:25.500446 containerd[1499]: time="2025-04-30T12:46:25.500331138Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:46:25.500446 containerd[1499]: time="2025-04-30T12:46:25.500386618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:46:25.500446 containerd[1499]: time="2025-04-30T12:46:25.500403779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:46:25.500962 containerd[1499]: time="2025-04-30T12:46:25.500691102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:46:25.522496 systemd[1]: Started sshd@18-91.99.0.103:22-139.178.89.65:56596.service - OpenSSH per-connection server daemon (139.178.89.65:56596). Apr 30 12:46:25.527916 systemd[1]: Started cri-containerd-b5a8a8ab317b34f53d1ba3ea64eaaeafa734421500cb077ba761e23a9b03c6ab.scope - libcontainer container b5a8a8ab317b34f53d1ba3ea64eaaeafa734421500cb077ba761e23a9b03c6ab. Apr 30 12:46:25.558433 containerd[1499]: time="2025-04-30T12:46:25.558393836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-95s8l,Uid:bb31ad15-f483-427b-80ba-de86b76f4a83,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5a8a8ab317b34f53d1ba3ea64eaaeafa734421500cb077ba761e23a9b03c6ab\"" Apr 30 12:46:25.563074 containerd[1499]: time="2025-04-30T12:46:25.563021772Z" level=info msg="CreateContainer within sandbox \"b5a8a8ab317b34f53d1ba3ea64eaaeafa734421500cb077ba761e23a9b03c6ab\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 12:46:25.574455 containerd[1499]: time="2025-04-30T12:46:25.574397709Z" level=info msg="CreateContainer within sandbox \"b5a8a8ab317b34f53d1ba3ea64eaaeafa734421500cb077ba761e23a9b03c6ab\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"32aa0b01277fc3725fe42bc470da452a6adfd21fcb54c6c18ad1cb3dcf9aa924\"" Apr 30 12:46:25.575389 containerd[1499]: time="2025-04-30T12:46:25.575318600Z" level=info msg="StartContainer for \"32aa0b01277fc3725fe42bc470da452a6adfd21fcb54c6c18ad1cb3dcf9aa924\"" Apr 30 12:46:25.605802 systemd[1]: Started cri-containerd-32aa0b01277fc3725fe42bc470da452a6adfd21fcb54c6c18ad1cb3dcf9aa924.scope - libcontainer container 32aa0b01277fc3725fe42bc470da452a6adfd21fcb54c6c18ad1cb3dcf9aa924. Apr 30 12:46:25.638941 containerd[1499]: time="2025-04-30T12:46:25.638808924Z" level=info msg="StartContainer for \"32aa0b01277fc3725fe42bc470da452a6adfd21fcb54c6c18ad1cb3dcf9aa924\" returns successfully" Apr 30 12:46:25.651018 systemd[1]: cri-containerd-32aa0b01277fc3725fe42bc470da452a6adfd21fcb54c6c18ad1cb3dcf9aa924.scope: Deactivated successfully. 
Apr 30 12:46:25.691892 containerd[1499]: time="2025-04-30T12:46:25.691757681Z" level=info msg="shim disconnected" id=32aa0b01277fc3725fe42bc470da452a6adfd21fcb54c6c18ad1cb3dcf9aa924 namespace=k8s.io Apr 30 12:46:25.691892 containerd[1499]: time="2025-04-30T12:46:25.691891963Z" level=warning msg="cleaning up after shim disconnected" id=32aa0b01277fc3725fe42bc470da452a6adfd21fcb54c6c18ad1cb3dcf9aa924 namespace=k8s.io Apr 30 12:46:25.692209 containerd[1499]: time="2025-04-30T12:46:25.691913643Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:46:25.931316 kubelet[2728]: E0430 12:46:25.931138 2728 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 12:46:26.408258 containerd[1499]: time="2025-04-30T12:46:26.408165382Z" level=info msg="CreateContainer within sandbox \"b5a8a8ab317b34f53d1ba3ea64eaaeafa734421500cb077ba761e23a9b03c6ab\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 12:46:26.427084 containerd[1499]: time="2025-04-30T12:46:26.427042008Z" level=info msg="CreateContainer within sandbox \"b5a8a8ab317b34f53d1ba3ea64eaaeafa734421500cb077ba761e23a9b03c6ab\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"046f26dad65f941669c4ae59ba5bc4dcbc01b897ca3366d19890344f4f6e0439\"" Apr 30 12:46:26.428087 containerd[1499]: time="2025-04-30T12:46:26.428040980Z" level=info msg="StartContainer for \"046f26dad65f941669c4ae59ba5bc4dcbc01b897ca3366d19890344f4f6e0439\"" Apr 30 12:46:26.463789 systemd[1]: Started cri-containerd-046f26dad65f941669c4ae59ba5bc4dcbc01b897ca3366d19890344f4f6e0439.scope - libcontainer container 046f26dad65f941669c4ae59ba5bc4dcbc01b897ca3366d19890344f4f6e0439. Apr 30 12:46:26.496533 containerd[1499]: time="2025-04-30T12:46:26.496380235Z" level=info msg="StartContainer for \"046f26dad65f941669c4ae59ba5bc4dcbc01b897ca3366d19890344f4f6e0439\" returns successfully" Apr 30 12:46:26.501433 systemd[1]: cri-containerd-046f26dad65f941669c4ae59ba5bc4dcbc01b897ca3366d19890344f4f6e0439.scope: Deactivated successfully. Apr 30 12:46:26.525849 sshd[4449]: Accepted publickey for core from 139.178.89.65 port 56596 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:46:26.528734 sshd-session[4449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:46:26.530681 containerd[1499]: time="2025-04-30T12:46:26.530489002Z" level=info msg="shim disconnected" id=046f26dad65f941669c4ae59ba5bc4dcbc01b897ca3366d19890344f4f6e0439 namespace=k8s.io Apr 30 12:46:26.530681 containerd[1499]: time="2025-04-30T12:46:26.530597884Z" level=warning msg="cleaning up after shim disconnected" id=046f26dad65f941669c4ae59ba5bc4dcbc01b897ca3366d19890344f4f6e0439 namespace=k8s.io Apr 30 12:46:26.530817 containerd[1499]: time="2025-04-30T12:46:26.530701885Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:46:26.535474 systemd-logind[1479]: New session 19 of user core. Apr 30 12:46:26.543483 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 12:46:27.211634 sshd[4589]: Connection closed by 139.178.89.65 port 56596 Apr 30 12:46:27.212170 sshd-session[4449]: pam_unix(sshd:session): session closed for user core Apr 30 12:46:27.220342 systemd[1]: sshd@18-91.99.0.103:22-139.178.89.65:56596.service: Deactivated successfully. Apr 30 12:46:27.224229 systemd[1]: session-19.scope: Deactivated successfully. 
Apr 30 12:46:27.225581 systemd-logind[1479]: Session 19 logged out. Waiting for processes to exit. Apr 30 12:46:27.226733 systemd-logind[1479]: Removed session 19. Apr 30 12:46:27.387172 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-046f26dad65f941669c4ae59ba5bc4dcbc01b897ca3366d19890344f4f6e0439-rootfs.mount: Deactivated successfully. Apr 30 12:46:27.396998 systemd[1]: Started sshd@19-91.99.0.103:22-139.178.89.65:60726.service - OpenSSH per-connection server daemon (139.178.89.65:60726). Apr 30 12:46:27.418158 containerd[1499]: time="2025-04-30T12:46:27.417880154Z" level=info msg="CreateContainer within sandbox \"b5a8a8ab317b34f53d1ba3ea64eaaeafa734421500cb077ba761e23a9b03c6ab\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 12:46:27.443063 containerd[1499]: time="2025-04-30T12:46:27.443011051Z" level=info msg="CreateContainer within sandbox \"b5a8a8ab317b34f53d1ba3ea64eaaeafa734421500cb077ba761e23a9b03c6ab\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3cf2fdfcdde02d9e59db50739024ba6d18905ce6db914a759fa7241470666ffb\"" Apr 30 12:46:27.445540 containerd[1499]: time="2025-04-30T12:46:27.443911102Z" level=info msg="StartContainer for \"3cf2fdfcdde02d9e59db50739024ba6d18905ce6db914a759fa7241470666ffb\"" Apr 30 12:46:27.476907 systemd[1]: Started cri-containerd-3cf2fdfcdde02d9e59db50739024ba6d18905ce6db914a759fa7241470666ffb.scope - libcontainer container 3cf2fdfcdde02d9e59db50739024ba6d18905ce6db914a759fa7241470666ffb. Apr 30 12:46:27.517963 containerd[1499]: time="2025-04-30T12:46:27.517810617Z" level=info msg="StartContainer for \"3cf2fdfcdde02d9e59db50739024ba6d18905ce6db914a759fa7241470666ffb\" returns successfully" Apr 30 12:46:27.521423 systemd[1]: cri-containerd-3cf2fdfcdde02d9e59db50739024ba6d18905ce6db914a759fa7241470666ffb.scope: Deactivated successfully. Apr 30 12:46:27.551972 containerd[1499]: time="2025-04-30T12:46:27.551904221Z" level=info msg="shim disconnected" id=3cf2fdfcdde02d9e59db50739024ba6d18905ce6db914a759fa7241470666ffb namespace=k8s.io Apr 30 12:46:27.551972 containerd[1499]: time="2025-04-30T12:46:27.551971221Z" level=warning msg="cleaning up after shim disconnected" id=3cf2fdfcdde02d9e59db50739024ba6d18905ce6db914a759fa7241470666ffb namespace=k8s.io Apr 30 12:46:27.552253 containerd[1499]: time="2025-04-30T12:46:27.551985901Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:46:27.563883 containerd[1499]: time="2025-04-30T12:46:27.563826362Z" level=warning msg="cleanup warnings time=\"2025-04-30T12:46:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 12:46:28.387245 sshd[4596]: Accepted publickey for core from 139.178.89.65 port 60726 ssh2: RSA SHA256:TXzQOW6GE2yBm6JTL9qUK5kY/W46dvHYICoPFUu9TZE Apr 30 12:46:28.387713 systemd[1]: run-containerd-runc-k8s.io-3cf2fdfcdde02d9e59db50739024ba6d18905ce6db914a759fa7241470666ffb-runc.Dd0Ar5.mount: Deactivated successfully. Apr 30 12:46:28.387822 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cf2fdfcdde02d9e59db50739024ba6d18905ce6db914a759fa7241470666ffb-rootfs.mount: Deactivated successfully. Apr 30 12:46:28.389681 sshd-session[4596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:46:28.394709 systemd-logind[1479]: New session 20 of user core. Apr 30 12:46:28.400881 systemd[1]: Started session-20.scope - Session 20 of User core. 
Apr 30 12:46:28.423299 containerd[1499]: time="2025-04-30T12:46:28.423159855Z" level=info msg="CreateContainer within sandbox \"b5a8a8ab317b34f53d1ba3ea64eaaeafa734421500cb077ba761e23a9b03c6ab\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 12:46:28.439722 containerd[1499]: time="2025-04-30T12:46:28.439654089Z" level=info msg="CreateContainer within sandbox \"b5a8a8ab317b34f53d1ba3ea64eaaeafa734421500cb077ba761e23a9b03c6ab\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"14d63235bb51d7b9e21661ac4bbe70ce4ad68a2ff660805fc9fbe8e00b79e7ad\"" Apr 30 12:46:28.440821 containerd[1499]: time="2025-04-30T12:46:28.440785662Z" level=info msg="StartContainer for \"14d63235bb51d7b9e21661ac4bbe70ce4ad68a2ff660805fc9fbe8e00b79e7ad\"" Apr 30 12:46:28.481771 systemd[1]: Started cri-containerd-14d63235bb51d7b9e21661ac4bbe70ce4ad68a2ff660805fc9fbe8e00b79e7ad.scope - libcontainer container 14d63235bb51d7b9e21661ac4bbe70ce4ad68a2ff660805fc9fbe8e00b79e7ad. Apr 30 12:46:28.510275 systemd[1]: cri-containerd-14d63235bb51d7b9e21661ac4bbe70ce4ad68a2ff660805fc9fbe8e00b79e7ad.scope: Deactivated successfully. Apr 30 12:46:28.511658 containerd[1499]: time="2025-04-30T12:46:28.511581814Z" level=info msg="StartContainer for \"14d63235bb51d7b9e21661ac4bbe70ce4ad68a2ff660805fc9fbe8e00b79e7ad\" returns successfully" Apr 30 12:46:28.537925 containerd[1499]: time="2025-04-30T12:46:28.537842962Z" level=info msg="shim disconnected" id=14d63235bb51d7b9e21661ac4bbe70ce4ad68a2ff660805fc9fbe8e00b79e7ad namespace=k8s.io Apr 30 12:46:28.537925 containerd[1499]: time="2025-04-30T12:46:28.537924363Z" level=warning msg="cleaning up after shim disconnected" id=14d63235bb51d7b9e21661ac4bbe70ce4ad68a2ff660805fc9fbe8e00b79e7ad namespace=k8s.io Apr 30 12:46:28.538425 containerd[1499]: time="2025-04-30T12:46:28.537941243Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:46:28.820326 kubelet[2728]: E0430 12:46:28.819733 2728 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-kk5kz" podUID="3bb54f6f-ed63-49ce-9bda-40d4e847dc24" Apr 30 12:46:29.388068 systemd[1]: run-containerd-runc-k8s.io-14d63235bb51d7b9e21661ac4bbe70ce4ad68a2ff660805fc9fbe8e00b79e7ad-runc.GDuqiR.mount: Deactivated successfully. Apr 30 12:46:29.388313 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14d63235bb51d7b9e21661ac4bbe70ce4ad68a2ff660805fc9fbe8e00b79e7ad-rootfs.mount: Deactivated successfully. 
Apr 30 12:46:29.429345 containerd[1499]: time="2025-04-30T12:46:29.429254711Z" level=info msg="CreateContainer within sandbox \"b5a8a8ab317b34f53d1ba3ea64eaaeafa734421500cb077ba761e23a9b03c6ab\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 12:46:29.450520 containerd[1499]: time="2025-04-30T12:46:29.450443078Z" level=info msg="CreateContainer within sandbox \"b5a8a8ab317b34f53d1ba3ea64eaaeafa734421500cb077ba761e23a9b03c6ab\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"16bb240537967ae535eed5adfff6275426712cfe0c0c6a9ad1cd724a14d17481\"" Apr 30 12:46:29.453471 containerd[1499]: time="2025-04-30T12:46:29.451770934Z" level=info msg="StartContainer for \"16bb240537967ae535eed5adfff6275426712cfe0c0c6a9ad1cd724a14d17481\"" Apr 30 12:46:29.489906 systemd[1]: Started cri-containerd-16bb240537967ae535eed5adfff6275426712cfe0c0c6a9ad1cd724a14d17481.scope - libcontainer container 16bb240537967ae535eed5adfff6275426712cfe0c0c6a9ad1cd724a14d17481. Apr 30 12:46:29.520361 containerd[1499]: time="2025-04-30T12:46:29.520186811Z" level=info msg="StartContainer for \"16bb240537967ae535eed5adfff6275426712cfe0c0c6a9ad1cd724a14d17481\" returns successfully" Apr 30 12:46:29.811934 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Apr 30 12:46:30.538618 kubelet[2728]: I0430 12:46:30.536331 2728 setters.go:580] "Node became not ready" node="ci-4230-1-1-9-a0dc1fa777" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-04-30T12:46:30Z","lastTransitionTime":"2025-04-30T12:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 30 12:46:30.820165 kubelet[2728]: E0430 12:46:30.819939 2728 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-kk5kz" podUID="3bb54f6f-ed63-49ce-9bda-40d4e847dc24" Apr 30 12:46:31.115635 systemd[1]: run-containerd-runc-k8s.io-16bb240537967ae535eed5adfff6275426712cfe0c0c6a9ad1cd724a14d17481-runc.7lDljg.mount: Deactivated successfully. Apr 30 12:46:32.744802 systemd-networkd[1398]: lxc_health: Link UP Apr 30 12:46:32.759348 systemd-networkd[1398]: lxc_health: Gained carrier Apr 30 12:46:33.267360 systemd[1]: run-containerd-runc-k8s.io-16bb240537967ae535eed5adfff6275426712cfe0c0c6a9ad1cd724a14d17481-runc.quuEu9.mount: Deactivated successfully. Apr 30 12:46:33.496149 kubelet[2728]: I0430 12:46:33.496082 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-95s8l" podStartSLOduration=8.496065449 podStartE2EDuration="8.496065449s" podCreationTimestamp="2025-04-30 12:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:46:30.461461375 +0000 UTC m=+204.768909334" watchObservedRunningTime="2025-04-30 12:46:33.496065449 +0000 UTC m=+207.803513368" Apr 30 12:46:33.819763 systemd-networkd[1398]: lxc_health: Gained IPv6LL Apr 30 12:46:37.651701 systemd[1]: run-containerd-runc-k8s.io-16bb240537967ae535eed5adfff6275426712cfe0c0c6a9ad1cd724a14d17481-runc.nVpCYZ.mount: Deactivated successfully. 
Apr 30 12:46:39.792937 systemd[1]: run-containerd-runc-k8s.io-16bb240537967ae535eed5adfff6275426712cfe0c0c6a9ad1cd724a14d17481-runc.DndUrm.mount: Deactivated successfully. Apr 30 12:46:40.006161 sshd[4653]: Connection closed by 139.178.89.65 port 60726 Apr 30 12:46:40.007288 sshd-session[4596]: pam_unix(sshd:session): session closed for user core Apr 30 12:46:40.012035 systemd[1]: sshd@19-91.99.0.103:22-139.178.89.65:60726.service: Deactivated successfully. Apr 30 12:46:40.015400 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 12:46:40.016745 systemd-logind[1479]: Session 20 logged out. Waiting for processes to exit. Apr 30 12:46:40.018475 systemd-logind[1479]: Removed session 20. Apr 30 12:47:05.840474 containerd[1499]: time="2025-04-30T12:47:05.840417044Z" level=info msg="StopPodSandbox for \"c95555e594a06ce1a1fbb12d92aa5d70b48a1f87ea41b35902a9bfb3e3abd8c8\"" Apr 30 12:47:05.842783 containerd[1499]: time="2025-04-30T12:47:05.840558806Z" level=info msg="TearDown network for sandbox \"c95555e594a06ce1a1fbb12d92aa5d70b48a1f87ea41b35902a9bfb3e3abd8c8\" successfully" Apr 30 12:47:05.842783 containerd[1499]: time="2025-04-30T12:47:05.840659046Z" level=info msg="StopPodSandbox for \"c95555e594a06ce1a1fbb12d92aa5d70b48a1f87ea41b35902a9bfb3e3abd8c8\" returns successfully" Apr 30 12:47:05.842783 containerd[1499]: time="2025-04-30T12:47:05.841625495Z" level=info msg="RemovePodSandbox for \"c95555e594a06ce1a1fbb12d92aa5d70b48a1f87ea41b35902a9bfb3e3abd8c8\"" Apr 30 12:47:05.842783 containerd[1499]: time="2025-04-30T12:47:05.841695096Z" level=info msg="Forcibly stopping sandbox \"c95555e594a06ce1a1fbb12d92aa5d70b48a1f87ea41b35902a9bfb3e3abd8c8\"" Apr 30 12:47:05.842783 containerd[1499]: time="2025-04-30T12:47:05.841811377Z" level=info msg="TearDown network for sandbox \"c95555e594a06ce1a1fbb12d92aa5d70b48a1f87ea41b35902a9bfb3e3abd8c8\" successfully" Apr 30 12:47:05.846341 containerd[1499]: time="2025-04-30T12:47:05.846277698Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c95555e594a06ce1a1fbb12d92aa5d70b48a1f87ea41b35902a9bfb3e3abd8c8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 12:47:05.846466 containerd[1499]: time="2025-04-30T12:47:05.846361058Z" level=info msg="RemovePodSandbox \"c95555e594a06ce1a1fbb12d92aa5d70b48a1f87ea41b35902a9bfb3e3abd8c8\" returns successfully" Apr 30 12:47:05.846985 containerd[1499]: time="2025-04-30T12:47:05.846941104Z" level=info msg="StopPodSandbox for \"c5b8865868e54aab37a42135139854d3253b867543226e5a03ea3dd341f81e9f\"" Apr 30 12:47:05.847063 containerd[1499]: time="2025-04-30T12:47:05.847026144Z" level=info msg="TearDown network for sandbox \"c5b8865868e54aab37a42135139854d3253b867543226e5a03ea3dd341f81e9f\" successfully" Apr 30 12:47:05.847063 containerd[1499]: time="2025-04-30T12:47:05.847037505Z" level=info msg="StopPodSandbox for \"c5b8865868e54aab37a42135139854d3253b867543226e5a03ea3dd341f81e9f\" returns successfully" Apr 30 12:47:05.847368 containerd[1499]: time="2025-04-30T12:47:05.847316867Z" level=info msg="RemovePodSandbox for \"c5b8865868e54aab37a42135139854d3253b867543226e5a03ea3dd341f81e9f\"" Apr 30 12:47:05.847368 containerd[1499]: time="2025-04-30T12:47:05.847347947Z" level=info msg="Forcibly stopping sandbox \"c5b8865868e54aab37a42135139854d3253b867543226e5a03ea3dd341f81e9f\"" Apr 30 12:47:05.847509 containerd[1499]: time="2025-04-30T12:47:05.847390468Z" level=info msg="TearDown network for sandbox \"c5b8865868e54aab37a42135139854d3253b867543226e5a03ea3dd341f81e9f\" successfully" Apr 30 12:47:05.851798 containerd[1499]: time="2025-04-30T12:47:05.851726147Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5b8865868e54aab37a42135139854d3253b867543226e5a03ea3dd341f81e9f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 12:47:05.851798 containerd[1499]: time="2025-04-30T12:47:05.851796668Z" level=info msg="RemovePodSandbox \"c5b8865868e54aab37a42135139854d3253b867543226e5a03ea3dd341f81e9f\" returns successfully" Apr 30 12:47:12.541339 kubelet[2728]: E0430 12:47:12.541208 2728 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56324->10.0.0.2:2379: read: connection timed out" Apr 30 12:47:12.548996 systemd[1]: cri-containerd-f0b7650c4419b627291b39b667897108047c28c9b68ca82c249c26646f43383d.scope: Deactivated successfully. Apr 30 12:47:12.549722 systemd[1]: cri-containerd-f0b7650c4419b627291b39b667897108047c28c9b68ca82c249c26646f43383d.scope: Consumed 3.597s CPU time, 22.4M memory peak. Apr 30 12:47:12.574772 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0b7650c4419b627291b39b667897108047c28c9b68ca82c249c26646f43383d-rootfs.mount: Deactivated successfully. Apr 30 12:47:12.581070 containerd[1499]: time="2025-04-30T12:47:12.581000703Z" level=info msg="shim disconnected" id=f0b7650c4419b627291b39b667897108047c28c9b68ca82c249c26646f43383d namespace=k8s.io Apr 30 12:47:12.581070 containerd[1499]: time="2025-04-30T12:47:12.581059863Z" level=warning msg="cleaning up after shim disconnected" id=f0b7650c4419b627291b39b667897108047c28c9b68ca82c249c26646f43383d namespace=k8s.io Apr 30 12:47:12.581070 containerd[1499]: time="2025-04-30T12:47:12.581072943Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:47:13.221646 systemd[1]: cri-containerd-71bb467caeb48b42cf436636432a42474ffca0f9ee4f354c58743c393e9ec0ee.scope: Deactivated successfully. Apr 30 12:47:13.222173 systemd[1]: cri-containerd-71bb467caeb48b42cf436636432a42474ffca0f9ee4f354c58743c393e9ec0ee.scope: Consumed 5.140s CPU time, 55.7M memory peak. 
Apr 30 12:47:13.244533 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71bb467caeb48b42cf436636432a42474ffca0f9ee4f354c58743c393e9ec0ee-rootfs.mount: Deactivated successfully. Apr 30 12:47:13.251133 containerd[1499]: time="2025-04-30T12:47:13.251012797Z" level=info msg="shim disconnected" id=71bb467caeb48b42cf436636432a42474ffca0f9ee4f354c58743c393e9ec0ee namespace=k8s.io Apr 30 12:47:13.251133 containerd[1499]: time="2025-04-30T12:47:13.251130238Z" level=warning msg="cleaning up after shim disconnected" id=71bb467caeb48b42cf436636432a42474ffca0f9ee4f354c58743c393e9ec0ee namespace=k8s.io Apr 30 12:47:13.251133 containerd[1499]: time="2025-04-30T12:47:13.251140199Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:47:13.262246 containerd[1499]: time="2025-04-30T12:47:13.262037053Z" level=warning msg="cleanup warnings time=\"2025-04-30T12:47:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 12:47:13.554395 kubelet[2728]: I0430 12:47:13.554211 2728 scope.go:117] "RemoveContainer" containerID="f0b7650c4419b627291b39b667897108047c28c9b68ca82c249c26646f43383d" Apr 30 12:47:13.558598 kubelet[2728]: I0430 12:47:13.558000 2728 scope.go:117] "RemoveContainer" containerID="71bb467caeb48b42cf436636432a42474ffca0f9ee4f354c58743c393e9ec0ee" Apr 30 12:47:13.559658 containerd[1499]: time="2025-04-30T12:47:13.557884550Z" level=info msg="CreateContainer within sandbox \"e7bc5e199fa706a7686698669a1f2b85b5a348cf74cbfa929b8dd672f1a54166\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 30 12:47:13.561519 containerd[1499]: time="2025-04-30T12:47:13.561455061Z" level=info msg="CreateContainer within sandbox \"93081d4e0e8c101bb36c6f9a0555776e73af93014d183f846e56ac255f39f76b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 30 12:47:13.576747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4112514029.mount: Deactivated successfully. Apr 30 12:47:13.583875 containerd[1499]: time="2025-04-30T12:47:13.583821456Z" level=info msg="CreateContainer within sandbox \"e7bc5e199fa706a7686698669a1f2b85b5a348cf74cbfa929b8dd672f1a54166\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"5c663692bbcd01536cf661042fc05b34e26b9ad96e54b4e26e52889c642b311e\"" Apr 30 12:47:13.584685 containerd[1499]: time="2025-04-30T12:47:13.584417461Z" level=info msg="StartContainer for \"5c663692bbcd01536cf661042fc05b34e26b9ad96e54b4e26e52889c642b311e\"" Apr 30 12:47:13.587838 containerd[1499]: time="2025-04-30T12:47:13.587428247Z" level=info msg="CreateContainer within sandbox \"93081d4e0e8c101bb36c6f9a0555776e73af93014d183f846e56ac255f39f76b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"11d38ae003b497a78b8e90670195a679e83895c65bc12d8c72db2e646c7edc85\"" Apr 30 12:47:13.589986 containerd[1499]: time="2025-04-30T12:47:13.589679667Z" level=info msg="StartContainer for \"11d38ae003b497a78b8e90670195a679e83895c65bc12d8c72db2e646c7edc85\"" Apr 30 12:47:13.623829 systemd[1]: Started cri-containerd-5c663692bbcd01536cf661042fc05b34e26b9ad96e54b4e26e52889c642b311e.scope - libcontainer container 5c663692bbcd01536cf661042fc05b34e26b9ad96e54b4e26e52889c642b311e. 
Apr 30 12:47:13.636939 systemd[1]: Started cri-containerd-11d38ae003b497a78b8e90670195a679e83895c65bc12d8c72db2e646c7edc85.scope - libcontainer container 11d38ae003b497a78b8e90670195a679e83895c65bc12d8c72db2e646c7edc85. Apr 30 12:47:13.687297 containerd[1499]: time="2025-04-30T12:47:13.686903674Z" level=info msg="StartContainer for \"5c663692bbcd01536cf661042fc05b34e26b9ad96e54b4e26e52889c642b311e\" returns successfully" Apr 30 12:47:13.689864 containerd[1499]: time="2025-04-30T12:47:13.689817579Z" level=info msg="StartContainer for \"11d38ae003b497a78b8e90670195a679e83895c65bc12d8c72db2e646c7edc85\" returns successfully" Apr 30 12:47:16.829105 kubelet[2728]: E0430 12:47:16.828899 2728 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56128->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4230-1-1-9-a0dc1fa777.183b196aa6974945 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4230-1-1-9-a0dc1fa777,UID:abda4258b8f1ce54c7adfde85ec4e227,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4230-1-1-9-a0dc1fa777,},FirstTimestamp:2025-04-30 12:47:06.388359493 +0000 UTC m=+240.695807412,LastTimestamp:2025-04-30 12:47:06.388359493 +0000 UTC m=+240.695807412,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-1-1-9-a0dc1fa777,}" Apr 30 12:47:18.109289 kubelet[2728]: I0430 12:47:18.109143 2728 status_manager.go:853] "Failed to get status for pod" podUID="a40a2e631a2bbab3f55a3137f7cbc8f1" pod="kube-system/kube-scheduler-ci-4230-1-1-9-a0dc1fa777" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56254->10.0.0.2:2379: read: connection timed out"