Jun 20 19:08:02.885419 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jun 20 19:08:02.885453 kernel: Linux version 6.6.94-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Fri Jun 20 17:15:00 -00 2025 Jun 20 19:08:02.885464 kernel: KASLR enabled Jun 20 19:08:02.885470 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Jun 20 19:08:02.885475 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390b8118 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218 Jun 20 19:08:02.885481 kernel: random: crng init done Jun 20 19:08:02.885488 kernel: secureboot: Secure boot disabled Jun 20 19:08:02.885494 kernel: ACPI: Early table checksum verification disabled Jun 20 19:08:02.885500 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Jun 20 19:08:02.885507 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Jun 20 19:08:02.885513 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:08:02.885519 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:08:02.885525 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:08:02.885531 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:08:02.885539 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:08:02.885546 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:08:02.885552 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:08:02.885559 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:08:02.885565 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:08:02.885571 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Jun 20 19:08:02.885577 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Jun 20 19:08:02.885583 kernel: NUMA: Failed to initialise from firmware Jun 20 19:08:02.885589 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Jun 20 19:08:02.885596 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff] Jun 20 19:08:02.885601 kernel: Zone ranges: Jun 20 19:08:02.885609 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jun 20 19:08:02.885615 kernel: DMA32 empty Jun 20 19:08:02.885621 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Jun 20 19:08:02.885627 kernel: Movable zone start for each node Jun 20 19:08:02.885633 kernel: Early memory node ranges Jun 20 19:08:02.885639 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff] Jun 20 19:08:02.885645 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff] Jun 20 19:08:02.885651 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff] Jun 20 19:08:02.885657 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Jun 20 19:08:02.885663 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Jun 20 19:08:02.885669 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Jun 20 19:08:02.885676 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Jun 20 19:08:02.885683 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Jun 20 19:08:02.885689 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Jun 20 19:08:02.885695 kernel: Initmem setup node 0 
[mem 0x0000000040000000-0x0000000139ffffff] Jun 20 19:08:02.885705 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Jun 20 19:08:02.885711 kernel: psci: probing for conduit method from ACPI. Jun 20 19:08:02.885718 kernel: psci: PSCIv1.1 detected in firmware. Jun 20 19:08:02.885725 kernel: psci: Using standard PSCI v0.2 function IDs Jun 20 19:08:02.885732 kernel: psci: Trusted OS migration not required Jun 20 19:08:02.885739 kernel: psci: SMC Calling Convention v1.1 Jun 20 19:08:02.885745 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jun 20 19:08:02.885752 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jun 20 19:08:02.885758 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jun 20 19:08:02.885765 kernel: pcpu-alloc: [0] 0 [0] 1 Jun 20 19:08:02.885771 kernel: Detected PIPT I-cache on CPU0 Jun 20 19:08:02.885777 kernel: CPU features: detected: GIC system register CPU interface Jun 20 19:08:02.885784 kernel: CPU features: detected: Hardware dirty bit management Jun 20 19:08:02.885792 kernel: CPU features: detected: Spectre-v4 Jun 20 19:08:02.885799 kernel: CPU features: detected: Spectre-BHB Jun 20 19:08:02.885805 kernel: CPU features: kernel page table isolation forced ON by KASLR Jun 20 19:08:02.885812 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jun 20 19:08:02.885854 kernel: CPU features: detected: ARM erratum 1418040 Jun 20 19:08:02.885862 kernel: CPU features: detected: SSBS not fully self-synchronizing Jun 20 19:08:02.885869 kernel: alternatives: applying boot alternatives Jun 20 19:08:02.885876 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=8a081d870e25287d755f6d580d3ffafd8d53f08173c09683922f11f1a622a40e Jun 20 19:08:02.885883 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 20 19:08:02.885890 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 20 19:08:02.885896 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 20 19:08:02.885906 kernel: Fallback order for Node 0: 0 Jun 20 19:08:02.885912 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Jun 20 19:08:02.885918 kernel: Policy zone: Normal Jun 20 19:08:02.885925 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 20 19:08:02.885931 kernel: software IO TLB: area num 2. Jun 20 19:08:02.885938 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Jun 20 19:08:02.885945 kernel: Memory: 3883832K/4096000K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38336K init, 897K bss, 212168K reserved, 0K cma-reserved) Jun 20 19:08:02.885951 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 20 19:08:02.885958 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 20 19:08:02.885965 kernel: rcu: RCU event tracing is enabled. Jun 20 19:08:02.885972 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 20 19:08:02.885978 kernel: Trampoline variant of Tasks RCU enabled. Jun 20 19:08:02.885986 kernel: Tracing variant of Tasks RCU enabled. 
Jun 20 19:08:02.885993 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 20 19:08:02.885999 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 20 19:08:02.886006 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jun 20 19:08:02.886012 kernel: GICv3: 256 SPIs implemented Jun 20 19:08:02.886018 kernel: GICv3: 0 Extended SPIs implemented Jun 20 19:08:02.886025 kernel: Root IRQ handler: gic_handle_irq Jun 20 19:08:02.886031 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jun 20 19:08:02.886037 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jun 20 19:08:02.886044 kernel: ITS [mem 0x08080000-0x0809ffff] Jun 20 19:08:02.886050 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Jun 20 19:08:02.886058 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Jun 20 19:08:02.886065 kernel: GICv3: using LPI property table @0x00000001000e0000 Jun 20 19:08:02.886072 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Jun 20 19:08:02.886078 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 20 19:08:02.886084 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:08:02.886091 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jun 20 19:08:02.886097 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jun 20 19:08:02.886104 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jun 20 19:08:02.886110 kernel: Console: colour dummy device 80x25 Jun 20 19:08:02.886117 kernel: ACPI: Core revision 20230628 Jun 20 19:08:02.886124 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jun 20 19:08:02.886132 kernel: pid_max: default: 32768 minimum: 301 Jun 20 19:08:02.886139 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jun 20 19:08:02.886145 kernel: landlock: Up and running. Jun 20 19:08:02.886160 kernel: SELinux: Initializing. Jun 20 19:08:02.886168 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 20 19:08:02.886175 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 20 19:08:02.886181 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 19:08:02.886188 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 19:08:02.886195 kernel: rcu: Hierarchical SRCU implementation. Jun 20 19:08:02.886204 kernel: rcu: Max phase no-delay instances is 400. Jun 20 19:08:02.886210 kernel: Platform MSI: ITS@0x8080000 domain created Jun 20 19:08:02.886217 kernel: PCI/MSI: ITS@0x8080000 domain created Jun 20 19:08:02.886224 kernel: Remapping and enabling EFI services. Jun 20 19:08:02.886230 kernel: smp: Bringing up secondary CPUs ... 
Jun 20 19:08:02.886237 kernel: Detected PIPT I-cache on CPU1 Jun 20 19:08:02.886244 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jun 20 19:08:02.886250 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Jun 20 19:08:02.886257 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 20 19:08:02.886265 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jun 20 19:08:02.886272 kernel: smp: Brought up 1 node, 2 CPUs Jun 20 19:08:02.886283 kernel: SMP: Total of 2 processors activated. Jun 20 19:08:02.886292 kernel: CPU features: detected: 32-bit EL0 Support Jun 20 19:08:02.886299 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jun 20 19:08:02.886306 kernel: CPU features: detected: Common not Private translations Jun 20 19:08:02.886313 kernel: CPU features: detected: CRC32 instructions Jun 20 19:08:02.886320 kernel: CPU features: detected: Enhanced Virtualization Traps Jun 20 19:08:02.886327 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jun 20 19:08:02.886335 kernel: CPU features: detected: LSE atomic instructions Jun 20 19:08:02.886342 kernel: CPU features: detected: Privileged Access Never Jun 20 19:08:02.886349 kernel: CPU features: detected: RAS Extension Support Jun 20 19:08:02.886356 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jun 20 19:08:02.886363 kernel: CPU: All CPU(s) started at EL1 Jun 20 19:08:02.886370 kernel: alternatives: applying system-wide alternatives Jun 20 19:08:02.886377 kernel: devtmpfs: initialized Jun 20 19:08:02.886384 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 20 19:08:02.886392 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 20 19:08:02.886399 kernel: pinctrl core: initialized pinctrl subsystem Jun 20 19:08:02.886406 kernel: SMBIOS 3.0.0 present. Jun 20 19:08:02.886413 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Jun 20 19:08:02.886420 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 20 19:08:02.886427 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jun 20 19:08:02.886434 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jun 20 19:08:02.886442 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jun 20 19:08:02.886449 kernel: audit: initializing netlink subsys (disabled) Jun 20 19:08:02.886458 kernel: audit: type=2000 audit(0.013:1): state=initialized audit_enabled=0 res=1 Jun 20 19:08:02.886464 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 20 19:08:02.886471 kernel: cpuidle: using governor menu Jun 20 19:08:02.886478 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jun 20 19:08:02.886485 kernel: ASID allocator initialised with 32768 entries Jun 20 19:08:02.886492 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 20 19:08:02.886500 kernel: Serial: AMBA PL011 UART driver Jun 20 19:08:02.886507 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jun 20 19:08:02.886514 kernel: Modules: 0 pages in range for non-PLT usage Jun 20 19:08:02.886522 kernel: Modules: 509264 pages in range for PLT usage Jun 20 19:08:02.886529 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 20 19:08:02.886536 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jun 20 19:08:02.886543 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jun 20 19:08:02.886550 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jun 20 19:08:02.886557 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 20 19:08:02.886564 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jun 20 19:08:02.886571 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jun 20 19:08:02.886577 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jun 20 19:08:02.886586 kernel: ACPI: Added _OSI(Module Device) Jun 20 19:08:02.886593 kernel: ACPI: Added _OSI(Processor Device) Jun 20 19:08:02.886600 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 20 19:08:02.886607 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 20 19:08:02.886614 kernel: ACPI: Interpreter enabled Jun 20 19:08:02.886621 kernel: ACPI: Using GIC for interrupt routing Jun 20 19:08:02.886628 kernel: ACPI: MCFG table detected, 1 entries Jun 20 19:08:02.886635 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jun 20 19:08:02.886642 kernel: printk: console [ttyAMA0] enabled Jun 20 19:08:02.886650 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 20 19:08:02.886793 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 20 19:08:02.886884 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jun 20 19:08:02.886954 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jun 20 19:08:02.887017 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jun 20 19:08:02.887079 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jun 20 19:08:02.887088 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jun 20 19:08:02.887109 kernel: PCI host bridge to bus 0000:00 Jun 20 19:08:02.887430 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jun 20 19:08:02.887515 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jun 20 19:08:02.887573 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jun 20 19:08:02.887630 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 20 19:08:02.887712 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jun 20 19:08:02.887802 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Jun 20 19:08:02.887925 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Jun 20 19:08:02.887993 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Jun 20 19:08:02.888078 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jun 20 19:08:02.888143 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Jun 20 
19:08:02.890354 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jun 20 19:08:02.890436 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Jun 20 19:08:02.890526 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jun 20 19:08:02.890604 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Jun 20 19:08:02.890687 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jun 20 19:08:02.890766 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Jun 20 19:08:02.890873 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jun 20 19:08:02.890955 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Jun 20 19:08:02.891048 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jun 20 19:08:02.891122 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Jun 20 19:08:02.891228 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jun 20 19:08:02.891306 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Jun 20 19:08:02.891389 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jun 20 19:08:02.891465 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Jun 20 19:08:02.891552 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Jun 20 19:08:02.891626 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Jun 20 19:08:02.891714 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Jun 20 19:08:02.891789 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] Jun 20 19:08:02.891899 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Jun 20 19:08:02.891982 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Jun 20 19:08:02.892064 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jun 20 19:08:02.892138 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Jun 20 19:08:02.892265 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Jun 20 19:08:02.892343 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Jun 20 19:08:02.892429 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Jun 20 19:08:02.892495 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Jun 20 19:08:02.892560 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Jun 20 19:08:02.892638 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Jun 20 19:08:02.892708 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Jun 20 19:08:02.892783 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Jun 20 19:08:02.892906 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff] Jun 20 19:08:02.892978 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Jun 20 19:08:02.893053 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Jun 20 19:08:02.893119 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Jun 20 19:08:02.894878 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Jun 20 19:08:02.894975 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Jun 20 19:08:02.895059 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Jun 20 19:08:02.895128 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Jun 20 19:08:02.895229 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Jun 20 19:08:02.895299 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jun 20 
19:08:02.895370 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Jun 20 19:08:02.895433 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Jun 20 19:08:02.895498 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jun 20 19:08:02.895561 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jun 20 19:08:02.895624 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Jun 20 19:08:02.895691 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jun 20 19:08:02.895754 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Jun 20 19:08:02.895854 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jun 20 19:08:02.895931 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jun 20 19:08:02.895997 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Jun 20 19:08:02.896060 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jun 20 19:08:02.896128 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jun 20 19:08:02.896234 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Jun 20 19:08:02.896304 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Jun 20 19:08:02.896377 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jun 20 19:08:02.896442 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Jun 20 19:08:02.896503 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Jun 20 19:08:02.896569 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jun 20 19:08:02.896633 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Jun 20 19:08:02.896697 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Jun 20 19:08:02.896766 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jun 20 19:08:02.896842 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Jun 20 19:08:02.896914 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Jun 20 19:08:02.896985 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jun 20 19:08:02.897068 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Jun 20 19:08:02.897142 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Jun 20 19:08:02.897323 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff] Jun 20 19:08:02.897390 kernel: pci 0000:00:02.0: BAR 15: 
assigned [mem 0x8000000000-0x80001fffff 64bit pref] Jun 20 19:08:02.897451 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Jun 20 19:08:02.897518 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Jun 20 19:08:02.897584 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Jun 20 19:08:02.897647 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Jun 20 19:08:02.897709 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Jun 20 19:08:02.897771 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Jun 20 19:08:02.897869 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Jun 20 19:08:02.897936 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Jun 20 19:08:02.898005 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Jun 20 19:08:02.898066 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Jun 20 19:08:02.898129 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Jun 20 19:08:02.898209 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Jun 20 19:08:02.898273 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Jun 20 19:08:02.898336 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Jun 20 19:08:02.898402 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Jun 20 19:08:02.898475 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Jun 20 19:08:02.898554 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Jun 20 19:08:02.898622 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Jun 20 19:08:02.898695 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Jun 20 19:08:02.898756 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Jun 20 19:08:02.898990 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Jun 20 19:08:02.899067 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Jun 20 19:08:02.899145 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Jun 20 19:08:02.899230 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Jun 20 19:08:02.899297 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Jun 20 19:08:02.899362 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Jun 20 19:08:02.899431 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Jun 20 19:08:02.899495 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Jun 20 19:08:02.899560 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Jun 20 19:08:02.899625 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Jun 20 19:08:02.899695 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Jun 20 19:08:02.899761 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Jun 20 19:08:02.899870 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Jun 20 19:08:02.899944 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Jun 20 19:08:02.900012 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Jun 20 19:08:02.900077 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff] Jun 20 19:08:02.900148 kernel: pci 0000:00:04.0: BAR 0: assigned [io 
0xa000-0xa007] Jun 20 19:08:02.900625 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Jun 20 19:08:02.900710 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jun 20 19:08:02.900779 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Jun 20 19:08:02.900866 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jun 20 19:08:02.900931 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jun 20 19:08:02.900994 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Jun 20 19:08:02.901056 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Jun 20 19:08:02.901126 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Jun 20 19:08:02.901250 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jun 20 19:08:02.901314 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jun 20 19:08:02.901375 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Jun 20 19:08:02.901436 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Jun 20 19:08:02.901504 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Jun 20 19:08:02.901571 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Jun 20 19:08:02.901633 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jun 20 19:08:02.901694 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jun 20 19:08:02.901756 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Jun 20 19:08:02.901825 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Jun 20 19:08:02.901902 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Jun 20 19:08:02.901966 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jun 20 19:08:02.902028 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jun 20 19:08:02.902095 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Jun 20 19:08:02.904251 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Jun 20 19:08:02.904365 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Jun 20 19:08:02.904434 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff] Jun 20 19:08:02.904498 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jun 20 19:08:02.904560 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jun 20 19:08:02.904621 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Jun 20 19:08:02.904684 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Jun 20 19:08:02.904762 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Jun 20 19:08:02.904861 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Jun 20 19:08:02.905719 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jun 20 19:08:02.905807 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jun 20 19:08:02.905928 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Jun 20 19:08:02.906002 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Jun 20 19:08:02.906079 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Jun 20 19:08:02.906175 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Jun 20 19:08:02.906280 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Jun 20 19:08:02.906353 
kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jun 20 19:08:02.906420 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jun 20 19:08:02.906488 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Jun 20 19:08:02.906556 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Jun 20 19:08:02.906626 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jun 20 19:08:02.906696 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jun 20 19:08:02.906763 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Jun 20 19:08:02.906849 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Jun 20 19:08:02.906927 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jun 20 19:08:02.906994 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Jun 20 19:08:02.908032 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Jun 20 19:08:02.908131 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Jun 20 19:08:02.908251 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jun 20 19:08:02.908312 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jun 20 19:08:02.908368 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jun 20 19:08:02.908449 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jun 20 19:08:02.908519 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Jun 20 19:08:02.908576 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Jun 20 19:08:02.908642 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Jun 20 19:08:02.908701 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Jun 20 19:08:02.908757 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Jun 20 19:08:02.908845 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Jun 20 19:08:02.908927 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Jun 20 19:08:02.908987 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Jun 20 19:08:02.909051 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jun 20 19:08:02.909110 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Jun 20 19:08:02.909264 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Jun 20 19:08:02.909345 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Jun 20 19:08:02.909410 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Jun 20 19:08:02.909470 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Jun 20 19:08:02.909534 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Jun 20 19:08:02.909591 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Jun 20 19:08:02.909652 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Jun 20 19:08:02.909717 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Jun 20 19:08:02.909775 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Jun 20 19:08:02.909847 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Jun 20 19:08:02.909917 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Jun 20 19:08:02.909977 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Jun 20 19:08:02.910035 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Jun 20 19:08:02.910103 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Jun 20 19:08:02.910194 kernel: 
pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Jun 20 19:08:02.910257 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Jun 20 19:08:02.910267 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jun 20 19:08:02.910274 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jun 20 19:08:02.910282 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jun 20 19:08:02.910290 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jun 20 19:08:02.910300 kernel: iommu: Default domain type: Translated Jun 20 19:08:02.910308 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jun 20 19:08:02.910315 kernel: efivars: Registered efivars operations Jun 20 19:08:02.910323 kernel: vgaarb: loaded Jun 20 19:08:02.910331 kernel: clocksource: Switched to clocksource arch_sys_counter Jun 20 19:08:02.910340 kernel: VFS: Disk quotas dquot_6.6.0 Jun 20 19:08:02.910348 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 20 19:08:02.910356 kernel: pnp: PnP ACPI init Jun 20 19:08:02.910432 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jun 20 19:08:02.910445 kernel: pnp: PnP ACPI: found 1 devices Jun 20 19:08:02.910453 kernel: NET: Registered PF_INET protocol family Jun 20 19:08:02.910461 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 20 19:08:02.910469 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 20 19:08:02.910477 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 20 19:08:02.910484 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 20 19:08:02.910492 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 20 19:08:02.910499 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 20 19:08:02.910509 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 20 19:08:02.910516 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 20 19:08:02.910524 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 20 19:08:02.910598 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jun 20 19:08:02.910609 kernel: PCI: CLS 0 bytes, default 64 Jun 20 19:08:02.910617 kernel: kvm [1]: HYP mode not available Jun 20 19:08:02.910624 kernel: Initialise system trusted keyrings Jun 20 19:08:02.910631 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 20 19:08:02.910639 kernel: Key type asymmetric registered Jun 20 19:08:02.910646 kernel: Asymmetric key parser 'x509' registered Jun 20 19:08:02.910666 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jun 20 19:08:02.910674 kernel: io scheduler mq-deadline registered Jun 20 19:08:02.910681 kernel: io scheduler kyber registered Jun 20 19:08:02.910689 kernel: io scheduler bfq registered Jun 20 19:08:02.910697 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jun 20 19:08:02.910767 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jun 20 19:08:02.910872 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jun 20 19:08:02.910951 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jun 20 19:08:02.911019 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jun 20 19:08:02.911084 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Jun 20 19:08:02.911149 kernel: 
pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jun 20 19:08:02.911263 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jun 20 19:08:02.911329 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jun 20 19:08:02.911396 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jun 20 19:08:02.911461 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jun 20 19:08:02.911523 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jun 20 19:08:02.911585 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jun 20 19:08:02.911649 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jun 20 19:08:02.911711 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jun 20 19:08:02.911775 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jun 20 19:08:02.911857 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jun 20 19:08:02.911924 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jun 20 19:08:02.911987 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jun 20 19:08:02.912053 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jun 20 19:08:02.912115 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jun 20 19:08:02.912237 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jun 20 19:08:02.912318 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jun 20 19:08:02.912383 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jun 20 19:08:02.912444 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jun 20 19:08:02.912454 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jun 20 19:08:02.912516 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jun 20 19:08:02.912579 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jun 20 19:08:02.912644 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jun 20 19:08:02.912654 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jun 20 19:08:02.912662 kernel: ACPI: button: Power Button [PWRB] Jun 20 19:08:02.912669 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jun 20 19:08:02.912736 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jun 20 19:08:02.912808 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jun 20 19:08:02.912848 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 20 19:08:02.912857 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jun 20 19:08:02.912939 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jun 20 19:08:02.912950 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jun 20 19:08:02.912958 kernel: thunder_xcv, ver 1.0 Jun 20 19:08:02.912965 kernel: thunder_bgx, ver 1.0 Jun 20 19:08:02.912973 kernel: nicpf, ver 1.0 Jun 20 19:08:02.912980 kernel: nicvf, ver 1.0 Jun 20 19:08:02.913054 kernel: rtc-efi rtc-efi.0: registered as 
rtc0 Jun 20 19:08:02.913115 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-06-20T19:08:02 UTC (1750446482) Jun 20 19:08:02.913127 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 20 19:08:02.913135 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jun 20 19:08:02.913143 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jun 20 19:08:02.913150 kernel: watchdog: Hard watchdog permanently disabled Jun 20 19:08:02.913193 kernel: NET: Registered PF_INET6 protocol family Jun 20 19:08:02.913200 kernel: Segment Routing with IPv6 Jun 20 19:08:02.913208 kernel: In-situ OAM (IOAM) with IPv6 Jun 20 19:08:02.913215 kernel: NET: Registered PF_PACKET protocol family Jun 20 19:08:02.913223 kernel: Key type dns_resolver registered Jun 20 19:08:02.913233 kernel: registered taskstats version 1 Jun 20 19:08:02.913241 kernel: Loading compiled-in X.509 certificates Jun 20 19:08:02.913248 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.94-flatcar: 8506faa781fda315da94c2790de0e5c860361c93' Jun 20 19:08:02.913256 kernel: Key type .fscrypt registered Jun 20 19:08:02.913263 kernel: Key type fscrypt-provisioning registered Jun 20 19:08:02.913271 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 20 19:08:02.913278 kernel: ima: Allocated hash algorithm: sha1 Jun 20 19:08:02.913285 kernel: ima: No architecture policies found Jun 20 19:08:02.913293 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jun 20 19:08:02.913302 kernel: clk: Disabling unused clocks Jun 20 19:08:02.913310 kernel: Freeing unused kernel memory: 38336K Jun 20 19:08:02.913317 kernel: Run /init as init process Jun 20 19:08:02.913325 kernel: with arguments: Jun 20 19:08:02.913332 kernel: /init Jun 20 19:08:02.913339 kernel: with environment: Jun 20 19:08:02.913346 kernel: HOME=/ Jun 20 19:08:02.913354 kernel: TERM=linux Jun 20 19:08:02.913361 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 20 19:08:02.913372 systemd[1]: Successfully made /usr/ read-only. Jun 20 19:08:02.913382 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 19:08:02.913391 systemd[1]: Detected virtualization kvm. Jun 20 19:08:02.913399 systemd[1]: Detected architecture arm64. Jun 20 19:08:02.913406 systemd[1]: Running in initrd. Jun 20 19:08:02.913414 systemd[1]: No hostname configured, using default hostname. Jun 20 19:08:02.913422 systemd[1]: Hostname set to . Jun 20 19:08:02.913432 systemd[1]: Initializing machine ID from VM UUID. Jun 20 19:08:02.913440 systemd[1]: Queued start job for default target initrd.target. Jun 20 19:08:02.913448 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:08:02.913457 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:08:02.913465 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 20 19:08:02.913473 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 19:08:02.913482 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
Jun 20 19:08:02.913492 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 20 19:08:02.913501 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 20 19:08:02.913509 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 20 19:08:02.913517 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:08:02.913525 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:08:02.913533 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:08:02.913542 systemd[1]: Reached target slices.target - Slice Units. Jun 20 19:08:02.913549 systemd[1]: Reached target swap.target - Swaps. Jun 20 19:08:02.913559 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:08:02.913568 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 19:08:02.913576 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 19:08:02.913584 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 20 19:08:02.913592 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 20 19:08:02.913600 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:08:02.913608 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 19:08:02.913618 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:08:02.913626 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 19:08:02.913636 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 20 19:08:02.913644 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 19:08:02.913652 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 20 19:08:02.913660 systemd[1]: Starting systemd-fsck-usr.service... Jun 20 19:08:02.913668 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 19:08:02.913677 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 19:08:02.913685 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:08:02.913693 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 20 19:08:02.913703 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:08:02.913739 systemd-journald[236]: Collecting audit messages is disabled. Jun 20 19:08:02.913761 systemd[1]: Finished systemd-fsck-usr.service. Jun 20 19:08:02.913769 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 20 19:08:02.913778 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 20 19:08:02.913786 kernel: Bridge firewalling registered Jun 20 19:08:02.913793 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:08:02.913802 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 19:08:02.913810 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jun 20 19:08:02.913831 systemd-journald[236]: Journal started Jun 20 19:08:02.913851 systemd-journald[236]: Runtime Journal (/run/log/journal/6c5fb4a6c9b246f4a3253d4aab4df554) is 8M, max 76.6M, 68.6M free. Jun 20 19:08:02.879992 systemd-modules-load[237]: Inserted module 'overlay' Jun 20 19:08:02.907429 systemd-modules-load[237]: Inserted module 'br_netfilter' Jun 20 19:08:02.922197 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 19:08:02.927230 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:08:02.932480 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 19:08:02.937231 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 19:08:02.943565 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:08:02.953438 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 20 19:08:02.959368 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 19:08:02.960994 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:08:02.964420 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:08:02.966173 dracut-cmdline[267]: dracut-dracut-053 Jun 20 19:08:02.968428 dracut-cmdline[267]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=8a081d870e25287d755f6d580d3ffafd8d53f08173c09683922f11f1a622a40e Jun 20 19:08:02.984437 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:08:02.992616 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 19:08:03.024924 systemd-resolved[300]: Positive Trust Anchors: Jun 20 19:08:03.025636 systemd-resolved[300]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:08:03.026471 systemd-resolved[300]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:08:03.035056 systemd-resolved[300]: Defaulting to hostname 'linux'. Jun 20 19:08:03.036065 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:08:03.037034 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:08:03.055223 kernel: SCSI subsystem initialized Jun 20 19:08:03.060201 kernel: Loading iSCSI transport class v2.0-870. Jun 20 19:08:03.068211 kernel: iscsi: registered transport (tcp) Jun 20 19:08:03.081341 kernel: iscsi: registered transport (qla4xxx) Jun 20 19:08:03.081452 kernel: QLogic iSCSI HBA Driver Jun 20 19:08:03.144040 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jun 20 19:08:03.150396 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 20 19:08:03.170358 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 20 19:08:03.170423 kernel: device-mapper: uevent: version 1.0.3 Jun 20 19:08:03.171221 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 20 19:08:03.223232 kernel: raid6: neonx8 gen() 15666 MB/s Jun 20 19:08:03.240197 kernel: raid6: neonx4 gen() 15719 MB/s Jun 20 19:08:03.257205 kernel: raid6: neonx2 gen() 13125 MB/s Jun 20 19:08:03.274214 kernel: raid6: neonx1 gen() 10456 MB/s Jun 20 19:08:03.291213 kernel: raid6: int64x8 gen() 6757 MB/s Jun 20 19:08:03.308261 kernel: raid6: int64x4 gen() 7286 MB/s Jun 20 19:08:03.325226 kernel: raid6: int64x2 gen() 6019 MB/s Jun 20 19:08:03.342221 kernel: raid6: int64x1 gen() 5034 MB/s Jun 20 19:08:03.342300 kernel: raid6: using algorithm neonx4 gen() 15719 MB/s Jun 20 19:08:03.359216 kernel: raid6: .... xor() 12256 MB/s, rmw enabled Jun 20 19:08:03.359297 kernel: raid6: using neon recovery algorithm Jun 20 19:08:03.364422 kernel: xor: measuring software checksum speed Jun 20 19:08:03.364483 kernel: 8regs : 21590 MB/sec Jun 20 19:08:03.364503 kernel: 32regs : 21636 MB/sec Jun 20 19:08:03.365259 kernel: arm64_neon : 27974 MB/sec Jun 20 19:08:03.365329 kernel: xor: using function: arm64_neon (27974 MB/sec) Jun 20 19:08:03.417282 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 20 19:08:03.434546 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 20 19:08:03.442482 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:08:03.460614 systemd-udevd[457]: Using default interface naming scheme 'v255'. Jun 20 19:08:03.464505 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:08:03.475380 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 20 19:08:03.493093 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation Jun 20 19:08:03.532316 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 19:08:03.538431 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 19:08:03.590906 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:08:03.599937 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 20 19:08:03.622998 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 20 19:08:03.625694 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 19:08:03.627614 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:08:03.629631 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 19:08:03.635517 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 20 19:08:03.659440 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jun 20 19:08:03.695329 kernel: scsi host0: Virtio SCSI HBA Jun 20 19:08:03.695561 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jun 20 19:08:03.697193 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jun 20 19:08:03.727191 kernel: ACPI: bus type USB registered Jun 20 19:08:03.732233 kernel: usbcore: registered new interface driver usbfs Jun 20 19:08:03.732284 kernel: usbcore: registered new interface driver hub Jun 20 19:08:03.737922 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 20 19:08:03.742292 kernel: usbcore: registered new device driver usb Jun 20 19:08:03.740862 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:08:03.741735 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 19:08:03.743339 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:08:03.748334 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:08:03.749717 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:08:03.762766 kernel: sr 0:0:0:0: Power-on or device reset occurred Jun 20 19:08:03.763018 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jun 20 19:08:03.763127 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 20 19:08:03.762731 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:08:03.772106 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jun 20 19:08:03.780621 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jun 20 19:08:03.780833 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jun 20 19:08:03.783192 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jun 20 19:08:03.783405 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jun 20 19:08:03.786181 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jun 20 19:08:03.786366 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jun 20 19:08:03.784489 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:08:03.787589 kernel: hub 1-0:1.0: USB hub found Jun 20 19:08:03.787763 kernel: hub 1-0:1.0: 4 ports detected Jun 20 19:08:03.790402 kernel: sd 0:0:0:1: Power-on or device reset occurred Jun 20 19:08:03.790583 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jun 20 19:08:03.790854 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 19:08:03.793296 kernel: sd 0:0:0:1: [sda] Write Protect is off Jun 20 19:08:03.793463 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jun 20 19:08:03.793547 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jun 20 19:08:03.795373 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jun 20 19:08:03.795581 kernel: hub 2-0:1.0: USB hub found Jun 20 19:08:03.795680 kernel: hub 2-0:1.0: 4 ports detected Jun 20 19:08:03.799609 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 20 19:08:03.799652 kernel: GPT:17805311 != 80003071 Jun 20 19:08:03.799662 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 20 19:08:03.799672 kernel: GPT:17805311 != 80003071 Jun 20 19:08:03.799681 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jun 20 19:08:03.799690 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 19:08:03.801178 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jun 20 19:08:03.825281 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:08:03.852862 kernel: BTRFS: device fsid c1b254aa-fc5c-4606-9f4d-9a81b9ab3a0f devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (518) Jun 20 19:08:03.857190 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (503) Jun 20 19:08:03.875997 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jun 20 19:08:03.887343 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jun 20 19:08:03.896398 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jun 20 19:08:03.903232 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jun 20 19:08:03.903872 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jun 20 19:08:03.914438 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 20 19:08:03.921748 disk-uuid[574]: Primary Header is updated. Jun 20 19:08:03.921748 disk-uuid[574]: Secondary Entries is updated. Jun 20 19:08:03.921748 disk-uuid[574]: Secondary Header is updated. Jun 20 19:08:03.928178 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 19:08:04.038924 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jun 20 19:08:04.173780 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jun 20 19:08:04.173883 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jun 20 19:08:04.174081 kernel: usbcore: registered new interface driver usbhid Jun 20 19:08:04.174092 kernel: usbhid: USB HID core driver Jun 20 19:08:04.281273 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jun 20 19:08:04.410182 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jun 20 19:08:04.464235 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jun 20 19:08:04.942485 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 19:08:04.944203 disk-uuid[575]: The operation has completed successfully. Jun 20 19:08:05.004474 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 20 19:08:05.004578 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 20 19:08:05.038451 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 20 19:08:05.045395 sh[590]: Success Jun 20 19:08:05.058182 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jun 20 19:08:05.126714 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 20 19:08:05.134300 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 20 19:08:05.143193 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
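The GPT warnings above ("GPT:17805311 != 80003071") reflect a backup header that is not on the disk's last LBA: the image was built smaller than the provisioned 41.0 GB volume, so the alternate GPT header still sits where the original image ended. The disk-uuid entries above then rewrite the primary and secondary headers. A short Python sketch of the arithmetic behind the warning, using only numbers from the log; the "image later grown onto a larger disk" reading is an interpretation consistent with those entries, not something the log states outright.

```python
# Arithmetic behind "GPT:17805311 != 80003071" for /dev/sda in the log above.
# Numbers come from the log; this is not a reproduction of the kernel's GPT parser.

SECTOR_SIZE = 512
total_sectors = 80003072                      # "[sda] 80003072 512-byte logical blocks"
expected_alt_header_lba = total_sectors - 1   # a well-formed GPT keeps the backup header on the last LBA
found_alt_header_lba = 17805311               # where this image's backup header actually is

assert expected_alt_header_lba == 80003071    # matches "GPT:17805311 != 80003071"

image_size_gib = (found_alt_header_lba + 1) * SECTOR_SIZE / 2**30
disk_size_gib = total_sectors * SECTOR_SIZE / 2**30
print(f"backup GPT header at LBA {found_alt_header_lba}, expected at LBA {expected_alt_header_lba}")
print(f"original image ~{image_size_gib:.1f} GiB on a {disk_size_gib:.1f} GiB disk")
```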
Jun 20 19:08:05.162350 kernel: BTRFS info (device dm-0): first mount of filesystem c1b254aa-fc5c-4606-9f4d-9a81b9ab3a0f Jun 20 19:08:05.162426 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jun 20 19:08:05.162447 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 20 19:08:05.163290 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 20 19:08:05.163328 kernel: BTRFS info (device dm-0): using free space tree Jun 20 19:08:05.171246 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jun 20 19:08:05.173362 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 20 19:08:05.174765 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 20 19:08:05.188248 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 20 19:08:05.191421 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 20 19:08:05.213551 kernel: BTRFS info (device sda6): first mount of filesystem 068a5250-b7b4-4dc6-8e6c-a1610cec1941 Jun 20 19:08:05.213613 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 20 19:08:05.213627 kernel: BTRFS info (device sda6): using free space tree Jun 20 19:08:05.218209 kernel: BTRFS info (device sda6): enabling ssd optimizations Jun 20 19:08:05.218274 kernel: BTRFS info (device sda6): auto enabling async discard Jun 20 19:08:05.224335 kernel: BTRFS info (device sda6): last unmount of filesystem 068a5250-b7b4-4dc6-8e6c-a1610cec1941 Jun 20 19:08:05.227484 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 20 19:08:05.234490 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 20 19:08:05.319210 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 19:08:05.329631 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 19:08:05.343350 ignition[679]: Ignition 2.20.0 Jun 20 19:08:05.343360 ignition[679]: Stage: fetch-offline Jun 20 19:08:05.343400 ignition[679]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:08:05.343408 ignition[679]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jun 20 19:08:05.343566 ignition[679]: parsed url from cmdline: "" Jun 20 19:08:05.343570 ignition[679]: no config URL provided Jun 20 19:08:05.343575 ignition[679]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 19:08:05.343581 ignition[679]: no config at "/usr/lib/ignition/user.ign" Jun 20 19:08:05.347186 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 19:08:05.343587 ignition[679]: failed to fetch config: resource requires networking Jun 20 19:08:05.343784 ignition[679]: Ignition finished successfully Jun 20 19:08:05.362556 systemd-networkd[774]: lo: Link UP Jun 20 19:08:05.362570 systemd-networkd[774]: lo: Gained carrier Jun 20 19:08:05.367436 systemd-networkd[774]: Enumeration completed Jun 20 19:08:05.367886 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 19:08:05.368579 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:08:05.368582 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jun 20 19:08:05.369964 systemd[1]: Reached target network.target - Network. Jun 20 19:08:05.371026 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:08:05.371029 systemd-networkd[774]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:08:05.372301 systemd-networkd[774]: eth0: Link UP Jun 20 19:08:05.372304 systemd-networkd[774]: eth0: Gained carrier Jun 20 19:08:05.372312 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:08:05.376382 systemd-networkd[774]: eth1: Link UP Jun 20 19:08:05.376385 systemd-networkd[774]: eth1: Gained carrier Jun 20 19:08:05.376395 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:08:05.381501 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 20 19:08:05.395552 ignition[778]: Ignition 2.20.0 Jun 20 19:08:05.395569 ignition[778]: Stage: fetch Jun 20 19:08:05.395793 ignition[778]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:08:05.395855 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jun 20 19:08:05.395978 ignition[778]: parsed url from cmdline: "" Jun 20 19:08:05.395982 ignition[778]: no config URL provided Jun 20 19:08:05.395988 ignition[778]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 19:08:05.395999 ignition[778]: no config at "/usr/lib/ignition/user.ign" Jun 20 19:08:05.396209 ignition[778]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jun 20 19:08:05.397677 ignition[778]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jun 20 19:08:05.407321 systemd-networkd[774]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 20 19:08:05.441286 systemd-networkd[774]: eth0: DHCPv4 address 49.12.190.100/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jun 20 19:08:05.598877 ignition[778]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jun 20 19:08:05.605884 ignition[778]: GET result: OK Jun 20 19:08:05.605996 ignition[778]: parsing config with SHA512: 2ec9a743423312a27a1e875673dce7701c276469c5dd68e2a52eebd9d961cec2d7a802a4d458143f58a83f1d42185dc83c440070ae1f85bad43dc453774fe710 Jun 20 19:08:05.613694 unknown[778]: fetched base config from "system" Jun 20 19:08:05.613706 unknown[778]: fetched base config from "system" Jun 20 19:08:05.614735 ignition[778]: fetch: fetch complete Jun 20 19:08:05.613716 unknown[778]: fetched user config from "hetzner" Jun 20 19:08:05.614744 ignition[778]: fetch: fetch passed Jun 20 19:08:05.614840 ignition[778]: Ignition finished successfully Jun 20 19:08:05.617244 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 20 19:08:05.621430 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 20 19:08:05.638001 ignition[785]: Ignition 2.20.0 Jun 20 19:08:05.638009 ignition[785]: Stage: kargs Jun 20 19:08:05.638221 ignition[785]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:08:05.638232 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jun 20 19:08:05.641278 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
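The fetch-stage entries above show Ignition retrying the Hetzner metadata endpoint: attempt #1 fails with "network is unreachable" because DHCP has not yet configured eth0, attempt #2 succeeds once addresses are acquired, and the retrieved config is logged with its SHA512 digest. A rough Python illustration of that retry-and-hash flow; Ignition itself is written in Go, so this is only a sketch, with the endpoint URL and the failure/success pattern taken from the log.

```python
# Sketch of the fetch behaviour seen in the Ignition log entries above:
# retry a GET against the Hetzner metadata service until networking is up,
# then hash the returned user data. Illustrative only.

import hashlib
import time
import urllib.error
import urllib.request

USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"  # URL as logged by Ignition

def fetch_userdata(url: str = USERDATA_URL, attempts: int = 5, delay: float = 2.0) -> bytes:
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as err:
            # Attempt #1 in the log fails this way ("network is unreachable")
            # before DHCP configures eth0; a later attempt succeeds.
            print(f"GET {url}: attempt #{attempt} failed: {err}")
            time.sleep(delay)
    raise RuntimeError("could not reach the metadata service")

if __name__ == "__main__":
    config = fetch_userdata()
    # Ignition logs the SHA512 of the config it parsed; the same digest can be
    # recomputed from the raw bytes.
    print("parsing config with SHA512:", hashlib.sha512(config).hexdigest())
```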
Jun 20 19:08:05.639342 ignition[785]: kargs: kargs passed Jun 20 19:08:05.639409 ignition[785]: Ignition finished successfully Jun 20 19:08:05.646382 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 20 19:08:05.661518 ignition[792]: Ignition 2.20.0 Jun 20 19:08:05.661536 ignition[792]: Stage: disks Jun 20 19:08:05.661743 ignition[792]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:08:05.661755 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jun 20 19:08:05.662770 ignition[792]: disks: disks passed Jun 20 19:08:05.666934 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 20 19:08:05.662887 ignition[792]: Ignition finished successfully Jun 20 19:08:05.668117 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 20 19:08:05.670898 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 20 19:08:05.672624 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 19:08:05.673731 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 19:08:05.674717 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:08:05.682502 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 20 19:08:05.700505 systemd-fsck[801]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jun 20 19:08:05.705331 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 20 19:08:05.713413 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 20 19:08:05.762463 kernel: EXT4-fs (sda9): mounted filesystem f172a629-efc5-4850-a631-f3c62b46134c r/w with ordered data mode. Quota mode: none. Jun 20 19:08:05.763336 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 20 19:08:05.764741 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 20 19:08:05.775448 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 19:08:05.780321 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 20 19:08:05.783773 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 20 19:08:05.786567 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 20 19:08:05.786675 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 19:08:05.792816 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 20 19:08:05.794537 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (809) Jun 20 19:08:05.797066 kernel: BTRFS info (device sda6): first mount of filesystem 068a5250-b7b4-4dc6-8e6c-a1610cec1941 Jun 20 19:08:05.797135 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 20 19:08:05.797151 kernel: BTRFS info (device sda6): using free space tree Jun 20 19:08:05.798759 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 20 19:08:05.806243 kernel: BTRFS info (device sda6): enabling ssd optimizations Jun 20 19:08:05.806317 kernel: BTRFS info (device sda6): auto enabling async discard Jun 20 19:08:05.823068 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 20 19:08:05.870141 coreos-metadata[811]: Jun 20 19:08:05.870 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jun 20 19:08:05.872563 coreos-metadata[811]: Jun 20 19:08:05.872 INFO Fetch successful Jun 20 19:08:05.875145 coreos-metadata[811]: Jun 20 19:08:05.874 INFO wrote hostname ci-4230-2-0-5-45318d0d95 to /sysroot/etc/hostname Jun 20 19:08:05.877723 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 20 19:08:05.882060 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Jun 20 19:08:05.888006 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Jun 20 19:08:05.893533 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Jun 20 19:08:05.899414 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Jun 20 19:08:06.011289 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 20 19:08:06.016422 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 20 19:08:06.021298 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 20 19:08:06.028185 kernel: BTRFS info (device sda6): last unmount of filesystem 068a5250-b7b4-4dc6-8e6c-a1610cec1941 Jun 20 19:08:06.060542 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 20 19:08:06.062354 ignition[926]: INFO : Ignition 2.20.0 Jun 20 19:08:06.062354 ignition[926]: INFO : Stage: mount Jun 20 19:08:06.062354 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:08:06.062354 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jun 20 19:08:06.068045 ignition[926]: INFO : mount: mount passed Jun 20 19:08:06.068045 ignition[926]: INFO : Ignition finished successfully Jun 20 19:08:06.064745 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 20 19:08:06.071365 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 20 19:08:06.164125 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 20 19:08:06.171620 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 19:08:06.184714 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (937) Jun 20 19:08:06.184769 kernel: BTRFS info (device sda6): first mount of filesystem 068a5250-b7b4-4dc6-8e6c-a1610cec1941 Jun 20 19:08:06.184781 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 20 19:08:06.186182 kernel: BTRFS info (device sda6): using free space tree Jun 20 19:08:06.190539 kernel: BTRFS info (device sda6): enabling ssd optimizations Jun 20 19:08:06.190605 kernel: BTRFS info (device sda6): auto enabling async discard Jun 20 19:08:06.194657 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 20 19:08:06.217636 ignition[954]: INFO : Ignition 2.20.0 Jun 20 19:08:06.217636 ignition[954]: INFO : Stage: files Jun 20 19:08:06.218750 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:08:06.218750 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jun 20 19:08:06.223490 ignition[954]: DEBUG : files: compiled without relabeling support, skipping Jun 20 19:08:06.223490 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 20 19:08:06.223490 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 20 19:08:06.227553 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 20 19:08:06.227553 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 20 19:08:06.227553 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 20 19:08:06.225811 unknown[954]: wrote ssh authorized keys file for user: core Jun 20 19:08:06.230829 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jun 20 19:08:06.230829 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jun 20 19:08:06.327558 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 20 19:08:06.463309 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jun 20 19:08:06.463309 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 19:08:06.465711 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jun 20 19:08:06.760656 systemd-networkd[774]: eth0: Gained IPv6LL Jun 20 19:08:06.888361 systemd-networkd[774]: eth1: Gained IPv6LL Jun 20 19:08:06.977122 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 20 19:08:07.077658 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 19:08:07.077658 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 20 19:08:07.080896 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 20 19:08:07.080896 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 20 19:08:07.080896 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 20 19:08:07.080896 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 19:08:07.080896 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 19:08:07.080896 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 19:08:07.080896 ignition[954]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 19:08:07.080896 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 19:08:07.080896 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 19:08:07.080896 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jun 20 19:08:07.080896 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jun 20 19:08:07.080896 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jun 20 19:08:07.080896 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jun 20 19:08:07.699253 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 20 19:08:08.107824 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jun 20 19:08:08.107824 ignition[954]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jun 20 19:08:08.112022 ignition[954]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 19:08:08.112022 ignition[954]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 19:08:08.112022 ignition[954]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jun 20 19:08:08.112022 ignition[954]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jun 20 19:08:08.112022 ignition[954]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jun 20 19:08:08.112022 ignition[954]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jun 20 19:08:08.112022 ignition[954]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jun 20 19:08:08.112022 ignition[954]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jun 20 19:08:08.112022 ignition[954]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jun 20 19:08:08.112022 ignition[954]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 20 19:08:08.112022 ignition[954]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 20 19:08:08.112022 ignition[954]: INFO : files: files passed Jun 20 19:08:08.112022 ignition[954]: INFO : Ignition finished successfully Jun 20 19:08:08.112551 systemd[1]: Finished ignition-files.service - Ignition (files). 
Jun 20 19:08:08.119358 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 20 19:08:08.123760 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 20 19:08:08.127688 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 20 19:08:08.127835 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 20 19:08:08.140578 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:08:08.140578 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:08:08.143357 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:08:08.145969 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 19:08:08.146877 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 20 19:08:08.152335 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 20 19:08:08.192683 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 20 19:08:08.192848 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 20 19:08:08.194037 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 20 19:08:08.195588 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 20 19:08:08.196942 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 20 19:08:08.206402 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 20 19:08:08.220027 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 19:08:08.232497 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 20 19:08:08.246939 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:08:08.247706 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:08:08.249316 systemd[1]: Stopped target timers.target - Timer Units. Jun 20 19:08:08.251100 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 20 19:08:08.251305 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 19:08:08.252838 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 20 19:08:08.254486 systemd[1]: Stopped target basic.target - Basic System. Jun 20 19:08:08.255590 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 20 19:08:08.256590 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 19:08:08.257802 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 20 19:08:08.259010 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 20 19:08:08.260110 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 19:08:08.261309 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 20 19:08:08.263053 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 20 19:08:08.263714 systemd[1]: Stopped target swap.target - Swaps. Jun 20 19:08:08.264560 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 20 19:08:08.264690 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jun 20 19:08:08.265967 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:08:08.266663 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:08:08.267777 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 20 19:08:08.268226 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:08:08.268888 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 20 19:08:08.269006 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 20 19:08:08.270504 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 20 19:08:08.270631 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 19:08:08.271967 systemd[1]: ignition-files.service: Deactivated successfully. Jun 20 19:08:08.272064 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 20 19:08:08.273014 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 20 19:08:08.273115 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 20 19:08:08.283461 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 20 19:08:08.288860 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 20 19:08:08.289419 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 20 19:08:08.289553 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:08:08.293446 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 20 19:08:08.293557 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 19:08:08.299584 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 20 19:08:08.299670 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 20 19:08:08.309775 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 20 19:08:08.311828 ignition[1007]: INFO : Ignition 2.20.0 Jun 20 19:08:08.311828 ignition[1007]: INFO : Stage: umount Jun 20 19:08:08.314810 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:08:08.314810 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jun 20 19:08:08.314810 ignition[1007]: INFO : umount: umount passed Jun 20 19:08:08.314810 ignition[1007]: INFO : Ignition finished successfully Jun 20 19:08:08.315441 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 20 19:08:08.315577 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 20 19:08:08.316336 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 20 19:08:08.316378 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 20 19:08:08.318346 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 20 19:08:08.318425 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 20 19:08:08.318966 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 20 19:08:08.319001 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 20 19:08:08.319673 systemd[1]: Stopped target network.target - Network. Jun 20 19:08:08.320527 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 20 19:08:08.320586 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 19:08:08.321545 systemd[1]: Stopped target paths.target - Path Units. 
Jun 20 19:08:08.322324 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 20 19:08:08.326233 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:08:08.328132 systemd[1]: Stopped target slices.target - Slice Units. Jun 20 19:08:08.329190 systemd[1]: Stopped target sockets.target - Socket Units. Jun 20 19:08:08.330106 systemd[1]: iscsid.socket: Deactivated successfully. Jun 20 19:08:08.330166 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 19:08:08.331107 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 20 19:08:08.331145 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 19:08:08.333000 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 20 19:08:08.333096 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 20 19:08:08.334188 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 20 19:08:08.334255 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 20 19:08:08.335651 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 20 19:08:08.336717 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 20 19:08:08.337905 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 20 19:08:08.338025 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 20 19:08:08.339247 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 20 19:08:08.339328 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 20 19:08:08.347425 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 20 19:08:08.347592 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 20 19:08:08.350834 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 20 19:08:08.351089 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 20 19:08:08.351241 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 20 19:08:08.355076 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 20 19:08:08.355517 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 20 19:08:08.355555 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:08:08.362348 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 20 19:08:08.363989 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 20 19:08:08.364105 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 19:08:08.365712 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 19:08:08.365753 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:08:08.368350 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 20 19:08:08.368408 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 20 19:08:08.369554 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 20 19:08:08.369594 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:08:08.372093 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:08:08.374401 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Jun 20 19:08:08.374463 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 20 19:08:08.387390 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 20 19:08:08.387531 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 20 19:08:08.395119 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 20 19:08:08.395401 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:08:08.397364 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 20 19:08:08.397408 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 20 19:08:08.398149 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 20 19:08:08.398240 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:08:08.399326 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 20 19:08:08.399395 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 20 19:08:08.400773 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 20 19:08:08.400825 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 20 19:08:08.402178 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 20 19:08:08.402222 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:08:08.407385 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 20 19:08:08.407943 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 20 19:08:08.407998 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:08:08.410260 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jun 20 19:08:08.410314 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 19:08:08.410969 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 20 19:08:08.411013 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:08:08.412770 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:08:08.412825 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:08:08.415866 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 20 19:08:08.415928 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 19:08:08.421634 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 20 19:08:08.421763 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 20 19:08:08.422866 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 20 19:08:08.429350 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 20 19:08:08.436570 systemd[1]: Switching root. Jun 20 19:08:08.473076 systemd-journald[236]: Journal stopped Jun 20 19:08:09.478910 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). 
Jun 20 19:08:09.478983 kernel: SELinux: policy capability network_peer_controls=1 Jun 20 19:08:09.479002 kernel: SELinux: policy capability open_perms=1 Jun 20 19:08:09.479013 kernel: SELinux: policy capability extended_socket_class=1 Jun 20 19:08:09.479023 kernel: SELinux: policy capability always_check_network=0 Jun 20 19:08:09.479032 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 20 19:08:09.479042 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 20 19:08:09.479056 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 20 19:08:09.479065 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 20 19:08:09.479075 kernel: audit: type=1403 audit(1750446488.640:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 20 19:08:09.479087 systemd[1]: Successfully loaded SELinux policy in 35.434ms. Jun 20 19:08:09.479106 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.914ms. Jun 20 19:08:09.479118 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 19:08:09.479129 systemd[1]: Detected virtualization kvm. Jun 20 19:08:09.479140 systemd[1]: Detected architecture arm64. Jun 20 19:08:09.479150 systemd[1]: Detected first boot. Jun 20 19:08:09.484445 systemd[1]: Hostname set to . Jun 20 19:08:09.484465 systemd[1]: Initializing machine ID from VM UUID. Jun 20 19:08:09.484481 zram_generator::config[1054]: No configuration found. Jun 20 19:08:09.484494 kernel: NET: Registered PF_VSOCK protocol family Jun 20 19:08:09.484505 systemd[1]: Populated /etc with preset unit settings. Jun 20 19:08:09.484518 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 20 19:08:09.484532 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 20 19:08:09.484543 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 20 19:08:09.484554 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 20 19:08:09.484565 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 20 19:08:09.484577 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 20 19:08:09.484588 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 20 19:08:09.484598 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 20 19:08:09.484609 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 20 19:08:09.484619 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 20 19:08:09.484630 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 20 19:08:09.484640 systemd[1]: Created slice user.slice - User and Session Slice. Jun 20 19:08:09.484651 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:08:09.484661 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:08:09.484677 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 20 19:08:09.484688 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Jun 20 19:08:09.484699 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 20 19:08:09.484710 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 19:08:09.484721 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jun 20 19:08:09.484732 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:08:09.484744 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 20 19:08:09.484755 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 20 19:08:09.484766 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 20 19:08:09.484790 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 20 19:08:09.484803 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:08:09.484813 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 19:08:09.484824 systemd[1]: Reached target slices.target - Slice Units. Jun 20 19:08:09.484837 systemd[1]: Reached target swap.target - Swaps. Jun 20 19:08:09.484849 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 20 19:08:09.484862 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 20 19:08:09.484873 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 20 19:08:09.484889 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:08:09.484902 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 19:08:09.484912 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:08:09.484923 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 20 19:08:09.484943 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 20 19:08:09.484955 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 20 19:08:09.484967 systemd[1]: Mounting media.mount - External Media Directory... Jun 20 19:08:09.484978 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 20 19:08:09.484989 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 20 19:08:09.484999 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 20 19:08:09.485011 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 20 19:08:09.485021 systemd[1]: Reached target machines.target - Containers. Jun 20 19:08:09.485034 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 20 19:08:09.485045 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:08:09.485060 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 19:08:09.485070 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 20 19:08:09.485081 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:08:09.485091 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 19:08:09.485104 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jun 20 19:08:09.485116 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 20 19:08:09.485129 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:08:09.485145 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 20 19:08:09.485706 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 20 19:08:09.485726 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 20 19:08:09.485740 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 20 19:08:09.485755 systemd[1]: Stopped systemd-fsck-usr.service. Jun 20 19:08:09.485769 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:08:09.485890 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 19:08:09.485910 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 19:08:09.485925 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 19:08:09.485936 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 20 19:08:09.485947 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 20 19:08:09.485958 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 19:08:09.485969 systemd[1]: verity-setup.service: Deactivated successfully. Jun 20 19:08:09.485981 systemd[1]: Stopped verity-setup.service. Jun 20 19:08:09.485992 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 20 19:08:09.486003 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 20 19:08:09.486013 kernel: loop: module loaded Jun 20 19:08:09.486029 systemd[1]: Mounted media.mount - External Media Directory. Jun 20 19:08:09.486040 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 20 19:08:09.486055 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 20 19:08:09.486067 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 20 19:08:09.486078 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:08:09.486090 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 20 19:08:09.486100 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 20 19:08:09.486110 kernel: fuse: init (API version 7.39) Jun 20 19:08:09.486124 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:08:09.486137 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:08:09.486162 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:08:09.486176 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:08:09.486189 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:08:09.486201 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:08:09.486214 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 19:08:09.486230 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jun 20 19:08:09.486243 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 20 19:08:09.486254 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 20 19:08:09.486270 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 19:08:09.486281 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 20 19:08:09.486294 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 20 19:08:09.486344 systemd-journald[1122]: Collecting audit messages is disabled. Jun 20 19:08:09.486371 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 20 19:08:09.486382 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:08:09.486393 systemd-journald[1122]: Journal started Jun 20 19:08:09.486418 systemd-journald[1122]: Runtime Journal (/run/log/journal/6c5fb4a6c9b246f4a3253d4aab4df554) is 8M, max 76.6M, 68.6M free. Jun 20 19:08:09.505216 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 20 19:08:09.505289 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:08:09.505313 kernel: ACPI: bus type drm_connector registered Jun 20 19:08:09.202356 systemd[1]: Queued start job for default target multi-user.target. Jun 20 19:08:09.217864 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jun 20 19:08:09.218843 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 20 19:08:09.507702 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 20 19:08:09.507742 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:08:09.510433 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:08:09.519292 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 20 19:08:09.528418 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 20 19:08:09.531240 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 19:08:09.534346 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 20 19:08:09.535344 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 19:08:09.535511 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 19:08:09.536335 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 20 19:08:09.536482 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 20 19:08:09.537388 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:08:09.538818 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 20 19:08:09.541436 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 20 19:08:09.543212 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 20 19:08:09.562394 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 20 19:08:09.576908 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Jun 20 19:08:09.580983 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 19:08:09.589353 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 20 19:08:09.592202 kernel: loop0: detected capacity change from 0 to 123192 Jun 20 19:08:09.606364 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 20 19:08:09.629186 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 20 19:08:09.631059 systemd-journald[1122]: Time spent on flushing to /var/log/journal/6c5fb4a6c9b246f4a3253d4aab4df554 is 76.471ms for 1146 entries. Jun 20 19:08:09.631059 systemd-journald[1122]: System Journal (/var/log/journal/6c5fb4a6c9b246f4a3253d4aab4df554) is 8M, max 584.8M, 576.8M free. Jun 20 19:08:09.722462 systemd-journald[1122]: Received client request to flush runtime journal. Jun 20 19:08:09.722507 kernel: loop1: detected capacity change from 0 to 8 Jun 20 19:08:09.648314 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:08:09.661535 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 20 19:08:09.666403 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:08:09.677072 systemd-tmpfiles[1155]: ACLs are not supported, ignoring. Jun 20 19:08:09.677085 systemd-tmpfiles[1155]: ACLs are not supported, ignoring. Jun 20 19:08:09.698322 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 19:08:09.709384 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 20 19:08:09.714325 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jun 20 19:08:09.726654 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 20 19:08:09.730322 kernel: loop2: detected capacity change from 0 to 207008 Jun 20 19:08:09.740627 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 20 19:08:09.778257 kernel: loop3: detected capacity change from 0 to 113512 Jun 20 19:08:09.793736 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 20 19:08:09.803930 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 19:08:09.813426 kernel: loop4: detected capacity change from 0 to 123192 Jun 20 19:08:09.828233 kernel: loop5: detected capacity change from 0 to 8 Jun 20 19:08:09.831547 kernel: loop6: detected capacity change from 0 to 207008 Jun 20 19:08:09.841428 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Jun 20 19:08:09.841784 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Jun 20 19:08:09.851139 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:08:09.854328 kernel: loop7: detected capacity change from 0 to 113512 Jun 20 19:08:09.877616 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jun 20 19:08:09.878503 (sd-merge)[1198]: Merged extensions into '/usr'. Jun 20 19:08:09.886878 systemd[1]: Reload requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... Jun 20 19:08:09.886897 systemd[1]: Reloading... Jun 20 19:08:09.999179 zram_generator::config[1224]: No configuration found. 
Jun 20 19:08:10.093379 ldconfig[1147]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 20 19:08:10.168619 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:08:10.230846 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 20 19:08:10.231432 systemd[1]: Reloading finished in 343 ms. Jun 20 19:08:10.245835 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 20 19:08:10.248627 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 20 19:08:10.260451 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 20 19:08:10.274472 systemd[1]: Starting ensure-sysext.service... Jun 20 19:08:10.278578 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 19:08:10.289971 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 20 19:08:10.302324 systemd[1]: Reload requested from client PID 1266 ('systemctl') (unit ensure-sysext.service)... Jun 20 19:08:10.302356 systemd[1]: Reloading... Jun 20 19:08:10.339866 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 20 19:08:10.340081 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 20 19:08:10.340726 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 20 19:08:10.340951 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Jun 20 19:08:10.341004 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Jun 20 19:08:10.349222 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:08:10.349234 systemd-tmpfiles[1267]: Skipping /boot Jun 20 19:08:10.364396 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:08:10.364411 systemd-tmpfiles[1267]: Skipping /boot Jun 20 19:08:10.410182 zram_generator::config[1296]: No configuration found. Jun 20 19:08:10.520306 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:08:10.581224 systemd[1]: Reloading finished in 278 ms. Jun 20 19:08:10.597188 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 20 19:08:10.608249 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:08:10.623601 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 19:08:10.628486 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 20 19:08:10.632170 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 20 19:08:10.636692 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 19:08:10.645525 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:08:10.648929 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Jun 20 19:08:10.651857 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:08:10.655433 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:08:10.661472 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:08:10.672494 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:08:10.673168 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:08:10.673291 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:08:10.676226 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:08:10.677108 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:08:10.677221 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:08:10.682102 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:08:10.692118 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 19:08:10.692919 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:08:10.693044 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:08:10.698463 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 20 19:08:10.701855 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 20 19:08:10.703688 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:08:10.705232 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:08:10.706467 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:08:10.707252 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:08:10.710022 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:08:10.710231 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:08:10.711597 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 19:08:10.714228 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 19:08:10.723271 systemd[1]: Finished ensure-sysext.service. Jun 20 19:08:10.732646 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:08:10.732829 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:08:10.744533 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jun 20 19:08:10.755623 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 20 19:08:10.759989 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 20 19:08:10.762948 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 20 19:08:10.772797 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 20 19:08:10.778747 systemd-udevd[1343]: Using default interface naming scheme 'v255'. Jun 20 19:08:10.786364 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 20 19:08:10.791853 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 20 19:08:10.793632 augenrules[1380]: No rules Jun 20 19:08:10.796016 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:08:10.796622 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 19:08:10.834013 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:08:10.856401 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 19:08:10.898043 systemd-resolved[1339]: Positive Trust Anchors: Jun 20 19:08:10.898076 systemd-resolved[1339]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:08:10.898109 systemd-resolved[1339]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:08:10.903858 systemd-resolved[1339]: Using system hostname 'ci-4230-2-0-5-45318d0d95'. Jun 20 19:08:10.909139 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 20 19:08:10.910643 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:08:10.916192 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:08:10.917098 systemd[1]: Reached target time-set.target - System Time Set. Jun 20 19:08:10.949596 systemd-networkd[1392]: lo: Link UP Jun 20 19:08:10.950247 systemd-networkd[1392]: lo: Gained carrier Jun 20 19:08:10.951035 systemd-networkd[1392]: Enumeration completed Jun 20 19:08:10.951137 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 19:08:10.952971 systemd[1]: Reached target network.target - Network. Jun 20 19:08:10.961856 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 20 19:08:10.965363 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 20 19:08:10.991623 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 20 19:08:11.002185 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
Jun 20 19:08:11.069752 systemd-networkd[1392]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:08:11.069934 systemd-networkd[1392]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:08:11.071303 systemd-networkd[1392]: eth1: Link UP Jun 20 19:08:11.071460 systemd-networkd[1392]: eth1: Gained carrier Jun 20 19:08:11.071531 systemd-networkd[1392]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:08:11.080249 kernel: mousedev: PS/2 mouse device common for all mice Jun 20 19:08:11.098359 systemd-networkd[1392]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 20 19:08:11.099307 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection. Jun 20 19:08:11.104882 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:08:11.104893 systemd-networkd[1392]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:08:11.106833 systemd-networkd[1392]: eth0: Link UP Jun 20 19:08:11.106988 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection. Jun 20 19:08:11.107119 systemd-networkd[1392]: eth0: Gained carrier Jun 20 19:08:11.107146 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:08:11.112520 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection. Jun 20 19:08:11.121201 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1409) Jun 20 19:08:11.154990 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jun 20 19:08:11.155122 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:08:11.165514 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:08:11.169635 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:08:11.175382 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:08:11.175999 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:08:11.176039 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:08:11.176061 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 20 19:08:11.176812 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:08:11.177025 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:08:11.179339 systemd-networkd[1392]: eth0: DHCPv4 address 49.12.190.100/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jun 20 19:08:11.181956 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection. 
Jun 20 19:08:11.185882 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:08:11.186535 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:08:11.194833 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:08:11.195282 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:08:11.204960 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:08:11.205024 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:08:11.224513 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jun 20 19:08:11.230412 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 20 19:08:11.242189 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jun 20 19:08:11.242274 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jun 20 19:08:11.242293 kernel: [drm] features: -context_init Jun 20 19:08:11.242305 kernel: [drm] number of scanouts: 1 Jun 20 19:08:11.242319 kernel: [drm] number of cap sets: 0 Jun 20 19:08:11.250202 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jun 20 19:08:11.252473 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:08:11.253680 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 20 19:08:11.263656 kernel: Console: switching to colour frame buffer device 160x50 Jun 20 19:08:11.283073 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:08:11.283243 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jun 20 19:08:11.284264 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:08:11.291737 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 19:08:11.303401 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:08:11.382857 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:08:11.462708 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 20 19:08:11.470369 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 20 19:08:11.483224 lvm[1457]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 20 19:08:11.511599 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 20 19:08:11.513390 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:08:11.514597 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 19:08:11.515650 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 20 19:08:11.516585 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 20 19:08:11.517432 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 20 19:08:11.518138 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Jun 20 19:08:11.518824 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 20 19:08:11.519512 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 20 19:08:11.519550 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:08:11.519994 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:08:11.523205 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 20 19:08:11.525358 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 20 19:08:11.528756 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 20 19:08:11.529867 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 20 19:08:11.530562 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 20 19:08:11.534513 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 20 19:08:11.535939 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 20 19:08:11.538795 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 20 19:08:11.540478 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 20 19:08:11.541277 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 19:08:11.541800 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:08:11.542383 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 20 19:08:11.542415 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 20 19:08:11.547361 systemd[1]: Starting containerd.service - containerd container runtime... Jun 20 19:08:11.552283 lvm[1461]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 20 19:08:11.553718 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 20 19:08:11.557633 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 20 19:08:11.561697 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 20 19:08:11.571359 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 20 19:08:11.571914 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 20 19:08:11.576093 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 20 19:08:11.580614 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 20 19:08:11.585591 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jun 20 19:08:11.591792 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 20 19:08:11.595714 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 20 19:08:11.600399 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 20 19:08:11.601895 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 20 19:08:11.604211 jq[1465]: false Jun 20 19:08:11.604363 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jun 20 19:08:11.606382 systemd[1]: Starting update-engine.service - Update Engine... Jun 20 19:08:11.612380 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 20 19:08:11.613932 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 20 19:08:11.633953 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 20 19:08:11.634198 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 20 19:08:11.657299 coreos-metadata[1463]: Jun 20 19:08:11.657 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jun 20 19:08:11.668073 coreos-metadata[1463]: Jun 20 19:08:11.665 INFO Fetch successful Jun 20 19:08:11.668073 coreos-metadata[1463]: Jun 20 19:08:11.665 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jun 20 19:08:11.668228 extend-filesystems[1466]: Found loop4 Jun 20 19:08:11.668228 extend-filesystems[1466]: Found loop5 Jun 20 19:08:11.668228 extend-filesystems[1466]: Found loop6 Jun 20 19:08:11.668228 extend-filesystems[1466]: Found loop7 Jun 20 19:08:11.668228 extend-filesystems[1466]: Found sda Jun 20 19:08:11.668228 extend-filesystems[1466]: Found sda1 Jun 20 19:08:11.668228 extend-filesystems[1466]: Found sda2 Jun 20 19:08:11.668228 extend-filesystems[1466]: Found sda3 Jun 20 19:08:11.668228 extend-filesystems[1466]: Found usr Jun 20 19:08:11.668228 extend-filesystems[1466]: Found sda4 Jun 20 19:08:11.668228 extend-filesystems[1466]: Found sda6 Jun 20 19:08:11.668228 extend-filesystems[1466]: Found sda7 Jun 20 19:08:11.668228 extend-filesystems[1466]: Found sda9 Jun 20 19:08:11.668228 extend-filesystems[1466]: Checking size of /dev/sda9 Jun 20 19:08:11.708017 jq[1477]: true Jun 20 19:08:11.668136 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 20 19:08:11.708375 coreos-metadata[1463]: Jun 20 19:08:11.666 INFO Fetch successful Jun 20 19:08:11.708411 extend-filesystems[1466]: Resized partition /dev/sda9 Jun 20 19:08:11.704660 dbus-daemon[1464]: [system] SELinux support is enabled Jun 20 19:08:11.668700 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 20 19:08:11.708842 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 20 19:08:11.711920 (ntainerd)[1495]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 20 19:08:11.722433 tar[1480]: linux-arm64/LICENSE Jun 20 19:08:11.722433 tar[1480]: linux-arm64/helm Jun 20 19:08:11.722651 extend-filesystems[1507]: resize2fs 1.47.1 (20-May-2024) Jun 20 19:08:11.728421 jq[1494]: true Jun 20 19:08:11.715058 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 20 19:08:11.715115 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 20 19:08:11.718341 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 20 19:08:11.718362 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 20 19:08:11.719499 systemd[1]: motdgen.service: Deactivated successfully. 
Jun 20 19:08:11.719825 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 20 19:08:11.741906 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jun 20 19:08:11.767886 update_engine[1476]: I20250620 19:08:11.764027 1476 main.cc:92] Flatcar Update Engine starting Jun 20 19:08:11.776439 systemd[1]: Started update-engine.service - Update Engine. Jun 20 19:08:11.781026 update_engine[1476]: I20250620 19:08:11.780370 1476 update_check_scheduler.cc:74] Next update check in 10m15s Jun 20 19:08:11.792536 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 20 19:08:11.844148 systemd-logind[1475]: New seat seat0. Jun 20 19:08:11.847142 systemd-logind[1475]: Watching system buttons on /dev/input/event0 (Power Button) Jun 20 19:08:11.847172 systemd-logind[1475]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jun 20 19:08:11.855727 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1407) Jun 20 19:08:11.848733 systemd[1]: Started systemd-logind.service - User Login Management. Jun 20 19:08:11.923704 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 20 19:08:11.926294 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jun 20 19:08:11.927184 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 20 19:08:11.954239 bash[1531]: Updated "/home/core/.ssh/authorized_keys" Jun 20 19:08:11.954591 extend-filesystems[1507]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jun 20 19:08:11.954591 extend-filesystems[1507]: old_desc_blocks = 1, new_desc_blocks = 5 Jun 20 19:08:11.954591 extend-filesystems[1507]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jun 20 19:08:11.961925 extend-filesystems[1466]: Resized filesystem in /dev/sda9 Jun 20 19:08:11.961925 extend-filesystems[1466]: Found sr0 Jun 20 19:08:11.956493 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 20 19:08:11.956710 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 20 19:08:11.961492 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 20 19:08:11.981845 systemd[1]: Starting sshkeys.service... Jun 20 19:08:12.003246 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 20 19:08:12.011543 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jun 20 19:08:12.115950 coreos-metadata[1545]: Jun 20 19:08:12.113 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jun 20 19:08:12.115950 coreos-metadata[1545]: Jun 20 19:08:12.115 INFO Fetch successful Jun 20 19:08:12.117780 unknown[1545]: wrote ssh authorized keys file for user: core Jun 20 19:08:12.159668 update-ssh-keys[1550]: Updated "/home/core/.ssh/authorized_keys" Jun 20 19:08:12.160920 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 20 19:08:12.166352 systemd[1]: Finished sshkeys.service. 
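[Editor's note] For scale, the ext4 block counts reported by the resize above (1617920 → 9393147 blocks of 4 KiB, per the resize2fs output) correspond to roughly 6.2 GiB growing to about 35.8 GiB. A minimal arithmetic sketch, using only the numbers quoted in the log:

```python
# Illustrative only: convert the ext4 block counts from the resize log
# above into byte sizes. Block size is 4096 bytes ("(4k) blocks" in the log).
BLOCK_SIZE = 4096

old_blocks = 1_617_920   # /dev/sda9 before extend-filesystems ran
new_blocks = 9_393_147   # /dev/sda9 after the on-line resize

for label, blocks in (("before", old_blocks), ("after", new_blocks)):
    size_bytes = blocks * BLOCK_SIZE
    print(f"{label}: {blocks} blocks = {size_bytes / 2**30:.2f} GiB")

# Expected output:
# before: 1617920 blocks = 6.17 GiB
# after: 9393147 blocks = 35.83 GiB
```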
Jun 20 19:08:12.184747 containerd[1495]: time="2025-06-20T19:08:12.184596000Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jun 20 19:08:12.211517 locksmithd[1515]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 20 19:08:12.264863 containerd[1495]: time="2025-06-20T19:08:12.264812440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 20 19:08:12.269586 containerd[1495]: time="2025-06-20T19:08:12.269453960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.94-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 20 19:08:12.269586 containerd[1495]: time="2025-06-20T19:08:12.269506200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 20 19:08:12.269586 containerd[1495]: time="2025-06-20T19:08:12.269525760Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 20 19:08:12.271583 containerd[1495]: time="2025-06-20T19:08:12.271367240Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 20 19:08:12.271583 containerd[1495]: time="2025-06-20T19:08:12.271395560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 20 19:08:12.271583 containerd[1495]: time="2025-06-20T19:08:12.271475360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 19:08:12.271583 containerd[1495]: time="2025-06-20T19:08:12.271487640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 20 19:08:12.272026 containerd[1495]: time="2025-06-20T19:08:12.271738840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 19:08:12.272026 containerd[1495]: time="2025-06-20T19:08:12.271813080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 20 19:08:12.272026 containerd[1495]: time="2025-06-20T19:08:12.271830840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 19:08:12.272026 containerd[1495]: time="2025-06-20T19:08:12.271841280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 20 19:08:12.272026 containerd[1495]: time="2025-06-20T19:08:12.271933800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 20 19:08:12.272215 containerd[1495]: time="2025-06-20T19:08:12.272195400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 20 19:08:12.272452 containerd[1495]: time="2025-06-20T19:08:12.272349120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 19:08:12.272452 containerd[1495]: time="2025-06-20T19:08:12.272365880Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 20 19:08:12.274297 containerd[1495]: time="2025-06-20T19:08:12.274272600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 20 19:08:12.274460 containerd[1495]: time="2025-06-20T19:08:12.274343280Z" level=info msg="metadata content store policy set" policy=shared Jun 20 19:08:12.278728 containerd[1495]: time="2025-06-20T19:08:12.278573160Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 20 19:08:12.278728 containerd[1495]: time="2025-06-20T19:08:12.278649160Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 20 19:08:12.278728 containerd[1495]: time="2025-06-20T19:08:12.278673280Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 20 19:08:12.278728 containerd[1495]: time="2025-06-20T19:08:12.278688920Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 20 19:08:12.278728 containerd[1495]: time="2025-06-20T19:08:12.278731000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 20 19:08:12.279167 containerd[1495]: time="2025-06-20T19:08:12.278938280Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 20 19:08:12.280428 containerd[1495]: time="2025-06-20T19:08:12.280135920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 20 19:08:12.280428 containerd[1495]: time="2025-06-20T19:08:12.280276560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 20 19:08:12.280428 containerd[1495]: time="2025-06-20T19:08:12.280293560Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 20 19:08:12.280428 containerd[1495]: time="2025-06-20T19:08:12.280308720Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 20 19:08:12.280428 containerd[1495]: time="2025-06-20T19:08:12.280321760Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 20 19:08:12.280428 containerd[1495]: time="2025-06-20T19:08:12.280333960Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 20 19:08:12.280428 containerd[1495]: time="2025-06-20T19:08:12.280345920Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 20 19:08:12.280428 containerd[1495]: time="2025-06-20T19:08:12.280359640Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 20 19:08:12.280428 containerd[1495]: time="2025-06-20T19:08:12.280382880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jun 20 19:08:12.280428 containerd[1495]: time="2025-06-20T19:08:12.280396360Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 20 19:08:12.280428 containerd[1495]: time="2025-06-20T19:08:12.280408560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 20 19:08:12.280428 containerd[1495]: time="2025-06-20T19:08:12.280420120Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 20 19:08:12.280684 containerd[1495]: time="2025-06-20T19:08:12.280439920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 20 19:08:12.280684 containerd[1495]: time="2025-06-20T19:08:12.280453800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 20 19:08:12.280684 containerd[1495]: time="2025-06-20T19:08:12.280465320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 20 19:08:12.280684 containerd[1495]: time="2025-06-20T19:08:12.280477720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 20 19:08:12.280684 containerd[1495]: time="2025-06-20T19:08:12.280489680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 20 19:08:12.280684 containerd[1495]: time="2025-06-20T19:08:12.280503320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 20 19:08:12.280684 containerd[1495]: time="2025-06-20T19:08:12.280514840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 20 19:08:12.280684 containerd[1495]: time="2025-06-20T19:08:12.280528160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 20 19:08:12.280684 containerd[1495]: time="2025-06-20T19:08:12.280540440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 20 19:08:12.280684 containerd[1495]: time="2025-06-20T19:08:12.280554120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 20 19:08:12.280684 containerd[1495]: time="2025-06-20T19:08:12.280565400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 20 19:08:12.280684 containerd[1495]: time="2025-06-20T19:08:12.280583360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 20 19:08:12.280684 containerd[1495]: time="2025-06-20T19:08:12.280597280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 20 19:08:12.280684 containerd[1495]: time="2025-06-20T19:08:12.280611680Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 20 19:08:12.280684 containerd[1495]: time="2025-06-20T19:08:12.280631600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 20 19:08:12.281010 containerd[1495]: time="2025-06-20T19:08:12.280646200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jun 20 19:08:12.281010 containerd[1495]: time="2025-06-20T19:08:12.280657400Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 20 19:08:12.283383 containerd[1495]: time="2025-06-20T19:08:12.282681120Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 20 19:08:12.283383 containerd[1495]: time="2025-06-20T19:08:12.282720760Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 20 19:08:12.283383 containerd[1495]: time="2025-06-20T19:08:12.282733440Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 20 19:08:12.283383 containerd[1495]: time="2025-06-20T19:08:12.282745280Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 20 19:08:12.283383 containerd[1495]: time="2025-06-20T19:08:12.282763440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 20 19:08:12.283383 containerd[1495]: time="2025-06-20T19:08:12.282779520Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 20 19:08:12.283383 containerd[1495]: time="2025-06-20T19:08:12.282791240Z" level=info msg="NRI interface is disabled by configuration." Jun 20 19:08:12.283383 containerd[1495]: time="2025-06-20T19:08:12.282801320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jun 20 19:08:12.283589 containerd[1495]: time="2025-06-20T19:08:12.283141200Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 20 19:08:12.283589 containerd[1495]: time="2025-06-20T19:08:12.283233800Z" level=info msg="Connect containerd service" Jun 20 19:08:12.283589 containerd[1495]: time="2025-06-20T19:08:12.283271480Z" level=info msg="using legacy CRI server" Jun 20 19:08:12.283589 containerd[1495]: time="2025-06-20T19:08:12.283278360Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 20 19:08:12.283589 containerd[1495]: time="2025-06-20T19:08:12.283540400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 20 19:08:12.287032 containerd[1495]: time="2025-06-20T19:08:12.286554040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:08:12.287032 containerd[1495]: time="2025-06-20T19:08:12.286742600Z" level=info msg="Start subscribing containerd event" Jun 20 19:08:12.287032 containerd[1495]: time="2025-06-20T19:08:12.286823840Z" level=info msg="Start recovering state" Jun 20 19:08:12.287032 containerd[1495]: time="2025-06-20T19:08:12.286890560Z" level=info msg="Start event monitor" Jun 20 19:08:12.287032 containerd[1495]: time="2025-06-20T19:08:12.286902320Z" level=info msg="Start snapshots syncer" Jun 20 19:08:12.287032 containerd[1495]: time="2025-06-20T19:08:12.286912440Z" level=info msg="Start cni network conf syncer for default" Jun 20 19:08:12.287032 containerd[1495]: time="2025-06-20T19:08:12.286920120Z" level=info msg="Start streaming server" Jun 20 19:08:12.288809 containerd[1495]: time="2025-06-20T19:08:12.287991880Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 19:08:12.288809 containerd[1495]: time="2025-06-20T19:08:12.288055000Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 20 19:08:12.290978 containerd[1495]: time="2025-06-20T19:08:12.290266480Z" level=info msg="containerd successfully booted in 0.110246s" Jun 20 19:08:12.290369 systemd[1]: Started containerd.service - containerd container runtime. Jun 20 19:08:12.462902 tar[1480]: linux-arm64/README.md Jun 20 19:08:12.481987 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 20 19:08:12.648430 systemd-networkd[1392]: eth1: Gained IPv6LL Jun 20 19:08:12.651697 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection. Jun 20 19:08:12.656440 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 20 19:08:12.659314 systemd[1]: Reached target network-online.target - Network is Online. 
Jun 20 19:08:12.674497 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:08:12.678579 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 20 19:08:12.726659 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 20 19:08:12.796765 sshd_keygen[1487]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 19:08:12.820016 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 19:08:12.828860 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 19:08:12.835424 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 19:08:12.835624 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 20 19:08:12.843620 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 19:08:12.854679 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 20 19:08:12.863512 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 20 19:08:12.867549 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jun 20 19:08:12.869135 systemd[1]: Reached target getty.target - Login Prompts. Jun 20 19:08:13.160390 systemd-networkd[1392]: eth0: Gained IPv6LL Jun 20 19:08:13.161444 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection. Jun 20 19:08:13.568600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:08:13.571094 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 20 19:08:13.575096 (kubelet)[1595]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:08:13.576274 systemd[1]: Startup finished in 786ms (kernel) + 5.952s (initrd) + 4.972s (userspace) = 11.711s. Jun 20 19:08:14.129408 kubelet[1595]: E0620 19:08:14.129348 1595 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:08:14.132601 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:08:14.132950 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:08:14.133862 systemd[1]: kubelet.service: Consumed 964ms CPU time, 258.2M memory peak. Jun 20 19:08:24.384020 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 20 19:08:24.393534 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:08:24.507447 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:08:24.512377 (kubelet)[1613]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:08:24.561020 kubelet[1613]: E0620 19:08:24.560961 1613 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:08:24.567421 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:08:24.567601 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
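[Editor's note] The kubelet failure above, and the identical restarts that follow, all trace back to one missing file: /var/lib/kubelet/config.yaml, which is normally written by kubeadm (init/join) before the kubelet can start cleanly; until then systemd keeps scheduling restarts and the counter climbs. A minimal sketch of the pre-flight check the log keeps tripping over, assuming only the path quoted in the run.go error message (everything else is illustrative):

```python
# Illustrative sketch only: mimic the check that makes kubelet.service exit
# with status 1 in the journal above. The path comes from the logged error;
# the rest is an assumption for demonstration, not kubelet's actual code.
import os
import sys

KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

def kubelet_config_present(path: str = KUBELET_CONFIG) -> bool:
    """Return True if the kubelet config file exists and is readable."""
    return os.path.isfile(path) and os.access(path, os.R_OK)

if __name__ == "__main__":
    if not kubelet_config_present():
        # systemd sees this non-zero exit and schedules the next restart,
        # which is why the restart counter keeps incrementing in the log.
        print(f"open {KUBELET_CONFIG}: no such file or directory", file=sys.stderr)
        sys.exit(1)
    print("kubelet config found; kubeadm has presumably run on this node")
```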
Jun 20 19:08:24.568185 systemd[1]: kubelet.service: Consumed 152ms CPU time, 106.9M memory peak. Jun 20 19:08:34.702354 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 20 19:08:34.719543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:08:34.845931 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:08:34.857643 (kubelet)[1629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:08:34.907052 kubelet[1629]: E0620 19:08:34.906977 1629 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:08:34.911380 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:08:34.911895 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:08:34.912547 systemd[1]: kubelet.service: Consumed 160ms CPU time, 107.5M memory peak. Jun 20 19:08:43.349316 systemd-timesyncd[1366]: Contacted time server 217.14.146.53:123 (2.flatcar.pool.ntp.org). Jun 20 19:08:43.349497 systemd-timesyncd[1366]: Initial clock synchronization to Fri 2025-06-20 19:08:43.265573 UTC. Jun 20 19:08:44.953720 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 20 19:08:44.969513 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:08:45.091428 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:08:45.100612 (kubelet)[1644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:08:45.153996 kubelet[1644]: E0620 19:08:45.153932 1644 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:08:45.156556 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:08:45.156954 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:08:45.157587 systemd[1]: kubelet.service: Consumed 162ms CPU time, 107M memory peak. Jun 20 19:08:55.202735 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 20 19:08:55.211547 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:08:55.340418 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 19:08:55.342338 (kubelet)[1659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:08:55.388386 kubelet[1659]: E0620 19:08:55.388316 1659 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:08:55.391195 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:08:55.391466 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:08:55.392016 systemd[1]: kubelet.service: Consumed 153ms CPU time, 107.3M memory peak. Jun 20 19:08:56.910271 update_engine[1476]: I20250620 19:08:56.909953 1476 update_attempter.cc:509] Updating boot flags... Jun 20 19:08:56.961267 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1675) Jun 20 19:08:57.014442 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1675) Jun 20 19:09:05.452388 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jun 20 19:09:05.464579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:09:05.596123 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:09:05.610140 (kubelet)[1692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:09:05.664518 kubelet[1692]: E0620 19:09:05.664428 1692 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:09:05.667118 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:09:05.667674 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:09:05.668040 systemd[1]: kubelet.service: Consumed 164ms CPU time, 107.5M memory peak. Jun 20 19:09:15.702881 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jun 20 19:09:15.709529 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:09:15.843186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:09:15.847594 (kubelet)[1706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:09:15.895089 kubelet[1706]: E0620 19:09:15.894989 1706 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:09:15.898474 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:09:15.898657 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:09:15.899109 systemd[1]: kubelet.service: Consumed 157ms CPU time, 108.7M memory peak. 
Jun 20 19:09:25.952415 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jun 20 19:09:25.965608 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:09:26.078424 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:09:26.082587 (kubelet)[1721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:09:26.125874 kubelet[1721]: E0620 19:09:26.125773 1721 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:09:26.128010 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:09:26.128185 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:09:26.128688 systemd[1]: kubelet.service: Consumed 147ms CPU time, 107.1M memory peak. Jun 20 19:09:36.202863 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jun 20 19:09:36.214507 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:09:36.355367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:09:36.355957 (kubelet)[1736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:09:36.402271 kubelet[1736]: E0620 19:09:36.402222 1736 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:09:36.405463 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:09:36.405606 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:09:36.406119 systemd[1]: kubelet.service: Consumed 155ms CPU time, 105.1M memory peak. Jun 20 19:09:46.452604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jun 20 19:09:46.461135 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:09:46.581814 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:09:46.596124 (kubelet)[1751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:09:46.653165 kubelet[1751]: E0620 19:09:46.653097 1751 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:09:46.655999 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:09:46.656243 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:09:46.656891 systemd[1]: kubelet.service: Consumed 168ms CPU time, 107M memory peak. Jun 20 19:09:56.702506 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. 
Jun 20 19:09:56.715519 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:09:56.844343 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:09:56.846604 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 20 19:09:56.854027 (kubelet)[1766]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:09:56.854138 systemd[1]: Started sshd@0-49.12.190.100:22-147.75.109.163:37966.service - OpenSSH per-connection server daemon (147.75.109.163:37966). Jun 20 19:09:56.903202 kubelet[1766]: E0620 19:09:56.902290 1766 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:09:56.904433 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:09:56.904568 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:09:56.906258 systemd[1]: kubelet.service: Consumed 150ms CPU time, 107M memory peak. Jun 20 19:09:57.868637 sshd[1768]: Accepted publickey for core from 147.75.109.163 port 37966 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:09:57.872036 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:09:57.881257 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 19:09:57.889569 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 19:09:57.899581 systemd-logind[1475]: New session 1 of user core. Jun 20 19:09:57.904519 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 20 19:09:57.912872 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 20 19:09:57.918004 (systemd)[1778]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 19:09:57.920963 systemd-logind[1475]: New session c1 of user core. Jun 20 19:09:58.061528 systemd[1778]: Queued start job for default target default.target. Jun 20 19:09:58.070566 systemd[1778]: Created slice app.slice - User Application Slice. Jun 20 19:09:58.070626 systemd[1778]: Reached target paths.target - Paths. Jun 20 19:09:58.070702 systemd[1778]: Reached target timers.target - Timers. Jun 20 19:09:58.072991 systemd[1778]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 19:09:58.087000 systemd[1778]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 19:09:58.087472 systemd[1778]: Reached target sockets.target - Sockets. Jun 20 19:09:58.087795 systemd[1778]: Reached target basic.target - Basic System. Jun 20 19:09:58.088077 systemd[1778]: Reached target default.target - Main User Target. Jun 20 19:09:58.088105 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 20 19:09:58.088653 systemd[1778]: Startup finished in 160ms. Jun 20 19:09:58.096548 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 19:09:58.803563 systemd[1]: Started sshd@1-49.12.190.100:22-147.75.109.163:37970.service - OpenSSH per-connection server daemon (147.75.109.163:37970). 
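[Editor's note] The "Accepted publickey ... RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw" lines above use OpenSSH's SHA256 fingerprint format: the unpadded base64 encoding of the SHA-256 digest of the raw public-key blob. A small sketch of that derivation, with a placeholder key rather than the key actually authorized on this host:

```python
# Illustrative only: how the "SHA256:..." fingerprint shown in the sshd log
# lines is derived from a public key. The key passed in is a placeholder,
# not the key used on this host.
import base64
import hashlib

def openssh_sha256_fingerprint(authorized_keys_line: str) -> str:
    """Return the OpenSSH-style SHA256 fingerprint for a public key line."""
    # authorized_keys format: "<type> <base64-blob> [comment]"
    blob_b64 = authorized_keys_line.split()[1]
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    # OpenSSH prints the digest base64-encoded with the '=' padding stripped.
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Usage (replace the placeholder with a real public key line to test):
# print(openssh_sha256_fingerprint("ssh-ed25519 AAAAC3... core@example"))
```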
Jun 20 19:09:59.815941 sshd[1789]: Accepted publickey for core from 147.75.109.163 port 37970 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:09:59.820629 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:09:59.826810 systemd-logind[1475]: New session 2 of user core. Jun 20 19:09:59.834484 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 20 19:10:00.521181 sshd[1791]: Connection closed by 147.75.109.163 port 37970 Jun 20 19:10:00.520707 sshd-session[1789]: pam_unix(sshd:session): session closed for user core Jun 20 19:10:00.524243 systemd-logind[1475]: Session 2 logged out. Waiting for processes to exit. Jun 20 19:10:00.525220 systemd[1]: sshd@1-49.12.190.100:22-147.75.109.163:37970.service: Deactivated successfully. Jun 20 19:10:00.527313 systemd[1]: session-2.scope: Deactivated successfully. Jun 20 19:10:00.529974 systemd-logind[1475]: Removed session 2. Jun 20 19:10:00.694821 systemd[1]: Started sshd@2-49.12.190.100:22-147.75.109.163:37974.service - OpenSSH per-connection server daemon (147.75.109.163:37974). Jun 20 19:10:01.672125 sshd[1797]: Accepted publickey for core from 147.75.109.163 port 37974 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:10:01.674411 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:10:01.681922 systemd-logind[1475]: New session 3 of user core. Jun 20 19:10:01.688557 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 20 19:10:02.346398 sshd[1799]: Connection closed by 147.75.109.163 port 37974 Jun 20 19:10:02.345850 sshd-session[1797]: pam_unix(sshd:session): session closed for user core Jun 20 19:10:02.349281 systemd[1]: sshd@2-49.12.190.100:22-147.75.109.163:37974.service: Deactivated successfully. Jun 20 19:10:02.350986 systemd[1]: session-3.scope: Deactivated successfully. Jun 20 19:10:02.352702 systemd-logind[1475]: Session 3 logged out. Waiting for processes to exit. Jun 20 19:10:02.354116 systemd-logind[1475]: Removed session 3. Jun 20 19:10:02.528954 systemd[1]: Started sshd@3-49.12.190.100:22-147.75.109.163:37978.service - OpenSSH per-connection server daemon (147.75.109.163:37978). Jun 20 19:10:03.538991 sshd[1805]: Accepted publickey for core from 147.75.109.163 port 37978 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:10:03.541315 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:10:03.547800 systemd-logind[1475]: New session 4 of user core. Jun 20 19:10:03.563495 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 20 19:10:04.237027 sshd[1807]: Connection closed by 147.75.109.163 port 37978 Jun 20 19:10:04.237813 sshd-session[1805]: pam_unix(sshd:session): session closed for user core Jun 20 19:10:04.245539 systemd[1]: sshd@3-49.12.190.100:22-147.75.109.163:37978.service: Deactivated successfully. Jun 20 19:10:04.248900 systemd[1]: session-4.scope: Deactivated successfully. Jun 20 19:10:04.250885 systemd-logind[1475]: Session 4 logged out. Waiting for processes to exit. Jun 20 19:10:04.251889 systemd-logind[1475]: Removed session 4. Jun 20 19:10:04.414755 systemd[1]: Started sshd@4-49.12.190.100:22-147.75.109.163:37984.service - OpenSSH per-connection server daemon (147.75.109.163:37984). 
Jun 20 19:10:05.411387 sshd[1813]: Accepted publickey for core from 147.75.109.163 port 37984 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:10:05.413350 sshd-session[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:10:05.418235 systemd-logind[1475]: New session 5 of user core. Jun 20 19:10:05.425554 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 20 19:10:05.946748 sudo[1816]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 20 19:10:05.947036 sudo[1816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:10:05.961735 sudo[1816]: pam_unix(sudo:session): session closed for user root Jun 20 19:10:06.125136 sshd[1815]: Connection closed by 147.75.109.163 port 37984 Jun 20 19:10:06.123795 sshd-session[1813]: pam_unix(sshd:session): session closed for user core Jun 20 19:10:06.129446 systemd[1]: sshd@4-49.12.190.100:22-147.75.109.163:37984.service: Deactivated successfully. Jun 20 19:10:06.131478 systemd[1]: session-5.scope: Deactivated successfully. Jun 20 19:10:06.132665 systemd-logind[1475]: Session 5 logged out. Waiting for processes to exit. Jun 20 19:10:06.133913 systemd-logind[1475]: Removed session 5. Jun 20 19:10:06.307767 systemd[1]: Started sshd@5-49.12.190.100:22-147.75.109.163:35126.service - OpenSSH per-connection server daemon (147.75.109.163:35126). Jun 20 19:10:06.952626 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jun 20 19:10:06.960633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:10:07.102441 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:10:07.102550 (kubelet)[1832]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:10:07.151183 kubelet[1832]: E0620 19:10:07.150707 1832 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:10:07.154080 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:10:07.154349 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:10:07.154975 systemd[1]: kubelet.service: Consumed 156ms CPU time, 105.2M memory peak. Jun 20 19:10:07.305951 sshd[1822]: Accepted publickey for core from 147.75.109.163 port 35126 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:10:07.308277 sshd-session[1822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:10:07.314205 systemd-logind[1475]: New session 6 of user core. Jun 20 19:10:07.317376 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jun 20 19:10:07.837299 sudo[1841]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 20 19:10:07.837651 sudo[1841]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:10:07.842597 sudo[1841]: pam_unix(sudo:session): session closed for user root Jun 20 19:10:07.849897 sudo[1840]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 20 19:10:07.850818 sudo[1840]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:10:07.873242 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 19:10:07.910966 augenrules[1863]: No rules Jun 20 19:10:07.912975 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:10:07.913464 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 19:10:07.915512 sudo[1840]: pam_unix(sudo:session): session closed for user root Jun 20 19:10:08.077619 sshd[1839]: Connection closed by 147.75.109.163 port 35126 Jun 20 19:10:08.078567 sshd-session[1822]: pam_unix(sshd:session): session closed for user core Jun 20 19:10:08.084643 systemd[1]: sshd@5-49.12.190.100:22-147.75.109.163:35126.service: Deactivated successfully. Jun 20 19:10:08.086942 systemd[1]: session-6.scope: Deactivated successfully. Jun 20 19:10:08.089230 systemd-logind[1475]: Session 6 logged out. Waiting for processes to exit. Jun 20 19:10:08.090953 systemd-logind[1475]: Removed session 6. Jun 20 19:10:08.256584 systemd[1]: Started sshd@6-49.12.190.100:22-147.75.109.163:35138.service - OpenSSH per-connection server daemon (147.75.109.163:35138). Jun 20 19:10:09.246338 sshd[1872]: Accepted publickey for core from 147.75.109.163 port 35138 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:10:09.248258 sshd-session[1872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:10:09.253811 systemd-logind[1475]: New session 7 of user core. Jun 20 19:10:09.258506 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 20 19:10:09.774430 sudo[1875]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 20 19:10:09.774750 sudo[1875]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:10:10.126635 (dockerd)[1893]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 20 19:10:10.127106 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 20 19:10:10.377809 dockerd[1893]: time="2025-06-20T19:10:10.377608116Z" level=info msg="Starting up" Jun 20 19:10:10.478124 dockerd[1893]: time="2025-06-20T19:10:10.478042161Z" level=info msg="Loading containers: start." Jun 20 19:10:10.668260 kernel: Initializing XFRM netlink socket Jun 20 19:10:10.777422 systemd-networkd[1392]: docker0: Link UP Jun 20 19:10:10.820855 dockerd[1893]: time="2025-06-20T19:10:10.819560338Z" level=info msg="Loading containers: done." Jun 20 19:10:10.841922 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3861003529-merged.mount: Deactivated successfully. 
Jun 20 19:10:10.844257 dockerd[1893]: time="2025-06-20T19:10:10.844202521Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 20 19:10:10.844364 dockerd[1893]: time="2025-06-20T19:10:10.844316160Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jun 20 19:10:10.844550 dockerd[1893]: time="2025-06-20T19:10:10.844499959Z" level=info msg="Daemon has completed initialization" Jun 20 19:10:10.889187 dockerd[1893]: time="2025-06-20T19:10:10.889025624Z" level=info msg="API listen on /run/docker.sock" Jun 20 19:10:10.890275 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 20 19:10:12.069256 containerd[1495]: time="2025-06-20T19:10:12.068958275Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jun 20 19:10:12.703499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1885456920.mount: Deactivated successfully. Jun 20 19:10:13.852531 containerd[1495]: time="2025-06-20T19:10:13.852386583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:13.854710 containerd[1495]: time="2025-06-20T19:10:13.854571975Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328286" Jun 20 19:10:13.855861 containerd[1495]: time="2025-06-20T19:10:13.855800050Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:13.860463 containerd[1495]: time="2025-06-20T19:10:13.860387833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:13.862114 containerd[1495]: time="2025-06-20T19:10:13.862007587Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 1.792995032s" Jun 20 19:10:13.862114 containerd[1495]: time="2025-06-20T19:10:13.862049827Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jun 20 19:10:13.863064 containerd[1495]: time="2025-06-20T19:10:13.863009903Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jun 20 19:10:15.388242 containerd[1495]: time="2025-06-20T19:10:15.387644383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:15.390191 containerd[1495]: time="2025-06-20T19:10:15.389907614Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529248" Jun 20 19:10:15.391314 containerd[1495]: time="2025-06-20T19:10:15.391257809Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:10:15.400406 containerd[1495]: time="2025-06-20T19:10:15.399976137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:15.401420 containerd[1495]: time="2025-06-20T19:10:15.401367012Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.538305629s" Jun 20 19:10:15.401420 containerd[1495]: time="2025-06-20T19:10:15.401416611Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jun 20 19:10:15.402112 containerd[1495]: time="2025-06-20T19:10:15.402067169Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jun 20 19:10:16.583068 containerd[1495]: time="2025-06-20T19:10:16.582986476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:16.585143 containerd[1495]: time="2025-06-20T19:10:16.584732910Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484161" Jun 20 19:10:16.586444 containerd[1495]: time="2025-06-20T19:10:16.586393584Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:16.591002 containerd[1495]: time="2025-06-20T19:10:16.590931247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:16.592988 containerd[1495]: time="2025-06-20T19:10:16.592064603Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.189954274s" Jun 20 19:10:16.592988 containerd[1495]: time="2025-06-20T19:10:16.592103523Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jun 20 19:10:16.592988 containerd[1495]: time="2025-06-20T19:10:16.592867080Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jun 20 19:10:17.202238 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jun 20 19:10:17.215826 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:10:17.366422 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:10:17.368341 (kubelet)[2159]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:10:17.412420 kubelet[2159]: E0620 19:10:17.412062 2159 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:10:17.415302 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:10:17.415440 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:10:17.415770 systemd[1]: kubelet.service: Consumed 147ms CPU time, 108.7M memory peak. Jun 20 19:10:17.717507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3812050139.mount: Deactivated successfully. Jun 20 19:10:18.018796 containerd[1495]: time="2025-06-20T19:10:18.018223901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:18.021347 containerd[1495]: time="2025-06-20T19:10:18.021297970Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378432" Jun 20 19:10:18.022895 containerd[1495]: time="2025-06-20T19:10:18.022841004Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:18.025720 containerd[1495]: time="2025-06-20T19:10:18.025635914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:18.026671 containerd[1495]: time="2025-06-20T19:10:18.026223472Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.433312872s" Jun 20 19:10:18.026671 containerd[1495]: time="2025-06-20T19:10:18.026258072Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jun 20 19:10:18.027584 containerd[1495]: time="2025-06-20T19:10:18.027029749Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jun 20 19:10:18.619096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2186566383.mount: Deactivated successfully. 
Jun 20 19:10:19.318311 containerd[1495]: time="2025-06-20T19:10:19.318212763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:19.320188 containerd[1495]: time="2025-06-20T19:10:19.319900156Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Jun 20 19:10:19.321567 containerd[1495]: time="2025-06-20T19:10:19.321488711Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:19.327664 containerd[1495]: time="2025-06-20T19:10:19.327310410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:19.327664 containerd[1495]: time="2025-06-20T19:10:19.327523889Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.30045954s" Jun 20 19:10:19.327664 containerd[1495]: time="2025-06-20T19:10:19.327562049Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jun 20 19:10:19.328499 containerd[1495]: time="2025-06-20T19:10:19.328306806Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 20 19:10:19.840016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount477896380.mount: Deactivated successfully. 
Jun 20 19:10:19.846712 containerd[1495]: time="2025-06-20T19:10:19.846637382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:19.849217 containerd[1495]: time="2025-06-20T19:10:19.848675139Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Jun 20 19:10:19.850384 containerd[1495]: time="2025-06-20T19:10:19.850337057Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:19.855369 containerd[1495]: time="2025-06-20T19:10:19.855330091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:19.857695 containerd[1495]: time="2025-06-20T19:10:19.857623128Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 529.268922ms" Jun 20 19:10:19.857796 containerd[1495]: time="2025-06-20T19:10:19.857692208Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jun 20 19:10:19.858710 containerd[1495]: time="2025-06-20T19:10:19.858464447Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jun 20 19:10:20.482197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3361919425.mount: Deactivated successfully. Jun 20 19:10:22.458450 containerd[1495]: time="2025-06-20T19:10:22.458345297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:22.460891 containerd[1495]: time="2025-06-20T19:10:22.460801619Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812537" Jun 20 19:10:22.463197 containerd[1495]: time="2025-06-20T19:10:22.462034880Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:22.470605 containerd[1495]: time="2025-06-20T19:10:22.470520905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:22.471381 containerd[1495]: time="2025-06-20T19:10:22.471329799Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.612829112s" Jun 20 19:10:22.471381 containerd[1495]: time="2025-06-20T19:10:22.471370560Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jun 20 19:10:27.452512 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. 
Jun 20 19:10:27.461746 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:10:27.590170 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:10:27.595575 (kubelet)[2309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:10:27.644374 kubelet[2309]: E0620 19:10:27.644328 2309 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:10:27.647502 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:10:27.647856 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:10:27.648501 systemd[1]: kubelet.service: Consumed 148ms CPU time, 106M memory peak. Jun 20 19:10:28.106447 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:10:28.106690 systemd[1]: kubelet.service: Consumed 148ms CPU time, 106M memory peak. Jun 20 19:10:28.119763 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:10:28.156019 systemd[1]: Reload requested from client PID 2324 ('systemctl') (unit session-7.scope)... Jun 20 19:10:28.156034 systemd[1]: Reloading... Jun 20 19:10:28.295196 zram_generator::config[2369]: No configuration found. Jun 20 19:10:28.413446 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:10:28.508027 systemd[1]: Reloading finished in 351 ms. Jun 20 19:10:28.556328 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:10:28.561601 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:10:28.566219 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 19:10:28.566460 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:10:28.566517 systemd[1]: kubelet.service: Consumed 104ms CPU time, 95M memory peak. Jun 20 19:10:28.575001 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:10:28.688355 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:10:28.701788 (kubelet)[2420]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:10:28.754323 kubelet[2420]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:10:28.756190 kubelet[2420]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 19:10:28.756190 kubelet[2420]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 20 19:10:28.756190 kubelet[2420]: I0620 19:10:28.754803 2420 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:10:29.744393 kubelet[2420]: I0620 19:10:29.743930 2420 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 20 19:10:29.744393 kubelet[2420]: I0620 19:10:29.743987 2420 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:10:29.744692 kubelet[2420]: I0620 19:10:29.744496 2420 server.go:954] "Client rotation is on, will bootstrap in background" Jun 20 19:10:29.779417 kubelet[2420]: E0620 19:10:29.779370 2420 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://49.12.190.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 49.12.190.100:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:10:29.781966 kubelet[2420]: I0620 19:10:29.781778 2420 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:10:29.792779 kubelet[2420]: E0620 19:10:29.792707 2420 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jun 20 19:10:29.792779 kubelet[2420]: I0620 19:10:29.792750 2420 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jun 20 19:10:29.795466 kubelet[2420]: I0620 19:10:29.795434 2420 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 20 19:10:29.795969 kubelet[2420]: I0620 19:10:29.795923 2420 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:10:29.796210 kubelet[2420]: I0620 19:10:29.795963 2420 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-0-5-45318d0d95","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:10:29.796318 kubelet[2420]: I0620 19:10:29.796282 2420 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:10:29.796318 kubelet[2420]: I0620 19:10:29.796295 2420 container_manager_linux.go:304] "Creating device plugin manager" Jun 20 19:10:29.796572 kubelet[2420]: I0620 19:10:29.796541 2420 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:10:29.802687 kubelet[2420]: I0620 19:10:29.802613 2420 kubelet.go:446] "Attempting to sync node with API server" Jun 20 19:10:29.803212 kubelet[2420]: I0620 19:10:29.802833 2420 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:10:29.803212 kubelet[2420]: I0620 19:10:29.802866 2420 kubelet.go:352] "Adding apiserver pod source" Jun 20 19:10:29.803212 kubelet[2420]: I0620 19:10:29.802878 2420 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:10:29.808327 kubelet[2420]: W0620 19:10:29.808257 2420 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://49.12.190.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-0-5-45318d0d95&limit=500&resourceVersion=0": dial tcp 49.12.190.100:6443: connect: connection refused Jun 20 19:10:29.808327 kubelet[2420]: E0620 19:10:29.808331 2420 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://49.12.190.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-0-5-45318d0d95&limit=500&resourceVersion=0\": dial tcp 49.12.190.100:6443: connect: connection refused" logger="UnhandledError"
Jun 20 19:10:29.808805 kubelet[2420]: W0620 19:10:29.808734 2420 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://49.12.190.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 49.12.190.100:6443: connect: connection refused Jun 20 19:10:29.808805 kubelet[2420]: E0620 19:10:29.808790 2420 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://49.12.190.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 49.12.190.100:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:10:29.809274 kubelet[2420]: I0620 19:10:29.809247 2420 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jun 20 19:10:29.810344 kubelet[2420]: I0620 19:10:29.809915 2420 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 19:10:29.810344 kubelet[2420]: W0620 19:10:29.810055 2420 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 20 19:10:29.812118 kubelet[2420]: I0620 19:10:29.812083 2420 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:10:29.812118 kubelet[2420]: I0620 19:10:29.812123 2420 server.go:1287] "Started kubelet" Jun 20 19:10:29.814822 kubelet[2420]: I0620 19:10:29.814780 2420 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:10:29.815845 kubelet[2420]: I0620 19:10:29.815823 2420 server.go:479] "Adding debug handlers to kubelet server" Jun 20 19:10:29.816406 kubelet[2420]: E0620 19:10:29.816087 2420 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://49.12.190.100:6443/api/v1/namespaces/default/events\": dial tcp 49.12.190.100:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-2-0-5-45318d0d95.184ad5ef803ae753 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-2-0-5-45318d0d95,UID:ci-4230-2-0-5-45318d0d95,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-2-0-5-45318d0d95,},FirstTimestamp:2025-06-20 19:10:29.812102995 +0000 UTC m=+1.104319305,LastTimestamp:2025-06-20 19:10:29.812102995 +0000 UTC m=+1.104319305,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-0-5-45318d0d95,}" Jun 20 19:10:29.817258 kubelet[2420]: I0620 19:10:29.815841 2420 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:10:29.817706 kubelet[2420]: I0620 19:10:29.817688 2420 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:10:29.819346 kubelet[2420]: I0620 19:10:29.819146 2420 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:10:29.820944 kubelet[2420]: I0620 19:10:29.820900 2420 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:10:29.825860 kubelet[2420]: E0620 19:10:29.824914 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-0-5-45318d0d95\" not found"
Jun 20 19:10:29.825860 kubelet[2420]: I0620 19:10:29.824964 2420 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:10:29.826092 kubelet[2420]: I0620 19:10:29.825876 2420 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:10:29.826092 kubelet[2420]: I0620 19:10:29.825977 2420 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:10:29.827206 kubelet[2420]: W0620 19:10:29.826910 2420 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://49.12.190.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.12.190.100:6443: connect: connection refused Jun 20 19:10:29.827206 kubelet[2420]: E0620 19:10:29.827177 2420 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://49.12.190.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 49.12.190.100:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:10:29.828729 kubelet[2420]: I0620 19:10:29.828675 2420 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:10:29.828834 kubelet[2420]: I0620 19:10:29.828795 2420 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:10:29.831237 kubelet[2420]: E0620 19:10:29.830559 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.12.190.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-0-5-45318d0d95?timeout=10s\": dial tcp 49.12.190.100:6443: connect: connection refused" interval="200ms" Jun 20 19:10:29.831237 kubelet[2420]: E0620 19:10:29.830731 2420 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:10:29.831237 kubelet[2420]: I0620 19:10:29.830879 2420 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:10:29.848502 kubelet[2420]: I0620 19:10:29.848447 2420 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 19:10:29.856619 kubelet[2420]: I0620 19:10:29.856584 2420 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 19:10:29.856789 kubelet[2420]: I0620 19:10:29.856775 2420 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 20 19:10:29.856853 kubelet[2420]: I0620 19:10:29.856843 2420 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jun 20 19:10:29.856910 kubelet[2420]: I0620 19:10:29.856901 2420 kubelet.go:2382] "Starting kubelet main sync loop" Jun 20 19:10:29.857019 kubelet[2420]: E0620 19:10:29.856993 2420 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:10:29.863344 kubelet[2420]: I0620 19:10:29.863309 2420 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:10:29.863344 kubelet[2420]: I0620 19:10:29.863330 2420 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:10:29.863344 kubelet[2420]: I0620 19:10:29.863351 2420 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:10:29.864294 kubelet[2420]: W0620 19:10:29.864128 2420 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://49.12.190.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.12.190.100:6443: connect: connection refused Jun 20 19:10:29.864294 kubelet[2420]: E0620 19:10:29.864209 2420 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://49.12.190.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 49.12.190.100:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:10:29.865638 kubelet[2420]: I0620 19:10:29.865593 2420 policy_none.go:49] "None policy: Start" Jun 20 19:10:29.865690 kubelet[2420]: I0620 19:10:29.865630 2420 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 19:10:29.865690 kubelet[2420]: I0620 19:10:29.865667 2420 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:10:29.874273 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 20 19:10:29.891319 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 20 19:10:29.896851 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 20 19:10:29.904483 kubelet[2420]: I0620 19:10:29.904376 2420 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:10:29.904700 kubelet[2420]: I0620 19:10:29.904628 2420 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:10:29.904778 kubelet[2420]: I0620 19:10:29.904698 2420 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:10:29.905411 kubelet[2420]: I0620 19:10:29.905032 2420 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:10:29.907883 kubelet[2420]: E0620 19:10:29.907842 2420 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 19:10:29.908073 kubelet[2420]: E0620 19:10:29.907991 2420 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-2-0-5-45318d0d95\" not found" Jun 20 19:10:29.972775 systemd[1]: Created slice kubepods-burstable-pod0fd15b6a02023c0a6e08d2f517c8567d.slice - libcontainer container kubepods-burstable-pod0fd15b6a02023c0a6e08d2f517c8567d.slice. 
Jun 20 19:10:29.987021 kubelet[2420]: E0620 19:10:29.986959 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-5-45318d0d95\" not found" node="ci-4230-2-0-5-45318d0d95" Jun 20 19:10:29.994519 systemd[1]: Created slice kubepods-burstable-podf875a00ed52eb7afe63526ee19738ebe.slice - libcontainer container kubepods-burstable-podf875a00ed52eb7afe63526ee19738ebe.slice. Jun 20 19:10:30.006631 kubelet[2420]: E0620 19:10:30.005713 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-5-45318d0d95\" not found" node="ci-4230-2-0-5-45318d0d95" Jun 20 19:10:30.007665 kubelet[2420]: I0620 19:10:30.007593 2420 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-5-45318d0d95" Jun 20 19:10:30.008290 kubelet[2420]: E0620 19:10:30.008235 2420 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.12.190.100:6443/api/v1/nodes\": dial tcp 49.12.190.100:6443: connect: connection refused" node="ci-4230-2-0-5-45318d0d95" Jun 20 19:10:30.011441 systemd[1]: Created slice kubepods-burstable-pod28d3ab29192e431f85c32d727ced7364.slice - libcontainer container kubepods-burstable-pod28d3ab29192e431f85c32d727ced7364.slice. Jun 20 19:10:30.013818 kubelet[2420]: E0620 19:10:30.013778 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-5-45318d0d95\" not found" node="ci-4230-2-0-5-45318d0d95" Jun 20 19:10:30.031775 kubelet[2420]: E0620 19:10:30.031674 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.12.190.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-0-5-45318d0d95?timeout=10s\": dial tcp 49.12.190.100:6443: connect: connection refused" interval="400ms" Jun 20 19:10:30.127967 kubelet[2420]: I0620 19:10:30.127468 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0fd15b6a02023c0a6e08d2f517c8567d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-0-5-45318d0d95\" (UID: \"0fd15b6a02023c0a6e08d2f517c8567d\") " pod="kube-system/kube-apiserver-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:30.127967 kubelet[2420]: I0620 19:10:30.127536 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f875a00ed52eb7afe63526ee19738ebe-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-0-5-45318d0d95\" (UID: \"f875a00ed52eb7afe63526ee19738ebe\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:30.127967 kubelet[2420]: I0620 19:10:30.127575 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f875a00ed52eb7afe63526ee19738ebe-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-0-5-45318d0d95\" (UID: \"f875a00ed52eb7afe63526ee19738ebe\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:30.127967 kubelet[2420]: I0620 19:10:30.127612 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f875a00ed52eb7afe63526ee19738ebe-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-0-5-45318d0d95\" (UID: \"f875a00ed52eb7afe63526ee19738ebe\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-5-45318d0d95"
Jun 20 19:10:30.127967 kubelet[2420]: I0620 19:10:30.127692 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/28d3ab29192e431f85c32d727ced7364-kubeconfig\") pod \"kube-scheduler-ci-4230-2-0-5-45318d0d95\" (UID: \"28d3ab29192e431f85c32d727ced7364\") " pod="kube-system/kube-scheduler-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:30.128422 kubelet[2420]: I0620 19:10:30.127746 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0fd15b6a02023c0a6e08d2f517c8567d-k8s-certs\") pod \"kube-apiserver-ci-4230-2-0-5-45318d0d95\" (UID: \"0fd15b6a02023c0a6e08d2f517c8567d\") " pod="kube-system/kube-apiserver-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:30.128422 kubelet[2420]: I0620 19:10:30.127776 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f875a00ed52eb7afe63526ee19738ebe-ca-certs\") pod \"kube-controller-manager-ci-4230-2-0-5-45318d0d95\" (UID: \"f875a00ed52eb7afe63526ee19738ebe\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:30.128422 kubelet[2420]: I0620 19:10:30.127806 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f875a00ed52eb7afe63526ee19738ebe-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-0-5-45318d0d95\" (UID: \"f875a00ed52eb7afe63526ee19738ebe\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:30.128422 kubelet[2420]: I0620 19:10:30.127835 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0fd15b6a02023c0a6e08d2f517c8567d-ca-certs\") pod \"kube-apiserver-ci-4230-2-0-5-45318d0d95\" (UID: \"0fd15b6a02023c0a6e08d2f517c8567d\") " pod="kube-system/kube-apiserver-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:30.211264 kubelet[2420]: I0620 19:10:30.211218 2420 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-5-45318d0d95" Jun 20 19:10:30.211757 kubelet[2420]: E0620 19:10:30.211710 2420 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.12.190.100:6443/api/v1/nodes\": dial tcp 49.12.190.100:6443: connect: connection refused" node="ci-4230-2-0-5-45318d0d95" Jun 20 19:10:30.290058 containerd[1495]: time="2025-06-20T19:10:30.289831884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-0-5-45318d0d95,Uid:0fd15b6a02023c0a6e08d2f517c8567d,Namespace:kube-system,Attempt:0,}" Jun 20 19:10:30.308202 containerd[1495]: time="2025-06-20T19:10:30.307910601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-0-5-45318d0d95,Uid:f875a00ed52eb7afe63526ee19738ebe,Namespace:kube-system,Attempt:0,}" Jun 20 19:10:30.316402 containerd[1495]: time="2025-06-20T19:10:30.316026347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-0-5-45318d0d95,Uid:28d3ab29192e431f85c32d727ced7364,Namespace:kube-system,Attempt:0,}" Jun 20 19:10:30.432894 kubelet[2420]: E0620 19:10:30.432803 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.12.190.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-0-5-45318d0d95?timeout=10s\": dial tcp 49.12.190.100:6443: connect: connection refused" interval="800ms"
Jun 20 19:10:30.617144 kubelet[2420]: I0620 19:10:30.616635 2420 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-5-45318d0d95" Jun 20 19:10:30.617296 kubelet[2420]: E0620 19:10:30.617151 2420 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.12.190.100:6443/api/v1/nodes\": dial tcp 49.12.190.100:6443: connect: connection refused" node="ci-4230-2-0-5-45318d0d95" Jun 20 19:10:30.811557 kubelet[2420]: W0620 19:10:30.811446 2420 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://49.12.190.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.12.190.100:6443: connect: connection refused Jun 20 19:10:30.811557 kubelet[2420]: E0620 19:10:30.811502 2420 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://49.12.190.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 49.12.190.100:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:10:30.829470 kubelet[2420]: W0620 19:10:30.829343 2420 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://49.12.190.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.12.190.100:6443: connect: connection refused Jun 20 19:10:30.829470 kubelet[2420]: E0620 19:10:30.829431 2420 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://49.12.190.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 49.12.190.100:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:10:30.853597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount339501674.mount: Deactivated successfully.
Jun 20 19:10:30.865131 containerd[1495]: time="2025-06-20T19:10:30.865030123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:10:30.868289 containerd[1495]: time="2025-06-20T19:10:30.867985282Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:10:30.869934 containerd[1495]: time="2025-06-20T19:10:30.869865906Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 20 19:10:30.871001 containerd[1495]: time="2025-06-20T19:10:30.870949761Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:10:30.872019 containerd[1495]: time="2025-06-20T19:10:30.871950974Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 20 19:10:30.873508 containerd[1495]: time="2025-06-20T19:10:30.873456713Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:10:30.874070 containerd[1495]: time="2025-06-20T19:10:30.874006361Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jun 20 19:10:30.878024 containerd[1495]: time="2025-06-20T19:10:30.877696489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:10:30.881326 containerd[1495]: time="2025-06-20T19:10:30.879967679Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 563.83989ms" Jun 20 19:10:30.882082 containerd[1495]: time="2025-06-20T19:10:30.882035306Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 574.021943ms" Jun 20 19:10:30.887418 containerd[1495]: time="2025-06-20T19:10:30.887350055Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 597.354249ms" Jun 20 19:10:30.958357 kubelet[2420]: W0620 19:10:30.958283 2420 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://49.12.190.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-0-5-45318d0d95&limit=500&resourceVersion=0": dial tcp 49.12.190.100:6443: connect: connection refused 
Jun 20 19:10:30.958357 kubelet[2420]: E0620 19:10:30.958351 2420 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://49.12.190.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-0-5-45318d0d95&limit=500&resourceVersion=0\": dial tcp 49.12.190.100:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:10:31.017906 containerd[1495]: time="2025-06-20T19:10:31.017562310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:10:31.017906 containerd[1495]: time="2025-06-20T19:10:31.017649311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:10:31.017906 containerd[1495]: time="2025-06-20T19:10:31.017665551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:10:31.017906 containerd[1495]: time="2025-06-20T19:10:31.017751912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:10:31.021619 containerd[1495]: time="2025-06-20T19:10:31.021377678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:10:31.022663 containerd[1495]: time="2025-06-20T19:10:31.022484492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:10:31.022663 containerd[1495]: time="2025-06-20T19:10:31.022506893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:10:31.022663 containerd[1495]: time="2025-06-20T19:10:31.022590574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:10:31.024249 containerd[1495]: time="2025-06-20T19:10:31.024162753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:10:31.024408 containerd[1495]: time="2025-06-20T19:10:31.024231234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:10:31.024408 containerd[1495]: time="2025-06-20T19:10:31.024243714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:10:31.024408 containerd[1495]: time="2025-06-20T19:10:31.024316955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:10:31.049374 systemd[1]: Started cri-containerd-0e754706684a7a7d03900a95c20409023bbb8b8f8772915442c0271c4f44d060.scope - libcontainer container 0e754706684a7a7d03900a95c20409023bbb8b8f8772915442c0271c4f44d060. Jun 20 19:10:31.051741 systemd[1]: Started cri-containerd-4d72bc8f0ee732c0f1816f0ccf56cfeb36a5c2dffe1cb9cdf7510bc25de56ba1.scope - libcontainer container 4d72bc8f0ee732c0f1816f0ccf56cfeb36a5c2dffe1cb9cdf7510bc25de56ba1. 
Jun 20 19:10:31.061514 systemd[1]: Started cri-containerd-d4fc2fd0ee46b2dce1e67d1509a8ef8dd16cec5b3f4419e78b04ea1760b935b7.scope - libcontainer container d4fc2fd0ee46b2dce1e67d1509a8ef8dd16cec5b3f4419e78b04ea1760b935b7. Jun 20 19:10:31.109189 kubelet[2420]: W0620 19:10:31.107586 2420 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://49.12.190.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 49.12.190.100:6443: connect: connection refused Jun 20 19:10:31.109189 kubelet[2420]: E0620 19:10:31.108186 2420 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://49.12.190.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 49.12.190.100:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:10:31.127572 containerd[1495]: time="2025-06-20T19:10:31.126978933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-0-5-45318d0d95,Uid:f875a00ed52eb7afe63526ee19738ebe,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4fc2fd0ee46b2dce1e67d1509a8ef8dd16cec5b3f4419e78b04ea1760b935b7\"" Jun 20 19:10:31.130005 containerd[1495]: time="2025-06-20T19:10:31.129971971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-0-5-45318d0d95,Uid:0fd15b6a02023c0a6e08d2f517c8567d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e754706684a7a7d03900a95c20409023bbb8b8f8772915442c0271c4f44d060\"" Jun 20 19:10:31.132427 containerd[1495]: time="2025-06-20T19:10:31.132383161Z" level=info msg="CreateContainer within sandbox \"d4fc2fd0ee46b2dce1e67d1509a8ef8dd16cec5b3f4419e78b04ea1760b935b7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 19:10:31.132784 containerd[1495]: time="2025-06-20T19:10:31.132755846Z" level=info msg="CreateContainer within sandbox \"0e754706684a7a7d03900a95c20409023bbb8b8f8772915442c0271c4f44d060\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 19:10:31.138800 containerd[1495]: time="2025-06-20T19:10:31.138634600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-0-5-45318d0d95,Uid:28d3ab29192e431f85c32d727ced7364,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d72bc8f0ee732c0f1816f0ccf56cfeb36a5c2dffe1cb9cdf7510bc25de56ba1\"" Jun 20 19:10:31.142352 containerd[1495]: time="2025-06-20T19:10:31.142271846Z" level=info msg="CreateContainer within sandbox \"4d72bc8f0ee732c0f1816f0ccf56cfeb36a5c2dffe1cb9cdf7510bc25de56ba1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 19:10:31.152921 containerd[1495]: time="2025-06-20T19:10:31.152691458Z" level=info msg="CreateContainer within sandbox \"d4fc2fd0ee46b2dce1e67d1509a8ef8dd16cec5b3f4419e78b04ea1760b935b7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dfa0471fc07a36796181f4c6761242f1bdfb8830ebd16a55269b3d25f4df7815\"" Jun 20 19:10:31.153562 containerd[1495]: time="2025-06-20T19:10:31.153521548Z" level=info msg="StartContainer for \"dfa0471fc07a36796181f4c6761242f1bdfb8830ebd16a55269b3d25f4df7815\"" Jun 20 19:10:31.160871 containerd[1495]: time="2025-06-20T19:10:31.160681639Z" level=info msg="CreateContainer within sandbox \"0e754706684a7a7d03900a95c20409023bbb8b8f8772915442c0271c4f44d060\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} 
returns container id \"7eac992f443934abd634563bc2c0cccf902795e822b729dab3cc9f472ab08070\"" Jun 20 19:10:31.161927 containerd[1495]: time="2025-06-20T19:10:31.161873094Z" level=info msg="StartContainer for \"7eac992f443934abd634563bc2c0cccf902795e822b729dab3cc9f472ab08070\"" Jun 20 19:10:31.172028 containerd[1495]: time="2025-06-20T19:10:31.171910901Z" level=info msg="CreateContainer within sandbox \"4d72bc8f0ee732c0f1816f0ccf56cfeb36a5c2dffe1cb9cdf7510bc25de56ba1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4ba3d36773ad0c9d61c8b828a4f7ccac61fdbea2b5ecba7dbcf61727491cfec7\"" Jun 20 19:10:31.173187 containerd[1495]: time="2025-06-20T19:10:31.172479468Z" level=info msg="StartContainer for \"4ba3d36773ad0c9d61c8b828a4f7ccac61fdbea2b5ecba7dbcf61727491cfec7\"" Jun 20 19:10:31.189375 systemd[1]: Started cri-containerd-dfa0471fc07a36796181f4c6761242f1bdfb8830ebd16a55269b3d25f4df7815.scope - libcontainer container dfa0471fc07a36796181f4c6761242f1bdfb8830ebd16a55269b3d25f4df7815. Jun 20 19:10:31.210368 systemd[1]: Started cri-containerd-7eac992f443934abd634563bc2c0cccf902795e822b729dab3cc9f472ab08070.scope - libcontainer container 7eac992f443934abd634563bc2c0cccf902795e822b729dab3cc9f472ab08070. Jun 20 19:10:31.215059 systemd[1]: Started cri-containerd-4ba3d36773ad0c9d61c8b828a4f7ccac61fdbea2b5ecba7dbcf61727491cfec7.scope - libcontainer container 4ba3d36773ad0c9d61c8b828a4f7ccac61fdbea2b5ecba7dbcf61727491cfec7. Jun 20 19:10:31.233429 kubelet[2420]: E0620 19:10:31.233380 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.12.190.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-0-5-45318d0d95?timeout=10s\": dial tcp 49.12.190.100:6443: connect: connection refused" interval="1.6s" Jun 20 19:10:31.256733 containerd[1495]: time="2025-06-20T19:10:31.256435969Z" level=info msg="StartContainer for \"7eac992f443934abd634563bc2c0cccf902795e822b729dab3cc9f472ab08070\" returns successfully" Jun 20 19:10:31.285501 containerd[1495]: time="2025-06-20T19:10:31.285408815Z" level=info msg="StartContainer for \"dfa0471fc07a36796181f4c6761242f1bdfb8830ebd16a55269b3d25f4df7815\" returns successfully" Jun 20 19:10:31.301597 containerd[1495]: time="2025-06-20T19:10:31.301496259Z" level=info msg="StartContainer for \"4ba3d36773ad0c9d61c8b828a4f7ccac61fdbea2b5ecba7dbcf61727491cfec7\" returns successfully" Jun 20 19:10:31.319640 kubelet[2420]: E0620 19:10:31.319463 2420 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://49.12.190.100:6443/api/v1/namespaces/default/events\": dial tcp 49.12.190.100:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-2-0-5-45318d0d95.184ad5ef803ae753 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-2-0-5-45318d0d95,UID:ci-4230-2-0-5-45318d0d95,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-2-0-5-45318d0d95,},FirstTimestamp:2025-06-20 19:10:29.812102995 +0000 UTC m=+1.104319305,LastTimestamp:2025-06-20 19:10:29.812102995 +0000 UTC m=+1.104319305,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-0-5-45318d0d95,}" Jun 20 19:10:31.420260 kubelet[2420]: I0620 19:10:31.419867 2420 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-5-45318d0d95" Jun 20 19:10:31.876124 
kubelet[2420]: E0620 19:10:31.875569 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-5-45318d0d95\" not found" node="ci-4230-2-0-5-45318d0d95" Jun 20 19:10:31.878801 kubelet[2420]: E0620 19:10:31.878302 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-5-45318d0d95\" not found" node="ci-4230-2-0-5-45318d0d95" Jun 20 19:10:31.881688 kubelet[2420]: E0620 19:10:31.881639 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-5-45318d0d95\" not found" node="ci-4230-2-0-5-45318d0d95" Jun 20 19:10:32.884168 kubelet[2420]: E0620 19:10:32.882759 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-5-45318d0d95\" not found" node="ci-4230-2-0-5-45318d0d95" Jun 20 19:10:32.885112 kubelet[2420]: E0620 19:10:32.884850 2420 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-5-45318d0d95\" not found" node="ci-4230-2-0-5-45318d0d95" Jun 20 19:10:33.399648 kubelet[2420]: E0620 19:10:33.399609 2420 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-2-0-5-45318d0d95\" not found" node="ci-4230-2-0-5-45318d0d95" Jun 20 19:10:33.434232 kubelet[2420]: I0620 19:10:33.434014 2420 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-0-5-45318d0d95" Jun 20 19:10:33.434232 kubelet[2420]: E0620 19:10:33.434053 2420 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4230-2-0-5-45318d0d95\": node \"ci-4230-2-0-5-45318d0d95\" not found" Jun 20 19:10:33.512563 kubelet[2420]: E0620 19:10:33.512529 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-0-5-45318d0d95\" not found" Jun 20 19:10:33.530841 kubelet[2420]: I0620 19:10:33.530530 2420 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:33.582503 kubelet[2420]: E0620 19:10:33.582273 2420 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-2-0-5-45318d0d95\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:33.582503 kubelet[2420]: I0620 19:10:33.582307 2420 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:33.591624 kubelet[2420]: E0620 19:10:33.591390 2420 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-0-5-45318d0d95\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:33.591624 kubelet[2420]: I0620 19:10:33.591429 2420 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:33.597722 kubelet[2420]: E0620 19:10:33.597626 2420 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-0-5-45318d0d95\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:33.812090 kubelet[2420]: I0620 19:10:33.811821 2420 apiserver.go:52] 
"Watching apiserver" Jun 20 19:10:33.826634 kubelet[2420]: I0620 19:10:33.826606 2420 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 19:10:33.884064 kubelet[2420]: I0620 19:10:33.883785 2420 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:33.888612 kubelet[2420]: E0620 19:10:33.888390 2420 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-0-5-45318d0d95\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:35.653968 systemd[1]: Reload requested from client PID 2697 ('systemctl') (unit session-7.scope)... Jun 20 19:10:35.653993 systemd[1]: Reloading... Jun 20 19:10:35.797226 zram_generator::config[2745]: No configuration found. Jun 20 19:10:35.901091 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:10:36.011773 systemd[1]: Reloading finished in 357 ms. Jun 20 19:10:36.044358 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:10:36.060701 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 19:10:36.062237 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:10:36.062324 systemd[1]: kubelet.service: Consumed 1.536s CPU time, 131.3M memory peak. Jun 20 19:10:36.073694 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:10:36.234571 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:10:36.235893 (kubelet)[2787]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:10:36.304552 kubelet[2787]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:10:36.304552 kubelet[2787]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 19:10:36.304552 kubelet[2787]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:10:36.305022 kubelet[2787]: I0620 19:10:36.304934 2787 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:10:36.320628 kubelet[2787]: I0620 19:10:36.320571 2787 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 20 19:10:36.320628 kubelet[2787]: I0620 19:10:36.320610 2787 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:10:36.321069 kubelet[2787]: I0620 19:10:36.321032 2787 server.go:954] "Client rotation is on, will bootstrap in background" Jun 20 19:10:36.322511 kubelet[2787]: I0620 19:10:36.322460 2787 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jun 20 19:10:36.325458 kubelet[2787]: I0620 19:10:36.325394 2787 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:10:36.329368 kubelet[2787]: E0620 19:10:36.329320 2787 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jun 20 19:10:36.329368 kubelet[2787]: I0620 19:10:36.329359 2787 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jun 20 19:10:36.331872 kubelet[2787]: I0620 19:10:36.331834 2787 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 20 19:10:36.332071 kubelet[2787]: I0620 19:10:36.332032 2787 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:10:36.332282 kubelet[2787]: I0620 19:10:36.332066 2787 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-0-5-45318d0d95","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:10:36.332416 kubelet[2787]: I0620 19:10:36.332287 2787 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:10:36.332416 kubelet[2787]: I0620 19:10:36.332298 2787 container_manager_linux.go:304] "Creating device plugin manager" Jun 20 19:10:36.332416 kubelet[2787]: I0620 19:10:36.332342 2787 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:10:36.332513 kubelet[2787]: I0620 19:10:36.332494 2787 kubelet.go:446] "Attempting to sync node with API server" Jun 20 19:10:36.332513 kubelet[2787]: I0620 19:10:36.332508 2787 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:10:36.332564 kubelet[2787]: I0620 19:10:36.332525 2787 kubelet.go:352] "Adding apiserver pod source" Jun 20 19:10:36.339987 kubelet[2787]: I0620 19:10:36.332535 2787 apiserver.go:42] "Waiting for node sync 
before watching apiserver pods" Jun 20 19:10:36.340812 kubelet[2787]: I0620 19:10:36.340654 2787 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jun 20 19:10:36.341566 kubelet[2787]: I0620 19:10:36.341307 2787 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 19:10:36.342886 kubelet[2787]: I0620 19:10:36.342723 2787 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:10:36.342886 kubelet[2787]: I0620 19:10:36.342771 2787 server.go:1287] "Started kubelet" Jun 20 19:10:36.345182 kubelet[2787]: I0620 19:10:36.344803 2787 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:10:36.349938 kubelet[2787]: I0620 19:10:36.349882 2787 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:10:36.352349 kubelet[2787]: I0620 19:10:36.351447 2787 server.go:479] "Adding debug handlers to kubelet server" Jun 20 19:10:36.352589 kubelet[2787]: I0620 19:10:36.352521 2787 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:10:36.352805 kubelet[2787]: I0620 19:10:36.352784 2787 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:10:36.353393 kubelet[2787]: I0620 19:10:36.353367 2787 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:10:36.354011 kubelet[2787]: I0620 19:10:36.353980 2787 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:10:36.354296 kubelet[2787]: E0620 19:10:36.354266 2787 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-0-5-45318d0d95\" not found" Jun 20 19:10:36.357565 kubelet[2787]: I0620 19:10:36.357513 2787 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:10:36.357761 kubelet[2787]: I0620 19:10:36.357738 2787 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:10:36.359894 kubelet[2787]: I0620 19:10:36.359807 2787 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 19:10:36.361065 kubelet[2787]: I0620 19:10:36.361020 2787 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 19:10:36.361065 kubelet[2787]: I0620 19:10:36.361058 2787 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 20 19:10:36.361218 kubelet[2787]: I0620 19:10:36.361081 2787 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jun 20 19:10:36.361218 kubelet[2787]: I0620 19:10:36.361087 2787 kubelet.go:2382] "Starting kubelet main sync loop" Jun 20 19:10:36.361279 kubelet[2787]: E0620 19:10:36.361136 2787 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:10:36.381567 kubelet[2787]: I0620 19:10:36.381526 2787 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:10:36.381883 kubelet[2787]: I0620 19:10:36.381859 2787 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:10:36.391759 kubelet[2787]: I0620 19:10:36.390527 2787 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:10:36.454222 kubelet[2787]: I0620 19:10:36.454195 2787 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:10:36.454489 kubelet[2787]: I0620 19:10:36.454471 2787 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:10:36.454656 kubelet[2787]: I0620 19:10:36.454639 2787 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:10:36.455011 kubelet[2787]: I0620 19:10:36.454988 2787 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 19:10:36.455103 kubelet[2787]: I0620 19:10:36.455076 2787 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 19:10:36.455246 kubelet[2787]: I0620 19:10:36.455236 2787 policy_none.go:49] "None policy: Start" Jun 20 19:10:36.455328 kubelet[2787]: I0620 19:10:36.455315 2787 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 19:10:36.455407 kubelet[2787]: I0620 19:10:36.455398 2787 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:10:36.455615 kubelet[2787]: I0620 19:10:36.455600 2787 state_mem.go:75] "Updated machine memory state" Jun 20 19:10:36.461987 kubelet[2787]: E0620 19:10:36.461938 2787 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 20 19:10:36.462417 kubelet[2787]: I0620 19:10:36.462398 2787 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:10:36.462785 kubelet[2787]: I0620 19:10:36.462760 2787 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:10:36.462963 kubelet[2787]: I0620 19:10:36.462908 2787 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:10:36.463863 kubelet[2787]: I0620 19:10:36.463834 2787 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:10:36.466448 kubelet[2787]: E0620 19:10:36.466409 2787 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jun 20 19:10:36.578114 kubelet[2787]: I0620 19:10:36.578003 2787 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-5-45318d0d95" Jun 20 19:10:36.592863 kubelet[2787]: I0620 19:10:36.592196 2787 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230-2-0-5-45318d0d95" Jun 20 19:10:36.593006 kubelet[2787]: I0620 19:10:36.592932 2787 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-0-5-45318d0d95" Jun 20 19:10:36.656694 sudo[2819]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 20 19:10:36.657005 sudo[2819]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jun 20 19:10:36.665934 kubelet[2787]: I0620 19:10:36.663018 2787 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:36.665934 kubelet[2787]: I0620 19:10:36.663530 2787 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:36.665934 kubelet[2787]: I0620 19:10:36.663892 2787 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:36.760041 kubelet[2787]: I0620 19:10:36.759848 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0fd15b6a02023c0a6e08d2f517c8567d-ca-certs\") pod \"kube-apiserver-ci-4230-2-0-5-45318d0d95\" (UID: \"0fd15b6a02023c0a6e08d2f517c8567d\") " pod="kube-system/kube-apiserver-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:36.760852 kubelet[2787]: I0620 19:10:36.760417 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0fd15b6a02023c0a6e08d2f517c8567d-k8s-certs\") pod \"kube-apiserver-ci-4230-2-0-5-45318d0d95\" (UID: \"0fd15b6a02023c0a6e08d2f517c8567d\") " pod="kube-system/kube-apiserver-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:36.761263 kubelet[2787]: I0620 19:10:36.761217 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/28d3ab29192e431f85c32d727ced7364-kubeconfig\") pod \"kube-scheduler-ci-4230-2-0-5-45318d0d95\" (UID: \"28d3ab29192e431f85c32d727ced7364\") " pod="kube-system/kube-scheduler-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:36.761617 kubelet[2787]: I0620 19:10:36.761550 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f875a00ed52eb7afe63526ee19738ebe-ca-certs\") pod \"kube-controller-manager-ci-4230-2-0-5-45318d0d95\" (UID: \"f875a00ed52eb7afe63526ee19738ebe\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:36.761982 kubelet[2787]: I0620 19:10:36.761868 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f875a00ed52eb7afe63526ee19738ebe-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-0-5-45318d0d95\" (UID: \"f875a00ed52eb7afe63526ee19738ebe\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:36.762599 kubelet[2787]: I0620 19:10:36.762452 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f875a00ed52eb7afe63526ee19738ebe-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-0-5-45318d0d95\" (UID: \"f875a00ed52eb7afe63526ee19738ebe\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:36.763018 kubelet[2787]: I0620 19:10:36.762909 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f875a00ed52eb7afe63526ee19738ebe-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-0-5-45318d0d95\" (UID: \"f875a00ed52eb7afe63526ee19738ebe\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:36.763018 kubelet[2787]: I0620 19:10:36.762941 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f875a00ed52eb7afe63526ee19738ebe-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-0-5-45318d0d95\" (UID: \"f875a00ed52eb7afe63526ee19738ebe\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:36.763018 kubelet[2787]: I0620 19:10:36.762987 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0fd15b6a02023c0a6e08d2f517c8567d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-0-5-45318d0d95\" (UID: \"0fd15b6a02023c0a6e08d2f517c8567d\") " pod="kube-system/kube-apiserver-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:37.164749 sudo[2819]: pam_unix(sudo:session): session closed for user root Jun 20 19:10:37.338612 kubelet[2787]: I0620 19:10:37.338534 2787 apiserver.go:52] "Watching apiserver" Jun 20 19:10:37.358350 kubelet[2787]: I0620 19:10:37.358278 2787 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 19:10:37.431347 kubelet[2787]: I0620 19:10:37.429527 2787 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:37.431347 kubelet[2787]: I0620 19:10:37.429610 2787 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:37.445786 kubelet[2787]: E0620 19:10:37.445722 2787 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-2-0-5-45318d0d95\" already exists" pod="kube-system/kube-controller-manager-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:37.448209 kubelet[2787]: E0620 19:10:37.447340 2787 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-0-5-45318d0d95\" already exists" pod="kube-system/kube-apiserver-ci-4230-2-0-5-45318d0d95" Jun 20 19:10:37.469522 kubelet[2787]: I0620 19:10:37.469203 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-2-0-5-45318d0d95" podStartSLOduration=1.469144488 podStartE2EDuration="1.469144488s" podCreationTimestamp="2025-06-20 19:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:10:37.468598922 +0000 UTC m=+1.227048605" watchObservedRunningTime="2025-06-20 19:10:37.469144488 +0000 UTC m=+1.227594131" Jun 20 19:10:37.486076 kubelet[2787]: I0620 19:10:37.486009 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-ci-4230-2-0-5-45318d0d95" podStartSLOduration=1.485988662 podStartE2EDuration="1.485988662s" podCreationTimestamp="2025-06-20 19:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:10:37.484288044 +0000 UTC m=+1.242737767" watchObservedRunningTime="2025-06-20 19:10:37.485988662 +0000 UTC m=+1.244438305" Jun 20 19:10:37.519858 kubelet[2787]: I0620 19:10:37.519793 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-2-0-5-45318d0d95" podStartSLOduration=1.51977173 podStartE2EDuration="1.51977173s" podCreationTimestamp="2025-06-20 19:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:10:37.50031417 +0000 UTC m=+1.258763813" watchObservedRunningTime="2025-06-20 19:10:37.51977173 +0000 UTC m=+1.278221333" Jun 20 19:10:39.622391 sudo[1875]: pam_unix(sudo:session): session closed for user root Jun 20 19:10:39.783805 sshd[1874]: Connection closed by 147.75.109.163 port 35138 Jun 20 19:10:39.783677 sshd-session[1872]: pam_unix(sshd:session): session closed for user core Jun 20 19:10:39.790081 systemd[1]: sshd@6-49.12.190.100:22-147.75.109.163:35138.service: Deactivated successfully. Jun 20 19:10:39.790237 systemd-logind[1475]: Session 7 logged out. Waiting for processes to exit. Jun 20 19:10:39.792914 systemd[1]: session-7.scope: Deactivated successfully. Jun 20 19:10:39.793358 systemd[1]: session-7.scope: Consumed 8.341s CPU time, 264.5M memory peak. Jun 20 19:10:39.794965 systemd-logind[1475]: Removed session 7. Jun 20 19:10:41.118667 kubelet[2787]: I0620 19:10:41.118501 2787 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 19:10:41.119580 containerd[1495]: time="2025-06-20T19:10:41.119458855Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 20 19:10:41.120204 kubelet[2787]: I0620 19:10:41.120164 2787 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 19:10:42.024669 systemd[1]: Created slice kubepods-besteffort-pod2fa353ad_20a5_4f5a_a251_ebde6dfd7b2f.slice - libcontainer container kubepods-besteffort-pod2fa353ad_20a5_4f5a_a251_ebde6dfd7b2f.slice. Jun 20 19:10:42.048536 systemd[1]: Created slice kubepods-burstable-pod24cc1d18_459b_43ce_9888_c4a1d2f80337.slice - libcontainer container kubepods-burstable-pod24cc1d18_459b_43ce_9888_c4a1d2f80337.slice. 
Jun 20 19:10:42.098098 kubelet[2787]: I0620 19:10:42.098045 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-etc-cni-netd\") pod \"cilium-rm6bh\" (UID: \"24cc1d18-459b-43ce-9888-c4a1d2f80337\") " pod="kube-system/cilium-rm6bh" Jun 20 19:10:42.098098 kubelet[2787]: I0620 19:10:42.098097 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2fa353ad-20a5-4f5a-a251-ebde6dfd7b2f-kube-proxy\") pod \"kube-proxy-5wf4x\" (UID: \"2fa353ad-20a5-4f5a-a251-ebde6dfd7b2f\") " pod="kube-system/kube-proxy-5wf4x" Jun 20 19:10:42.098538 kubelet[2787]: I0620 19:10:42.098124 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-hostproc\") pod \"cilium-rm6bh\" (UID: \"24cc1d18-459b-43ce-9888-c4a1d2f80337\") " pod="kube-system/cilium-rm6bh" Jun 20 19:10:42.098538 kubelet[2787]: I0620 19:10:42.098169 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-cilium-cgroup\") pod \"cilium-rm6bh\" (UID: \"24cc1d18-459b-43ce-9888-c4a1d2f80337\") " pod="kube-system/cilium-rm6bh" Jun 20 19:10:42.098538 kubelet[2787]: I0620 19:10:42.098193 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tfp6\" (UniqueName: \"kubernetes.io/projected/24cc1d18-459b-43ce-9888-c4a1d2f80337-kube-api-access-4tfp6\") pod \"cilium-rm6bh\" (UID: \"24cc1d18-459b-43ce-9888-c4a1d2f80337\") " pod="kube-system/cilium-rm6bh" Jun 20 19:10:42.098538 kubelet[2787]: I0620 19:10:42.098300 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-cni-path\") pod \"cilium-rm6bh\" (UID: \"24cc1d18-459b-43ce-9888-c4a1d2f80337\") " pod="kube-system/cilium-rm6bh" Jun 20 19:10:42.098538 kubelet[2787]: I0620 19:10:42.098344 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-lib-modules\") pod \"cilium-rm6bh\" (UID: \"24cc1d18-459b-43ce-9888-c4a1d2f80337\") " pod="kube-system/cilium-rm6bh" Jun 20 19:10:42.098538 kubelet[2787]: I0620 19:10:42.098384 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-xtables-lock\") pod \"cilium-rm6bh\" (UID: \"24cc1d18-459b-43ce-9888-c4a1d2f80337\") " pod="kube-system/cilium-rm6bh" Jun 20 19:10:42.098835 kubelet[2787]: I0620 19:10:42.098409 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/24cc1d18-459b-43ce-9888-c4a1d2f80337-clustermesh-secrets\") pod \"cilium-rm6bh\" (UID: \"24cc1d18-459b-43ce-9888-c4a1d2f80337\") " pod="kube-system/cilium-rm6bh" Jun 20 19:10:42.098835 kubelet[2787]: I0620 19:10:42.098449 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/2fa353ad-20a5-4f5a-a251-ebde6dfd7b2f-xtables-lock\") pod \"kube-proxy-5wf4x\" (UID: \"2fa353ad-20a5-4f5a-a251-ebde6dfd7b2f\") " pod="kube-system/kube-proxy-5wf4x" Jun 20 19:10:42.098835 kubelet[2787]: I0620 19:10:42.098491 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2fa353ad-20a5-4f5a-a251-ebde6dfd7b2f-lib-modules\") pod \"kube-proxy-5wf4x\" (UID: \"2fa353ad-20a5-4f5a-a251-ebde6dfd7b2f\") " pod="kube-system/kube-proxy-5wf4x" Jun 20 19:10:42.098835 kubelet[2787]: I0620 19:10:42.098516 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-host-proc-sys-kernel\") pod \"cilium-rm6bh\" (UID: \"24cc1d18-459b-43ce-9888-c4a1d2f80337\") " pod="kube-system/cilium-rm6bh" Jun 20 19:10:42.098835 kubelet[2787]: I0620 19:10:42.098539 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq5tw\" (UniqueName: \"kubernetes.io/projected/2fa353ad-20a5-4f5a-a251-ebde6dfd7b2f-kube-api-access-dq5tw\") pod \"kube-proxy-5wf4x\" (UID: \"2fa353ad-20a5-4f5a-a251-ebde6dfd7b2f\") " pod="kube-system/kube-proxy-5wf4x" Jun 20 19:10:42.099041 kubelet[2787]: I0620 19:10:42.098603 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-cilium-run\") pod \"cilium-rm6bh\" (UID: \"24cc1d18-459b-43ce-9888-c4a1d2f80337\") " pod="kube-system/cilium-rm6bh" Jun 20 19:10:42.099041 kubelet[2787]: I0620 19:10:42.098627 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-bpf-maps\") pod \"cilium-rm6bh\" (UID: \"24cc1d18-459b-43ce-9888-c4a1d2f80337\") " pod="kube-system/cilium-rm6bh" Jun 20 19:10:42.099041 kubelet[2787]: I0620 19:10:42.098649 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-host-proc-sys-net\") pod \"cilium-rm6bh\" (UID: \"24cc1d18-459b-43ce-9888-c4a1d2f80337\") " pod="kube-system/cilium-rm6bh" Jun 20 19:10:42.099041 kubelet[2787]: I0620 19:10:42.098682 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/24cc1d18-459b-43ce-9888-c4a1d2f80337-cilium-config-path\") pod \"cilium-rm6bh\" (UID: \"24cc1d18-459b-43ce-9888-c4a1d2f80337\") " pod="kube-system/cilium-rm6bh" Jun 20 19:10:42.099041 kubelet[2787]: I0620 19:10:42.098729 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/24cc1d18-459b-43ce-9888-c4a1d2f80337-hubble-tls\") pod \"cilium-rm6bh\" (UID: \"24cc1d18-459b-43ce-9888-c4a1d2f80337\") " pod="kube-system/cilium-rm6bh" Jun 20 19:10:42.266870 systemd[1]: Created slice kubepods-besteffort-podd9e78bce_0512_4be0_94c5_d8a7f9d382a9.slice - libcontainer container kubepods-besteffort-podd9e78bce_0512_4be0_94c5_d8a7f9d382a9.slice. 
Jun 20 19:10:42.300710 kubelet[2787]: I0620 19:10:42.300058 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9e78bce-0512-4be0-94c5-d8a7f9d382a9-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-bqh7c\" (UID: \"d9e78bce-0512-4be0-94c5-d8a7f9d382a9\") " pod="kube-system/cilium-operator-6c4d7847fc-bqh7c" Jun 20 19:10:42.300710 kubelet[2787]: I0620 19:10:42.300624 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjzmb\" (UniqueName: \"kubernetes.io/projected/d9e78bce-0512-4be0-94c5-d8a7f9d382a9-kube-api-access-cjzmb\") pod \"cilium-operator-6c4d7847fc-bqh7c\" (UID: \"d9e78bce-0512-4be0-94c5-d8a7f9d382a9\") " pod="kube-system/cilium-operator-6c4d7847fc-bqh7c" Jun 20 19:10:42.332300 containerd[1495]: time="2025-06-20T19:10:42.332133654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5wf4x,Uid:2fa353ad-20a5-4f5a-a251-ebde6dfd7b2f,Namespace:kube-system,Attempt:0,}" Jun 20 19:10:42.357202 containerd[1495]: time="2025-06-20T19:10:42.356283944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rm6bh,Uid:24cc1d18-459b-43ce-9888-c4a1d2f80337,Namespace:kube-system,Attempt:0,}" Jun 20 19:10:42.362900 containerd[1495]: time="2025-06-20T19:10:42.362773960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:10:42.362900 containerd[1495]: time="2025-06-20T19:10:42.362832081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:10:42.362900 containerd[1495]: time="2025-06-20T19:10:42.362844201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:10:42.363710 containerd[1495]: time="2025-06-20T19:10:42.363371045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:10:42.393339 systemd[1]: Started cri-containerd-7c68cd0238e53f6455d92d26c25716c8fdc177002fc5b49809b7c635fd198576.scope - libcontainer container 7c68cd0238e53f6455d92d26c25716c8fdc177002fc5b49809b7c635fd198576. Jun 20 19:10:42.400068 containerd[1495]: time="2025-06-20T19:10:42.399331958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:10:42.400068 containerd[1495]: time="2025-06-20T19:10:42.399405998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:10:42.400068 containerd[1495]: time="2025-06-20T19:10:42.399418799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:10:42.403575 containerd[1495]: time="2025-06-20T19:10:42.400969092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:10:42.434543 systemd[1]: Started cri-containerd-3631dea6ebee25773296ddf4e1c54f2eac3513f407fe7cc5f6c83f59a2311b76.scope - libcontainer container 3631dea6ebee25773296ddf4e1c54f2eac3513f407fe7cc5f6c83f59a2311b76. 
Jun 20 19:10:42.444578 containerd[1495]: time="2025-06-20T19:10:42.444454270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5wf4x,Uid:2fa353ad-20a5-4f5a-a251-ebde6dfd7b2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c68cd0238e53f6455d92d26c25716c8fdc177002fc5b49809b7c635fd198576\"" Jun 20 19:10:42.452370 containerd[1495]: time="2025-06-20T19:10:42.451509051Z" level=info msg="CreateContainer within sandbox \"7c68cd0238e53f6455d92d26c25716c8fdc177002fc5b49809b7c635fd198576\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 19:10:42.477712 containerd[1495]: time="2025-06-20T19:10:42.477631358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rm6bh,Uid:24cc1d18-459b-43ce-9888-c4a1d2f80337,Namespace:kube-system,Attempt:0,} returns sandbox id \"3631dea6ebee25773296ddf4e1c54f2eac3513f407fe7cc5f6c83f59a2311b76\"" Jun 20 19:10:42.483758 containerd[1495]: time="2025-06-20T19:10:42.483639570Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 20 19:10:42.487464 containerd[1495]: time="2025-06-20T19:10:42.487241161Z" level=info msg="CreateContainer within sandbox \"7c68cd0238e53f6455d92d26c25716c8fdc177002fc5b49809b7c635fd198576\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a75997f99c8b2e2f05827cccccfa0083d6eb898b531d97869319458f492036ee\"" Jun 20 19:10:42.489369 containerd[1495]: time="2025-06-20T19:10:42.488963256Z" level=info msg="StartContainer for \"a75997f99c8b2e2f05827cccccfa0083d6eb898b531d97869319458f492036ee\"" Jun 20 19:10:42.527540 systemd[1]: Started cri-containerd-a75997f99c8b2e2f05827cccccfa0083d6eb898b531d97869319458f492036ee.scope - libcontainer container a75997f99c8b2e2f05827cccccfa0083d6eb898b531d97869319458f492036ee. Jun 20 19:10:42.569020 containerd[1495]: time="2025-06-20T19:10:42.567614899Z" level=info msg="StartContainer for \"a75997f99c8b2e2f05827cccccfa0083d6eb898b531d97869319458f492036ee\" returns successfully" Jun 20 19:10:42.575086 containerd[1495]: time="2025-06-20T19:10:42.575020723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bqh7c,Uid:d9e78bce-0512-4be0-94c5-d8a7f9d382a9,Namespace:kube-system,Attempt:0,}" Jun 20 19:10:42.612254 containerd[1495]: time="2025-06-20T19:10:42.611197757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:10:42.612254 containerd[1495]: time="2025-06-20T19:10:42.611254038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:10:42.612254 containerd[1495]: time="2025-06-20T19:10:42.611269438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:10:42.612254 containerd[1495]: time="2025-06-20T19:10:42.611351039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:10:42.643901 systemd[1]: Started cri-containerd-1f3b9e4e5fe28da7a81ad98ce97a7e249872b2a03e4ef8749beb8069ebe2c990.scope - libcontainer container 1f3b9e4e5fe28da7a81ad98ce97a7e249872b2a03e4ef8749beb8069ebe2c990. 
Jun 20 19:10:42.699672 containerd[1495]: time="2025-06-20T19:10:42.699565245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bqh7c,Uid:d9e78bce-0512-4be0-94c5-d8a7f9d382a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f3b9e4e5fe28da7a81ad98ce97a7e249872b2a03e4ef8749beb8069ebe2c990\"" Jun 20 19:10:43.461678 kubelet[2787]: I0620 19:10:43.460997 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5wf4x" podStartSLOduration=2.46097176 podStartE2EDuration="2.46097176s" podCreationTimestamp="2025-06-20 19:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:10:43.460750079 +0000 UTC m=+7.219199802" watchObservedRunningTime="2025-06-20 19:10:43.46097176 +0000 UTC m=+7.219421483" Jun 20 19:10:46.757982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1929104519.mount: Deactivated successfully. Jun 20 19:10:48.346502 containerd[1495]: time="2025-06-20T19:10:48.346436216Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:48.348210 containerd[1495]: time="2025-06-20T19:10:48.347951147Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jun 20 19:10:48.348608 containerd[1495]: time="2025-06-20T19:10:48.348563951Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:48.351838 containerd[1495]: time="2025-06-20T19:10:48.351768254Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.868054604s" Jun 20 19:10:48.351838 containerd[1495]: time="2025-06-20T19:10:48.351824814Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jun 20 19:10:48.355052 containerd[1495]: time="2025-06-20T19:10:48.353989669Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 20 19:10:48.357806 containerd[1495]: time="2025-06-20T19:10:48.355375639Z" level=info msg="CreateContainer within sandbox \"3631dea6ebee25773296ddf4e1c54f2eac3513f407fe7cc5f6c83f59a2311b76\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 19:10:48.385019 containerd[1495]: time="2025-06-20T19:10:48.384959368Z" level=info msg="CreateContainer within sandbox \"3631dea6ebee25773296ddf4e1c54f2eac3513f407fe7cc5f6c83f59a2311b76\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a028b22aa97a0646ecf0b71246115b8f4b788b0de9889369edd480e3d0eaf20f\"" Jun 20 19:10:48.387233 containerd[1495]: time="2025-06-20T19:10:48.386940781Z" level=info msg="StartContainer for 
\"a028b22aa97a0646ecf0b71246115b8f4b788b0de9889369edd480e3d0eaf20f\"" Jun 20 19:10:48.427530 systemd[1]: run-containerd-runc-k8s.io-a028b22aa97a0646ecf0b71246115b8f4b788b0de9889369edd480e3d0eaf20f-runc.1nJjEt.mount: Deactivated successfully. Jun 20 19:10:48.439507 systemd[1]: Started cri-containerd-a028b22aa97a0646ecf0b71246115b8f4b788b0de9889369edd480e3d0eaf20f.scope - libcontainer container a028b22aa97a0646ecf0b71246115b8f4b788b0de9889369edd480e3d0eaf20f. Jun 20 19:10:48.477464 containerd[1495]: time="2025-06-20T19:10:48.477351418Z" level=info msg="StartContainer for \"a028b22aa97a0646ecf0b71246115b8f4b788b0de9889369edd480e3d0eaf20f\" returns successfully" Jun 20 19:10:48.494228 systemd[1]: cri-containerd-a028b22aa97a0646ecf0b71246115b8f4b788b0de9889369edd480e3d0eaf20f.scope: Deactivated successfully. Jun 20 19:10:48.704743 containerd[1495]: time="2025-06-20T19:10:48.704639179Z" level=info msg="shim disconnected" id=a028b22aa97a0646ecf0b71246115b8f4b788b0de9889369edd480e3d0eaf20f namespace=k8s.io Jun 20 19:10:48.704743 containerd[1495]: time="2025-06-20T19:10:48.704738299Z" level=warning msg="cleaning up after shim disconnected" id=a028b22aa97a0646ecf0b71246115b8f4b788b0de9889369edd480e3d0eaf20f namespace=k8s.io Jun 20 19:10:48.704743 containerd[1495]: time="2025-06-20T19:10:48.704752579Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:10:48.719888 containerd[1495]: time="2025-06-20T19:10:48.719817006Z" level=warning msg="cleanup warnings time=\"2025-06-20T19:10:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 20 19:10:49.374767 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a028b22aa97a0646ecf0b71246115b8f4b788b0de9889369edd480e3d0eaf20f-rootfs.mount: Deactivated successfully. Jun 20 19:10:49.474384 containerd[1495]: time="2025-06-20T19:10:49.474150602Z" level=info msg="CreateContainer within sandbox \"3631dea6ebee25773296ddf4e1c54f2eac3513f407fe7cc5f6c83f59a2311b76\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 19:10:49.502000 containerd[1495]: time="2025-06-20T19:10:49.499887817Z" level=info msg="CreateContainer within sandbox \"3631dea6ebee25773296ddf4e1c54f2eac3513f407fe7cc5f6c83f59a2311b76\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"17cd884940fbbf1f71b7aa998b02fbc7adecadc23840a90bfb4635a84462a43d\"" Jun 20 19:10:49.502000 containerd[1495]: time="2025-06-20T19:10:49.501313347Z" level=info msg="StartContainer for \"17cd884940fbbf1f71b7aa998b02fbc7adecadc23840a90bfb4635a84462a43d\"" Jun 20 19:10:49.543429 systemd[1]: Started cri-containerd-17cd884940fbbf1f71b7aa998b02fbc7adecadc23840a90bfb4635a84462a43d.scope - libcontainer container 17cd884940fbbf1f71b7aa998b02fbc7adecadc23840a90bfb4635a84462a43d. Jun 20 19:10:49.586069 containerd[1495]: time="2025-06-20T19:10:49.586005242Z" level=info msg="StartContainer for \"17cd884940fbbf1f71b7aa998b02fbc7adecadc23840a90bfb4635a84462a43d\" returns successfully" Jun 20 19:10:49.601531 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 19:10:49.602315 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:10:49.602499 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:10:49.609748 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jun 20 19:10:49.609974 systemd[1]: cri-containerd-17cd884940fbbf1f71b7aa998b02fbc7adecadc23840a90bfb4635a84462a43d.scope: Deactivated successfully. Jun 20 19:10:49.637082 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:10:49.645375 containerd[1495]: time="2025-06-20T19:10:49.645256165Z" level=info msg="shim disconnected" id=17cd884940fbbf1f71b7aa998b02fbc7adecadc23840a90bfb4635a84462a43d namespace=k8s.io Jun 20 19:10:49.645375 containerd[1495]: time="2025-06-20T19:10:49.645371606Z" level=warning msg="cleaning up after shim disconnected" id=17cd884940fbbf1f71b7aa998b02fbc7adecadc23840a90bfb4635a84462a43d namespace=k8s.io Jun 20 19:10:49.645832 containerd[1495]: time="2025-06-20T19:10:49.645387326Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:10:50.374382 systemd[1]: run-containerd-runc-k8s.io-17cd884940fbbf1f71b7aa998b02fbc7adecadc23840a90bfb4635a84462a43d-runc.rrEIlH.mount: Deactivated successfully. Jun 20 19:10:50.374543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17cd884940fbbf1f71b7aa998b02fbc7adecadc23840a90bfb4635a84462a43d-rootfs.mount: Deactivated successfully. Jun 20 19:10:50.376645 containerd[1495]: time="2025-06-20T19:10:50.375764482Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:50.377854 containerd[1495]: time="2025-06-20T19:10:50.377797215Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jun 20 19:10:50.379144 containerd[1495]: time="2025-06-20T19:10:50.379101064Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:10:50.380747 containerd[1495]: time="2025-06-20T19:10:50.380616674Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.023396862s" Jun 20 19:10:50.380896 containerd[1495]: time="2025-06-20T19:10:50.380874995Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jun 20 19:10:50.386913 containerd[1495]: time="2025-06-20T19:10:50.386858195Z" level=info msg="CreateContainer within sandbox \"1f3b9e4e5fe28da7a81ad98ce97a7e249872b2a03e4ef8749beb8069ebe2c990\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 20 19:10:50.404600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount902080738.mount: Deactivated successfully. 
Jun 20 19:10:50.409592 containerd[1495]: time="2025-06-20T19:10:50.409497263Z" level=info msg="CreateContainer within sandbox \"1f3b9e4e5fe28da7a81ad98ce97a7e249872b2a03e4ef8749beb8069ebe2c990\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ef73b63f2731eadbb941446d458bc7fbe1255e9c45feea2495d3e3258610181e\"" Jun 20 19:10:50.410844 containerd[1495]: time="2025-06-20T19:10:50.410782112Z" level=info msg="StartContainer for \"ef73b63f2731eadbb941446d458bc7fbe1255e9c45feea2495d3e3258610181e\"" Jun 20 19:10:50.448564 systemd[1]: Started cri-containerd-ef73b63f2731eadbb941446d458bc7fbe1255e9c45feea2495d3e3258610181e.scope - libcontainer container ef73b63f2731eadbb941446d458bc7fbe1255e9c45feea2495d3e3258610181e. Jun 20 19:10:50.486743 containerd[1495]: time="2025-06-20T19:10:50.485202080Z" level=info msg="CreateContainer within sandbox \"3631dea6ebee25773296ddf4e1c54f2eac3513f407fe7cc5f6c83f59a2311b76\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 19:10:50.488148 containerd[1495]: time="2025-06-20T19:10:50.487509415Z" level=info msg="StartContainer for \"ef73b63f2731eadbb941446d458bc7fbe1255e9c45feea2495d3e3258610181e\" returns successfully" Jun 20 19:10:50.515262 containerd[1495]: time="2025-06-20T19:10:50.514563752Z" level=info msg="CreateContainer within sandbox \"3631dea6ebee25773296ddf4e1c54f2eac3513f407fe7cc5f6c83f59a2311b76\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a10a8f44aff2adfe51036327ab6e6232c32f3c6eb474dfac58dda5a83ca649ea\"" Jun 20 19:10:50.517880 containerd[1495]: time="2025-06-20T19:10:50.516439245Z" level=info msg="StartContainer for \"a10a8f44aff2adfe51036327ab6e6232c32f3c6eb474dfac58dda5a83ca649ea\"" Jun 20 19:10:50.553397 systemd[1]: Started cri-containerd-a10a8f44aff2adfe51036327ab6e6232c32f3c6eb474dfac58dda5a83ca649ea.scope - libcontainer container a10a8f44aff2adfe51036327ab6e6232c32f3c6eb474dfac58dda5a83ca649ea. Jun 20 19:10:50.605020 containerd[1495]: time="2025-06-20T19:10:50.604957066Z" level=info msg="StartContainer for \"a10a8f44aff2adfe51036327ab6e6232c32f3c6eb474dfac58dda5a83ca649ea\" returns successfully" Jun 20 19:10:50.613764 systemd[1]: cri-containerd-a10a8f44aff2adfe51036327ab6e6232c32f3c6eb474dfac58dda5a83ca649ea.scope: Deactivated successfully. 
Jun 20 19:10:50.693460 containerd[1495]: time="2025-06-20T19:10:50.693306725Z" level=info msg="shim disconnected" id=a10a8f44aff2adfe51036327ab6e6232c32f3c6eb474dfac58dda5a83ca649ea namespace=k8s.io Jun 20 19:10:50.693818 containerd[1495]: time="2025-06-20T19:10:50.693671448Z" level=warning msg="cleaning up after shim disconnected" id=a10a8f44aff2adfe51036327ab6e6232c32f3c6eb474dfac58dda5a83ca649ea namespace=k8s.io Jun 20 19:10:50.693818 containerd[1495]: time="2025-06-20T19:10:50.693687888Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:10:51.492379 containerd[1495]: time="2025-06-20T19:10:51.492303334Z" level=info msg="CreateContainer within sandbox \"3631dea6ebee25773296ddf4e1c54f2eac3513f407fe7cc5f6c83f59a2311b76\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 19:10:51.523345 containerd[1495]: time="2025-06-20T19:10:51.523271970Z" level=info msg="CreateContainer within sandbox \"3631dea6ebee25773296ddf4e1c54f2eac3513f407fe7cc5f6c83f59a2311b76\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e76329f2642bbe9e0a2616311c63dc00a54847c16170437f755ebbbcc3d5261a\"" Jun 20 19:10:51.525278 containerd[1495]: time="2025-06-20T19:10:51.525079582Z" level=info msg="StartContainer for \"e76329f2642bbe9e0a2616311c63dc00a54847c16170437f755ebbbcc3d5261a\"" Jun 20 19:10:51.534573 kubelet[2787]: I0620 19:10:51.534404 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-bqh7c" podStartSLOduration=1.853983222 podStartE2EDuration="9.534383521s" podCreationTimestamp="2025-06-20 19:10:42 +0000 UTC" firstStartedPulling="2025-06-20 19:10:42.702247788 +0000 UTC m=+6.460697431" lastFinishedPulling="2025-06-20 19:10:50.382648087 +0000 UTC m=+14.141097730" observedRunningTime="2025-06-20 19:10:51.502762321 +0000 UTC m=+15.261211964" watchObservedRunningTime="2025-06-20 19:10:51.534383521 +0000 UTC m=+15.292833164" Jun 20 19:10:51.585427 systemd[1]: Started cri-containerd-e76329f2642bbe9e0a2616311c63dc00a54847c16170437f755ebbbcc3d5261a.scope - libcontainer container e76329f2642bbe9e0a2616311c63dc00a54847c16170437f755ebbbcc3d5261a. Jun 20 19:10:51.636832 systemd[1]: cri-containerd-e76329f2642bbe9e0a2616311c63dc00a54847c16170437f755ebbbcc3d5261a.scope: Deactivated successfully. 
Jun 20 19:10:51.642886 containerd[1495]: time="2025-06-20T19:10:51.642751207Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24cc1d18_459b_43ce_9888_c4a1d2f80337.slice/cri-containerd-e76329f2642bbe9e0a2616311c63dc00a54847c16170437f755ebbbcc3d5261a.scope/memory.events\": no such file or directory" Jun 20 19:10:51.647358 containerd[1495]: time="2025-06-20T19:10:51.647228475Z" level=info msg="StartContainer for \"e76329f2642bbe9e0a2616311c63dc00a54847c16170437f755ebbbcc3d5261a\" returns successfully" Jun 20 19:10:51.682076 containerd[1495]: time="2025-06-20T19:10:51.681803694Z" level=info msg="shim disconnected" id=e76329f2642bbe9e0a2616311c63dc00a54847c16170437f755ebbbcc3d5261a namespace=k8s.io Jun 20 19:10:51.682076 containerd[1495]: time="2025-06-20T19:10:51.681884295Z" level=warning msg="cleaning up after shim disconnected" id=e76329f2642bbe9e0a2616311c63dc00a54847c16170437f755ebbbcc3d5261a namespace=k8s.io Jun 20 19:10:51.682076 containerd[1495]: time="2025-06-20T19:10:51.681894215Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:10:51.696816 containerd[1495]: time="2025-06-20T19:10:51.696735749Z" level=warning msg="cleanup warnings time=\"2025-06-20T19:10:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 20 19:10:52.376054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e76329f2642bbe9e0a2616311c63dc00a54847c16170437f755ebbbcc3d5261a-rootfs.mount: Deactivated successfully. Jun 20 19:10:52.505045 containerd[1495]: time="2025-06-20T19:10:52.502485939Z" level=info msg="CreateContainer within sandbox \"3631dea6ebee25773296ddf4e1c54f2eac3513f407fe7cc5f6c83f59a2311b76\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 19:10:52.531300 containerd[1495]: time="2025-06-20T19:10:52.531127314Z" level=info msg="CreateContainer within sandbox \"3631dea6ebee25773296ddf4e1c54f2eac3513f407fe7cc5f6c83f59a2311b76\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d121ab2c94172cf8f86483a787dbe57b724026061289169f76c5bbb8a45eee44\"" Jun 20 19:10:52.532360 containerd[1495]: time="2025-06-20T19:10:52.532311081Z" level=info msg="StartContainer for \"d121ab2c94172cf8f86483a787dbe57b724026061289169f76c5bbb8a45eee44\"" Jun 20 19:10:52.575559 systemd[1]: Started cri-containerd-d121ab2c94172cf8f86483a787dbe57b724026061289169f76c5bbb8a45eee44.scope - libcontainer container d121ab2c94172cf8f86483a787dbe57b724026061289169f76c5bbb8a45eee44. Jun 20 19:10:52.634714 containerd[1495]: time="2025-06-20T19:10:52.634486985Z" level=info msg="StartContainer for \"d121ab2c94172cf8f86483a787dbe57b724026061289169f76c5bbb8a45eee44\" returns successfully" Jun 20 19:10:52.776468 kubelet[2787]: I0620 19:10:52.776426 2787 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 20 19:10:52.864214 systemd[1]: Created slice kubepods-burstable-podcd1bd96b_1266_43b4_9537_0828dbed6a0b.slice - libcontainer container kubepods-burstable-podcd1bd96b_1266_43b4_9537_0828dbed6a0b.slice. 
Jun 20 19:10:52.880342 kubelet[2787]: I0620 19:10:52.880297 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q88v\" (UniqueName: \"kubernetes.io/projected/cd1bd96b-1266-43b4-9537-0828dbed6a0b-kube-api-access-8q88v\") pod \"coredns-668d6bf9bc-66wht\" (UID: \"cd1bd96b-1266-43b4-9537-0828dbed6a0b\") " pod="kube-system/coredns-668d6bf9bc-66wht" Jun 20 19:10:52.881236 kubelet[2787]: I0620 19:10:52.881187 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd1bd96b-1266-43b4-9537-0828dbed6a0b-config-volume\") pod \"coredns-668d6bf9bc-66wht\" (UID: \"cd1bd96b-1266-43b4-9537-0828dbed6a0b\") " pod="kube-system/coredns-668d6bf9bc-66wht" Jun 20 19:10:52.883781 kubelet[2787]: W0620 19:10:52.883737 2787 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4230-2-0-5-45318d0d95" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-2-0-5-45318d0d95' and this object Jun 20 19:10:52.883881 kubelet[2787]: E0620 19:10:52.883784 2787 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4230-2-0-5-45318d0d95\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-2-0-5-45318d0d95' and this object" logger="UnhandledError" Jun 20 19:10:52.888796 kubelet[2787]: I0620 19:10:52.888638 2787 status_manager.go:890] "Failed to get status for pod" podUID="cd1bd96b-1266-43b4-9537-0828dbed6a0b" pod="kube-system/coredns-668d6bf9bc-66wht" err="pods \"coredns-668d6bf9bc-66wht\" is forbidden: User \"system:node:ci-4230-2-0-5-45318d0d95\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-2-0-5-45318d0d95' and this object" Jun 20 19:10:52.890104 systemd[1]: Created slice kubepods-burstable-pod2208e8a7_c890_4642_8ae1_076d8bd82555.slice - libcontainer container kubepods-burstable-pod2208e8a7_c890_4642_8ae1_076d8bd82555.slice. 
Jun 20 19:10:52.982021 kubelet[2787]: I0620 19:10:52.981972 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2208e8a7-c890-4642-8ae1-076d8bd82555-config-volume\") pod \"coredns-668d6bf9bc-xxv5h\" (UID: \"2208e8a7-c890-4642-8ae1-076d8bd82555\") " pod="kube-system/coredns-668d6bf9bc-xxv5h" Jun 20 19:10:52.982183 kubelet[2787]: I0620 19:10:52.982041 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45n4v\" (UniqueName: \"kubernetes.io/projected/2208e8a7-c890-4642-8ae1-076d8bd82555-kube-api-access-45n4v\") pod \"coredns-668d6bf9bc-xxv5h\" (UID: \"2208e8a7-c890-4642-8ae1-076d8bd82555\") " pod="kube-system/coredns-668d6bf9bc-xxv5h" Jun 20 19:10:53.531918 kubelet[2787]: I0620 19:10:53.530526 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rm6bh" podStartSLOduration=5.658189747 podStartE2EDuration="11.530503265s" podCreationTimestamp="2025-06-20 19:10:42 +0000 UTC" firstStartedPulling="2025-06-20 19:10:42.480643264 +0000 UTC m=+6.239092907" lastFinishedPulling="2025-06-20 19:10:48.352956822 +0000 UTC m=+12.111406425" observedRunningTime="2025-06-20 19:10:53.52628856 +0000 UTC m=+17.284738243" watchObservedRunningTime="2025-06-20 19:10:53.530503265 +0000 UTC m=+17.288952908" Jun 20 19:10:53.983803 kubelet[2787]: E0620 19:10:53.983283 2787 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jun 20 19:10:53.983803 kubelet[2787]: E0620 19:10:53.983398 2787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cd1bd96b-1266-43b4-9537-0828dbed6a0b-config-volume podName:cd1bd96b-1266-43b4-9537-0828dbed6a0b nodeName:}" failed. No retries permitted until 2025-06-20 19:10:54.483371814 +0000 UTC m=+18.241821457 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/cd1bd96b-1266-43b4-9537-0828dbed6a0b-config-volume") pod "coredns-668d6bf9bc-66wht" (UID: "cd1bd96b-1266-43b4-9537-0828dbed6a0b") : failed to sync configmap cache: timed out waiting for the condition Jun 20 19:10:54.097476 containerd[1495]: time="2025-06-20T19:10:54.097352185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xxv5h,Uid:2208e8a7-c890-4642-8ae1-076d8bd82555,Namespace:kube-system,Attempt:0,}" Jun 20 19:10:54.670584 containerd[1495]: time="2025-06-20T19:10:54.670525364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-66wht,Uid:cd1bd96b-1266-43b4-9537-0828dbed6a0b,Namespace:kube-system,Attempt:0,}" Jun 20 19:10:54.882306 systemd-networkd[1392]: cilium_host: Link UP Jun 20 19:10:54.882457 systemd-networkd[1392]: cilium_net: Link UP Jun 20 19:10:54.882461 systemd-networkd[1392]: cilium_net: Gained carrier Jun 20 19:10:54.882614 systemd-networkd[1392]: cilium_host: Gained carrier Jun 20 19:10:54.884195 systemd-networkd[1392]: cilium_net: Gained IPv6LL Jun 20 19:10:55.010272 systemd-networkd[1392]: cilium_host: Gained IPv6LL Jun 20 19:10:55.024231 systemd-networkd[1392]: cilium_vxlan: Link UP Jun 20 19:10:55.024239 systemd-networkd[1392]: cilium_vxlan: Gained carrier Jun 20 19:10:55.338532 kernel: NET: Registered PF_ALG protocol family Jun 20 19:10:56.124524 systemd-networkd[1392]: lxc_health: Link UP Jun 20 19:10:56.126547 systemd-networkd[1392]: lxc_health: Gained carrier Jun 20 19:10:56.552347 systemd-networkd[1392]: cilium_vxlan: Gained IPv6LL Jun 20 19:10:56.654216 kernel: eth0: renamed from tmp3743b Jun 20 19:10:56.661666 systemd-networkd[1392]: lxccd61210a9388: Link UP Jun 20 19:10:56.662117 systemd-networkd[1392]: lxccd61210a9388: Gained carrier Jun 20 19:10:56.721255 kernel: eth0: renamed from tmp791bf Jun 20 19:10:56.727904 systemd-networkd[1392]: lxc52135085f06c: Link UP Jun 20 19:10:56.729762 systemd-networkd[1392]: lxc52135085f06c: Gained carrier Jun 20 19:10:57.896420 systemd-networkd[1392]: lxc_health: Gained IPv6LL Jun 20 19:10:58.728450 systemd-networkd[1392]: lxccd61210a9388: Gained IPv6LL Jun 20 19:10:58.733035 systemd-networkd[1392]: lxc52135085f06c: Gained IPv6LL Jun 20 19:11:00.698261 containerd[1495]: time="2025-06-20T19:11:00.696413614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:11:00.698261 containerd[1495]: time="2025-06-20T19:11:00.696478214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:11:00.698261 containerd[1495]: time="2025-06-20T19:11:00.696494174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:11:00.698261 containerd[1495]: time="2025-06-20T19:11:00.696576055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:11:00.730546 systemd[1]: Started cri-containerd-3743b1cc0f2f517c76f8077d9b31142053e17c75bda9e7ee79bb18bb90b00a4d.scope - libcontainer container 3743b1cc0f2f517c76f8077d9b31142053e17c75bda9e7ee79bb18bb90b00a4d. Jun 20 19:11:00.735074 containerd[1495]: time="2025-06-20T19:11:00.734948790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:11:00.735074 containerd[1495]: time="2025-06-20T19:11:00.735013870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:11:00.736399 containerd[1495]: time="2025-06-20T19:11:00.736265756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:11:00.737763 containerd[1495]: time="2025-06-20T19:11:00.736770038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:11:00.767450 systemd[1]: Started cri-containerd-791bf39b7769c362b0bfdbee13a5557e149ebc6d662ec6d37b236c463af26cbb.scope - libcontainer container 791bf39b7769c362b0bfdbee13a5557e149ebc6d662ec6d37b236c463af26cbb. Jun 20 19:11:00.813354 containerd[1495]: time="2025-06-20T19:11:00.813306187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xxv5h,Uid:2208e8a7-c890-4642-8ae1-076d8bd82555,Namespace:kube-system,Attempt:0,} returns sandbox id \"3743b1cc0f2f517c76f8077d9b31142053e17c75bda9e7ee79bb18bb90b00a4d\"" Jun 20 19:11:00.820667 containerd[1495]: time="2025-06-20T19:11:00.820620501Z" level=info msg="CreateContainer within sandbox \"3743b1cc0f2f517c76f8077d9b31142053e17c75bda9e7ee79bb18bb90b00a4d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:11:00.842101 containerd[1495]: time="2025-06-20T19:11:00.841946518Z" level=info msg="CreateContainer within sandbox \"3743b1cc0f2f517c76f8077d9b31142053e17c75bda9e7ee79bb18bb90b00a4d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"23355628fedb12970924a5817e22e0009723874e1cbf1ada5a85be423d935cc6\"" Jun 20 19:11:00.845510 containerd[1495]: time="2025-06-20T19:11:00.843535085Z" level=info msg="StartContainer for \"23355628fedb12970924a5817e22e0009723874e1cbf1ada5a85be423d935cc6\"" Jun 20 19:11:00.852825 containerd[1495]: time="2025-06-20T19:11:00.852783727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-66wht,Uid:cd1bd96b-1266-43b4-9537-0828dbed6a0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"791bf39b7769c362b0bfdbee13a5557e149ebc6d662ec6d37b236c463af26cbb\"" Jun 20 19:11:00.860207 containerd[1495]: time="2025-06-20T19:11:00.860092281Z" level=info msg="CreateContainer within sandbox \"791bf39b7769c362b0bfdbee13a5557e149ebc6d662ec6d37b236c463af26cbb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:11:00.889056 containerd[1495]: time="2025-06-20T19:11:00.889003293Z" level=info msg="CreateContainer within sandbox \"791bf39b7769c362b0bfdbee13a5557e149ebc6d662ec6d37b236c463af26cbb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7cf6c935dc836b11640666823a813f8707fb5fbb8f93ec70fcf9bf32654d2634\"" Jun 20 19:11:00.892290 containerd[1495]: time="2025-06-20T19:11:00.892022786Z" level=info msg="StartContainer for \"7cf6c935dc836b11640666823a813f8707fb5fbb8f93ec70fcf9bf32654d2634\"" Jun 20 19:11:00.903414 systemd[1]: Started cri-containerd-23355628fedb12970924a5817e22e0009723874e1cbf1ada5a85be423d935cc6.scope - libcontainer container 23355628fedb12970924a5817e22e0009723874e1cbf1ada5a85be423d935cc6. Jun 20 19:11:00.933398 systemd[1]: Started cri-containerd-7cf6c935dc836b11640666823a813f8707fb5fbb8f93ec70fcf9bf32654d2634.scope - libcontainer container 7cf6c935dc836b11640666823a813f8707fb5fbb8f93ec70fcf9bf32654d2634. 
Jun 20 19:11:00.951035 containerd[1495]: time="2025-06-20T19:11:00.950298212Z" level=info msg="StartContainer for \"23355628fedb12970924a5817e22e0009723874e1cbf1ada5a85be423d935cc6\" returns successfully" Jun 20 19:11:00.983176 containerd[1495]: time="2025-06-20T19:11:00.982825561Z" level=info msg="StartContainer for \"7cf6c935dc836b11640666823a813f8707fb5fbb8f93ec70fcf9bf32654d2634\" returns successfully" Jun 20 19:11:01.556255 kubelet[2787]: I0620 19:11:01.555902 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-66wht" podStartSLOduration=19.555882643 podStartE2EDuration="19.555882643s" podCreationTimestamp="2025-06-20 19:10:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:11:01.555314841 +0000 UTC m=+25.313764484" watchObservedRunningTime="2025-06-20 19:11:01.555882643 +0000 UTC m=+25.314332286" Jun 20 19:11:06.463964 kubelet[2787]: I0620 19:11:06.463635 2787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:11:06.496026 kubelet[2787]: I0620 19:11:06.495922 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xxv5h" podStartSLOduration=24.495893579 podStartE2EDuration="24.495893579s" podCreationTimestamp="2025-06-20 19:10:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:11:01.595406057 +0000 UTC m=+25.353855740" watchObservedRunningTime="2025-06-20 19:11:06.495893579 +0000 UTC m=+30.254343262" Jun 20 19:15:15.725649 systemd[1]: Started sshd@7-49.12.190.100:22-147.75.109.163:33592.service - OpenSSH per-connection server daemon (147.75.109.163:33592). Jun 20 19:15:16.741552 sshd[4209]: Accepted publickey for core from 147.75.109.163 port 33592 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:15:16.743582 sshd-session[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:15:16.751738 systemd-logind[1475]: New session 8 of user core. Jun 20 19:15:16.757480 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 20 19:15:17.537558 sshd[4211]: Connection closed by 147.75.109.163 port 33592 Jun 20 19:15:17.538293 sshd-session[4209]: pam_unix(sshd:session): session closed for user core Jun 20 19:15:17.545721 systemd-logind[1475]: Session 8 logged out. Waiting for processes to exit. Jun 20 19:15:17.546028 systemd[1]: sshd@7-49.12.190.100:22-147.75.109.163:33592.service: Deactivated successfully. Jun 20 19:15:17.548405 systemd[1]: session-8.scope: Deactivated successfully. Jun 20 19:15:17.550789 systemd-logind[1475]: Removed session 8. Jun 20 19:15:22.724622 systemd[1]: Started sshd@8-49.12.190.100:22-147.75.109.163:45512.service - OpenSSH per-connection server daemon (147.75.109.163:45512). Jun 20 19:15:23.730262 sshd[4225]: Accepted publickey for core from 147.75.109.163 port 45512 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:15:23.732469 sshd-session[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:15:23.742290 systemd-logind[1475]: New session 9 of user core. Jun 20 19:15:23.755488 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jun 20 19:15:24.510402 sshd[4227]: Connection closed by 147.75.109.163 port 45512 Jun 20 19:15:24.510274 sshd-session[4225]: pam_unix(sshd:session): session closed for user core Jun 20 19:15:24.517846 systemd[1]: sshd@8-49.12.190.100:22-147.75.109.163:45512.service: Deactivated successfully. Jun 20 19:15:24.522114 systemd[1]: session-9.scope: Deactivated successfully. Jun 20 19:15:24.523871 systemd-logind[1475]: Session 9 logged out. Waiting for processes to exit. Jun 20 19:15:24.525036 systemd-logind[1475]: Removed session 9. Jun 20 19:15:29.685643 systemd[1]: Started sshd@9-49.12.190.100:22-147.75.109.163:57330.service - OpenSSH per-connection server daemon (147.75.109.163:57330). Jun 20 19:15:30.674452 sshd[4240]: Accepted publickey for core from 147.75.109.163 port 57330 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:15:30.676590 sshd-session[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:15:30.682721 systemd-logind[1475]: New session 10 of user core. Jun 20 19:15:30.693523 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 20 19:15:31.438004 sshd[4242]: Connection closed by 147.75.109.163 port 57330 Jun 20 19:15:31.439585 sshd-session[4240]: pam_unix(sshd:session): session closed for user core Jun 20 19:15:31.445020 systemd[1]: sshd@9-49.12.190.100:22-147.75.109.163:57330.service: Deactivated successfully. Jun 20 19:15:31.445071 systemd-logind[1475]: Session 10 logged out. Waiting for processes to exit. Jun 20 19:15:31.447496 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 19:15:31.448599 systemd-logind[1475]: Removed session 10. Jun 20 19:15:31.620659 systemd[1]: Started sshd@10-49.12.190.100:22-147.75.109.163:57336.service - OpenSSH per-connection server daemon (147.75.109.163:57336). Jun 20 19:15:32.609292 sshd[4256]: Accepted publickey for core from 147.75.109.163 port 57336 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:15:32.611071 sshd-session[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:15:32.617782 systemd-logind[1475]: New session 11 of user core. Jun 20 19:15:32.626513 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 20 19:15:33.429829 sshd[4258]: Connection closed by 147.75.109.163 port 57336 Jun 20 19:15:33.430504 sshd-session[4256]: pam_unix(sshd:session): session closed for user core Jun 20 19:15:33.437919 systemd[1]: sshd@10-49.12.190.100:22-147.75.109.163:57336.service: Deactivated successfully. Jun 20 19:15:33.437944 systemd-logind[1475]: Session 11 logged out. Waiting for processes to exit. Jun 20 19:15:33.444937 systemd[1]: session-11.scope: Deactivated successfully. Jun 20 19:15:33.446829 systemd-logind[1475]: Removed session 11. Jun 20 19:15:33.615706 systemd[1]: Started sshd@11-49.12.190.100:22-147.75.109.163:57352.service - OpenSSH per-connection server daemon (147.75.109.163:57352). Jun 20 19:15:34.616388 sshd[4268]: Accepted publickey for core from 147.75.109.163 port 57352 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:15:34.618307 sshd-session[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:15:34.623865 systemd-logind[1475]: New session 12 of user core. Jun 20 19:15:34.630624 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jun 20 19:15:35.380295 sshd[4270]: Connection closed by 147.75.109.163 port 57352 Jun 20 19:15:35.381559 sshd-session[4268]: pam_unix(sshd:session): session closed for user core Jun 20 19:15:35.387526 systemd[1]: sshd@11-49.12.190.100:22-147.75.109.163:57352.service: Deactivated successfully. Jun 20 19:15:35.389877 systemd[1]: session-12.scope: Deactivated successfully. Jun 20 19:15:35.391365 systemd-logind[1475]: Session 12 logged out. Waiting for processes to exit. Jun 20 19:15:35.393042 systemd-logind[1475]: Removed session 12. Jun 20 19:15:40.557659 systemd[1]: Started sshd@12-49.12.190.100:22-147.75.109.163:35436.service - OpenSSH per-connection server daemon (147.75.109.163:35436). Jun 20 19:15:41.550270 sshd[4284]: Accepted publickey for core from 147.75.109.163 port 35436 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:15:41.552251 sshd-session[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:15:41.560046 systemd-logind[1475]: New session 13 of user core. Jun 20 19:15:41.564513 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 20 19:15:42.313567 sshd[4286]: Connection closed by 147.75.109.163 port 35436 Jun 20 19:15:42.314954 sshd-session[4284]: pam_unix(sshd:session): session closed for user core Jun 20 19:15:42.323656 systemd[1]: sshd@12-49.12.190.100:22-147.75.109.163:35436.service: Deactivated successfully. Jun 20 19:15:42.328484 systemd[1]: session-13.scope: Deactivated successfully. Jun 20 19:15:42.331028 systemd-logind[1475]: Session 13 logged out. Waiting for processes to exit. Jun 20 19:15:42.335013 systemd-logind[1475]: Removed session 13. Jun 20 19:15:47.498695 systemd[1]: Started sshd@13-49.12.190.100:22-147.75.109.163:45126.service - OpenSSH per-connection server daemon (147.75.109.163:45126). Jun 20 19:15:48.496721 sshd[4301]: Accepted publickey for core from 147.75.109.163 port 45126 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:15:48.498741 sshd-session[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:15:48.504083 systemd-logind[1475]: New session 14 of user core. Jun 20 19:15:48.509588 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 20 19:15:49.283218 sshd[4303]: Connection closed by 147.75.109.163 port 45126 Jun 20 19:15:49.284131 sshd-session[4301]: pam_unix(sshd:session): session closed for user core Jun 20 19:15:49.289884 systemd[1]: sshd@13-49.12.190.100:22-147.75.109.163:45126.service: Deactivated successfully. Jun 20 19:15:49.292742 systemd[1]: session-14.scope: Deactivated successfully. Jun 20 19:15:49.294953 systemd-logind[1475]: Session 14 logged out. Waiting for processes to exit. Jun 20 19:15:49.296523 systemd-logind[1475]: Removed session 14. Jun 20 19:15:49.469613 systemd[1]: Started sshd@14-49.12.190.100:22-147.75.109.163:45136.service - OpenSSH per-connection server daemon (147.75.109.163:45136). Jun 20 19:15:50.480112 sshd[4315]: Accepted publickey for core from 147.75.109.163 port 45136 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:15:50.481933 sshd-session[4315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:15:50.487977 systemd-logind[1475]: New session 15 of user core. Jun 20 19:15:50.499771 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jun 20 19:15:51.308454 sshd[4317]: Connection closed by 147.75.109.163 port 45136 Jun 20 19:15:51.309315 sshd-session[4315]: pam_unix(sshd:session): session closed for user core Jun 20 19:15:51.315326 systemd[1]: sshd@14-49.12.190.100:22-147.75.109.163:45136.service: Deactivated successfully. Jun 20 19:15:51.318444 systemd[1]: session-15.scope: Deactivated successfully. Jun 20 19:15:51.320125 systemd-logind[1475]: Session 15 logged out. Waiting for processes to exit. Jun 20 19:15:51.321501 systemd-logind[1475]: Removed session 15. Jun 20 19:15:51.492686 systemd[1]: Started sshd@15-49.12.190.100:22-147.75.109.163:45146.service - OpenSSH per-connection server daemon (147.75.109.163:45146). Jun 20 19:15:52.499428 sshd[4328]: Accepted publickey for core from 147.75.109.163 port 45146 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:15:52.500981 sshd-session[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:15:52.506966 systemd-logind[1475]: New session 16 of user core. Jun 20 19:15:52.516477 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 20 19:15:54.073591 sshd[4330]: Connection closed by 147.75.109.163 port 45146 Jun 20 19:15:54.074433 sshd-session[4328]: pam_unix(sshd:session): session closed for user core Jun 20 19:15:54.079809 systemd-logind[1475]: Session 16 logged out. Waiting for processes to exit. Jun 20 19:15:54.080408 systemd[1]: sshd@15-49.12.190.100:22-147.75.109.163:45146.service: Deactivated successfully. Jun 20 19:15:54.083958 systemd[1]: session-16.scope: Deactivated successfully. Jun 20 19:15:54.087691 systemd-logind[1475]: Removed session 16. Jun 20 19:15:54.254013 systemd[1]: Started sshd@16-49.12.190.100:22-147.75.109.163:45154.service - OpenSSH per-connection server daemon (147.75.109.163:45154). Jun 20 19:15:55.257052 sshd[4347]: Accepted publickey for core from 147.75.109.163 port 45154 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:15:55.260549 sshd-session[4347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:15:55.268082 systemd-logind[1475]: New session 17 of user core. Jun 20 19:15:55.272447 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 20 19:15:56.151730 sshd[4349]: Connection closed by 147.75.109.163 port 45154 Jun 20 19:15:56.151618 sshd-session[4347]: pam_unix(sshd:session): session closed for user core Jun 20 19:15:56.157951 systemd[1]: sshd@16-49.12.190.100:22-147.75.109.163:45154.service: Deactivated successfully. Jun 20 19:15:56.158115 systemd-logind[1475]: Session 17 logged out. Waiting for processes to exit. Jun 20 19:15:56.162054 systemd[1]: session-17.scope: Deactivated successfully. Jun 20 19:15:56.163279 systemd-logind[1475]: Removed session 17. Jun 20 19:15:56.332564 systemd[1]: Started sshd@17-49.12.190.100:22-147.75.109.163:43546.service - OpenSSH per-connection server daemon (147.75.109.163:43546). Jun 20 19:15:57.322761 sshd[4360]: Accepted publickey for core from 147.75.109.163 port 43546 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:15:57.324834 sshd-session[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:15:57.330255 systemd-logind[1475]: New session 18 of user core. Jun 20 19:15:57.335385 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jun 20 19:15:58.081850 sshd[4362]: Connection closed by 147.75.109.163 port 43546 Jun 20 19:15:58.081687 sshd-session[4360]: pam_unix(sshd:session): session closed for user core Jun 20 19:15:58.087297 systemd[1]: sshd@17-49.12.190.100:22-147.75.109.163:43546.service: Deactivated successfully. Jun 20 19:15:58.089440 systemd[1]: session-18.scope: Deactivated successfully. Jun 20 19:15:58.092394 systemd-logind[1475]: Session 18 logged out. Waiting for processes to exit. Jun 20 19:15:58.094211 systemd-logind[1475]: Removed session 18. Jun 20 19:16:03.268473 systemd[1]: Started sshd@18-49.12.190.100:22-147.75.109.163:43554.service - OpenSSH per-connection server daemon (147.75.109.163:43554). Jun 20 19:16:04.268250 sshd[4376]: Accepted publickey for core from 147.75.109.163 port 43554 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:16:04.270384 sshd-session[4376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:16:04.276012 systemd-logind[1475]: New session 19 of user core. Jun 20 19:16:04.285546 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 20 19:16:05.037060 sshd[4378]: Connection closed by 147.75.109.163 port 43554 Jun 20 19:16:05.038015 sshd-session[4376]: pam_unix(sshd:session): session closed for user core Jun 20 19:16:05.045536 systemd-logind[1475]: Session 19 logged out. Waiting for processes to exit. Jun 20 19:16:05.046291 systemd[1]: sshd@18-49.12.190.100:22-147.75.109.163:43554.service: Deactivated successfully. Jun 20 19:16:05.049733 systemd[1]: session-19.scope: Deactivated successfully. Jun 20 19:16:05.052347 systemd-logind[1475]: Removed session 19. Jun 20 19:16:10.220655 systemd[1]: Started sshd@19-49.12.190.100:22-147.75.109.163:43542.service - OpenSSH per-connection server daemon (147.75.109.163:43542). Jun 20 19:16:11.231845 sshd[4390]: Accepted publickey for core from 147.75.109.163 port 43542 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:16:11.234189 sshd-session[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:16:11.240105 systemd-logind[1475]: New session 20 of user core. Jun 20 19:16:11.251569 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 20 19:16:12.010532 sshd[4392]: Connection closed by 147.75.109.163 port 43542 Jun 20 19:16:12.010411 sshd-session[4390]: pam_unix(sshd:session): session closed for user core Jun 20 19:16:12.017383 systemd[1]: sshd@19-49.12.190.100:22-147.75.109.163:43542.service: Deactivated successfully. Jun 20 19:16:12.022372 systemd[1]: session-20.scope: Deactivated successfully. Jun 20 19:16:12.023783 systemd-logind[1475]: Session 20 logged out. Waiting for processes to exit. Jun 20 19:16:12.025068 systemd-logind[1475]: Removed session 20. Jun 20 19:16:12.186585 systemd[1]: Started sshd@20-49.12.190.100:22-147.75.109.163:43544.service - OpenSSH per-connection server daemon (147.75.109.163:43544). Jun 20 19:16:13.163957 sshd[4403]: Accepted publickey for core from 147.75.109.163 port 43544 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:16:13.166216 sshd-session[4403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:16:13.172550 systemd-logind[1475]: New session 21 of user core. Jun 20 19:16:13.176360 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jun 20 19:16:16.151127 containerd[1495]: time="2025-06-20T19:16:16.151068768Z" level=info msg="StopContainer for \"ef73b63f2731eadbb941446d458bc7fbe1255e9c45feea2495d3e3258610181e\" with timeout 30 (s)" Jun 20 19:16:16.154196 containerd[1495]: time="2025-06-20T19:16:16.153419611Z" level=info msg="Stop container \"ef73b63f2731eadbb941446d458bc7fbe1255e9c45feea2495d3e3258610181e\" with signal terminated" Jun 20 19:16:16.180340 systemd[1]: cri-containerd-ef73b63f2731eadbb941446d458bc7fbe1255e9c45feea2495d3e3258610181e.scope: Deactivated successfully. Jun 20 19:16:16.184617 containerd[1495]: time="2025-06-20T19:16:16.184150019Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:16:16.197151 containerd[1495]: time="2025-06-20T19:16:16.197111880Z" level=info msg="StopContainer for \"d121ab2c94172cf8f86483a787dbe57b724026061289169f76c5bbb8a45eee44\" with timeout 2 (s)" Jun 20 19:16:16.198359 containerd[1495]: time="2025-06-20T19:16:16.198328641Z" level=info msg="Stop container \"d121ab2c94172cf8f86483a787dbe57b724026061289169f76c5bbb8a45eee44\" with signal terminated" Jun 20 19:16:16.209291 systemd-networkd[1392]: lxc_health: Link DOWN Jun 20 19:16:16.209305 systemd-networkd[1392]: lxc_health: Lost carrier Jun 20 19:16:16.216331 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef73b63f2731eadbb941446d458bc7fbe1255e9c45feea2495d3e3258610181e-rootfs.mount: Deactivated successfully. Jun 20 19:16:16.237241 systemd[1]: cri-containerd-d121ab2c94172cf8f86483a787dbe57b724026061289169f76c5bbb8a45eee44.scope: Deactivated successfully. Jun 20 19:16:16.238606 systemd[1]: cri-containerd-d121ab2c94172cf8f86483a787dbe57b724026061289169f76c5bbb8a45eee44.scope: Consumed 8.259s CPU time, 126.7M memory peak, 136K read from disk, 12.9M written to disk. Jun 20 19:16:16.248417 containerd[1495]: time="2025-06-20T19:16:16.248328120Z" level=info msg="shim disconnected" id=ef73b63f2731eadbb941446d458bc7fbe1255e9c45feea2495d3e3258610181e namespace=k8s.io Jun 20 19:16:16.248417 containerd[1495]: time="2025-06-20T19:16:16.248403880Z" level=warning msg="cleaning up after shim disconnected" id=ef73b63f2731eadbb941446d458bc7fbe1255e9c45feea2495d3e3258610181e namespace=k8s.io Jun 20 19:16:16.248417 containerd[1495]: time="2025-06-20T19:16:16.248417600Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:16:16.266115 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d121ab2c94172cf8f86483a787dbe57b724026061289169f76c5bbb8a45eee44-rootfs.mount: Deactivated successfully. 
Jun 20 19:16:16.277981 containerd[1495]: time="2025-06-20T19:16:16.277894246Z" level=warning msg="cleanup warnings time=\"2025-06-20T19:16:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 20 19:16:16.281855 containerd[1495]: time="2025-06-20T19:16:16.281784492Z" level=info msg="shim disconnected" id=d121ab2c94172cf8f86483a787dbe57b724026061289169f76c5bbb8a45eee44 namespace=k8s.io Jun 20 19:16:16.282705 containerd[1495]: time="2025-06-20T19:16:16.282249213Z" level=warning msg="cleaning up after shim disconnected" id=d121ab2c94172cf8f86483a787dbe57b724026061289169f76c5bbb8a45eee44 namespace=k8s.io Jun 20 19:16:16.282705 containerd[1495]: time="2025-06-20T19:16:16.282272133Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:16:16.282705 containerd[1495]: time="2025-06-20T19:16:16.282233453Z" level=info msg="StopContainer for \"ef73b63f2731eadbb941446d458bc7fbe1255e9c45feea2495d3e3258610181e\" returns successfully" Jun 20 19:16:16.284196 containerd[1495]: time="2025-06-20T19:16:16.283459455Z" level=info msg="StopPodSandbox for \"1f3b9e4e5fe28da7a81ad98ce97a7e249872b2a03e4ef8749beb8069ebe2c990\"" Jun 20 19:16:16.284196 containerd[1495]: time="2025-06-20T19:16:16.283510375Z" level=info msg="Container to stop \"ef73b63f2731eadbb941446d458bc7fbe1255e9c45feea2495d3e3258610181e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:16:16.288055 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f3b9e4e5fe28da7a81ad98ce97a7e249872b2a03e4ef8749beb8069ebe2c990-shm.mount: Deactivated successfully. Jun 20 19:16:16.298950 systemd[1]: cri-containerd-1f3b9e4e5fe28da7a81ad98ce97a7e249872b2a03e4ef8749beb8069ebe2c990.scope: Deactivated successfully. 
Jun 20 19:16:16.310533 containerd[1495]: time="2025-06-20T19:16:16.310455177Z" level=info msg="StopContainer for \"d121ab2c94172cf8f86483a787dbe57b724026061289169f76c5bbb8a45eee44\" returns successfully" Jun 20 19:16:16.313211 containerd[1495]: time="2025-06-20T19:16:16.312374940Z" level=info msg="StopPodSandbox for \"3631dea6ebee25773296ddf4e1c54f2eac3513f407fe7cc5f6c83f59a2311b76\"" Jun 20 19:16:16.313211 containerd[1495]: time="2025-06-20T19:16:16.312622900Z" level=info msg="Container to stop \"e76329f2642bbe9e0a2616311c63dc00a54847c16170437f755ebbbcc3d5261a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:16:16.313211 containerd[1495]: time="2025-06-20T19:16:16.312658420Z" level=info msg="Container to stop \"a028b22aa97a0646ecf0b71246115b8f4b788b0de9889369edd480e3d0eaf20f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:16:16.313211 containerd[1495]: time="2025-06-20T19:16:16.312696860Z" level=info msg="Container to stop \"17cd884940fbbf1f71b7aa998b02fbc7adecadc23840a90bfb4635a84462a43d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:16:16.313211 containerd[1495]: time="2025-06-20T19:16:16.312719660Z" level=info msg="Container to stop \"a10a8f44aff2adfe51036327ab6e6232c32f3c6eb474dfac58dda5a83ca649ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:16:16.313211 containerd[1495]: time="2025-06-20T19:16:16.312751900Z" level=info msg="Container to stop \"d121ab2c94172cf8f86483a787dbe57b724026061289169f76c5bbb8a45eee44\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:16:16.320747 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3631dea6ebee25773296ddf4e1c54f2eac3513f407fe7cc5f6c83f59a2311b76-shm.mount: Deactivated successfully. Jun 20 19:16:16.341774 systemd[1]: cri-containerd-3631dea6ebee25773296ddf4e1c54f2eac3513f407fe7cc5f6c83f59a2311b76.scope: Deactivated successfully. 
Jun 20 19:16:16.362530 containerd[1495]: time="2025-06-20T19:16:16.362444338Z" level=info msg="shim disconnected" id=1f3b9e4e5fe28da7a81ad98ce97a7e249872b2a03e4ef8749beb8069ebe2c990 namespace=k8s.io Jun 20 19:16:16.362530 containerd[1495]: time="2025-06-20T19:16:16.362517298Z" level=warning msg="cleaning up after shim disconnected" id=1f3b9e4e5fe28da7a81ad98ce97a7e249872b2a03e4ef8749beb8069ebe2c990 namespace=k8s.io Jun 20 19:16:16.362530 containerd[1495]: time="2025-06-20T19:16:16.362531498Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:16:16.390902 containerd[1495]: time="2025-06-20T19:16:16.390813182Z" level=info msg="shim disconnected" id=3631dea6ebee25773296ddf4e1c54f2eac3513f407fe7cc5f6c83f59a2311b76 namespace=k8s.io Jun 20 19:16:16.390902 containerd[1495]: time="2025-06-20T19:16:16.390892103Z" level=warning msg="cleaning up after shim disconnected" id=3631dea6ebee25773296ddf4e1c54f2eac3513f407fe7cc5f6c83f59a2311b76 namespace=k8s.io Jun 20 19:16:16.390902 containerd[1495]: time="2025-06-20T19:16:16.390901183Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:16:16.404195 containerd[1495]: time="2025-06-20T19:16:16.402754201Z" level=info msg="TearDown network for sandbox \"1f3b9e4e5fe28da7a81ad98ce97a7e249872b2a03e4ef8749beb8069ebe2c990\" successfully" Jun 20 19:16:16.404195 containerd[1495]: time="2025-06-20T19:16:16.402796281Z" level=info msg="StopPodSandbox for \"1f3b9e4e5fe28da7a81ad98ce97a7e249872b2a03e4ef8749beb8069ebe2c990\" returns successfully" Jun 20 19:16:16.414956 containerd[1495]: time="2025-06-20T19:16:16.414807020Z" level=info msg="TearDown network for sandbox \"3631dea6ebee25773296ddf4e1c54f2eac3513f407fe7cc5f6c83f59a2311b76\" successfully" Jun 20 19:16:16.414956 containerd[1495]: time="2025-06-20T19:16:16.414841380Z" level=info msg="StopPodSandbox for \"3631dea6ebee25773296ddf4e1c54f2eac3513f407fe7cc5f6c83f59a2311b76\" returns successfully" Jun 20 19:16:16.483523 kubelet[2787]: I0620 19:16:16.483326 2787 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-hostproc\") pod \"24cc1d18-459b-43ce-9888-c4a1d2f80337\" (UID: \"24cc1d18-459b-43ce-9888-c4a1d2f80337\") " Jun 20 19:16:16.483523 kubelet[2787]: I0620 19:16:16.483381 2787 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-cni-path\") pod \"24cc1d18-459b-43ce-9888-c4a1d2f80337\" (UID: \"24cc1d18-459b-43ce-9888-c4a1d2f80337\") " Jun 20 19:16:16.483523 kubelet[2787]: I0620 19:16:16.483404 2787 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-lib-modules\") pod \"24cc1d18-459b-43ce-9888-c4a1d2f80337\" (UID: \"24cc1d18-459b-43ce-9888-c4a1d2f80337\") " Jun 20 19:16:16.483523 kubelet[2787]: I0620 19:16:16.483426 2787 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-cilium-run\") pod \"24cc1d18-459b-43ce-9888-c4a1d2f80337\" (UID: \"24cc1d18-459b-43ce-9888-c4a1d2f80337\") " Jun 20 19:16:16.483523 kubelet[2787]: I0620 19:16:16.483438 2787 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-hostproc" (OuterVolumeSpecName: "hostproc") pod 
"24cc1d18-459b-43ce-9888-c4a1d2f80337" (UID: "24cc1d18-459b-43ce-9888-c4a1d2f80337"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:16:16.486364 kubelet[2787]: I0620 19:16:16.483460 2787 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9e78bce-0512-4be0-94c5-d8a7f9d382a9-cilium-config-path\") pod \"d9e78bce-0512-4be0-94c5-d8a7f9d382a9\" (UID: \"d9e78bce-0512-4be0-94c5-d8a7f9d382a9\") " Jun 20 19:16:16.486364 kubelet[2787]: I0620 19:16:16.484234 2787 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjzmb\" (UniqueName: \"kubernetes.io/projected/d9e78bce-0512-4be0-94c5-d8a7f9d382a9-kube-api-access-cjzmb\") pod \"d9e78bce-0512-4be0-94c5-d8a7f9d382a9\" (UID: \"d9e78bce-0512-4be0-94c5-d8a7f9d382a9\") " Jun 20 19:16:16.486364 kubelet[2787]: I0620 19:16:16.484302 2787 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tfp6\" (UniqueName: \"kubernetes.io/projected/24cc1d18-459b-43ce-9888-c4a1d2f80337-kube-api-access-4tfp6\") pod \"24cc1d18-459b-43ce-9888-c4a1d2f80337\" (UID: \"24cc1d18-459b-43ce-9888-c4a1d2f80337\") " Jun 20 19:16:16.486364 kubelet[2787]: I0620 19:16:16.484344 2787 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/24cc1d18-459b-43ce-9888-c4a1d2f80337-hubble-tls\") pod \"24cc1d18-459b-43ce-9888-c4a1d2f80337\" (UID: \"24cc1d18-459b-43ce-9888-c4a1d2f80337\") " Jun 20 19:16:16.486364 kubelet[2787]: I0620 19:16:16.484383 2787 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-host-proc-sys-kernel\") pod \"24cc1d18-459b-43ce-9888-c4a1d2f80337\" (UID: \"24cc1d18-459b-43ce-9888-c4a1d2f80337\") " Jun 20 19:16:16.486364 kubelet[2787]: I0620 19:16:16.484420 2787 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/24cc1d18-459b-43ce-9888-c4a1d2f80337-cilium-config-path\") pod \"24cc1d18-459b-43ce-9888-c4a1d2f80337\" (UID: \"24cc1d18-459b-43ce-9888-c4a1d2f80337\") " Jun 20 19:16:16.486615 kubelet[2787]: I0620 19:16:16.484465 2787 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/24cc1d18-459b-43ce-9888-c4a1d2f80337-clustermesh-secrets\") pod \"24cc1d18-459b-43ce-9888-c4a1d2f80337\" (UID: \"24cc1d18-459b-43ce-9888-c4a1d2f80337\") " Jun 20 19:16:16.486615 kubelet[2787]: I0620 19:16:16.484504 2787 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-xtables-lock\") pod \"24cc1d18-459b-43ce-9888-c4a1d2f80337\" (UID: \"24cc1d18-459b-43ce-9888-c4a1d2f80337\") " Jun 20 19:16:16.486615 kubelet[2787]: I0620 19:16:16.484542 2787 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-etc-cni-netd\") pod \"24cc1d18-459b-43ce-9888-c4a1d2f80337\" (UID: \"24cc1d18-459b-43ce-9888-c4a1d2f80337\") " Jun 20 19:16:16.486615 kubelet[2787]: I0620 19:16:16.484583 2787 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-cilium-cgroup\") pod \"24cc1d18-459b-43ce-9888-c4a1d2f80337\" (UID: \"24cc1d18-459b-43ce-9888-c4a1d2f80337\") " Jun 20 19:16:16.486615 kubelet[2787]: I0620 19:16:16.484584 2787 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-cni-path" (OuterVolumeSpecName: "cni-path") pod "24cc1d18-459b-43ce-9888-c4a1d2f80337" (UID: "24cc1d18-459b-43ce-9888-c4a1d2f80337"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:16:16.486615 kubelet[2787]: I0620 19:16:16.484621 2787 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-host-proc-sys-net\") pod \"24cc1d18-459b-43ce-9888-c4a1d2f80337\" (UID: \"24cc1d18-459b-43ce-9888-c4a1d2f80337\") " Jun 20 19:16:16.486820 kubelet[2787]: I0620 19:16:16.484636 2787 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "24cc1d18-459b-43ce-9888-c4a1d2f80337" (UID: "24cc1d18-459b-43ce-9888-c4a1d2f80337"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:16:16.486820 kubelet[2787]: I0620 19:16:16.484671 2787 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-bpf-maps\") pod \"24cc1d18-459b-43ce-9888-c4a1d2f80337\" (UID: \"24cc1d18-459b-43ce-9888-c4a1d2f80337\") " Jun 20 19:16:16.486820 kubelet[2787]: I0620 19:16:16.484670 2787 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "24cc1d18-459b-43ce-9888-c4a1d2f80337" (UID: "24cc1d18-459b-43ce-9888-c4a1d2f80337"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:16:16.486820 kubelet[2787]: I0620 19:16:16.484777 2787 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-cilium-run\") on node \"ci-4230-2-0-5-45318d0d95\" DevicePath \"\"" Jun 20 19:16:16.486820 kubelet[2787]: I0620 19:16:16.484810 2787 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-hostproc\") on node \"ci-4230-2-0-5-45318d0d95\" DevicePath \"\"" Jun 20 19:16:16.486820 kubelet[2787]: I0620 19:16:16.484830 2787 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-cni-path\") on node \"ci-4230-2-0-5-45318d0d95\" DevicePath \"\"" Jun 20 19:16:16.487088 kubelet[2787]: I0620 19:16:16.484846 2787 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-lib-modules\") on node \"ci-4230-2-0-5-45318d0d95\" DevicePath \"\"" Jun 20 19:16:16.487088 kubelet[2787]: I0620 19:16:16.485148 2787 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "24cc1d18-459b-43ce-9888-c4a1d2f80337" (UID: "24cc1d18-459b-43ce-9888-c4a1d2f80337"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:16:16.491143 kubelet[2787]: I0620 19:16:16.490926 2787 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "24cc1d18-459b-43ce-9888-c4a1d2f80337" (UID: "24cc1d18-459b-43ce-9888-c4a1d2f80337"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:16:16.492425 kubelet[2787]: I0620 19:16:16.491489 2787 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "24cc1d18-459b-43ce-9888-c4a1d2f80337" (UID: "24cc1d18-459b-43ce-9888-c4a1d2f80337"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:16:16.492538 kubelet[2787]: I0620 19:16:16.491550 2787 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "24cc1d18-459b-43ce-9888-c4a1d2f80337" (UID: "24cc1d18-459b-43ce-9888-c4a1d2f80337"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:16:16.492608 kubelet[2787]: I0620 19:16:16.491566 2787 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "24cc1d18-459b-43ce-9888-c4a1d2f80337" (UID: "24cc1d18-459b-43ce-9888-c4a1d2f80337"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:16:16.492702 kubelet[2787]: I0620 19:16:16.492661 2787 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "24cc1d18-459b-43ce-9888-c4a1d2f80337" (UID: "24cc1d18-459b-43ce-9888-c4a1d2f80337"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:16:16.496019 kubelet[2787]: I0620 19:16:16.495944 2787 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9e78bce-0512-4be0-94c5-d8a7f9d382a9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d9e78bce-0512-4be0-94c5-d8a7f9d382a9" (UID: "d9e78bce-0512-4be0-94c5-d8a7f9d382a9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 19:16:16.496470 kubelet[2787]: I0620 19:16:16.496425 2787 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9e78bce-0512-4be0-94c5-d8a7f9d382a9-kube-api-access-cjzmb" (OuterVolumeSpecName: "kube-api-access-cjzmb") pod "d9e78bce-0512-4be0-94c5-d8a7f9d382a9" (UID: "d9e78bce-0512-4be0-94c5-d8a7f9d382a9"). InnerVolumeSpecName "kube-api-access-cjzmb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:16:16.508604 kubelet[2787]: I0620 19:16:16.508546 2787 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24cc1d18-459b-43ce-9888-c4a1d2f80337-kube-api-access-4tfp6" (OuterVolumeSpecName: "kube-api-access-4tfp6") pod "24cc1d18-459b-43ce-9888-c4a1d2f80337" (UID: "24cc1d18-459b-43ce-9888-c4a1d2f80337"). InnerVolumeSpecName "kube-api-access-4tfp6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:16:16.508743 kubelet[2787]: I0620 19:16:16.508632 2787 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24cc1d18-459b-43ce-9888-c4a1d2f80337-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "24cc1d18-459b-43ce-9888-c4a1d2f80337" (UID: "24cc1d18-459b-43ce-9888-c4a1d2f80337"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 19:16:16.509627 kubelet[2787]: I0620 19:16:16.509589 2787 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24cc1d18-459b-43ce-9888-c4a1d2f80337-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "24cc1d18-459b-43ce-9888-c4a1d2f80337" (UID: "24cc1d18-459b-43ce-9888-c4a1d2f80337"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:16:16.510762 kubelet[2787]: I0620 19:16:16.510723 2787 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24cc1d18-459b-43ce-9888-c4a1d2f80337-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "24cc1d18-459b-43ce-9888-c4a1d2f80337" (UID: "24cc1d18-459b-43ce-9888-c4a1d2f80337"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 19:16:16.578599 kubelet[2787]: E0620 19:16:16.578538 2787 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 20 19:16:16.585389 kubelet[2787]: I0620 19:16:16.585108 2787 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9e78bce-0512-4be0-94c5-d8a7f9d382a9-cilium-config-path\") on node \"ci-4230-2-0-5-45318d0d95\" DevicePath \"\"" Jun 20 19:16:16.585389 kubelet[2787]: I0620 19:16:16.585184 2787 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cjzmb\" (UniqueName: \"kubernetes.io/projected/d9e78bce-0512-4be0-94c5-d8a7f9d382a9-kube-api-access-cjzmb\") on node \"ci-4230-2-0-5-45318d0d95\" DevicePath \"\"" Jun 20 19:16:16.585389 kubelet[2787]: I0620 19:16:16.585209 2787 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4tfp6\" (UniqueName: \"kubernetes.io/projected/24cc1d18-459b-43ce-9888-c4a1d2f80337-kube-api-access-4tfp6\") on node \"ci-4230-2-0-5-45318d0d95\" DevicePath \"\"" Jun 20 19:16:16.585389 kubelet[2787]: I0620 19:16:16.585227 2787 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/24cc1d18-459b-43ce-9888-c4a1d2f80337-hubble-tls\") on node \"ci-4230-2-0-5-45318d0d95\" DevicePath \"\"" Jun 20 19:16:16.585389 kubelet[2787]: I0620 19:16:16.585244 2787 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-host-proc-sys-kernel\") on node \"ci-4230-2-0-5-45318d0d95\" DevicePath \"\"" Jun 20 19:16:16.585389 kubelet[2787]: I0620 19:16:16.585259 2787 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/24cc1d18-459b-43ce-9888-c4a1d2f80337-cilium-config-path\") on node \"ci-4230-2-0-5-45318d0d95\" DevicePath \"\"" Jun 20 19:16:16.585389 kubelet[2787]: I0620 19:16:16.585278 2787 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/24cc1d18-459b-43ce-9888-c4a1d2f80337-clustermesh-secrets\") on node \"ci-4230-2-0-5-45318d0d95\" DevicePath \"\"" Jun 20 19:16:16.585819 kubelet[2787]: I0620 19:16:16.585294 2787 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-etc-cni-netd\") on node \"ci-4230-2-0-5-45318d0d95\" DevicePath \"\"" Jun 20 19:16:16.585819 kubelet[2787]: I0620 19:16:16.585311 2787 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-xtables-lock\") on node \"ci-4230-2-0-5-45318d0d95\" DevicePath \"\"" Jun 20 19:16:16.585819 kubelet[2787]: I0620 19:16:16.585325 2787 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-cilium-cgroup\") on node \"ci-4230-2-0-5-45318d0d95\" DevicePath \"\"" Jun 20 19:16:16.585819 kubelet[2787]: I0620 19:16:16.585339 2787 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-host-proc-sys-net\") on node \"ci-4230-2-0-5-45318d0d95\" DevicePath \"\"" Jun 20 19:16:16.585819 kubelet[2787]: I0620 
19:16:16.585354 2787 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/24cc1d18-459b-43ce-9888-c4a1d2f80337-bpf-maps\") on node \"ci-4230-2-0-5-45318d0d95\" DevicePath \"\"" Jun 20 19:16:17.160714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f3b9e4e5fe28da7a81ad98ce97a7e249872b2a03e4ef8749beb8069ebe2c990-rootfs.mount: Deactivated successfully. Jun 20 19:16:17.160862 systemd[1]: var-lib-kubelet-pods-d9e78bce\x2d0512\x2d4be0\x2d94c5\x2dd8a7f9d382a9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcjzmb.mount: Deactivated successfully. Jun 20 19:16:17.161006 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3631dea6ebee25773296ddf4e1c54f2eac3513f407fe7cc5f6c83f59a2311b76-rootfs.mount: Deactivated successfully. Jun 20 19:16:17.161071 systemd[1]: var-lib-kubelet-pods-24cc1d18\x2d459b\x2d43ce\x2d9888\x2dc4a1d2f80337-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4tfp6.mount: Deactivated successfully. Jun 20 19:16:17.161145 systemd[1]: var-lib-kubelet-pods-24cc1d18\x2d459b\x2d43ce\x2d9888\x2dc4a1d2f80337-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 20 19:16:17.161230 systemd[1]: var-lib-kubelet-pods-24cc1d18\x2d459b\x2d43ce\x2d9888\x2dc4a1d2f80337-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 20 19:16:17.390457 kubelet[2787]: I0620 19:16:17.390275 2787 scope.go:117] "RemoveContainer" containerID="ef73b63f2731eadbb941446d458bc7fbe1255e9c45feea2495d3e3258610181e" Jun 20 19:16:17.395807 containerd[1495]: time="2025-06-20T19:16:17.395726871Z" level=info msg="RemoveContainer for \"ef73b63f2731eadbb941446d458bc7fbe1255e9c45feea2495d3e3258610181e\"" Jun 20 19:16:17.401265 systemd[1]: Removed slice kubepods-besteffort-podd9e78bce_0512_4be0_94c5_d8a7f9d382a9.slice - libcontainer container kubepods-besteffort-podd9e78bce_0512_4be0_94c5_d8a7f9d382a9.slice. Jun 20 19:16:17.407211 containerd[1495]: time="2025-06-20T19:16:17.406201048Z" level=info msg="RemoveContainer for \"ef73b63f2731eadbb941446d458bc7fbe1255e9c45feea2495d3e3258610181e\" returns successfully" Jun 20 19:16:17.410423 systemd[1]: Removed slice kubepods-burstable-pod24cc1d18_459b_43ce_9888_c4a1d2f80337.slice - libcontainer container kubepods-burstable-pod24cc1d18_459b_43ce_9888_c4a1d2f80337.slice. Jun 20 19:16:17.410884 kubelet[2787]: I0620 19:16:17.410794 2787 scope.go:117] "RemoveContainer" containerID="d121ab2c94172cf8f86483a787dbe57b724026061289169f76c5bbb8a45eee44" Jun 20 19:16:17.411016 systemd[1]: kubepods-burstable-pod24cc1d18_459b_43ce_9888_c4a1d2f80337.slice: Consumed 8.362s CPU time, 127.2M memory peak, 136K read from disk, 12.9M written to disk. 
Jun 20 19:16:17.416898 containerd[1495]: time="2025-06-20T19:16:17.415384142Z" level=info msg="RemoveContainer for \"d121ab2c94172cf8f86483a787dbe57b724026061289169f76c5bbb8a45eee44\"" Jun 20 19:16:17.423314 containerd[1495]: time="2025-06-20T19:16:17.423263994Z" level=info msg="RemoveContainer for \"d121ab2c94172cf8f86483a787dbe57b724026061289169f76c5bbb8a45eee44\" returns successfully" Jun 20 19:16:17.423927 kubelet[2787]: I0620 19:16:17.423888 2787 scope.go:117] "RemoveContainer" containerID="e76329f2642bbe9e0a2616311c63dc00a54847c16170437f755ebbbcc3d5261a" Jun 20 19:16:17.426592 containerd[1495]: time="2025-06-20T19:16:17.426527479Z" level=info msg="RemoveContainer for \"e76329f2642bbe9e0a2616311c63dc00a54847c16170437f755ebbbcc3d5261a\"" Jun 20 19:16:17.433393 containerd[1495]: time="2025-06-20T19:16:17.432740609Z" level=info msg="RemoveContainer for \"e76329f2642bbe9e0a2616311c63dc00a54847c16170437f755ebbbcc3d5261a\" returns successfully" Jun 20 19:16:17.434331 kubelet[2787]: I0620 19:16:17.434298 2787 scope.go:117] "RemoveContainer" containerID="a10a8f44aff2adfe51036327ab6e6232c32f3c6eb474dfac58dda5a83ca649ea" Jun 20 19:16:17.438838 containerd[1495]: time="2025-06-20T19:16:17.438796018Z" level=info msg="RemoveContainer for \"a10a8f44aff2adfe51036327ab6e6232c32f3c6eb474dfac58dda5a83ca649ea\"" Jun 20 19:16:17.445791 containerd[1495]: time="2025-06-20T19:16:17.445721989Z" level=info msg="RemoveContainer for \"a10a8f44aff2adfe51036327ab6e6232c32f3c6eb474dfac58dda5a83ca649ea\" returns successfully" Jun 20 19:16:17.446187 kubelet[2787]: I0620 19:16:17.446068 2787 scope.go:117] "RemoveContainer" containerID="17cd884940fbbf1f71b7aa998b02fbc7adecadc23840a90bfb4635a84462a43d" Jun 20 19:16:17.448892 containerd[1495]: time="2025-06-20T19:16:17.448832074Z" level=info msg="RemoveContainer for \"17cd884940fbbf1f71b7aa998b02fbc7adecadc23840a90bfb4635a84462a43d\"" Jun 20 19:16:17.454111 containerd[1495]: time="2025-06-20T19:16:17.454044442Z" level=info msg="RemoveContainer for \"17cd884940fbbf1f71b7aa998b02fbc7adecadc23840a90bfb4635a84462a43d\" returns successfully" Jun 20 19:16:17.456250 kubelet[2787]: I0620 19:16:17.454776 2787 scope.go:117] "RemoveContainer" containerID="a028b22aa97a0646ecf0b71246115b8f4b788b0de9889369edd480e3d0eaf20f" Jun 20 19:16:17.458318 containerd[1495]: time="2025-06-20T19:16:17.458262529Z" level=info msg="RemoveContainer for \"a028b22aa97a0646ecf0b71246115b8f4b788b0de9889369edd480e3d0eaf20f\"" Jun 20 19:16:17.468346 containerd[1495]: time="2025-06-20T19:16:17.468278344Z" level=info msg="RemoveContainer for \"a028b22aa97a0646ecf0b71246115b8f4b788b0de9889369edd480e3d0eaf20f\" returns successfully" Jun 20 19:16:18.234039 sshd[4407]: Connection closed by 147.75.109.163 port 43544 Jun 20 19:16:18.235253 sshd-session[4403]: pam_unix(sshd:session): session closed for user core Jun 20 19:16:18.241881 systemd-logind[1475]: Session 21 logged out. Waiting for processes to exit. Jun 20 19:16:18.242880 systemd[1]: sshd@20-49.12.190.100:22-147.75.109.163:43544.service: Deactivated successfully. Jun 20 19:16:18.246552 systemd[1]: session-21.scope: Deactivated successfully. Jun 20 19:16:18.248255 systemd[1]: session-21.scope: Consumed 1.822s CPU time, 23.5M memory peak. Jun 20 19:16:18.250043 systemd-logind[1475]: Removed session 21. 
Jun 20 19:16:18.366472 kubelet[2787]: I0620 19:16:18.366412 2787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24cc1d18-459b-43ce-9888-c4a1d2f80337" path="/var/lib/kubelet/pods/24cc1d18-459b-43ce-9888-c4a1d2f80337/volumes" Jun 20 19:16:18.367692 kubelet[2787]: I0620 19:16:18.367654 2787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9e78bce-0512-4be0-94c5-d8a7f9d382a9" path="/var/lib/kubelet/pods/d9e78bce-0512-4be0-94c5-d8a7f9d382a9/volumes" Jun 20 19:16:18.419601 systemd[1]: Started sshd@21-49.12.190.100:22-147.75.109.163:54490.service - OpenSSH per-connection server daemon (147.75.109.163:54490). Jun 20 19:16:19.405099 sshd[4569]: Accepted publickey for core from 147.75.109.163 port 54490 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:16:19.407612 sshd-session[4569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:16:19.415859 systemd-logind[1475]: New session 22 of user core. Jun 20 19:16:19.424559 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 20 19:16:21.065250 kubelet[2787]: I0620 19:16:21.065200 2787 memory_manager.go:355] "RemoveStaleState removing state" podUID="d9e78bce-0512-4be0-94c5-d8a7f9d382a9" containerName="cilium-operator" Jun 20 19:16:21.065250 kubelet[2787]: I0620 19:16:21.065230 2787 memory_manager.go:355] "RemoveStaleState removing state" podUID="24cc1d18-459b-43ce-9888-c4a1d2f80337" containerName="cilium-agent" Jun 20 19:16:21.077402 systemd[1]: Created slice kubepods-burstable-pod539e3dc1_5ad5_42ca_bd0e_5a30c7248904.slice - libcontainer container kubepods-burstable-pod539e3dc1_5ad5_42ca_bd0e_5a30c7248904.slice. Jun 20 19:16:21.118411 kubelet[2787]: I0620 19:16:21.117812 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/539e3dc1-5ad5-42ca-bd0e-5a30c7248904-cilium-cgroup\") pod \"cilium-xj2m4\" (UID: \"539e3dc1-5ad5-42ca-bd0e-5a30c7248904\") " pod="kube-system/cilium-xj2m4" Jun 20 19:16:21.118411 kubelet[2787]: I0620 19:16:21.117857 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/539e3dc1-5ad5-42ca-bd0e-5a30c7248904-cni-path\") pod \"cilium-xj2m4\" (UID: \"539e3dc1-5ad5-42ca-bd0e-5a30c7248904\") " pod="kube-system/cilium-xj2m4" Jun 20 19:16:21.118411 kubelet[2787]: I0620 19:16:21.117877 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/539e3dc1-5ad5-42ca-bd0e-5a30c7248904-lib-modules\") pod \"cilium-xj2m4\" (UID: \"539e3dc1-5ad5-42ca-bd0e-5a30c7248904\") " pod="kube-system/cilium-xj2m4" Jun 20 19:16:21.118411 kubelet[2787]: I0620 19:16:21.117924 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/539e3dc1-5ad5-42ca-bd0e-5a30c7248904-cilium-run\") pod \"cilium-xj2m4\" (UID: \"539e3dc1-5ad5-42ca-bd0e-5a30c7248904\") " pod="kube-system/cilium-xj2m4" Jun 20 19:16:21.118411 kubelet[2787]: I0620 19:16:21.117957 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/539e3dc1-5ad5-42ca-bd0e-5a30c7248904-bpf-maps\") pod \"cilium-xj2m4\" (UID: \"539e3dc1-5ad5-42ca-bd0e-5a30c7248904\") " pod="kube-system/cilium-xj2m4" Jun 20 19:16:21.118411 
kubelet[2787]: I0620 19:16:21.117972 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/539e3dc1-5ad5-42ca-bd0e-5a30c7248904-xtables-lock\") pod \"cilium-xj2m4\" (UID: \"539e3dc1-5ad5-42ca-bd0e-5a30c7248904\") " pod="kube-system/cilium-xj2m4" Jun 20 19:16:21.118669 kubelet[2787]: I0620 19:16:21.117987 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/539e3dc1-5ad5-42ca-bd0e-5a30c7248904-clustermesh-secrets\") pod \"cilium-xj2m4\" (UID: \"539e3dc1-5ad5-42ca-bd0e-5a30c7248904\") " pod="kube-system/cilium-xj2m4" Jun 20 19:16:21.118669 kubelet[2787]: I0620 19:16:21.118007 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxrzm\" (UniqueName: \"kubernetes.io/projected/539e3dc1-5ad5-42ca-bd0e-5a30c7248904-kube-api-access-cxrzm\") pod \"cilium-xj2m4\" (UID: \"539e3dc1-5ad5-42ca-bd0e-5a30c7248904\") " pod="kube-system/cilium-xj2m4" Jun 20 19:16:21.118669 kubelet[2787]: I0620 19:16:21.118024 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/539e3dc1-5ad5-42ca-bd0e-5a30c7248904-host-proc-sys-kernel\") pod \"cilium-xj2m4\" (UID: \"539e3dc1-5ad5-42ca-bd0e-5a30c7248904\") " pod="kube-system/cilium-xj2m4" Jun 20 19:16:21.118669 kubelet[2787]: I0620 19:16:21.118040 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/539e3dc1-5ad5-42ca-bd0e-5a30c7248904-cilium-config-path\") pod \"cilium-xj2m4\" (UID: \"539e3dc1-5ad5-42ca-bd0e-5a30c7248904\") " pod="kube-system/cilium-xj2m4" Jun 20 19:16:21.118669 kubelet[2787]: I0620 19:16:21.118057 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/539e3dc1-5ad5-42ca-bd0e-5a30c7248904-hostproc\") pod \"cilium-xj2m4\" (UID: \"539e3dc1-5ad5-42ca-bd0e-5a30c7248904\") " pod="kube-system/cilium-xj2m4" Jun 20 19:16:21.118938 kubelet[2787]: I0620 19:16:21.118079 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/539e3dc1-5ad5-42ca-bd0e-5a30c7248904-etc-cni-netd\") pod \"cilium-xj2m4\" (UID: \"539e3dc1-5ad5-42ca-bd0e-5a30c7248904\") " pod="kube-system/cilium-xj2m4" Jun 20 19:16:21.118938 kubelet[2787]: I0620 19:16:21.118094 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/539e3dc1-5ad5-42ca-bd0e-5a30c7248904-hubble-tls\") pod \"cilium-xj2m4\" (UID: \"539e3dc1-5ad5-42ca-bd0e-5a30c7248904\") " pod="kube-system/cilium-xj2m4" Jun 20 19:16:21.118938 kubelet[2787]: I0620 19:16:21.118114 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/539e3dc1-5ad5-42ca-bd0e-5a30c7248904-cilium-ipsec-secrets\") pod \"cilium-xj2m4\" (UID: \"539e3dc1-5ad5-42ca-bd0e-5a30c7248904\") " pod="kube-system/cilium-xj2m4" Jun 20 19:16:21.118938 kubelet[2787]: I0620 19:16:21.118130 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/539e3dc1-5ad5-42ca-bd0e-5a30c7248904-host-proc-sys-net\") pod \"cilium-xj2m4\" (UID: \"539e3dc1-5ad5-42ca-bd0e-5a30c7248904\") " pod="kube-system/cilium-xj2m4" Jun 20 19:16:21.236275 sshd[4571]: Connection closed by 147.75.109.163 port 54490 Jun 20 19:16:21.240413 sshd-session[4569]: pam_unix(sshd:session): session closed for user core Jun 20 19:16:21.256943 systemd[1]: sshd@21-49.12.190.100:22-147.75.109.163:54490.service: Deactivated successfully. Jun 20 19:16:21.260359 systemd[1]: session-22.scope: Deactivated successfully. Jun 20 19:16:21.260876 systemd[1]: session-22.scope: Consumed 1.028s CPU time, 23.5M memory peak. Jun 20 19:16:21.263091 systemd-logind[1475]: Session 22 logged out. Waiting for processes to exit. Jun 20 19:16:21.265460 systemd-logind[1475]: Removed session 22. Jun 20 19:16:21.383294 containerd[1495]: time="2025-06-20T19:16:21.383128911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xj2m4,Uid:539e3dc1-5ad5-42ca-bd0e-5a30c7248904,Namespace:kube-system,Attempt:0,}" Jun 20 19:16:21.415254 containerd[1495]: time="2025-06-20T19:16:21.414022279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:16:21.415254 containerd[1495]: time="2025-06-20T19:16:21.414084279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:16:21.415254 containerd[1495]: time="2025-06-20T19:16:21.414098439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:16:21.415254 containerd[1495]: time="2025-06-20T19:16:21.414200599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:16:21.420846 systemd[1]: Started sshd@22-49.12.190.100:22-147.75.109.163:54494.service - OpenSSH per-connection server daemon (147.75.109.163:54494). Jun 20 19:16:21.440422 systemd[1]: Started cri-containerd-a6c8bbbd70fbc32fa7f87366e51602eee75dd9710f56901d985b2903305d20b6.scope - libcontainer container a6c8bbbd70fbc32fa7f87366e51602eee75dd9710f56901d985b2903305d20b6. 
Jun 20 19:16:21.469902 containerd[1495]: time="2025-06-20T19:16:21.469831964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xj2m4,Uid:539e3dc1-5ad5-42ca-bd0e-5a30c7248904,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6c8bbbd70fbc32fa7f87366e51602eee75dd9710f56901d985b2903305d20b6\"" Jun 20 19:16:21.476628 containerd[1495]: time="2025-06-20T19:16:21.476559654Z" level=info msg="CreateContainer within sandbox \"a6c8bbbd70fbc32fa7f87366e51602eee75dd9710f56901d985b2903305d20b6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 19:16:21.489931 containerd[1495]: time="2025-06-20T19:16:21.489843995Z" level=info msg="CreateContainer within sandbox \"a6c8bbbd70fbc32fa7f87366e51602eee75dd9710f56901d985b2903305d20b6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5dadad114a8c02cd8feb960b43e352cdf4d2883093851b777ea4d25fe5280144\"" Jun 20 19:16:21.492236 containerd[1495]: time="2025-06-20T19:16:21.490921316Z" level=info msg="StartContainer for \"5dadad114a8c02cd8feb960b43e352cdf4d2883093851b777ea4d25fe5280144\"" Jun 20 19:16:21.522411 systemd[1]: Started cri-containerd-5dadad114a8c02cd8feb960b43e352cdf4d2883093851b777ea4d25fe5280144.scope - libcontainer container 5dadad114a8c02cd8feb960b43e352cdf4d2883093851b777ea4d25fe5280144. Jun 20 19:16:21.562426 containerd[1495]: time="2025-06-20T19:16:21.561524025Z" level=info msg="StartContainer for \"5dadad114a8c02cd8feb960b43e352cdf4d2883093851b777ea4d25fe5280144\" returns successfully" Jun 20 19:16:21.569308 systemd[1]: cri-containerd-5dadad114a8c02cd8feb960b43e352cdf4d2883093851b777ea4d25fe5280144.scope: Deactivated successfully. Jun 20 19:16:21.580974 kubelet[2787]: E0620 19:16:21.580872 2787 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 20 19:16:21.609531 containerd[1495]: time="2025-06-20T19:16:21.609431178Z" level=info msg="shim disconnected" id=5dadad114a8c02cd8feb960b43e352cdf4d2883093851b777ea4d25fe5280144 namespace=k8s.io Jun 20 19:16:21.609531 containerd[1495]: time="2025-06-20T19:16:21.609513618Z" level=warning msg="cleaning up after shim disconnected" id=5dadad114a8c02cd8feb960b43e352cdf4d2883093851b777ea4d25fe5280144 namespace=k8s.io Jun 20 19:16:21.609531 containerd[1495]: time="2025-06-20T19:16:21.609529178Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:16:22.436099 containerd[1495]: time="2025-06-20T19:16:22.435930321Z" level=info msg="CreateContainer within sandbox \"a6c8bbbd70fbc32fa7f87366e51602eee75dd9710f56901d985b2903305d20b6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 19:16:22.440537 sshd[4602]: Accepted publickey for core from 147.75.109.163 port 54494 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:16:22.443687 sshd-session[4602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:16:22.457891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3707749967.mount: Deactivated successfully. 
Jun 20 19:16:22.461847 containerd[1495]: time="2025-06-20T19:16:22.461772201Z" level=info msg="CreateContainer within sandbox \"a6c8bbbd70fbc32fa7f87366e51602eee75dd9710f56901d985b2903305d20b6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"baf4f182f3e5000077b3a3a7cee8e3ae71be0d788429a3e94a58a9c819fbb7b5\"" Jun 20 19:16:22.464791 containerd[1495]: time="2025-06-20T19:16:22.464545285Z" level=info msg="StartContainer for \"baf4f182f3e5000077b3a3a7cee8e3ae71be0d788429a3e94a58a9c819fbb7b5\"" Jun 20 19:16:22.471543 systemd-logind[1475]: New session 23 of user core. Jun 20 19:16:22.476342 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 20 19:16:22.503386 systemd[1]: Started cri-containerd-baf4f182f3e5000077b3a3a7cee8e3ae71be0d788429a3e94a58a9c819fbb7b5.scope - libcontainer container baf4f182f3e5000077b3a3a7cee8e3ae71be0d788429a3e94a58a9c819fbb7b5. Jun 20 19:16:22.538263 containerd[1495]: time="2025-06-20T19:16:22.538039597Z" level=info msg="StartContainer for \"baf4f182f3e5000077b3a3a7cee8e3ae71be0d788429a3e94a58a9c819fbb7b5\" returns successfully" Jun 20 19:16:22.550950 systemd[1]: cri-containerd-baf4f182f3e5000077b3a3a7cee8e3ae71be0d788429a3e94a58a9c819fbb7b5.scope: Deactivated successfully. Jun 20 19:16:22.579585 containerd[1495]: time="2025-06-20T19:16:22.579495260Z" level=info msg="shim disconnected" id=baf4f182f3e5000077b3a3a7cee8e3ae71be0d788429a3e94a58a9c819fbb7b5 namespace=k8s.io Jun 20 19:16:22.580586 containerd[1495]: time="2025-06-20T19:16:22.580324541Z" level=warning msg="cleaning up after shim disconnected" id=baf4f182f3e5000077b3a3a7cee8e3ae71be0d788429a3e94a58a9c819fbb7b5 namespace=k8s.io Jun 20 19:16:22.580586 containerd[1495]: time="2025-06-20T19:16:22.580390262Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:16:23.136578 sshd[4703]: Connection closed by 147.75.109.163 port 54494 Jun 20 19:16:23.137328 sshd-session[4602]: pam_unix(sshd:session): session closed for user core Jun 20 19:16:23.141858 systemd[1]: sshd@22-49.12.190.100:22-147.75.109.163:54494.service: Deactivated successfully. Jun 20 19:16:23.144198 systemd[1]: session-23.scope: Deactivated successfully. Jun 20 19:16:23.145634 systemd-logind[1475]: Session 23 logged out. Waiting for processes to exit. Jun 20 19:16:23.147399 systemd-logind[1475]: Removed session 23. Jun 20 19:16:23.228987 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-baf4f182f3e5000077b3a3a7cee8e3ae71be0d788429a3e94a58a9c819fbb7b5-rootfs.mount: Deactivated successfully. Jun 20 19:16:23.321283 systemd[1]: Started sshd@23-49.12.190.100:22-147.75.109.163:54500.service - OpenSSH per-connection server daemon (147.75.109.163:54500). Jun 20 19:16:23.439094 containerd[1495]: time="2025-06-20T19:16:23.438971649Z" level=info msg="CreateContainer within sandbox \"a6c8bbbd70fbc32fa7f87366e51602eee75dd9710f56901d985b2903305d20b6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 19:16:23.461662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount259531334.mount: Deactivated successfully. 
Jun 20 19:16:23.465401 containerd[1495]: time="2025-06-20T19:16:23.465337009Z" level=info msg="CreateContainer within sandbox \"a6c8bbbd70fbc32fa7f87366e51602eee75dd9710f56901d985b2903305d20b6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"16e4d83c7370fa274fd76c25f9820ee431ab58b19a56011acc635dc3e3948586\"" Jun 20 19:16:23.467915 containerd[1495]: time="2025-06-20T19:16:23.467763052Z" level=info msg="StartContainer for \"16e4d83c7370fa274fd76c25f9820ee431ab58b19a56011acc635dc3e3948586\"" Jun 20 19:16:23.516464 systemd[1]: Started cri-containerd-16e4d83c7370fa274fd76c25f9820ee431ab58b19a56011acc635dc3e3948586.scope - libcontainer container 16e4d83c7370fa274fd76c25f9820ee431ab58b19a56011acc635dc3e3948586. Jun 20 19:16:23.564228 containerd[1495]: time="2025-06-20T19:16:23.563421478Z" level=info msg="StartContainer for \"16e4d83c7370fa274fd76c25f9820ee431ab58b19a56011acc635dc3e3948586\" returns successfully" Jun 20 19:16:23.569843 systemd[1]: cri-containerd-16e4d83c7370fa274fd76c25f9820ee431ab58b19a56011acc635dc3e3948586.scope: Deactivated successfully. Jun 20 19:16:23.611684 containerd[1495]: time="2025-06-20T19:16:23.611552231Z" level=info msg="shim disconnected" id=16e4d83c7370fa274fd76c25f9820ee431ab58b19a56011acc635dc3e3948586 namespace=k8s.io Jun 20 19:16:23.611684 containerd[1495]: time="2025-06-20T19:16:23.611637591Z" level=warning msg="cleaning up after shim disconnected" id=16e4d83c7370fa274fd76c25f9820ee431ab58b19a56011acc635dc3e3948586 namespace=k8s.io Jun 20 19:16:23.611684 containerd[1495]: time="2025-06-20T19:16:23.611653631Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:16:23.745647 kubelet[2787]: I0620 19:16:23.745527 2787 setters.go:602] "Node became not ready" node="ci-4230-2-0-5-45318d0d95" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-06-20T19:16:23Z","lastTransitionTime":"2025-06-20T19:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jun 20 19:16:24.229007 systemd[1]: run-containerd-runc-k8s.io-16e4d83c7370fa274fd76c25f9820ee431ab58b19a56011acc635dc3e3948586-runc.HejuCr.mount: Deactivated successfully. Jun 20 19:16:24.229224 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16e4d83c7370fa274fd76c25f9820ee431ab58b19a56011acc635dc3e3948586-rootfs.mount: Deactivated successfully. Jun 20 19:16:24.312925 sshd[4762]: Accepted publickey for core from 147.75.109.163 port 54500 ssh2: RSA SHA256:KJCE0GuS7IeuJf3d+aFjOxUe2ajf60YPU/gwZh9+pdw Jun 20 19:16:24.315677 sshd-session[4762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:16:24.324610 systemd-logind[1475]: New session 24 of user core. Jun 20 19:16:24.332479 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 20 19:16:24.443351 containerd[1495]: time="2025-06-20T19:16:24.443287132Z" level=info msg="CreateContainer within sandbox \"a6c8bbbd70fbc32fa7f87366e51602eee75dd9710f56901d985b2903305d20b6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 19:16:24.463419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3892898067.mount: Deactivated successfully. 
Jun 20 19:16:24.469651 containerd[1495]: time="2025-06-20T19:16:24.469588932Z" level=info msg="CreateContainer within sandbox \"a6c8bbbd70fbc32fa7f87366e51602eee75dd9710f56901d985b2903305d20b6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0eff79ee4646c49d96a5b29fb80898520aa7535f32560fa3db0a0910cdf32e98\"" Jun 20 19:16:24.470460 containerd[1495]: time="2025-06-20T19:16:24.470400733Z" level=info msg="StartContainer for \"0eff79ee4646c49d96a5b29fb80898520aa7535f32560fa3db0a0910cdf32e98\"" Jun 20 19:16:24.513514 systemd[1]: Started cri-containerd-0eff79ee4646c49d96a5b29fb80898520aa7535f32560fa3db0a0910cdf32e98.scope - libcontainer container 0eff79ee4646c49d96a5b29fb80898520aa7535f32560fa3db0a0910cdf32e98. Jun 20 19:16:24.541791 systemd[1]: cri-containerd-0eff79ee4646c49d96a5b29fb80898520aa7535f32560fa3db0a0910cdf32e98.scope: Deactivated successfully. Jun 20 19:16:24.546291 containerd[1495]: time="2025-06-20T19:16:24.546251048Z" level=info msg="StartContainer for \"0eff79ee4646c49d96a5b29fb80898520aa7535f32560fa3db0a0910cdf32e98\" returns successfully" Jun 20 19:16:24.571955 containerd[1495]: time="2025-06-20T19:16:24.571885047Z" level=info msg="shim disconnected" id=0eff79ee4646c49d96a5b29fb80898520aa7535f32560fa3db0a0910cdf32e98 namespace=k8s.io Jun 20 19:16:24.572471 containerd[1495]: time="2025-06-20T19:16:24.572249407Z" level=warning msg="cleaning up after shim disconnected" id=0eff79ee4646c49d96a5b29fb80898520aa7535f32560fa3db0a0910cdf32e98 namespace=k8s.io Jun 20 19:16:24.572471 containerd[1495]: time="2025-06-20T19:16:24.572272527Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:16:25.228168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0eff79ee4646c49d96a5b29fb80898520aa7535f32560fa3db0a0910cdf32e98-rootfs.mount: Deactivated successfully. Jun 20 19:16:25.451222 containerd[1495]: time="2025-06-20T19:16:25.451109815Z" level=info msg="CreateContainer within sandbox \"a6c8bbbd70fbc32fa7f87366e51602eee75dd9710f56901d985b2903305d20b6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 19:16:25.470452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount874984859.mount: Deactivated successfully. Jun 20 19:16:25.474593 containerd[1495]: time="2025-06-20T19:16:25.474451690Z" level=info msg="CreateContainer within sandbox \"a6c8bbbd70fbc32fa7f87366e51602eee75dd9710f56901d985b2903305d20b6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dadaf07937b6dc70bc6b576753a634e4c1b47d24a4e480157137a468e2ae1ae9\"" Jun 20 19:16:25.476318 containerd[1495]: time="2025-06-20T19:16:25.475310691Z" level=info msg="StartContainer for \"dadaf07937b6dc70bc6b576753a634e4c1b47d24a4e480157137a468e2ae1ae9\"" Jun 20 19:16:25.523456 systemd[1]: Started cri-containerd-dadaf07937b6dc70bc6b576753a634e4c1b47d24a4e480157137a468e2ae1ae9.scope - libcontainer container dadaf07937b6dc70bc6b576753a634e4c1b47d24a4e480157137a468e2ae1ae9. 
Jun 20 19:16:25.567583 containerd[1495]: time="2025-06-20T19:16:25.567531790Z" level=info msg="StartContainer for \"dadaf07937b6dc70bc6b576753a634e4c1b47d24a4e480157137a468e2ae1ae9\" returns successfully" Jun 20 19:16:25.892204 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jun 20 19:16:26.483426 kubelet[2787]: I0620 19:16:26.483318 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xj2m4" podStartSLOduration=5.4832950480000004 podStartE2EDuration="5.483295048s" podCreationTimestamp="2025-06-20 19:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:16:26.481668206 +0000 UTC m=+350.240117889" watchObservedRunningTime="2025-06-20 19:16:26.483295048 +0000 UTC m=+350.241744691" Jun 20 19:16:27.073922 kubelet[2787]: E0620 19:16:27.073826 2787 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:48356->127.0.0.1:42367: write tcp 127.0.0.1:48356->127.0.0.1:42367: write: broken pipe Jun 20 19:16:28.886516 systemd-networkd[1392]: lxc_health: Link UP Jun 20 19:16:28.897061 systemd-networkd[1392]: lxc_health: Gained carrier Jun 20 19:16:29.178499 systemd[1]: run-containerd-runc-k8s.io-dadaf07937b6dc70bc6b576753a634e4c1b47d24a4e480157137a468e2ae1ae9-runc.VvFRdK.mount: Deactivated successfully. Jun 20 19:16:30.697733 systemd-networkd[1392]: lxc_health: Gained IPv6LL Jun 20 19:16:33.493581 systemd[1]: run-containerd-runc-k8s.io-dadaf07937b6dc70bc6b576753a634e4c1b47d24a4e480157137a468e2ae1ae9-runc.uobnHE.mount: Deactivated successfully. Jun 20 19:16:35.904318 sshd[4819]: Connection closed by 147.75.109.163 port 54500 Jun 20 19:16:35.905335 sshd-session[4762]: pam_unix(sshd:session): session closed for user core Jun 20 19:16:35.910561 systemd[1]: sshd@23-49.12.190.100:22-147.75.109.163:54500.service: Deactivated successfully. Jun 20 19:16:35.913347 systemd[1]: session-24.scope: Deactivated successfully. Jun 20 19:16:35.915136 systemd-logind[1475]: Session 24 logged out. Waiting for processes to exit. Jun 20 19:16:35.916640 systemd-logind[1475]: Removed session 24. 
Jun 20 19:16:36.406086 containerd[1495]: time="2025-06-20T19:16:36.406004166Z" level=info msg="StopPodSandbox for \"3631dea6ebee25773296ddf4e1c54f2eac3513f407fe7cc5f6c83f59a2311b76\"" Jun 20 19:16:36.406586 containerd[1495]: time="2025-06-20T19:16:36.406205246Z" level=info msg="TearDown network for sandbox \"3631dea6ebee25773296ddf4e1c54f2eac3513f407fe7cc5f6c83f59a2311b76\" successfully" Jun 20 19:16:36.406586 containerd[1495]: time="2025-06-20T19:16:36.406231646Z" level=info msg="StopPodSandbox for \"3631dea6ebee25773296ddf4e1c54f2eac3513f407fe7cc5f6c83f59a2311b76\" returns successfully" Jun 20 19:16:36.408832 containerd[1495]: time="2025-06-20T19:16:36.407228768Z" level=info msg="RemovePodSandbox for \"3631dea6ebee25773296ddf4e1c54f2eac3513f407fe7cc5f6c83f59a2311b76\"" Jun 20 19:16:36.408832 containerd[1495]: time="2025-06-20T19:16:36.407269728Z" level=info msg="Forcibly stopping sandbox \"3631dea6ebee25773296ddf4e1c54f2eac3513f407fe7cc5f6c83f59a2311b76\"" Jun 20 19:16:36.408832 containerd[1495]: time="2025-06-20T19:16:36.407344128Z" level=info msg="TearDown network for sandbox \"3631dea6ebee25773296ddf4e1c54f2eac3513f407fe7cc5f6c83f59a2311b76\" successfully" Jun 20 19:16:36.412213 containerd[1495]: time="2025-06-20T19:16:36.412121335Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3631dea6ebee25773296ddf4e1c54f2eac3513f407fe7cc5f6c83f59a2311b76\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 20 19:16:36.412485 containerd[1495]: time="2025-06-20T19:16:36.412458055Z" level=info msg="RemovePodSandbox \"3631dea6ebee25773296ddf4e1c54f2eac3513f407fe7cc5f6c83f59a2311b76\" returns successfully" Jun 20 19:16:36.413356 containerd[1495]: time="2025-06-20T19:16:36.413299296Z" level=info msg="StopPodSandbox for \"1f3b9e4e5fe28da7a81ad98ce97a7e249872b2a03e4ef8749beb8069ebe2c990\"" Jun 20 19:16:36.413484 containerd[1495]: time="2025-06-20T19:16:36.413445457Z" level=info msg="TearDown network for sandbox \"1f3b9e4e5fe28da7a81ad98ce97a7e249872b2a03e4ef8749beb8069ebe2c990\" successfully" Jun 20 19:16:36.413484 containerd[1495]: time="2025-06-20T19:16:36.413475417Z" level=info msg="StopPodSandbox for \"1f3b9e4e5fe28da7a81ad98ce97a7e249872b2a03e4ef8749beb8069ebe2c990\" returns successfully" Jun 20 19:16:36.413962 containerd[1495]: time="2025-06-20T19:16:36.413933457Z" level=info msg="RemovePodSandbox for \"1f3b9e4e5fe28da7a81ad98ce97a7e249872b2a03e4ef8749beb8069ebe2c990\"" Jun 20 19:16:36.415455 containerd[1495]: time="2025-06-20T19:16:36.414108378Z" level=info msg="Forcibly stopping sandbox \"1f3b9e4e5fe28da7a81ad98ce97a7e249872b2a03e4ef8749beb8069ebe2c990\"" Jun 20 19:16:36.415455 containerd[1495]: time="2025-06-20T19:16:36.414215818Z" level=info msg="TearDown network for sandbox \"1f3b9e4e5fe28da7a81ad98ce97a7e249872b2a03e4ef8749beb8069ebe2c990\" successfully" Jun 20 19:16:36.417877 containerd[1495]: time="2025-06-20T19:16:36.417668863Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f3b9e4e5fe28da7a81ad98ce97a7e249872b2a03e4ef8749beb8069ebe2c990\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 20 19:16:36.417877 containerd[1495]: time="2025-06-20T19:16:36.417747063Z" level=info msg="RemovePodSandbox \"1f3b9e4e5fe28da7a81ad98ce97a7e249872b2a03e4ef8749beb8069ebe2c990\" returns successfully" Jun 20 19:16:51.116133 kubelet[2787]: E0620 19:16:51.115743 2787 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:54278->10.0.0.2:2379: read: connection timed out" Jun 20 19:16:51.124194 systemd[1]: cri-containerd-4ba3d36773ad0c9d61c8b828a4f7ccac61fdbea2b5ecba7dbcf61727491cfec7.scope: Deactivated successfully. Jun 20 19:16:51.125259 systemd[1]: cri-containerd-4ba3d36773ad0c9d61c8b828a4f7ccac61fdbea2b5ecba7dbcf61727491cfec7.scope: Consumed 5.293s CPU time, 25.2M memory peak. Jun 20 19:16:51.157633 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ba3d36773ad0c9d61c8b828a4f7ccac61fdbea2b5ecba7dbcf61727491cfec7-rootfs.mount: Deactivated successfully. Jun 20 19:16:51.169370 containerd[1495]: time="2025-06-20T19:16:51.169255257Z" level=info msg="shim disconnected" id=4ba3d36773ad0c9d61c8b828a4f7ccac61fdbea2b5ecba7dbcf61727491cfec7 namespace=k8s.io Jun 20 19:16:51.170679 containerd[1495]: time="2025-06-20T19:16:51.170384761Z" level=warning msg="cleaning up after shim disconnected" id=4ba3d36773ad0c9d61c8b828a4f7ccac61fdbea2b5ecba7dbcf61727491cfec7 namespace=k8s.io Jun 20 19:16:51.170679 containerd[1495]: time="2025-06-20T19:16:51.170450562Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:16:51.525455 kubelet[2787]: I0620 19:16:51.525387 2787 scope.go:117] "RemoveContainer" containerID="4ba3d36773ad0c9d61c8b828a4f7ccac61fdbea2b5ecba7dbcf61727491cfec7" Jun 20 19:16:51.527885 containerd[1495]: time="2025-06-20T19:16:51.527829983Z" level=info msg="CreateContainer within sandbox \"4d72bc8f0ee732c0f1816f0ccf56cfeb36a5c2dffe1cb9cdf7510bc25de56ba1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jun 20 19:16:51.547561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2662033916.mount: Deactivated successfully. Jun 20 19:16:51.550147 containerd[1495]: time="2025-06-20T19:16:51.550081940Z" level=info msg="CreateContainer within sandbox \"4d72bc8f0ee732c0f1816f0ccf56cfeb36a5c2dffe1cb9cdf7510bc25de56ba1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"c4c647289944083c7baeb89baed86f97170f14c06328451d0aaa7e24895135c1\"" Jun 20 19:16:51.550935 containerd[1495]: time="2025-06-20T19:16:51.550893198Z" level=info msg="StartContainer for \"c4c647289944083c7baeb89baed86f97170f14c06328451d0aaa7e24895135c1\"" Jun 20 19:16:51.589643 systemd[1]: Started cri-containerd-c4c647289944083c7baeb89baed86f97170f14c06328451d0aaa7e24895135c1.scope - libcontainer container c4c647289944083c7baeb89baed86f97170f14c06328451d0aaa7e24895135c1. Jun 20 19:16:51.635769 containerd[1495]: time="2025-06-20T19:16:51.635538212Z" level=info msg="StartContainer for \"c4c647289944083c7baeb89baed86f97170f14c06328451d0aaa7e24895135c1\" returns successfully" Jun 20 19:16:51.790894 systemd[1]: cri-containerd-dfa0471fc07a36796181f4c6761242f1bdfb8830ebd16a55269b3d25f4df7815.scope: Deactivated successfully. Jun 20 19:16:51.793449 systemd[1]: cri-containerd-dfa0471fc07a36796181f4c6761242f1bdfb8830ebd16a55269b3d25f4df7815.scope: Consumed 6.541s CPU time, 60M memory peak. 
Jun 20 19:16:51.824579 containerd[1495]: time="2025-06-20T19:16:51.824430742Z" level=info msg="shim disconnected" id=dfa0471fc07a36796181f4c6761242f1bdfb8830ebd16a55269b3d25f4df7815 namespace=k8s.io Jun 20 19:16:51.825114 containerd[1495]: time="2025-06-20T19:16:51.824915072Z" level=warning msg="cleaning up after shim disconnected" id=dfa0471fc07a36796181f4c6761242f1bdfb8830ebd16a55269b3d25f4df7815 namespace=k8s.io Jun 20 19:16:51.825114 containerd[1495]: time="2025-06-20T19:16:51.824944993Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:16:52.156081 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfa0471fc07a36796181f4c6761242f1bdfb8830ebd16a55269b3d25f4df7815-rootfs.mount: Deactivated successfully. Jun 20 19:16:52.534264 kubelet[2787]: I0620 19:16:52.534032 2787 scope.go:117] "RemoveContainer" containerID="dfa0471fc07a36796181f4c6761242f1bdfb8830ebd16a55269b3d25f4df7815" Jun 20 19:16:52.538205 containerd[1495]: time="2025-06-20T19:16:52.537592914Z" level=info msg="CreateContainer within sandbox \"d4fc2fd0ee46b2dce1e67d1509a8ef8dd16cec5b3f4419e78b04ea1760b935b7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jun 20 19:16:52.560631 containerd[1495]: time="2025-06-20T19:16:52.560563484Z" level=info msg="CreateContainer within sandbox \"d4fc2fd0ee46b2dce1e67d1509a8ef8dd16cec5b3f4419e78b04ea1760b935b7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"c1d5176eb1acd2a88d7b665480178811838818fbd5f104a562ff36413e6d51e4\"" Jun 20 19:16:52.562713 containerd[1495]: time="2025-06-20T19:16:52.561291139Z" level=info msg="StartContainer for \"c1d5176eb1acd2a88d7b665480178811838818fbd5f104a562ff36413e6d51e4\"" Jun 20 19:16:52.603391 systemd[1]: Started cri-containerd-c1d5176eb1acd2a88d7b665480178811838818fbd5f104a562ff36413e6d51e4.scope - libcontainer container c1d5176eb1acd2a88d7b665480178811838818fbd5f104a562ff36413e6d51e4. Jun 20 19:16:52.653402 containerd[1495]: time="2025-06-20T19:16:52.653259618Z" level=info msg="StartContainer for \"c1d5176eb1acd2a88d7b665480178811838818fbd5f104a562ff36413e6d51e4\" returns successfully" Jun 20 19:16:56.491226 kubelet[2787]: E0620 19:16:56.490982 2787 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:54082->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4230-2-0-5-45318d0d95.184ad64719d0c1ce kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4230-2-0-5-45318d0d95,UID:0fd15b6a02023c0a6e08d2f517c8567d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4230-2-0-5-45318d0d95,},FirstTimestamp:2025-06-20 19:16:46.05099259 +0000 UTC m=+369.809442233,LastTimestamp:2025-06-20 19:16:46.05099259 +0000 UTC m=+369.809442233,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-0-5-45318d0d95,}"