Jul 6 23:07:40.907937 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 6 23:07:40.907963 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Sun Jul 6 21:51:54 -00 2025 Jul 6 23:07:40.907974 kernel: KASLR enabled Jul 6 23:07:40.907980 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Jul 6 23:07:40.907986 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218 Jul 6 23:07:40.907992 kernel: random: crng init done Jul 6 23:07:40.907999 kernel: secureboot: Secure boot disabled Jul 6 23:07:40.908011 kernel: ACPI: Early table checksum verification disabled Jul 6 23:07:40.908017 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Jul 6 23:07:40.908025 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Jul 6 23:07:40.908031 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:07:40.908037 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:07:40.908043 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:07:40.908049 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:07:40.908056 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:07:40.908064 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:07:40.908070 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:07:40.908076 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:07:40.908082 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:07:40.908089 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Jul 6 23:07:40.908095 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Jul 6 23:07:40.908101 kernel: NUMA: Failed to initialise from firmware Jul 6 23:07:40.908107 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Jul 6 23:07:40.908113 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff] Jul 6 23:07:40.908120 kernel: Zone ranges: Jul 6 23:07:40.909365 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jul 6 23:07:40.909378 kernel: DMA32 empty Jul 6 23:07:40.909385 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Jul 6 23:07:40.909391 kernel: Movable zone start for each node Jul 6 23:07:40.909397 kernel: Early memory node ranges Jul 6 23:07:40.909403 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff] Jul 6 23:07:40.909410 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff] Jul 6 23:07:40.909416 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff] Jul 6 23:07:40.909423 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Jul 6 23:07:40.909429 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Jul 6 23:07:40.909435 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Jul 6 23:07:40.909441 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Jul 6 23:07:40.909449 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Jul 6 23:07:40.909456 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Jul 6 23:07:40.909462 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x0000000139ffffff] Jul 6 23:07:40.909471 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Jul 6 23:07:40.909478 kernel: psci: probing for conduit method from ACPI. Jul 6 23:07:40.909485 kernel: psci: PSCIv1.1 detected in firmware. Jul 6 23:07:40.909493 kernel: psci: Using standard PSCI v0.2 function IDs Jul 6 23:07:40.909499 kernel: psci: Trusted OS migration not required Jul 6 23:07:40.909506 kernel: psci: SMC Calling Convention v1.1 Jul 6 23:07:40.909513 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jul 6 23:07:40.909519 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jul 6 23:07:40.909526 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jul 6 23:07:40.909533 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 6 23:07:40.909539 kernel: Detected PIPT I-cache on CPU0 Jul 6 23:07:40.909546 kernel: CPU features: detected: GIC system register CPU interface Jul 6 23:07:40.909553 kernel: CPU features: detected: Hardware dirty bit management Jul 6 23:07:40.909561 kernel: CPU features: detected: Spectre-v4 Jul 6 23:07:40.909567 kernel: CPU features: detected: Spectre-BHB Jul 6 23:07:40.909574 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 6 23:07:40.909580 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 6 23:07:40.909587 kernel: CPU features: detected: ARM erratum 1418040 Jul 6 23:07:40.909593 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 6 23:07:40.909600 kernel: alternatives: applying boot alternatives Jul 6 23:07:40.909608 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=ca8feb1f79a67c117068f051b5f829d3e40170c022cd5834bd6789cba9641479 Jul 6 23:07:40.909615 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 6 23:07:40.909622 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 6 23:07:40.909628 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 6 23:07:40.909637 kernel: Fallback order for Node 0: 0 Jul 6 23:07:40.909650 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Jul 6 23:07:40.909657 kernel: Policy zone: Normal Jul 6 23:07:40.909664 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 6 23:07:40.909670 kernel: software IO TLB: area num 2. Jul 6 23:07:40.909677 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Jul 6 23:07:40.909684 kernel: Memory: 3883828K/4096000K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38336K init, 897K bss, 212172K reserved, 0K cma-reserved) Jul 6 23:07:40.909691 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 6 23:07:40.909697 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 6 23:07:40.909705 kernel: rcu: RCU event tracing is enabled. Jul 6 23:07:40.909711 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 6 23:07:40.909718 kernel: Trampoline variant of Tasks RCU enabled. Jul 6 23:07:40.909727 kernel: Tracing variant of Tasks RCU enabled. Jul 6 23:07:40.909733 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 6 23:07:40.909740 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 6 23:07:40.909747 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 6 23:07:40.909753 kernel: GICv3: 256 SPIs implemented Jul 6 23:07:40.909760 kernel: GICv3: 0 Extended SPIs implemented Jul 6 23:07:40.909766 kernel: Root IRQ handler: gic_handle_irq Jul 6 23:07:40.909773 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 6 23:07:40.909779 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jul 6 23:07:40.909786 kernel: ITS [mem 0x08080000-0x0809ffff] Jul 6 23:07:40.909793 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Jul 6 23:07:40.909801 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Jul 6 23:07:40.909808 kernel: GICv3: using LPI property table @0x00000001000e0000 Jul 6 23:07:40.909814 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Jul 6 23:07:40.909821 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 6 23:07:40.909828 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:07:40.909834 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 6 23:07:40.909841 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 6 23:07:40.909848 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 6 23:07:40.909854 kernel: Console: colour dummy device 80x25 Jul 6 23:07:40.909861 kernel: ACPI: Core revision 20230628 Jul 6 23:07:40.909869 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 6 23:07:40.909877 kernel: pid_max: default: 32768 minimum: 301 Jul 6 23:07:40.909884 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 6 23:07:40.909891 kernel: landlock: Up and running. Jul 6 23:07:40.909897 kernel: SELinux: Initializing. Jul 6 23:07:40.909904 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 6 23:07:40.909911 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 6 23:07:40.909918 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 6 23:07:40.909925 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 6 23:07:40.909932 kernel: rcu: Hierarchical SRCU implementation. Jul 6 23:07:40.909940 kernel: rcu: Max phase no-delay instances is 400. Jul 6 23:07:40.909947 kernel: Platform MSI: ITS@0x8080000 domain created Jul 6 23:07:40.909954 kernel: PCI/MSI: ITS@0x8080000 domain created Jul 6 23:07:40.909960 kernel: Remapping and enabling EFI services. Jul 6 23:07:40.909967 kernel: smp: Bringing up secondary CPUs ... Jul 6 23:07:40.909974 kernel: Detected PIPT I-cache on CPU1 Jul 6 23:07:40.909981 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jul 6 23:07:40.909988 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Jul 6 23:07:40.909995 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 6 23:07:40.910004 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 6 23:07:40.910015 kernel: smp: Brought up 1 node, 2 CPUs Jul 6 23:07:40.910027 kernel: SMP: Total of 2 processors activated. 
Jul 6 23:07:40.910036 kernel: CPU features: detected: 32-bit EL0 Support Jul 6 23:07:40.910043 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 6 23:07:40.910050 kernel: CPU features: detected: Common not Private translations Jul 6 23:07:40.910057 kernel: CPU features: detected: CRC32 instructions Jul 6 23:07:40.910064 kernel: CPU features: detected: Enhanced Virtualization Traps Jul 6 23:07:40.910072 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 6 23:07:40.910083 kernel: CPU features: detected: LSE atomic instructions Jul 6 23:07:40.910091 kernel: CPU features: detected: Privileged Access Never Jul 6 23:07:40.910098 kernel: CPU features: detected: RAS Extension Support Jul 6 23:07:40.910105 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 6 23:07:40.910112 kernel: CPU: All CPU(s) started at EL1 Jul 6 23:07:40.910119 kernel: alternatives: applying system-wide alternatives Jul 6 23:07:40.910141 kernel: devtmpfs: initialized Jul 6 23:07:40.910149 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 6 23:07:40.910160 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 6 23:07:40.910168 kernel: pinctrl core: initialized pinctrl subsystem Jul 6 23:07:40.910175 kernel: SMBIOS 3.0.0 present. Jul 6 23:07:40.910182 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Jul 6 23:07:40.910189 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 6 23:07:40.910196 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 6 23:07:40.910203 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 6 23:07:40.910211 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 6 23:07:40.910227 kernel: audit: initializing netlink subsys (disabled) Jul 6 23:07:40.910237 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1 Jul 6 23:07:40.910244 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 6 23:07:40.910251 kernel: cpuidle: using governor menu Jul 6 23:07:40.910258 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jul 6 23:07:40.910265 kernel: ASID allocator initialised with 32768 entries Jul 6 23:07:40.910273 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 6 23:07:40.910280 kernel: Serial: AMBA PL011 UART driver Jul 6 23:07:40.910287 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 6 23:07:40.910294 kernel: Modules: 0 pages in range for non-PLT usage Jul 6 23:07:40.910303 kernel: Modules: 509264 pages in range for PLT usage Jul 6 23:07:40.910310 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 6 23:07:40.910317 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 6 23:07:40.910325 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 6 23:07:40.910332 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 6 23:07:40.910339 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 6 23:07:40.910346 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 6 23:07:40.910353 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 6 23:07:40.910360 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 6 23:07:40.910369 kernel: ACPI: Added _OSI(Module Device) Jul 6 23:07:40.910376 kernel: ACPI: Added _OSI(Processor Device) Jul 6 23:07:40.910383 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 6 23:07:40.910390 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 6 23:07:40.910397 kernel: ACPI: Interpreter enabled Jul 6 23:07:40.910404 kernel: ACPI: Using GIC for interrupt routing Jul 6 23:07:40.910411 kernel: ACPI: MCFG table detected, 1 entries Jul 6 23:07:40.910418 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jul 6 23:07:40.910426 kernel: printk: console [ttyAMA0] enabled Jul 6 23:07:40.910437 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 6 23:07:40.910609 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 6 23:07:40.910686 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 6 23:07:40.910752 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 6 23:07:40.910816 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jul 6 23:07:40.910878 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jul 6 23:07:40.910888 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jul 6 23:07:40.910897 kernel: PCI host bridge to bus 0000:00 Jul 6 23:07:40.910969 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jul 6 23:07:40.911028 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 6 23:07:40.911084 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jul 6 23:07:40.911184 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 6 23:07:40.911295 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jul 6 23:07:40.911377 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Jul 6 23:07:40.911454 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Jul 6 23:07:40.911523 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Jul 6 23:07:40.911597 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jul 6 23:07:40.911666 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Jul 6 23:07:40.911750 kernel: pci 0000:00:02.1: [1b36:000c] 
type 01 class 0x060400 Jul 6 23:07:40.911818 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Jul 6 23:07:40.911896 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jul 6 23:07:40.911963 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Jul 6 23:07:40.912036 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jul 6 23:07:40.912103 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Jul 6 23:07:40.912208 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jul 6 23:07:40.912325 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Jul 6 23:07:40.912409 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jul 6 23:07:40.912476 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Jul 6 23:07:40.912551 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jul 6 23:07:40.912617 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Jul 6 23:07:40.912703 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jul 6 23:07:40.912783 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Jul 6 23:07:40.912869 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Jul 6 23:07:40.912937 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Jul 6 23:07:40.913018 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Jul 6 23:07:40.913087 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] Jul 6 23:07:40.913207 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Jul 6 23:07:40.913308 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Jul 6 23:07:40.913388 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jul 6 23:07:40.913464 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Jul 6 23:07:40.913546 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Jul 6 23:07:40.913614 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Jul 6 23:07:40.913705 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Jul 6 23:07:40.913779 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Jul 6 23:07:40.913851 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Jul 6 23:07:40.913943 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Jul 6 23:07:40.914025 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Jul 6 23:07:40.914111 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Jul 6 23:07:40.915760 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff] Jul 6 23:07:40.915844 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Jul 6 23:07:40.915923 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Jul 6 23:07:40.915995 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Jul 6 23:07:40.916073 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Jul 6 23:07:40.917326 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Jul 6 23:07:40.917426 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Jul 6 23:07:40.917497 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Jul 6 23:07:40.917570 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Jul 6 23:07:40.917643 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 6 23:07:40.917717 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to 
[bus 01] add_size 100000 add_align 100000 Jul 6 23:07:40.917782 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Jul 6 23:07:40.917856 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 6 23:07:40.917924 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 6 23:07:40.917990 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Jul 6 23:07:40.918059 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 6 23:07:40.919183 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Jul 6 23:07:40.919439 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jul 6 23:07:40.919524 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 6 23:07:40.919591 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Jul 6 23:07:40.919656 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 6 23:07:40.919726 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jul 6 23:07:40.919792 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Jul 6 23:07:40.919856 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Jul 6 23:07:40.919929 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jul 6 23:07:40.920000 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Jul 6 23:07:40.920063 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Jul 6 23:07:40.920175 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jul 6 23:07:40.920302 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Jul 6 23:07:40.920373 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Jul 6 23:07:40.920442 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jul 6 23:07:40.920505 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Jul 6 23:07:40.920575 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Jul 6 23:07:40.920645 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jul 6 23:07:40.920711 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Jul 6 23:07:40.920786 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Jul 6 23:07:40.920856 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff] Jul 6 23:07:40.920922 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] Jul 6 23:07:40.920990 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 
0x10200000-0x103fffff] Jul 6 23:07:40.921060 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Jul 6 23:07:40.922580 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Jul 6 23:07:40.922720 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Jul 6 23:07:40.922815 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Jul 6 23:07:40.922882 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Jul 6 23:07:40.922951 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Jul 6 23:07:40.923016 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Jul 6 23:07:40.923094 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Jul 6 23:07:40.923230 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Jul 6 23:07:40.923310 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Jul 6 23:07:40.923377 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Jul 6 23:07:40.923445 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Jul 6 23:07:40.923521 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Jul 6 23:07:40.923588 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Jul 6 23:07:40.923659 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Jul 6 23:07:40.923731 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Jul 6 23:07:40.923796 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Jul 6 23:07:40.923864 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Jul 6 23:07:40.923929 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Jul 6 23:07:40.923997 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Jul 6 23:07:40.924061 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Jul 6 23:07:40.924204 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Jul 6 23:07:40.924288 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Jul 6 23:07:40.924354 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Jul 6 23:07:40.924418 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Jul 6 23:07:40.924483 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Jul 6 23:07:40.924547 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Jul 6 23:07:40.924618 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Jul 6 23:07:40.924682 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Jul 6 23:07:40.924750 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Jul 6 23:07:40.924813 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Jul 6 23:07:40.924878 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Jul 6 23:07:40.924947 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Jul 6 23:07:40.925014 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Jul 6 23:07:40.925077 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff] Jul 6 23:07:40.925159 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Jul 6 23:07:40.925300 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Jul 6 23:07:40.925384 kernel: pci 0000:01:00.0: BAR 
4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jul 6 23:07:40.925452 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Jul 6 23:07:40.925518 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jul 6 23:07:40.925582 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jul 6 23:07:40.925646 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Jul 6 23:07:40.925710 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Jul 6 23:07:40.925781 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Jul 6 23:07:40.925851 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jul 6 23:07:40.925914 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jul 6 23:07:40.925978 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Jul 6 23:07:40.926042 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Jul 6 23:07:40.926114 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Jul 6 23:07:40.926339 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Jul 6 23:07:40.926407 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jul 6 23:07:40.926469 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jul 6 23:07:40.926538 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Jul 6 23:07:40.926604 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Jul 6 23:07:40.926675 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Jul 6 23:07:40.926741 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jul 6 23:07:40.926806 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jul 6 23:07:40.926886 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Jul 6 23:07:40.926957 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Jul 6 23:07:40.927028 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Jul 6 23:07:40.927095 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff] Jul 6 23:07:40.927177 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jul 6 23:07:40.927283 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jul 6 23:07:40.927352 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Jul 6 23:07:40.927415 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Jul 6 23:07:40.927492 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Jul 6 23:07:40.927558 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Jul 6 23:07:40.927623 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jul 6 23:07:40.927687 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jul 6 23:07:40.927771 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Jul 6 23:07:40.927836 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Jul 6 23:07:40.927907 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Jul 6 23:07:40.927974 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Jul 6 23:07:40.928046 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Jul 6 23:07:40.928111 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jul 6 23:07:40.928230 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jul 6 23:07:40.928303 kernel: pci 0000:00:02.6: bridge window [mem 
0x10c00000-0x10dfffff] Jul 6 23:07:40.928366 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Jul 6 23:07:40.928433 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jul 6 23:07:40.928499 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jul 6 23:07:40.928571 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Jul 6 23:07:40.928659 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Jul 6 23:07:40.928731 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jul 6 23:07:40.928794 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Jul 6 23:07:40.928856 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Jul 6 23:07:40.928918 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Jul 6 23:07:40.928984 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jul 6 23:07:40.929042 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 6 23:07:40.929098 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jul 6 23:07:40.929195 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jul 6 23:07:40.929294 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Jul 6 23:07:40.929356 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Jul 6 23:07:40.929428 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Jul 6 23:07:40.929487 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Jul 6 23:07:40.929546 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Jul 6 23:07:40.929636 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Jul 6 23:07:40.929712 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Jul 6 23:07:40.929771 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Jul 6 23:07:40.929838 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jul 6 23:07:40.929897 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Jul 6 23:07:40.929958 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Jul 6 23:07:40.930033 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Jul 6 23:07:40.930097 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Jul 6 23:07:40.930251 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Jul 6 23:07:40.930327 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Jul 6 23:07:40.930388 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Jul 6 23:07:40.930456 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Jul 6 23:07:40.930524 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Jul 6 23:07:40.930583 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Jul 6 23:07:40.930642 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Jul 6 23:07:40.930709 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Jul 6 23:07:40.930781 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Jul 6 23:07:40.930842 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Jul 6 23:07:40.930918 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Jul 6 23:07:40.930976 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Jul 6 23:07:40.931034 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Jul 6 23:07:40.931043 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 
Jul 6 23:07:40.931051 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 6 23:07:40.931059 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 6 23:07:40.931066 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 6 23:07:40.931074 kernel: iommu: Default domain type: Translated Jul 6 23:07:40.931084 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 6 23:07:40.931091 kernel: efivars: Registered efivars operations Jul 6 23:07:40.931098 kernel: vgaarb: loaded Jul 6 23:07:40.931106 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 6 23:07:40.931114 kernel: VFS: Disk quotas dquot_6.6.0 Jul 6 23:07:40.931186 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 6 23:07:40.931197 kernel: pnp: PnP ACPI init Jul 6 23:07:40.931299 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jul 6 23:07:40.931314 kernel: pnp: PnP ACPI: found 1 devices Jul 6 23:07:40.931322 kernel: NET: Registered PF_INET protocol family Jul 6 23:07:40.931330 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 6 23:07:40.931339 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 6 23:07:40.931347 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 6 23:07:40.931355 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 6 23:07:40.931363 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 6 23:07:40.931370 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 6 23:07:40.931381 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 6 23:07:40.931390 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 6 23:07:40.931397 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 6 23:07:40.931472 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jul 6 23:07:40.931483 kernel: PCI: CLS 0 bytes, default 64 Jul 6 23:07:40.931492 kernel: kvm [1]: HYP mode not available Jul 6 23:07:40.931499 kernel: Initialise system trusted keyrings Jul 6 23:07:40.931507 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 6 23:07:40.931515 kernel: Key type asymmetric registered Jul 6 23:07:40.931522 kernel: Asymmetric key parser 'x509' registered Jul 6 23:07:40.931531 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 6 23:07:40.931539 kernel: io scheduler mq-deadline registered Jul 6 23:07:40.931546 kernel: io scheduler kyber registered Jul 6 23:07:40.931554 kernel: io scheduler bfq registered Jul 6 23:07:40.931562 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jul 6 23:07:40.931629 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jul 6 23:07:40.931696 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jul 6 23:07:40.931759 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 6 23:07:40.931828 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jul 6 23:07:40.931891 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Jul 6 23:07:40.931953 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 6 23:07:40.932019 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jul 6 23:07:40.932082 kernel: pcieport 0000:00:02.2: AER: enabled with 
IRQ 52 Jul 6 23:07:40.932184 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 6 23:07:40.932299 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jul 6 23:07:40.932375 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jul 6 23:07:40.932438 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 6 23:07:40.932505 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jul 6 23:07:40.932569 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jul 6 23:07:40.932634 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 6 23:07:40.932704 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jul 6 23:07:40.932770 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jul 6 23:07:40.932834 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 6 23:07:40.932900 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jul 6 23:07:40.932963 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jul 6 23:07:40.933027 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 6 23:07:40.933096 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jul 6 23:07:40.934379 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jul 6 23:07:40.934506 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 6 23:07:40.934519 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jul 6 23:07:40.934607 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jul 6 23:07:40.934679 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jul 6 23:07:40.934767 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 6 23:07:40.934778 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 6 23:07:40.934787 kernel: ACPI: button: Power Button [PWRB] Jul 6 23:07:40.934794 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 6 23:07:40.934869 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jul 6 23:07:40.934941 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jul 6 23:07:40.934952 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 6 23:07:40.934960 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jul 6 23:07:40.935035 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jul 6 23:07:40.935046 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jul 6 23:07:40.935053 kernel: thunder_xcv, ver 1.0 Jul 6 23:07:40.935064 kernel: thunder_bgx, ver 1.0 Jul 6 23:07:40.935073 kernel: nicpf, ver 1.0 Jul 6 23:07:40.935081 kernel: nicvf, ver 1.0 Jul 6 23:07:40.935287 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 6 23:07:40.935362 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-06T23:07:40 UTC (1751843260) Jul 6 23:07:40.935378 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 6 23:07:40.935388 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jul 6 23:07:40.935395 kernel: 
watchdog: Delayed init of the lockup detector failed: -19 Jul 6 23:07:40.935403 kernel: watchdog: Hard watchdog permanently disabled Jul 6 23:07:40.935411 kernel: NET: Registered PF_INET6 protocol family Jul 6 23:07:40.935419 kernel: Segment Routing with IPv6 Jul 6 23:07:40.935427 kernel: In-situ OAM (IOAM) with IPv6 Jul 6 23:07:40.935435 kernel: NET: Registered PF_PACKET protocol family Jul 6 23:07:40.935442 kernel: Key type dns_resolver registered Jul 6 23:07:40.935452 kernel: registered taskstats version 1 Jul 6 23:07:40.935459 kernel: Loading compiled-in X.509 certificates Jul 6 23:07:40.935468 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: b86e6d3bec2e587f2e5c37def91c4582416a83e3' Jul 6 23:07:40.935475 kernel: Key type .fscrypt registered Jul 6 23:07:40.935482 kernel: Key type fscrypt-provisioning registered Jul 6 23:07:40.935490 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 6 23:07:40.935498 kernel: ima: Allocated hash algorithm: sha1 Jul 6 23:07:40.935505 kernel: ima: No architecture policies found Jul 6 23:07:40.935513 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 6 23:07:40.935523 kernel: clk: Disabling unused clocks Jul 6 23:07:40.935530 kernel: Freeing unused kernel memory: 38336K Jul 6 23:07:40.935538 kernel: Run /init as init process Jul 6 23:07:40.935545 kernel: with arguments: Jul 6 23:07:40.935553 kernel: /init Jul 6 23:07:40.935561 kernel: with environment: Jul 6 23:07:40.935568 kernel: HOME=/ Jul 6 23:07:40.935575 kernel: TERM=linux Jul 6 23:07:40.935583 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 6 23:07:40.935593 systemd[1]: Successfully made /usr/ read-only. Jul 6 23:07:40.935604 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 6 23:07:40.935613 systemd[1]: Detected virtualization kvm. Jul 6 23:07:40.935621 systemd[1]: Detected architecture arm64. Jul 6 23:07:40.935628 systemd[1]: Running in initrd. Jul 6 23:07:40.935636 systemd[1]: No hostname configured, using default hostname. Jul 6 23:07:40.935645 systemd[1]: Hostname set to . Jul 6 23:07:40.935654 systemd[1]: Initializing machine ID from VM UUID. Jul 6 23:07:40.935663 systemd[1]: Queued start job for default target initrd.target. Jul 6 23:07:40.935671 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:07:40.935680 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:07:40.935688 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 6 23:07:40.935696 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:07:40.935705 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 6 23:07:40.935715 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 6 23:07:40.935724 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 6 23:07:40.935733 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... 
Jul 6 23:07:40.935741 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:07:40.935749 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:07:40.935757 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:07:40.935766 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:07:40.935774 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:07:40.935784 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:07:40.935792 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:07:40.935800 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:07:40.935808 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 6 23:07:40.935816 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 6 23:07:40.935825 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:07:40.935833 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:07:40.935841 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:07:40.935849 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:07:40.935859 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 6 23:07:40.935867 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:07:40.935875 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 6 23:07:40.935883 systemd[1]: Starting systemd-fsck-usr.service... Jul 6 23:07:40.935891 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:07:40.935900 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:07:40.935908 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:07:40.935916 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 6 23:07:40.935925 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:07:40.935934 systemd[1]: Finished systemd-fsck-usr.service. Jul 6 23:07:40.935969 systemd-journald[237]: Collecting audit messages is disabled. Jul 6 23:07:40.935992 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:07:40.936000 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:07:40.936009 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 6 23:07:40.936017 kernel: Bridge firewalling registered Jul 6 23:07:40.936025 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:07:40.936033 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:07:40.936043 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:07:40.936052 systemd-journald[237]: Journal started Jul 6 23:07:40.936071 systemd-journald[237]: Runtime Journal (/run/log/journal/94c657fb93374005b410d79277da4fa4) is 8M, max 76.6M, 68.6M free. Jul 6 23:07:40.901275 systemd-modules-load[238]: Inserted module 'overlay' Jul 6 23:07:40.940054 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jul 6 23:07:40.920196 systemd-modules-load[238]: Inserted module 'br_netfilter' Jul 6 23:07:40.944532 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:07:40.944579 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:07:40.957412 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:07:40.959715 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:07:40.961566 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:07:40.966584 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:07:40.973713 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 6 23:07:40.975750 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:07:40.999515 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:07:41.019335 dracut-cmdline[273]: dracut-dracut-053 Jul 6 23:07:41.024268 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=ca8feb1f79a67c117068f051b5f829d3e40170c022cd5834bd6789cba9641479 Jul 6 23:07:41.049384 systemd-resolved[275]: Positive Trust Anchors: Jul 6 23:07:41.049399 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:07:41.049430 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:07:41.059314 systemd-resolved[275]: Defaulting to hostname 'linux'. Jul 6 23:07:41.062304 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:07:41.063600 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:07:41.123191 kernel: SCSI subsystem initialized Jul 6 23:07:41.128184 kernel: Loading iSCSI transport class v2.0-870. Jul 6 23:07:41.136179 kernel: iscsi: registered transport (tcp) Jul 6 23:07:41.150164 kernel: iscsi: registered transport (qla4xxx) Jul 6 23:07:41.150256 kernel: QLogic iSCSI HBA Driver Jul 6 23:07:41.204615 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 6 23:07:41.214474 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 6 23:07:41.233376 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 6 23:07:41.233482 kernel: device-mapper: uevent: version 1.0.3 Jul 6 23:07:41.233516 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 6 23:07:41.285170 kernel: raid6: neonx8 gen() 15713 MB/s Jul 6 23:07:41.302167 kernel: raid6: neonx4 gen() 15105 MB/s Jul 6 23:07:41.319182 kernel: raid6: neonx2 gen() 13107 MB/s Jul 6 23:07:41.336182 kernel: raid6: neonx1 gen() 10281 MB/s Jul 6 23:07:41.353185 kernel: raid6: int64x8 gen() 6714 MB/s Jul 6 23:07:41.370161 kernel: raid6: int64x4 gen() 7267 MB/s Jul 6 23:07:41.387166 kernel: raid6: int64x2 gen() 6062 MB/s Jul 6 23:07:41.404240 kernel: raid6: int64x1 gen() 4996 MB/s Jul 6 23:07:41.404591 kernel: raid6: using algorithm neonx8 gen() 15713 MB/s Jul 6 23:07:41.421192 kernel: raid6: .... xor() 11839 MB/s, rmw enabled Jul 6 23:07:41.421279 kernel: raid6: using neon recovery algorithm Jul 6 23:07:41.426458 kernel: xor: measuring software checksum speed Jul 6 23:07:41.426528 kernel: 8regs : 21641 MB/sec Jul 6 23:07:41.426542 kernel: 32regs : 21704 MB/sec Jul 6 23:07:41.426564 kernel: arm64_neon : 27936 MB/sec Jul 6 23:07:41.427162 kernel: xor: using function: arm64_neon (27936 MB/sec) Jul 6 23:07:41.478246 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 6 23:07:41.493191 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:07:41.500506 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:07:41.516070 systemd-udevd[457]: Using default interface naming scheme 'v255'. Jul 6 23:07:41.520318 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:07:41.535696 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 6 23:07:41.552221 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Jul 6 23:07:41.589256 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:07:41.595462 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:07:41.646735 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:07:41.658348 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 6 23:07:41.678180 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 6 23:07:41.679682 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:07:41.681428 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:07:41.682932 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:07:41.688347 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 6 23:07:41.706250 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:07:41.771036 kernel: scsi host0: Virtio SCSI HBA Jul 6 23:07:41.782271 kernel: ACPI: bus type USB registered Jul 6 23:07:41.782337 kernel: usbcore: registered new interface driver usbfs Jul 6 23:07:41.782358 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 6 23:07:41.783160 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jul 6 23:07:41.783199 kernel: usbcore: registered new interface driver hub Jul 6 23:07:41.785158 kernel: usbcore: registered new device driver usb Jul 6 23:07:41.793708 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:07:41.793794 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 6 23:07:41.796548 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:07:41.797291 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:07:41.797362 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:07:41.798276 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:07:41.808765 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:07:41.828984 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:07:41.832776 kernel: sr 0:0:0:0: Power-on or device reset occurred Jul 6 23:07:41.832969 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jul 6 23:07:41.833052 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 6 23:07:41.838157 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jul 6 23:07:41.838425 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jul 6 23:07:41.838533 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jul 6 23:07:41.838629 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jul 6 23:07:41.839330 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jul 6 23:07:41.840177 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jul 6 23:07:41.840741 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:07:41.843668 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jul 6 23:07:41.846198 kernel: hub 1-0:1.0: USB hub found Jul 6 23:07:41.846538 kernel: hub 1-0:1.0: 4 ports detected Jul 6 23:07:41.850821 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jul 6 23:07:41.851020 kernel: hub 2-0:1.0: USB hub found Jul 6 23:07:41.851112 kernel: hub 2-0:1.0: 4 ports detected Jul 6 23:07:41.851962 kernel: sd 0:0:0:1: Power-on or device reset occurred Jul 6 23:07:41.852082 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jul 6 23:07:41.852287 kernel: sd 0:0:0:1: [sda] Write Protect is off Jul 6 23:07:41.852386 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jul 6 23:07:41.852467 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 6 23:07:41.858991 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 6 23:07:41.859054 kernel: GPT:17805311 != 80003071 Jul 6 23:07:41.859065 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 6 23:07:41.859075 kernel: GPT:17805311 != 80003071 Jul 6 23:07:41.859084 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 6 23:07:41.859094 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:07:41.861154 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jul 6 23:07:41.872790 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:07:41.909151 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (502) Jul 6 23:07:41.914155 kernel: BTRFS: device fsid 990dd864-0c88-4d4d-9797-49057844458a devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (505) Jul 6 23:07:41.937807 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jul 6 23:07:41.946094 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. 
Jul 6 23:07:41.953093 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jul 6 23:07:41.953801 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jul 6 23:07:41.963185 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jul 6 23:07:41.971339 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 6 23:07:41.981269 disk-uuid[574]: Primary Header is updated. Jul 6 23:07:41.981269 disk-uuid[574]: Secondary Entries is updated. Jul 6 23:07:41.981269 disk-uuid[574]: Secondary Header is updated. Jul 6 23:07:41.990185 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:07:41.996228 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:07:42.086260 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jul 6 23:07:42.222755 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jul 6 23:07:42.222836 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jul 6 23:07:42.223317 kernel: usbcore: registered new interface driver usbhid Jul 6 23:07:42.223361 kernel: usbhid: USB HID core driver Jul 6 23:07:42.328282 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jul 6 23:07:42.459185 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jul 6 23:07:42.513236 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jul 6 23:07:42.999223 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:07:42.999288 disk-uuid[575]: The operation has completed successfully. Jul 6 23:07:43.069098 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 6 23:07:43.069270 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 6 23:07:43.106503 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 6 23:07:43.111091 sh[589]: Success Jul 6 23:07:43.125171 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 6 23:07:43.195049 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 6 23:07:43.198226 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 6 23:07:43.199453 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 6 23:07:43.217890 kernel: BTRFS info (device dm-0): first mount of filesystem 990dd864-0c88-4d4d-9797-49057844458a Jul 6 23:07:43.217954 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:07:43.219061 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 6 23:07:43.219795 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 6 23:07:43.220149 kernel: BTRFS info (device dm-0): using free space tree Jul 6 23:07:43.228168 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 6 23:07:43.231374 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 6 23:07:43.232160 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 6 23:07:43.242573 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
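verity-setup.service assembles the integrity-protected /usr device, /dev/mapper/usr, against a root hash provided at boot; the "sha256 using implementation sha256-ce" line shows the ARMv8 crypto extensions being used for the hashing. A few read-only checks on the running system (device name "usr" taken from the log; the veritysetup tool may only be present on some images):

veritysetup status usr   # backing data device, hash device and root hash
dmsetup status usr       # verity state: 'V' = verified so far, 'C' = corruption seen
findmnt /usr             # confirms /usr is mounted from /dev/mapper/usr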
Jul 6 23:07:43.247347 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 6 23:07:43.272413 kernel: BTRFS info (device sda6): first mount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b Jul 6 23:07:43.272491 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:07:43.273143 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:07:43.277215 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 6 23:07:43.277336 kernel: BTRFS info (device sda6): auto enabling async discard Jul 6 23:07:43.282188 kernel: BTRFS info (device sda6): last unmount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b Jul 6 23:07:43.285958 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 6 23:07:43.293881 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 6 23:07:43.386698 ignition[666]: Ignition 2.20.0 Jul 6 23:07:43.386712 ignition[666]: Stage: fetch-offline Jul 6 23:07:43.386746 ignition[666]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:07:43.386754 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 6 23:07:43.386907 ignition[666]: parsed url from cmdline: "" Jul 6 23:07:43.386911 ignition[666]: no config URL provided Jul 6 23:07:43.386915 ignition[666]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:07:43.390432 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:07:43.386921 ignition[666]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:07:43.394594 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:07:43.386926 ignition[666]: failed to fetch config: resource requires networking Jul 6 23:07:43.387093 ignition[666]: Ignition finished successfully Jul 6 23:07:43.403333 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:07:43.428712 systemd-networkd[774]: lo: Link UP Jul 6 23:07:43.428723 systemd-networkd[774]: lo: Gained carrier Jul 6 23:07:43.433088 systemd-networkd[774]: Enumeration completed Jul 6 23:07:43.433743 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:07:43.434159 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:07:43.434178 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:07:43.436275 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:07:43.436282 systemd-networkd[774]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:07:43.437087 systemd-networkd[774]: eth0: Link UP Jul 6 23:07:43.437091 systemd-networkd[774]: eth0: Gained carrier Jul 6 23:07:43.437099 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:07:43.437278 systemd[1]: Reached target network.target - Network. Jul 6 23:07:43.442389 systemd-networkd[774]: eth1: Link UP Jul 6 23:07:43.442392 systemd-networkd[774]: eth1: Gained carrier Jul 6 23:07:43.442403 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
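Ignition's fetch-offline stage only consults local sources: a config URL on the kernel command line or a config baked into the image (the /usr/lib/ignition paths it probes above). Neither exists here and the Hetzner userdata needs networking, so it defers to the networked fetch stage once systemd-networkd has brought the links up. The same checks can be repeated by hand; ignition.config.url is mentioned only as the usual offline alternative, not something used on this host:

# Any baked-in config or base snippets in the image?
ls -lR /usr/lib/ignition/ 2>/dev/null
# Any Ignition-related kernel arguments (e.g. ignition.config.url=...)?
tr ' ' '\n' < /proc/cmdline | grep '^ignition\.' || echo 'none'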
Jul 6 23:07:43.443365 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 6 23:07:43.456808 ignition[778]: Ignition 2.20.0 Jul 6 23:07:43.456820 ignition[778]: Stage: fetch Jul 6 23:07:43.457003 ignition[778]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:07:43.457013 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 6 23:07:43.457792 ignition[778]: parsed url from cmdline: "" Jul 6 23:07:43.457798 ignition[778]: no config URL provided Jul 6 23:07:43.457805 ignition[778]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:07:43.457816 ignition[778]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:07:43.457906 ignition[778]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jul 6 23:07:43.459641 ignition[778]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jul 6 23:07:43.475254 systemd-networkd[774]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 6 23:07:43.499264 systemd-networkd[774]: eth0: DHCPv4 address 78.47.124.97/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jul 6 23:07:43.660178 ignition[778]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jul 6 23:07:43.666916 ignition[778]: GET result: OK Jul 6 23:07:43.667073 ignition[778]: parsing config with SHA512: 492fcaa925b2c6330c8c38cfa562d7fec8b2c202a0e7619fd1849720b78193f051f7c9a8e71292eaf35f02d4e8cd3937771209ccf2f268a6d684be23b015a5af Jul 6 23:07:43.674873 unknown[778]: fetched base config from "system" Jul 6 23:07:43.674882 unknown[778]: fetched base config from "system" Jul 6 23:07:43.675472 ignition[778]: fetch: fetch complete Jul 6 23:07:43.674892 unknown[778]: fetched user config from "hetzner" Jul 6 23:07:43.675478 ignition[778]: fetch: fetch passed Jul 6 23:07:43.678257 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 6 23:07:43.675530 ignition[778]: Ignition finished successfully Jul 6 23:07:43.685490 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 6 23:07:43.701167 ignition[786]: Ignition 2.20.0 Jul 6 23:07:43.701180 ignition[786]: Stage: kargs Jul 6 23:07:43.701415 ignition[786]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:07:43.701425 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 6 23:07:43.704705 ignition[786]: kargs: kargs passed Jul 6 23:07:43.705223 ignition[786]: Ignition finished successfully Jul 6 23:07:43.707506 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 6 23:07:43.730479 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 6 23:07:43.747899 ignition[793]: Ignition 2.20.0 Jul 6 23:07:43.747914 ignition[793]: Stage: disks Jul 6 23:07:43.748232 ignition[793]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:07:43.748249 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 6 23:07:43.751925 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 6 23:07:43.749430 ignition[793]: disks: disks passed Jul 6 23:07:43.753230 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 6 23:07:43.749497 ignition[793]: Ignition finished successfully Jul 6 23:07:43.755293 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 6 23:07:43.756092 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:07:43.756909 systemd[1]: Reached target sysinit.target - System Initialization. 
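The fetch stage asks the Hetzner metadata service for the instance userdata over the link-local address; attempt #1 fails because DHCP has not configured the interfaces yet, and attempt #2 succeeds right after the 10.0.0.3 and 78.47.124.97 leases arrive. The request is plain HTTP and can be reproduced from the booted server (URL taken from the log; the digest should match the SHA512 Ignition printed, assuming the userdata has not changed since provisioning):

# Fetch the same userdata Ignition parsed...
curl -fsS http://169.254.169.254/hetzner/v1/userdata
# ...and compare its SHA512 with the "parsing config with SHA512: ..." line above.
curl -fsS http://169.254.169.254/hetzner/v1/userdata | sha512sum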
Jul 6 23:07:43.757889 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:07:43.769564 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 6 23:07:43.800462 systemd-fsck[801]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jul 6 23:07:43.805164 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 6 23:07:43.811276 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 6 23:07:43.853231 kernel: EXT4-fs (sda9): mounted filesystem efd38a90-a3d5-48a9-85e4-1ea6162daba0 r/w with ordered data mode. Quota mode: none. Jul 6 23:07:43.853658 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 6 23:07:43.854852 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 6 23:07:43.862378 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:07:43.865735 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 6 23:07:43.868325 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 6 23:07:43.871839 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 6 23:07:43.873449 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:07:43.879156 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 6 23:07:43.884196 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (809) Jul 6 23:07:43.888970 kernel: BTRFS info (device sda6): first mount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b Jul 6 23:07:43.889041 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:07:43.889053 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:07:43.891504 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 6 23:07:43.900765 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 6 23:07:43.900830 kernel: BTRFS info (device sda6): auto enabling async discard Jul 6 23:07:43.905873 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:07:43.944483 coreos-metadata[811]: Jul 06 23:07:43.943 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jul 6 23:07:43.948064 coreos-metadata[811]: Jul 06 23:07:43.946 INFO Fetch successful Jul 6 23:07:43.948064 coreos-metadata[811]: Jul 06 23:07:43.946 INFO wrote hostname ci-4230-2-1-6-eb6896cb23 to /sysroot/etc/hostname Jul 6 23:07:43.949561 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Jul 6 23:07:43.952016 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 6 23:07:43.955862 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Jul 6 23:07:43.960099 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Jul 6 23:07:43.964838 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Jul 6 23:07:44.061793 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 6 23:07:44.067311 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 6 23:07:44.070613 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 6 23:07:44.081230 kernel: BTRFS info (device sda6): last unmount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b Jul 6 23:07:44.105649 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
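flatcar-metadata-hostname.service performs the one small job visible above: it fetches the hostname from the metadata service and writes it into the new root before the pivot. Expressed as plain commands (URL and resulting name are the ones in the log):

curl -fsS http://169.254.169.254/hetzner/v1/metadata/hostname
cat /etc/hostname    # ci-4230-2-1-6-eb6896cb23 once the agent has written it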
Jul 6 23:07:44.109467 ignition[926]: INFO : Ignition 2.20.0 Jul 6 23:07:44.109467 ignition[926]: INFO : Stage: mount Jul 6 23:07:44.110673 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:07:44.110673 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 6 23:07:44.113028 ignition[926]: INFO : mount: mount passed Jul 6 23:07:44.113028 ignition[926]: INFO : Ignition finished successfully Jul 6 23:07:44.113250 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 6 23:07:44.120312 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 6 23:07:44.216662 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 6 23:07:44.226724 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:07:44.239192 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (938) Jul 6 23:07:44.241846 kernel: BTRFS info (device sda6): first mount of filesystem 297af9a7-3de6-47a6-b022-d94c20ff287b Jul 6 23:07:44.241905 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:07:44.241927 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:07:44.245606 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 6 23:07:44.245674 kernel: BTRFS info (device sda6): auto enabling async discard Jul 6 23:07:44.250219 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:07:44.269946 ignition[955]: INFO : Ignition 2.20.0 Jul 6 23:07:44.270768 ignition[955]: INFO : Stage: files Jul 6 23:07:44.271427 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:07:44.273070 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 6 23:07:44.273070 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Jul 6 23:07:44.274621 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 6 23:07:44.274621 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 6 23:07:44.278735 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 6 23:07:44.280389 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 6 23:07:44.280389 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 6 23:07:44.279206 unknown[955]: wrote ssh authorized keys file for user: core Jul 6 23:07:44.282911 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 6 23:07:44.282911 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jul 6 23:07:44.410982 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 6 23:07:44.666706 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 6 23:07:44.666706 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 6 23:07:44.670861 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 6 23:07:44.682317 systemd-networkd[774]: eth0: Gained IPv6LL Jul 6 23:07:44.746358 
systemd-networkd[774]: eth1: Gained IPv6LL Jul 6 23:07:45.339821 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 6 23:07:45.994888 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 6 23:07:45.996284 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 6 23:07:45.996284 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 6 23:07:45.996284 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:07:45.996284 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:07:45.996284 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:07:45.996284 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:07:45.996284 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:07:45.996284 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:07:45.996284 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:07:45.996284 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:07:45.996284 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 6 23:07:45.996284 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 6 23:07:45.996284 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 6 23:07:45.996284 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jul 6 23:07:46.720590 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 6 23:07:48.753368 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 6 23:07:48.753368 ignition[955]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 6 23:07:48.758146 ignition[955]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:07:48.758146 ignition[955]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:07:48.758146 ignition[955]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 6 23:07:48.758146 ignition[955]: INFO : files: op(e): [started] processing 
unit "coreos-metadata.service" Jul 6 23:07:48.758146 ignition[955]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jul 6 23:07:48.758146 ignition[955]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jul 6 23:07:48.758146 ignition[955]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 6 23:07:48.758146 ignition[955]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jul 6 23:07:48.758146 ignition[955]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jul 6 23:07:48.758146 ignition[955]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:07:48.758146 ignition[955]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:07:48.758146 ignition[955]: INFO : files: files passed Jul 6 23:07:48.758146 ignition[955]: INFO : Ignition finished successfully Jul 6 23:07:48.760626 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 6 23:07:48.775413 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 6 23:07:48.778300 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 6 23:07:48.781941 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 6 23:07:48.792420 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 6 23:07:48.802870 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:07:48.802870 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:07:48.805620 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:07:48.808051 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:07:48.809096 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 6 23:07:48.820441 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 6 23:07:48.849931 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 6 23:07:48.850081 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 6 23:07:48.851898 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 6 23:07:48.853109 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 6 23:07:48.854564 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 6 23:07:48.862416 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 6 23:07:48.876766 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:07:48.883389 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 6 23:07:48.898196 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:07:48.899708 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jul 6 23:07:48.901154 systemd[1]: Stopped target timers.target - Timer Units. Jul 6 23:07:48.902245 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 6 23:07:48.902379 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:07:48.904406 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 6 23:07:48.905711 systemd[1]: Stopped target basic.target - Basic System. Jul 6 23:07:48.907116 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 6 23:07:48.908321 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:07:48.909589 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 6 23:07:48.910860 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 6 23:07:48.911563 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:07:48.912820 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 6 23:07:48.913873 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 6 23:07:48.914993 systemd[1]: Stopped target swap.target - Swaps. Jul 6 23:07:48.915981 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 6 23:07:48.916111 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:07:48.917432 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:07:48.918083 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:07:48.919165 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 6 23:07:48.919298 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:07:48.920352 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 6 23:07:48.920471 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 6 23:07:48.922087 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 6 23:07:48.922251 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:07:48.923417 systemd[1]: ignition-files.service: Deactivated successfully. Jul 6 23:07:48.923511 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 6 23:07:48.924668 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 6 23:07:48.924760 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 6 23:07:48.935479 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 6 23:07:48.941519 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 6 23:07:48.946374 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 6 23:07:48.947120 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:07:48.948543 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 6 23:07:48.948654 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:07:48.957165 ignition[1007]: INFO : Ignition 2.20.0 Jul 6 23:07:48.957165 ignition[1007]: INFO : Stage: umount Jul 6 23:07:48.957165 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:07:48.957165 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 6 23:07:48.959412 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jul 6 23:07:48.967054 ignition[1007]: INFO : umount: umount passed Jul 6 23:07:48.967054 ignition[1007]: INFO : Ignition finished successfully Jul 6 23:07:48.960499 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 6 23:07:48.966381 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:07:48.966954 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 6 23:07:48.967079 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 6 23:07:48.969945 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:07:48.970039 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:07:48.971409 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 6 23:07:48.971510 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 6 23:07:48.973908 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 6 23:07:48.973959 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 6 23:07:48.974773 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 6 23:07:48.974815 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 6 23:07:48.975691 systemd[1]: Stopped target network.target - Network. Jul 6 23:07:48.976518 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 6 23:07:48.976574 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:07:48.977549 systemd[1]: Stopped target paths.target - Path Units. Jul 6 23:07:48.978349 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 6 23:07:48.982226 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:07:48.983197 systemd[1]: Stopped target slices.target - Slice Units. Jul 6 23:07:48.985218 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:07:48.986562 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:07:48.986614 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:07:48.987611 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:07:48.987654 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:07:48.988626 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:07:48.988685 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:07:48.989760 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:07:48.989806 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:07:48.990855 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:07:48.990907 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:07:48.991917 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:07:48.992735 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:07:48.999005 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:07:48.999964 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:07:49.004264 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 6 23:07:49.004541 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:07:49.004592 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:07:49.007886 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. 
Jul 6 23:07:49.008252 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:07:49.008358 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:07:49.012088 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 6 23:07:49.012632 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:07:49.012701 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:07:49.019282 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:07:49.019777 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:07:49.019839 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:07:49.022470 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:07:49.022522 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:07:49.023627 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:07:49.023687 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:07:49.025017 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:07:49.028655 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 6 23:07:49.034756 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:07:49.036086 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:07:49.037968 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:07:49.038058 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:07:49.039819 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:07:49.039855 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:07:49.041217 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:07:49.041272 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:07:49.044204 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:07:49.044264 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:07:49.045610 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:07:49.045663 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:07:49.052589 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:07:49.053237 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:07:49.053297 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:07:49.054072 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 6 23:07:49.054113 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:07:49.057284 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 6 23:07:49.057333 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:07:49.061281 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:07:49.061339 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:07:49.065532 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jul 6 23:07:49.065648 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:07:49.067068 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:07:49.067153 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:07:49.069527 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:07:49.078344 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:07:49.084413 systemd[1]: Switching root. Jul 6 23:07:49.105298 systemd-journald[237]: Journal stopped Jul 6 23:07:50.038677 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Jul 6 23:07:50.038766 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:07:50.038781 kernel: SELinux: policy capability open_perms=1 Jul 6 23:07:50.038790 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:07:50.038799 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:07:50.038811 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:07:50.038824 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:07:50.038833 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:07:50.038843 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:07:50.038852 kernel: audit: type=1403 audit(1751843269.239:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:07:50.038862 systemd[1]: Successfully loaded SELinux policy in 38.960ms. Jul 6 23:07:50.038881 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.981ms. Jul 6 23:07:50.038892 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 6 23:07:50.038902 systemd[1]: Detected virtualization kvm. Jul 6 23:07:50.038912 systemd[1]: Detected architecture arm64. Jul 6 23:07:50.038923 systemd[1]: Detected first boot. Jul 6 23:07:50.038933 systemd[1]: Hostname set to <ci-4230-2-1-6-eb6896cb23>. Jul 6 23:07:50.038943 systemd[1]: Initializing machine ID from VM UUID. Jul 6 23:07:50.038952 kernel: NET: Registered PF_VSOCK protocol family Jul 6 23:07:50.038962 zram_generator::config[1052]: No configuration found. Jul 6 23:07:50.038973 systemd[1]: Populated /etc with preset unit settings. Jul 6 23:07:50.038984 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 6 23:07:50.038995 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 6 23:07:50.039007 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 6 23:07:50.039017 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 6 23:07:50.039028 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:07:50.039038 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:07:50.039048 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:07:50.039059 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 6 23:07:50.039069 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 6 23:07:50.039079 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
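After the switch into the real root, systemd 256.8 reports its compile-time feature set, the KVM guest environment, the first boot, and the SELinux policy load. The same facts can be confirmed from a shell on the host (getenforce only if the SELinux userspace tools are present):

systemctl --version    # prints the same +PAM +AUDIT +SELINUX ... feature string
systemd-detect-virt    # prints "kvm" for this guest
getenforce             # SELinux mode in effect after the policy load above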
Jul 6 23:07:50.039097 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 6 23:07:50.039114 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:07:50.039139 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:07:50.039151 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:07:50.039191 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 6 23:07:50.039203 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 6 23:07:50.039214 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 6 23:07:50.039224 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:07:50.039235 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 6 23:07:50.039249 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:07:50.039259 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 6 23:07:50.039269 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 6 23:07:50.039280 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 6 23:07:50.039290 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 6 23:07:50.039314 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:07:50.039332 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:07:50.039343 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:07:50.039353 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:07:50.039367 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 6 23:07:50.039380 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:07:50.039392 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 6 23:07:50.039402 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:07:50.039418 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:07:50.039429 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:07:50.039439 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 6 23:07:50.039454 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 6 23:07:50.039464 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 6 23:07:50.039477 systemd[1]: Mounting media.mount - External Media Directory... Jul 6 23:07:50.039487 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 6 23:07:50.039499 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 6 23:07:50.039510 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 6 23:07:50.039521 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 6 23:07:50.039531 systemd[1]: Reached target machines.target - Containers. Jul 6 23:07:50.039541 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jul 6 23:07:50.039551 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:07:50.039562 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:07:50.039572 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 6 23:07:50.039584 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:07:50.039594 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:07:50.039606 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:07:50.039616 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:07:50.039626 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:07:50.039637 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:07:50.039647 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 6 23:07:50.039658 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 6 23:07:50.039668 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 6 23:07:50.039678 systemd[1]: Stopped systemd-fsck-usr.service. Jul 6 23:07:50.039690 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:07:50.039700 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:07:50.039711 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:07:50.039721 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:07:50.039732 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 6 23:07:50.039743 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 6 23:07:50.039755 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:07:50.039765 systemd[1]: verity-setup.service: Deactivated successfully. Jul 6 23:07:50.039775 systemd[1]: Stopped verity-setup.service. Jul 6 23:07:50.039785 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 6 23:07:50.039795 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 6 23:07:50.039805 systemd[1]: Mounted media.mount - External Media Directory. Jul 6 23:07:50.039815 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 6 23:07:50.039827 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 6 23:07:50.039838 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 6 23:07:50.039849 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:07:50.039859 kernel: loop: module loaded Jul 6 23:07:50.039869 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 6 23:07:50.039879 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 6 23:07:50.039890 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:07:50.039901 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jul 6 23:07:50.039911 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:07:50.039921 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:07:50.039931 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:07:50.039941 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:07:50.039994 systemd-journald[1120]: Collecting audit messages is disabled. Jul 6 23:07:50.040016 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:07:50.040030 systemd-journald[1120]: Journal started Jul 6 23:07:50.040053 systemd-journald[1120]: Runtime Journal (/run/log/journal/94c657fb93374005b410d79277da4fa4) is 8M, max 76.6M, 68.6M free. Jul 6 23:07:49.798886 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:07:49.806084 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 6 23:07:49.806563 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 6 23:07:50.055218 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 6 23:07:50.057167 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:07:50.069147 kernel: fuse: init (API version 7.39) Jul 6 23:07:50.069257 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:07:50.070263 kernel: ACPI: bus type drm_connector registered Jul 6 23:07:50.072819 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:07:50.074638 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:07:50.075548 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:07:50.077242 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:07:50.078121 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 6 23:07:50.078326 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 6 23:07:50.079261 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:07:50.081824 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 6 23:07:50.085224 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 6 23:07:50.087566 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 6 23:07:50.100293 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 6 23:07:50.118303 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:07:50.119876 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 6 23:07:50.119907 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:07:50.121713 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 6 23:07:50.128478 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 6 23:07:50.132923 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 6 23:07:50.133723 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:07:50.136809 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jul 6 23:07:50.142215 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 6 23:07:50.142834 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:07:50.146110 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 6 23:07:50.147551 systemd-tmpfiles[1145]: ACLs are not supported, ignoring. Jul 6 23:07:50.149482 systemd-tmpfiles[1145]: ACLs are not supported, ignoring. Jul 6 23:07:50.150357 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 6 23:07:50.153896 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:07:50.156472 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 6 23:07:50.162256 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:07:50.174424 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 6 23:07:50.178744 systemd-journald[1120]: Time spent on flushing to /var/log/journal/94c657fb93374005b410d79277da4fa4 is 45.490ms for 1145 entries. Jul 6 23:07:50.178744 systemd-journald[1120]: System Journal (/var/log/journal/94c657fb93374005b410d79277da4fa4) is 8M, max 584.8M, 576.8M free. Jul 6 23:07:50.254760 systemd-journald[1120]: Received client request to flush runtime journal. Jul 6 23:07:50.254965 kernel: loop0: detected capacity change from 0 to 8 Jul 6 23:07:50.254996 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 6 23:07:50.255013 kernel: loop1: detected capacity change from 0 to 211168 Jul 6 23:07:50.193209 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 6 23:07:50.194859 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 6 23:07:50.204919 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 6 23:07:50.214713 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:07:50.225417 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 6 23:07:50.257351 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 6 23:07:50.259007 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 6 23:07:50.262446 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 6 23:07:50.289679 kernel: loop2: detected capacity change from 0 to 123192 Jul 6 23:07:50.293294 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 6 23:07:50.303449 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:07:50.341152 kernel: loop3: detected capacity change from 0 to 113512 Jul 6 23:07:50.346445 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Jul 6 23:07:50.347027 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Jul 6 23:07:50.357111 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
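The systemd-journald lines above record the runtime journal in /run being flushed into the persistent journal under /var/log/journal, together with the size limits journald computed for each. Two read-only checks after boot:

journalctl --disk-usage                                # space used by archived and active journal files
du -sh /run/log/journal /var/log/journal 2>/dev/null   # runtime vs persistent location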
Jul 6 23:07:50.385231 kernel: loop4: detected capacity change from 0 to 8 Jul 6 23:07:50.389202 kernel: loop5: detected capacity change from 0 to 211168 Jul 6 23:07:50.420225 kernel: loop6: detected capacity change from 0 to 123192 Jul 6 23:07:50.446256 kernel: loop7: detected capacity change from 0 to 113512 Jul 6 23:07:50.460892 (sd-merge)[1199]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jul 6 23:07:50.461831 (sd-merge)[1199]: Merged extensions into '/usr'. Jul 6 23:07:50.467616 systemd[1]: Reload requested from client PID 1176 ('systemd-sysext') (unit systemd-sysext.service)... Jul 6 23:07:50.468566 systemd[1]: Reloading... Jul 6 23:07:50.556151 zram_generator::config[1226]: No configuration found. Jul 6 23:07:50.764577 ldconfig[1172]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 6 23:07:50.765637 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:07:50.827011 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 6 23:07:50.827221 systemd[1]: Reloading finished in 358 ms. Jul 6 23:07:50.855775 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 6 23:07:50.861192 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 6 23:07:50.868425 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 6 23:07:50.874417 systemd[1]: Starting ensure-sysext.service... Jul 6 23:07:50.878526 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:07:50.892420 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 6 23:07:50.897951 systemd[1]: Reload requested from client PID 1265 ('systemctl') (unit ensure-sysext.service)... Jul 6 23:07:50.897971 systemd[1]: Reloading... Jul 6 23:07:50.921628 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:07:50.922086 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 6 23:07:50.924391 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 6 23:07:50.925416 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. Jul 6 23:07:50.925587 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. Jul 6 23:07:50.930414 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:07:50.930433 systemd-tmpfiles[1266]: Skipping /boot Jul 6 23:07:50.943909 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:07:50.943931 systemd-tmpfiles[1266]: Skipping /boot Jul 6 23:07:50.989509 zram_generator::config[1296]: No configuration found. Jul 6 23:07:51.098142 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:07:51.159874 systemd[1]: Reloading finished in 261 ms. Jul 6 23:07:51.177661 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 6 23:07:51.191316 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
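The (sd-merge) lines are systemd-sysext overlaying the listed extension images (including the kubernetes image Ignition linked into /etc/extensions earlier) onto /usr, followed by the daemon reload it requests so the units they ship become visible. The merge can be inspected, or redone, with the same tool:

systemd-sysext status    # which hierarchies are merged, and from which extensions
systemd-sysext list      # extension images discovered in /etc/extensions, /var/lib/extensions, ...
ls -l /etc/extensions    # e.g. the kubernetes.raw symlink written during the files stage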
Jul 6 23:07:51.204722 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:07:51.209501 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 6 23:07:51.214580 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 6 23:07:51.223509 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:07:51.228468 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:07:51.235510 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 6 23:07:51.240888 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:07:51.251533 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:07:51.255868 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:07:51.261340 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:07:51.262732 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:07:51.262877 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:07:51.268467 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 6 23:07:51.275233 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 6 23:07:51.277591 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:07:51.278223 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:07:51.284437 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:07:51.287506 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:07:51.289274 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:07:51.289406 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:07:51.300513 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 6 23:07:51.306097 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:07:51.308502 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:07:51.320221 augenrules[1368]: No rules Jul 6 23:07:51.321385 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:07:51.323508 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:07:51.326666 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 6 23:07:51.329058 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:07:51.331220 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:07:51.333620 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:07:51.334416 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jul 6 23:07:51.337585 systemd-udevd[1345]: Using default interface naming scheme 'v255'. Jul 6 23:07:51.338182 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 6 23:07:51.348674 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:07:51.352673 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:07:51.356324 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:07:51.357414 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:07:51.357462 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:07:51.357503 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:07:51.358428 systemd[1]: Finished ensure-sysext.service. Jul 6 23:07:51.372415 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 6 23:07:51.386250 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 6 23:07:51.388468 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:07:51.390936 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:07:51.391108 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:07:51.393672 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:07:51.393831 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:07:51.396839 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:07:51.397790 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:07:51.412677 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:07:51.413419 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:07:51.511646 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 6 23:07:51.513084 systemd[1]: Reached target time-set.target - System Time Set. Jul 6 23:07:51.545277 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 6 23:07:51.558642 systemd-networkd[1391]: lo: Link UP Jul 6 23:07:51.560158 systemd-networkd[1391]: lo: Gained carrier Jul 6 23:07:51.561401 systemd-networkd[1391]: Enumeration completed Jul 6 23:07:51.564504 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:07:51.572581 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 6 23:07:51.576472 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 6 23:07:51.589257 systemd-resolved[1343]: Positive Trust Anchors: Jul 6 23:07:51.589277 systemd-resolved[1343]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:07:51.589309 systemd-resolved[1343]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:07:51.596403 systemd-resolved[1343]: Using system hostname 'ci-4230-2-1-6-eb6896cb23'. Jul 6 23:07:51.598103 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:07:51.601624 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 6 23:07:51.603193 systemd[1]: Reached target network.target - Network. Jul 6 23:07:51.603712 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:07:51.619599 systemd-networkd[1391]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:07:51.619718 systemd-networkd[1391]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:07:51.622015 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:07:51.622300 systemd-networkd[1391]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:07:51.623494 systemd-networkd[1391]: eth1: Link UP Jul 6 23:07:51.624143 systemd-networkd[1391]: eth1: Gained carrier Jul 6 23:07:51.624186 systemd-networkd[1391]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:07:51.631341 systemd-networkd[1391]: eth0: Link UP Jul 6 23:07:51.631350 systemd-networkd[1391]: eth0: Gained carrier Jul 6 23:07:51.631372 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:07:51.663294 systemd-networkd[1391]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 6 23:07:51.664200 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection. Jul 6 23:07:51.668191 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1406) Jul 6 23:07:51.688350 systemd-networkd[1391]: eth0: DHCPv4 address 78.47.124.97/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jul 6 23:07:51.688884 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection. Jul 6 23:07:51.689381 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection. Jul 6 23:07:51.705164 kernel: mousedev: PS/2 mouse device common for all mice Jul 6 23:07:51.731025 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jul 6 23:07:51.742927 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 6 23:07:51.763736 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
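The positive trust anchor logged by systemd-resolved is the DS record of the DNS root zone, and the long negative list disables DNSSEC validation for private and special-use domains such as the RFC 1918 reverse zones. A small sketch splitting that DS record into its fields; the values are copied from the log and the field meanings follow RFC 4034:

    # Rdata of the ". IN DS ..." trust anchor shown above.
    ds_rdata = ("20326 8 2 "
                "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    key_tag, algorithm, digest_type, digest = ds_rdata.split()

    print(key_tag)      # 20326 -> key tag of the root key-signing key
    print(algorithm)    # 8     -> RSA/SHA-256
    print(digest_type)  # 2     -> SHA-256 digest of the DNSKEY
    print(len(digest))  # 64 hex characters, i.e. a 32-byte digest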
Jul 6 23:07:51.784800 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jul 6 23:07:51.784964 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jul 6 23:07:51.784986 kernel: [drm] features: -context_init Jul 6 23:07:51.793212 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jul 6 23:07:51.793577 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:07:51.799955 kernel: [drm] number of scanouts: 1 Jul 6 23:07:51.800038 kernel: [drm] number of cap sets: 0 Jul 6 23:07:51.798447 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:07:51.803289 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:07:51.807111 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:07:51.807813 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:07:51.807850 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:07:51.807875 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:07:51.810456 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:07:51.810650 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:07:51.812070 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:07:51.812743 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:07:51.816317 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:07:51.828148 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jul 6 23:07:51.830527 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:07:51.830729 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:07:51.836952 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:07:51.848898 kernel: Console: switching to colour frame buffer device 160x50 Jul 6 23:07:51.853238 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jul 6 23:07:51.862817 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:07:51.875548 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:07:51.877210 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:07:51.888594 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:07:51.955583 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:07:52.007484 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 6 23:07:52.016403 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
Jul 6 23:07:52.029969 lvm[1460]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:07:52.059506 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 6 23:07:52.060617 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:07:52.061405 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:07:52.062196 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:07:52.062929 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 6 23:07:52.064207 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:07:52.064855 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:07:52.065584 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 6 23:07:52.066272 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:07:52.066301 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:07:52.066777 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:07:52.068693 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 6 23:07:52.070886 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 6 23:07:52.074493 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 6 23:07:52.075478 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 6 23:07:52.076143 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 6 23:07:52.088602 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:07:52.091003 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 6 23:07:52.104507 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 6 23:07:52.107381 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:07:52.108954 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:07:52.109730 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:07:52.110446 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:07:52.110478 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:07:52.111048 lvm[1464]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:07:52.112365 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:07:52.117368 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 6 23:07:52.120436 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:07:52.129426 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:07:52.133878 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 6 23:07:52.136063 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 6 23:07:52.139370 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Jul 6 23:07:52.143331 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 6 23:07:52.146343 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jul 6 23:07:52.150883 jq[1468]: false Jul 6 23:07:52.149706 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 6 23:07:52.154402 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 6 23:07:52.159018 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 6 23:07:52.160936 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 6 23:07:52.163599 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 6 23:07:52.164352 systemd[1]: Starting update-engine.service - Update Engine... Jul 6 23:07:52.166393 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:07:52.168193 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 6 23:07:52.170488 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:07:52.172208 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:07:52.193826 (ntainerd)[1487]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:07:52.215000 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:07:52.215291 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 6 23:07:52.225938 coreos-metadata[1466]: Jul 06 23:07:52.225 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jul 6 23:07:52.226695 coreos-metadata[1466]: Jul 06 23:07:52.226 INFO Fetch successful Jul 6 23:07:52.226907 coreos-metadata[1466]: Jul 06 23:07:52.226 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jul 6 23:07:52.227117 coreos-metadata[1466]: Jul 06 23:07:52.226 INFO Fetch successful Jul 6 23:07:52.237852 extend-filesystems[1469]: Found loop4 Jul 6 23:07:52.249649 extend-filesystems[1469]: Found loop5 Jul 6 23:07:52.249649 extend-filesystems[1469]: Found loop6 Jul 6 23:07:52.249649 extend-filesystems[1469]: Found loop7 Jul 6 23:07:52.249649 extend-filesystems[1469]: Found sda Jul 6 23:07:52.249649 extend-filesystems[1469]: Found sda1 Jul 6 23:07:52.249649 extend-filesystems[1469]: Found sda2 Jul 6 23:07:52.249649 extend-filesystems[1469]: Found sda3 Jul 6 23:07:52.249649 extend-filesystems[1469]: Found usr Jul 6 23:07:52.249649 extend-filesystems[1469]: Found sda4 Jul 6 23:07:52.249649 extend-filesystems[1469]: Found sda6 Jul 6 23:07:52.249649 extend-filesystems[1469]: Found sda7 Jul 6 23:07:52.249649 extend-filesystems[1469]: Found sda9 Jul 6 23:07:52.249649 extend-filesystems[1469]: Checking size of /dev/sda9 Jul 6 23:07:52.296592 jq[1480]: true Jul 6 23:07:52.255487 systemd[1]: motdgen.service: Deactivated successfully. Jul 6 23:07:52.263905 dbus-daemon[1467]: [system] SELinux support is enabled Jul 6 23:07:52.297009 tar[1488]: linux-arm64/LICENSE Jul 6 23:07:52.297009 tar[1488]: linux-arm64/helm Jul 6 23:07:52.255723 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 6 23:07:52.266428 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jul 6 23:07:52.272207 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 6 23:07:52.272235 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 6 23:07:52.274259 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 6 23:07:52.274278 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 6 23:07:52.307974 extend-filesystems[1469]: Resized partition /dev/sda9 Jul 6 23:07:52.313695 jq[1510]: true Jul 6 23:07:52.322234 extend-filesystems[1519]: resize2fs 1.47.1 (20-May-2024) Jul 6 23:07:52.328083 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 6 23:07:52.332672 update_engine[1479]: I20250706 23:07:52.330806 1479 main.cc:92] Flatcar Update Engine starting Jul 6 23:07:52.329000 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:07:52.343298 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jul 6 23:07:52.345822 systemd[1]: Started update-engine.service - Update Engine. Jul 6 23:07:52.346621 update_engine[1479]: I20250706 23:07:52.346565 1479 update_check_scheduler.cc:74] Next update check in 5m24s Jul 6 23:07:52.359438 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:07:52.458671 bash[1541]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:07:52.462447 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:07:52.475693 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1409) Jul 6 23:07:52.482601 systemd[1]: Starting sshkeys.service... Jul 6 23:07:52.492287 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jul 6 23:07:52.508734 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 6 23:07:52.511055 extend-filesystems[1519]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jul 6 23:07:52.511055 extend-filesystems[1519]: old_desc_blocks = 1, new_desc_blocks = 5 Jul 6 23:07:52.511055 extend-filesystems[1519]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jul 6 23:07:52.519563 extend-filesystems[1469]: Resized filesystem in /dev/sda9 Jul 6 23:07:52.519563 extend-filesystems[1469]: Found sr0 Jul 6 23:07:52.516500 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 6 23:07:52.522453 systemd-logind[1477]: New seat seat0. Jul 6 23:07:52.528480 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:07:52.528679 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 6 23:07:52.539403 systemd-logind[1477]: Watching system buttons on /dev/input/event0 (Power Button) Jul 6 23:07:52.539425 systemd-logind[1477]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jul 6 23:07:52.541652 systemd[1]: Started systemd-logind.service - User Login Management. 
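The resize figures above are easy to sanity-check: resize2fs reports 4 KiB blocks, so growing /dev/sda9 from 1617920 to 9393147 blocks takes the root filesystem from roughly 6.2 GiB to roughly 35.8 GiB. A minimal sketch of that arithmetic, using only the block counts taken from the log:

    # Convert the ext4 block counts reported by resize2fs into GiB.
    BLOCK_SIZE = 4096  # "(4k) blocks" per the log

    def blocks_to_gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    print(f"before resize: {blocks_to_gib(1_617_920):.1f} GiB")  # ~6.2 GiB
    print(f"after resize:  {blocks_to_gib(9_393_147):.1f} GiB")  # ~35.8 GiB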
Jul 6 23:07:52.558253 coreos-metadata[1546]: Jul 06 23:07:52.554 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jul 6 23:07:52.558253 coreos-metadata[1546]: Jul 06 23:07:52.556 INFO Fetch successful Jul 6 23:07:52.560360 unknown[1546]: wrote ssh authorized keys file for user: core Jul 6 23:07:52.620630 update-ssh-keys[1551]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:07:52.621640 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 6 23:07:52.629799 systemd[1]: Finished sshkeys.service. Jul 6 23:07:52.734597 containerd[1487]: time="2025-07-06T23:07:52.734450200Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jul 6 23:07:52.793647 containerd[1487]: time="2025-07-06T23:07:52.793584600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:07:52.800724 containerd[1487]: time="2025-07-06T23:07:52.800652920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:07:52.800724 containerd[1487]: time="2025-07-06T23:07:52.800703640Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 6 23:07:52.800724 containerd[1487]: time="2025-07-06T23:07:52.800721840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 6 23:07:52.800917 containerd[1487]: time="2025-07-06T23:07:52.800894120Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 6 23:07:52.800949 containerd[1487]: time="2025-07-06T23:07:52.800919200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 6 23:07:52.800998 containerd[1487]: time="2025-07-06T23:07:52.800980400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:07:52.800998 containerd[1487]: time="2025-07-06T23:07:52.800996320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:07:52.801276 containerd[1487]: time="2025-07-06T23:07:52.801249440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:07:52.801276 containerd[1487]: time="2025-07-06T23:07:52.801274760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 6 23:07:52.801343 containerd[1487]: time="2025-07-06T23:07:52.801288720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:07:52.801343 containerd[1487]: time="2025-07-06T23:07:52.801297840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jul 6 23:07:52.801400 containerd[1487]: time="2025-07-06T23:07:52.801380400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:07:52.801603 containerd[1487]: time="2025-07-06T23:07:52.801580400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:07:52.801731 containerd[1487]: time="2025-07-06T23:07:52.801711000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:07:52.801731 containerd[1487]: time="2025-07-06T23:07:52.801729400Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 6 23:07:52.801830 containerd[1487]: time="2025-07-06T23:07:52.801810160Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 6 23:07:52.801877 containerd[1487]: time="2025-07-06T23:07:52.801862040Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:07:52.808183 containerd[1487]: time="2025-07-06T23:07:52.808095160Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 6 23:07:52.808183 containerd[1487]: time="2025-07-06T23:07:52.808193920Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 6 23:07:52.808334 containerd[1487]: time="2025-07-06T23:07:52.808213440Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 6 23:07:52.808334 containerd[1487]: time="2025-07-06T23:07:52.808230440Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 6 23:07:52.808334 containerd[1487]: time="2025-07-06T23:07:52.808281560Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 6 23:07:52.808758 containerd[1487]: time="2025-07-06T23:07:52.808451880Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 6 23:07:52.809458 containerd[1487]: time="2025-07-06T23:07:52.809421840Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 6 23:07:52.809589 containerd[1487]: time="2025-07-06T23:07:52.809565200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 6 23:07:52.809632 containerd[1487]: time="2025-07-06T23:07:52.809588960Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 6 23:07:52.809632 containerd[1487]: time="2025-07-06T23:07:52.809606720Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 6 23:07:52.809632 containerd[1487]: time="2025-07-06T23:07:52.809621480Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 6 23:07:52.809682 containerd[1487]: time="2025-07-06T23:07:52.809635280Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jul 6 23:07:52.809682 containerd[1487]: time="2025-07-06T23:07:52.809648360Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 6 23:07:52.809682 containerd[1487]: time="2025-07-06T23:07:52.809662080Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 6 23:07:52.809682 containerd[1487]: time="2025-07-06T23:07:52.809676200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 6 23:07:52.809747 containerd[1487]: time="2025-07-06T23:07:52.809689200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 6 23:07:52.809747 containerd[1487]: time="2025-07-06T23:07:52.809702200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 6 23:07:52.809747 containerd[1487]: time="2025-07-06T23:07:52.809717080Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 6 23:07:52.809747 containerd[1487]: time="2025-07-06T23:07:52.809737520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 6 23:07:52.809810 containerd[1487]: time="2025-07-06T23:07:52.809750480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 6 23:07:52.809810 containerd[1487]: time="2025-07-06T23:07:52.809764600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 6 23:07:52.809810 containerd[1487]: time="2025-07-06T23:07:52.809784800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 6 23:07:52.809810 containerd[1487]: time="2025-07-06T23:07:52.809797080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 6 23:07:52.809917 containerd[1487]: time="2025-07-06T23:07:52.809813520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 6 23:07:52.809917 containerd[1487]: time="2025-07-06T23:07:52.809826680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 6 23:07:52.809917 containerd[1487]: time="2025-07-06T23:07:52.809840280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 6 23:07:52.809917 containerd[1487]: time="2025-07-06T23:07:52.809854440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 6 23:07:52.809917 containerd[1487]: time="2025-07-06T23:07:52.809868920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 6 23:07:52.809917 containerd[1487]: time="2025-07-06T23:07:52.809880160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 6 23:07:52.809917 containerd[1487]: time="2025-07-06T23:07:52.809893400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 6 23:07:52.809917 containerd[1487]: time="2025-07-06T23:07:52.809906720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jul 6 23:07:52.810061 containerd[1487]: time="2025-07-06T23:07:52.809921160Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 6 23:07:52.810061 containerd[1487]: time="2025-07-06T23:07:52.809942680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 6 23:07:52.810061 containerd[1487]: time="2025-07-06T23:07:52.809956080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 6 23:07:52.810061 containerd[1487]: time="2025-07-06T23:07:52.809967120Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 6 23:07:52.811707 containerd[1487]: time="2025-07-06T23:07:52.811163920Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 6 23:07:52.811707 containerd[1487]: time="2025-07-06T23:07:52.811195120Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 6 23:07:52.811707 containerd[1487]: time="2025-07-06T23:07:52.811205840Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 6 23:07:52.811707 containerd[1487]: time="2025-07-06T23:07:52.811285440Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 6 23:07:52.811707 containerd[1487]: time="2025-07-06T23:07:52.811295080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 6 23:07:52.811707 containerd[1487]: time="2025-07-06T23:07:52.811309360Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 6 23:07:52.811707 containerd[1487]: time="2025-07-06T23:07:52.811319320Z" level=info msg="NRI interface is disabled by configuration." Jul 6 23:07:52.811707 containerd[1487]: time="2025-07-06T23:07:52.811329920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 6 23:07:52.811906 containerd[1487]: time="2025-07-06T23:07:52.811673600Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 6 23:07:52.811906 containerd[1487]: time="2025-07-06T23:07:52.811719960Z" level=info msg="Connect containerd service" Jul 6 23:07:52.811906 containerd[1487]: time="2025-07-06T23:07:52.811751960Z" level=info msg="using legacy CRI server" Jul 6 23:07:52.811906 containerd[1487]: time="2025-07-06T23:07:52.811759440Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:07:52.812068 containerd[1487]: time="2025-07-06T23:07:52.811997560Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 6 23:07:52.820034 containerd[1487]: time="2025-07-06T23:07:52.814741640Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:07:52.820664 
containerd[1487]: time="2025-07-06T23:07:52.820489600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:07:52.820664 containerd[1487]: time="2025-07-06T23:07:52.820549760Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:07:52.822405 containerd[1487]: time="2025-07-06T23:07:52.822264600Z" level=info msg="Start subscribing containerd event" Jul 6 23:07:52.822405 containerd[1487]: time="2025-07-06T23:07:52.822329480Z" level=info msg="Start recovering state" Jul 6 23:07:52.822405 containerd[1487]: time="2025-07-06T23:07:52.822407760Z" level=info msg="Start event monitor" Jul 6 23:07:52.822526 containerd[1487]: time="2025-07-06T23:07:52.822419200Z" level=info msg="Start snapshots syncer" Jul 6 23:07:52.822526 containerd[1487]: time="2025-07-06T23:07:52.822428960Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:07:52.822526 containerd[1487]: time="2025-07-06T23:07:52.822435640Z" level=info msg="Start streaming server" Jul 6 23:07:52.822671 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:07:52.824049 containerd[1487]: time="2025-07-06T23:07:52.824010360Z" level=info msg="containerd successfully booted in 0.090503s" Jul 6 23:07:52.874294 systemd-networkd[1391]: eth1: Gained IPv6LL Jul 6 23:07:52.874836 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection. Jul 6 23:07:52.880860 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:07:52.882917 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:07:52.894354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:07:52.900459 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 6 23:07:52.905351 locksmithd[1523]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:07:52.953345 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:07:52.960306 sshd_keygen[1508]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:07:52.986550 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:07:52.995440 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:07:53.002925 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:07:53.003196 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:07:53.014596 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:07:53.024856 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:07:53.032958 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:07:53.036323 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 6 23:07:53.038414 systemd[1]: Reached target getty.target - Login Prompts. Jul 6 23:07:53.040209 tar[1488]: linux-arm64/README.md Jul 6 23:07:53.053099 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:07:53.258452 systemd-networkd[1391]: eth0: Gained IPv6LL Jul 6 23:07:53.259036 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection. Jul 6 23:07:53.743540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:07:53.745377 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jul 6 23:07:53.749441 (kubelet)[1599]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:07:53.750255 systemd[1]: Startup finished in 808ms (kernel) + 8.540s (initrd) + 4.550s (userspace) = 13.899s. Jul 6 23:07:54.295404 kubelet[1599]: E0706 23:07:54.295329 1599 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:07:54.298004 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:07:54.298282 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:07:54.298742 systemd[1]: kubelet.service: Consumed 907ms CPU time, 261.3M memory peak. Jul 6 23:08:04.548670 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:08:04.557416 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:08:04.668219 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:08:04.678995 (kubelet)[1618]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:08:04.729657 kubelet[1618]: E0706 23:08:04.729534 1618 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:08:04.733630 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:08:04.733787 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:08:04.734143 systemd[1]: kubelet.service: Consumed 152ms CPU time, 104.9M memory peak. Jul 6 23:08:14.854273 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 6 23:08:14.870412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:08:14.993250 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:08:15.003701 (kubelet)[1634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:08:15.044908 kubelet[1634]: E0706 23:08:15.044843 1634 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:08:15.048771 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:08:15.049150 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:08:15.049598 systemd[1]: kubelet.service: Consumed 150ms CPU time, 104.5M memory peak. Jul 6 23:08:23.364155 systemd-timesyncd[1382]: Contacted time server 158.180.28.150:123 (2.flatcar.pool.ntp.org). Jul 6 23:08:23.364269 systemd-timesyncd[1382]: Initial clock synchronization to Sun 2025-07-06 23:08:23.662725 UTC. Jul 6 23:08:25.105895 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
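Every kubelet failure in this log has the same cause stated in the error itself: /var/lib/kubelet/config.yaml does not exist yet. That file is normally written when the node is provisioned into a cluster (for example by kubeadm init/join), so until that happens the service exits with status 1 and systemd keeps rescheduling it. A trivial sketch of the pre-condition the error describes; the printed message is illustrative, not kubelet's own output:

    from pathlib import Path

    # Path taken verbatim from the kubelet error message above.
    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

    if not KUBELET_CONFIG.is_file():
        # Matches the behaviour seen in the log: kubelet exits 1 and
        # systemd schedules another restart attempt.
        print(f"{KUBELET_CONFIG} is missing; kubelet will keep failing "
              "until node provisioning writes it")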
Jul 6 23:08:25.112531 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:08:25.237690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:08:25.242614 (kubelet)[1649]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:08:25.297943 kubelet[1649]: E0706 23:08:25.297888 1649 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:08:25.302184 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:08:25.302436 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:08:25.303290 systemd[1]: kubelet.service: Consumed 157ms CPU time, 105.3M memory peak. Jul 6 23:08:35.354497 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 6 23:08:35.367510 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:08:35.507406 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:08:35.517782 (kubelet)[1664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:08:35.566565 kubelet[1664]: E0706 23:08:35.566495 1664 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:08:35.569964 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:08:35.570453 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:08:35.571221 systemd[1]: kubelet.service: Consumed 155ms CPU time, 106.9M memory peak. Jul 6 23:08:37.870703 update_engine[1479]: I20250706 23:08:37.870589 1479 update_attempter.cc:509] Updating boot flags... Jul 6 23:08:37.915199 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1680) Jul 6 23:08:37.978278 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1684) Jul 6 23:08:45.603886 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 6 23:08:45.612499 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:08:45.730384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:08:45.738915 (kubelet)[1697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:08:45.787686 kubelet[1697]: E0706 23:08:45.787620 1697 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:08:45.791044 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:08:45.791256 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:08:45.791665 systemd[1]: kubelet.service: Consumed 148ms CPU time, 108.2M memory peak. Jul 6 23:08:55.854357 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jul 6 23:08:55.862516 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:08:55.984752 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:08:56.000164 (kubelet)[1712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:08:56.048409 kubelet[1712]: E0706 23:08:56.048362 1712 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:08:56.051053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:08:56.051258 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:08:56.051718 systemd[1]: kubelet.service: Consumed 149ms CPU time, 106.9M memory peak. Jul 6 23:09:06.104471 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jul 6 23:09:06.110403 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:09:06.230667 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:09:06.241724 (kubelet)[1727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:09:06.290346 kubelet[1727]: E0706 23:09:06.290272 1727 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:09:06.294469 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:09:06.294817 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:09:06.295410 systemd[1]: kubelet.service: Consumed 152ms CPU time, 107.1M memory peak. Jul 6 23:09:16.354121 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jul 6 23:09:16.362476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:09:16.473050 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:09:16.487685 (kubelet)[1741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:09:16.535148 kubelet[1741]: E0706 23:09:16.535066 1741 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:09:16.539485 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:09:16.539902 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:09:16.540636 systemd[1]: kubelet.service: Consumed 150ms CPU time, 104.5M memory peak. Jul 6 23:09:26.604376 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jul 6 23:09:26.615440 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:09:26.758500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:09:26.758566 (kubelet)[1757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:09:26.804152 kubelet[1757]: E0706 23:09:26.804076 1757 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:09:26.807695 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:09:26.807903 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:09:26.808422 systemd[1]: kubelet.service: Consumed 152ms CPU time, 104.7M memory peak. Jul 6 23:09:36.854524 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jul 6 23:09:36.864477 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:09:36.994693 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:09:37.009810 (kubelet)[1772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:09:37.059619 kubelet[1772]: E0706 23:09:37.059538 1772 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:09:37.062498 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:09:37.062783 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:09:37.063438 systemd[1]: kubelet.service: Consumed 159ms CPU time, 105M memory peak. Jul 6 23:09:38.724861 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:09:38.734666 systemd[1]: Started sshd@0-78.47.124.97:22-139.178.89.65:34560.service - OpenSSH per-connection server daemon (139.178.89.65:34560). 
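The restart counter climbs by one roughly every ten seconds (23:08:04, 23:08:14, 23:08:25, 23:08:35, ...), which is consistent with a restart delay of about 10 s configured for kubelet.service; the exact RestartSec value is not shown in the log. The cadence can be recovered from the journal timestamps themselves:

    from datetime import datetime

    # "Scheduled restart job" timestamps copied from the log above.
    stamps = ["23:08:04.548670", "23:08:14.854273", "23:08:25.105895",
              "23:08:35.354497", "23:08:45.603886", "23:08:55.854357"]
    times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]

    for earlier, later in zip(times, times[1:]):
        # Each gap comes out a little over 10 seconds.
        print(f"{(later - earlier).total_seconds():.1f}s between restarts")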
Jul 6 23:09:39.814264 sshd[1779]: Accepted publickey for core from 139.178.89.65 port 34560 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:09:39.817164 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:09:39.826903 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:09:39.833867 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:09:39.842245 systemd-logind[1477]: New session 1 of user core. Jul 6 23:09:39.850282 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:09:39.857718 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:09:39.862515 (systemd)[1783]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:09:39.865443 systemd-logind[1477]: New session c1 of user core. Jul 6 23:09:39.996754 systemd[1783]: Queued start job for default target default.target. Jul 6 23:09:40.005952 systemd[1783]: Created slice app.slice - User Application Slice. Jul 6 23:09:40.006006 systemd[1783]: Reached target paths.target - Paths. Jul 6 23:09:40.006061 systemd[1783]: Reached target timers.target - Timers. Jul 6 23:09:40.008051 systemd[1783]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:09:40.031750 systemd[1783]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:09:40.032001 systemd[1783]: Reached target sockets.target - Sockets. Jul 6 23:09:40.032097 systemd[1783]: Reached target basic.target - Basic System. Jul 6 23:09:40.032584 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:09:40.032846 systemd[1783]: Reached target default.target - Main User Target. Jul 6 23:09:40.032988 systemd[1783]: Startup finished in 158ms. Jul 6 23:09:40.043416 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:09:40.809904 systemd[1]: Started sshd@1-78.47.124.97:22-139.178.89.65:43482.service - OpenSSH per-connection server daemon (139.178.89.65:43482). Jul 6 23:09:41.917610 sshd[1795]: Accepted publickey for core from 139.178.89.65 port 43482 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:09:41.920188 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:09:41.925513 systemd-logind[1477]: New session 2 of user core. Jul 6 23:09:41.936493 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 6 23:09:42.677146 sshd[1797]: Connection closed by 139.178.89.65 port 43482 Jul 6 23:09:42.678028 sshd-session[1795]: pam_unix(sshd:session): session closed for user core Jul 6 23:09:42.684454 systemd-logind[1477]: Session 2 logged out. Waiting for processes to exit. Jul 6 23:09:42.685544 systemd[1]: sshd@1-78.47.124.97:22-139.178.89.65:43482.service: Deactivated successfully. Jul 6 23:09:42.688248 systemd[1]: session-2.scope: Deactivated successfully. Jul 6 23:09:42.690277 systemd-logind[1477]: Removed session 2. Jul 6 23:09:42.876555 systemd[1]: Started sshd@2-78.47.124.97:22-139.178.89.65:43484.service - OpenSSH per-connection server daemon (139.178.89.65:43484). Jul 6 23:09:43.976257 sshd[1803]: Accepted publickey for core from 139.178.89.65 port 43484 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:09:43.978065 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:09:43.985900 systemd-logind[1477]: New session 3 of user core. 
Jul 6 23:09:43.995510 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:09:44.731152 sshd[1805]: Connection closed by 139.178.89.65 port 43484 Jul 6 23:09:44.730536 sshd-session[1803]: pam_unix(sshd:session): session closed for user core Jul 6 23:09:44.734895 systemd[1]: sshd@2-78.47.124.97:22-139.178.89.65:43484.service: Deactivated successfully. Jul 6 23:09:44.737291 systemd[1]: session-3.scope: Deactivated successfully. Jul 6 23:09:44.739683 systemd-logind[1477]: Session 3 logged out. Waiting for processes to exit. Jul 6 23:09:44.740650 systemd-logind[1477]: Removed session 3. Jul 6 23:09:44.921709 systemd[1]: Started sshd@3-78.47.124.97:22-139.178.89.65:43500.service - OpenSSH per-connection server daemon (139.178.89.65:43500). Jul 6 23:09:45.992667 sshd[1811]: Accepted publickey for core from 139.178.89.65 port 43500 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:09:45.994939 sshd-session[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:09:46.001304 systemd-logind[1477]: New session 4 of user core. Jul 6 23:09:46.008374 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:09:46.729539 sshd[1813]: Connection closed by 139.178.89.65 port 43500 Jul 6 23:09:46.730494 sshd-session[1811]: pam_unix(sshd:session): session closed for user core Jul 6 23:09:46.735318 systemd[1]: sshd@3-78.47.124.97:22-139.178.89.65:43500.service: Deactivated successfully. Jul 6 23:09:46.738023 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:09:46.739272 systemd-logind[1477]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:09:46.740309 systemd-logind[1477]: Removed session 4. Jul 6 23:09:46.927502 systemd[1]: Started sshd@4-78.47.124.97:22-139.178.89.65:43512.service - OpenSSH per-connection server daemon (139.178.89.65:43512). Jul 6 23:09:47.104400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jul 6 23:09:47.116522 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:09:47.253438 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:09:47.254936 (kubelet)[1829]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:09:47.296671 kubelet[1829]: E0706 23:09:47.296532 1829 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:09:47.299462 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:09:47.299622 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:09:47.301247 systemd[1]: kubelet.service: Consumed 148ms CPU time, 105.2M memory peak. Jul 6 23:09:48.025339 sshd[1819]: Accepted publickey for core from 139.178.89.65 port 43512 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:09:48.027616 sshd-session[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:09:48.034200 systemd-logind[1477]: New session 5 of user core. Jul 6 23:09:48.040603 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 6 23:09:48.613255 sudo[1837]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:09:48.613524 sudo[1837]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:09:48.628385 sudo[1837]: pam_unix(sudo:session): session closed for user root Jul 6 23:09:48.808465 sshd[1836]: Connection closed by 139.178.89.65 port 43512 Jul 6 23:09:48.809721 sshd-session[1819]: pam_unix(sshd:session): session closed for user core Jul 6 23:09:48.815822 systemd[1]: sshd@4-78.47.124.97:22-139.178.89.65:43512.service: Deactivated successfully. Jul 6 23:09:48.817903 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:09:48.818931 systemd-logind[1477]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:09:48.820219 systemd-logind[1477]: Removed session 5. Jul 6 23:09:49.001589 systemd[1]: Started sshd@5-78.47.124.97:22-139.178.89.65:43520.service - OpenSSH per-connection server daemon (139.178.89.65:43520). Jul 6 23:09:50.084002 sshd[1843]: Accepted publickey for core from 139.178.89.65 port 43520 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:09:50.086768 sshd-session[1843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:09:50.093739 systemd-logind[1477]: New session 6 of user core. Jul 6 23:09:50.105571 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 6 23:09:50.655875 sudo[1847]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:09:50.656370 sudo[1847]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:09:50.661725 sudo[1847]: pam_unix(sudo:session): session closed for user root Jul 6 23:09:50.668080 sudo[1846]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 6 23:09:50.668517 sudo[1846]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:09:50.687705 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:09:50.721870 augenrules[1869]: No rules Jul 6 23:09:50.722951 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:09:50.723361 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:09:50.727310 sudo[1846]: pam_unix(sudo:session): session closed for user root Jul 6 23:09:50.904163 sshd[1845]: Connection closed by 139.178.89.65 port 43520 Jul 6 23:09:50.903392 sshd-session[1843]: pam_unix(sshd:session): session closed for user core Jul 6 23:09:50.909038 systemd[1]: sshd@5-78.47.124.97:22-139.178.89.65:43520.service: Deactivated successfully. Jul 6 23:09:50.912205 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:09:50.913729 systemd-logind[1477]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:09:50.915014 systemd-logind[1477]: Removed session 6. Jul 6 23:09:51.107687 systemd[1]: Started sshd@6-78.47.124.97:22-139.178.89.65:51380.service - OpenSSH per-connection server daemon (139.178.89.65:51380). Jul 6 23:09:52.208582 sshd[1878]: Accepted publickey for core from 139.178.89.65 port 51380 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:09:52.210551 sshd-session[1878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:09:52.217507 systemd-logind[1477]: New session 7 of user core. Jul 6 23:09:52.223486 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jul 6 23:09:52.789589 sudo[1881]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:09:52.790257 sudo[1881]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:09:53.134616 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 6 23:09:53.135845 (dockerd)[1898]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:09:53.378181 dockerd[1898]: time="2025-07-06T23:09:53.378085490Z" level=info msg="Starting up" Jul 6 23:09:53.474184 dockerd[1898]: time="2025-07-06T23:09:53.473994095Z" level=info msg="Loading containers: start." Jul 6 23:09:53.662176 kernel: Initializing XFRM netlink socket Jul 6 23:09:53.759480 systemd-networkd[1391]: docker0: Link UP Jul 6 23:09:53.781397 dockerd[1898]: time="2025-07-06T23:09:53.781321403Z" level=info msg="Loading containers: done." Jul 6 23:09:53.801194 dockerd[1898]: time="2025-07-06T23:09:53.800572153Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:09:53.801194 dockerd[1898]: time="2025-07-06T23:09:53.800709618Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jul 6 23:09:53.801194 dockerd[1898]: time="2025-07-06T23:09:53.800946423Z" level=info msg="Daemon has completed initialization" Jul 6 23:09:53.843610 dockerd[1898]: time="2025-07-06T23:09:53.842557702Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:09:53.842676 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 6 23:09:54.627361 containerd[1487]: time="2025-07-06T23:09:54.627289887Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 6 23:09:55.229072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount781794704.mount: Deactivated successfully. 
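dockerd reports "API listen on /run/docker.sock" once initialization completes. A hedged way to confirm that from the same host is to point Go's standard HTTP client at the Unix socket and hit the Engine API's /_ping endpoint; the socket path comes from the log line, everything else here is illustrative:

```go
// Hedged sketch: ping the Engine API on the Unix socket the daemon
// reports above ("API listen on /run/docker.sock").
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	transport := &http.Transport{
		// Route every request over the local Docker socket instead of TCP.
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
		},
	}
	client := &http.Client{Transport: transport}

	// The host part of the URL is ignored once DialContext pins the socket.
	resp, err := client.Get("http://docker/_ping")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("engine ping: %s %s\n", resp.Status, body) // a healthy daemon answers "200 OK" / "OK"
}
```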
Jul 6 23:09:56.518154 containerd[1487]: time="2025-07-06T23:09:56.516224183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:09:56.518154 containerd[1487]: time="2025-07-06T23:09:56.518104803Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351808" Jul 6 23:09:56.518856 containerd[1487]: time="2025-07-06T23:09:56.518829774Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:09:56.523197 containerd[1487]: time="2025-07-06T23:09:56.523143955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:09:56.525236 containerd[1487]: time="2025-07-06T23:09:56.525180243Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 1.897833226s" Jul 6 23:09:56.525236 containerd[1487]: time="2025-07-06T23:09:56.525234173Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\"" Jul 6 23:09:56.527554 containerd[1487]: time="2025-07-06T23:09:56.527493902Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 6 23:09:57.353619 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jul 6 23:09:57.368313 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:09:57.499291 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:09:57.513253 (kubelet)[2148]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:09:57.573274 kubelet[2148]: E0706 23:09:57.573083 2148 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:09:57.575818 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:09:57.575967 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:09:57.576421 systemd[1]: kubelet.service: Consumed 167ms CPU time, 108.4M memory peak. 
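Restart counters 11 and 12 land roughly ten seconds apart (23:09:47 and 23:09:57), which is consistent with a unit that restarts on failure after a fixed delay; the real unit settings are not visible in this log, so the 10 s value below is inferred from the timestamps. A rough sketch of that supervision pattern:

```go
// Hedged sketch of the restart-on-failure pattern implied by the log
// (counter 11 -> 12 -> 13 at roughly ten-second intervals). The 10s
// delay is inferred from timestamps, not read from the real unit file.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	const restartDelay = 10 * time.Second
	counter := 0
	for {
		counter++
		log.Printf("Scheduled restart job, restart counter is at %d.", counter)
		// Stand-in for the unit's ExecStart; it keeps failing while
		// /var/lib/kubelet/config.yaml is missing.
		cmd := exec.Command("/usr/bin/kubelet", "--config", "/var/lib/kubelet/config.yaml")
		if err := cmd.Run(); err != nil {
			log.Printf("Main process exited: %v", err)
		} else {
			return // clean exit: stop supervising
		}
		time.Sleep(restartDelay)
	}
}
```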
Jul 6 23:09:58.223396 containerd[1487]: time="2025-07-06T23:09:58.223271499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:09:58.225165 containerd[1487]: time="2025-07-06T23:09:58.224738800Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537643" Jul 6 23:09:58.226461 containerd[1487]: time="2025-07-06T23:09:58.226393094Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:09:58.230958 containerd[1487]: time="2025-07-06T23:09:58.230890694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:09:58.233656 containerd[1487]: time="2025-07-06T23:09:58.233245273Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.705689839s" Jul 6 23:09:58.233656 containerd[1487]: time="2025-07-06T23:09:58.233295642Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\"" Jul 6 23:09:58.234468 containerd[1487]: time="2025-07-06T23:09:58.234217566Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 6 23:09:59.775162 containerd[1487]: time="2025-07-06T23:09:59.774143557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:09:59.780268 containerd[1487]: time="2025-07-06T23:09:59.780188263Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293535" Jul 6 23:09:59.781487 containerd[1487]: time="2025-07-06T23:09:59.781454407Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:09:59.784536 containerd[1487]: time="2025-07-06T23:09:59.784486822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:09:59.786151 containerd[1487]: time="2025-07-06T23:09:59.786086184Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.55182045s" Jul 6 23:09:59.787243 containerd[1487]: time="2025-07-06T23:09:59.787201661Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\"" Jul 6 23:09:59.788425 containerd[1487]: 
time="2025-07-06T23:09:59.788376988Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 6 23:10:00.893615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2158540900.mount: Deactivated successfully. Jul 6 23:10:01.316008 containerd[1487]: time="2025-07-06T23:10:01.315233559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:10:01.316875 containerd[1487]: time="2025-07-06T23:10:01.316583241Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199498" Jul 6 23:10:01.318061 containerd[1487]: time="2025-07-06T23:10:01.317987621Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:10:01.321476 containerd[1487]: time="2025-07-06T23:10:01.321388136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:10:01.322978 containerd[1487]: time="2025-07-06T23:10:01.322497038Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.533829646s" Jul 6 23:10:01.322978 containerd[1487]: time="2025-07-06T23:10:01.322543139Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\"" Jul 6 23:10:01.323158 containerd[1487]: time="2025-07-06T23:10:01.323051889Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 6 23:10:01.919849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2087652103.mount: Deactivated successfully. 
Jul 6 23:10:03.237117 containerd[1487]: time="2025-07-06T23:10:03.237049449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:10:03.237117 containerd[1487]: time="2025-07-06T23:10:03.237903083Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209" Jul 6 23:10:03.240363 containerd[1487]: time="2025-07-06T23:10:03.240319719Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:10:03.246111 containerd[1487]: time="2025-07-06T23:10:03.246045972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:10:03.248520 containerd[1487]: time="2025-07-06T23:10:03.248463688Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.925374453s" Jul 6 23:10:03.248701 containerd[1487]: time="2025-07-06T23:10:03.248675647Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jul 6 23:10:03.249672 containerd[1487]: time="2025-07-06T23:10:03.249519045Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:10:03.698257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1384320526.mount: Deactivated successfully. 
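These pulls are performed by containerd's CRI plugin in the k8s.io namespace. A hedged sketch of issuing the same request — the pause:3.10 image being pulled above — through the containerd 1.7 Go client; the socket path and the namespace are the usual CRI defaults, assumed rather than read from this host's configuration:

```go
// Hedged sketch using the containerd 1.7 Go client to pull one of the
// images seen above. Socket path and the "k8s.io" namespace are the
// common CRI defaults, assumed rather than taken from this host.
package main

import (
	"context"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s (%s)", img.Name(), img.Target().Digest)
}
```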
Jul 6 23:10:03.707959 containerd[1487]: time="2025-07-06T23:10:03.706720244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:10:03.707959 containerd[1487]: time="2025-07-06T23:10:03.707896954Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Jul 6 23:10:03.708898 containerd[1487]: time="2025-07-06T23:10:03.708849870Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:10:03.712089 containerd[1487]: time="2025-07-06T23:10:03.712014621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:10:03.714145 containerd[1487]: time="2025-07-06T23:10:03.713253428Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 463.37576ms" Jul 6 23:10:03.714145 containerd[1487]: time="2025-07-06T23:10:03.713295252Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 6 23:10:03.714509 containerd[1487]: time="2025-07-06T23:10:03.714422341Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 6 23:10:04.258177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3692538892.mount: Deactivated successfully. Jul 6 23:10:06.736052 containerd[1487]: time="2025-07-06T23:10:06.735971019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:10:06.737876 containerd[1487]: time="2025-07-06T23:10:06.737830189Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334637" Jul 6 23:10:06.738102 containerd[1487]: time="2025-07-06T23:10:06.738070867Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:10:06.742388 containerd[1487]: time="2025-07-06T23:10:06.742349816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:10:06.744148 containerd[1487]: time="2025-07-06T23:10:06.744038683Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.029574638s" Jul 6 23:10:06.744148 containerd[1487]: time="2025-07-06T23:10:06.744077630Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jul 6 23:10:07.603590 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. 
Jul 6 23:10:07.612574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:10:07.732442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:10:07.738518 (kubelet)[2311]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:10:07.777559 kubelet[2311]: E0706 23:10:07.777510 2311 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:10:07.782208 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:10:07.782505 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:10:07.783105 systemd[1]: kubelet.service: Consumed 139ms CPU time, 107M memory peak. Jul 6 23:10:12.046151 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:10:12.046297 systemd[1]: kubelet.service: Consumed 139ms CPU time, 107M memory peak. Jul 6 23:10:12.057956 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:10:12.095490 systemd[1]: Reload requested from client PID 2325 ('systemctl') (unit session-7.scope)... Jul 6 23:10:12.095508 systemd[1]: Reloading... Jul 6 23:10:12.237152 zram_generator::config[2389]: No configuration found. Jul 6 23:10:12.308121 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:10:12.398735 systemd[1]: Reloading finished in 302 ms. Jul 6 23:10:12.442625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:10:12.452094 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:10:12.453399 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:10:12.453704 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:10:12.453762 systemd[1]: kubelet.service: Consumed 104ms CPU time, 94.9M memory peak. Jul 6 23:10:12.463174 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:10:12.581145 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:10:12.586385 (kubelet)[2420]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:10:12.626324 kubelet[2420]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:10:12.626324 kubelet[2420]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:10:12.626324 kubelet[2420]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 6 23:10:12.626837 kubelet[2420]: I0706 23:10:12.626370 2420 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:10:12.974817 kubelet[2420]: I0706 23:10:12.974779 2420 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 6 23:10:12.977034 kubelet[2420]: I0706 23:10:12.974965 2420 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:10:12.977034 kubelet[2420]: I0706 23:10:12.975246 2420 server.go:956] "Client rotation is on, will bootstrap in background" Jul 6 23:10:13.002946 kubelet[2420]: E0706 23:10:13.002890 2420 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://78.47.124.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 78.47.124.97:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 6 23:10:13.003172 kubelet[2420]: I0706 23:10:13.003148 2420 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:10:13.017944 kubelet[2420]: E0706 23:10:13.017868 2420 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:10:13.017944 kubelet[2420]: I0706 23:10:13.017937 2420 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:10:13.024053 kubelet[2420]: I0706 23:10:13.023998 2420 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:10:13.026879 kubelet[2420]: I0706 23:10:13.026722 2420 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:10:13.027134 kubelet[2420]: I0706 23:10:13.026856 2420 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-1-6-eb6896cb23","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:10:13.027287 kubelet[2420]: I0706 23:10:13.027227 2420 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:10:13.027287 kubelet[2420]: I0706 23:10:13.027243 2420 container_manager_linux.go:303] "Creating device plugin manager" Jul 6 23:10:13.027537 kubelet[2420]: I0706 23:10:13.027485 2420 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:10:13.031612 kubelet[2420]: I0706 23:10:13.031536 2420 kubelet.go:480] "Attempting to sync node with API server" Jul 6 23:10:13.031758 kubelet[2420]: I0706 23:10:13.031704 2420 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:10:13.031758 kubelet[2420]: I0706 23:10:13.031736 2420 kubelet.go:386] "Adding apiserver pod source" Jul 6 23:10:13.034043 kubelet[2420]: I0706 23:10:13.033946 2420 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:10:13.039151 kubelet[2420]: E0706 23:10:13.037465 2420 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://78.47.124.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-1-6-eb6896cb23&limit=500&resourceVersion=0\": dial tcp 78.47.124.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 6 23:10:13.039151 kubelet[2420]: E0706 23:10:13.037813 2420 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://78.47.124.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 78.47.124.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Jul 6 23:10:13.039151 kubelet[2420]: I0706 23:10:13.038310 2420 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 6 23:10:13.039151 kubelet[2420]: I0706 23:10:13.039148 2420 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 6 23:10:13.039486 kubelet[2420]: W0706 23:10:13.039279 2420 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 6 23:10:13.042372 kubelet[2420]: I0706 23:10:13.042264 2420 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:10:13.042372 kubelet[2420]: I0706 23:10:13.042311 2420 server.go:1289] "Started kubelet" Jul 6 23:10:13.048095 kubelet[2420]: I0706 23:10:13.048045 2420 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:10:13.049243 kubelet[2420]: E0706 23:10:13.047944 2420 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://78.47.124.97:6443/api/v1/namespaces/default/events\": dial tcp 78.47.124.97:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-2-1-6-eb6896cb23.184fcc4d707e49a2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-2-1-6-eb6896cb23,UID:ci-4230-2-1-6-eb6896cb23,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-2-1-6-eb6896cb23,},FirstTimestamp:2025-07-06 23:10:13.042284962 +0000 UTC m=+0.452603521,LastTimestamp:2025-07-06 23:10:13.042284962 +0000 UTC m=+0.452603521,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-1-6-eb6896cb23,}" Jul 6 23:10:13.051823 kubelet[2420]: I0706 23:10:13.051780 2420 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:10:13.052872 kubelet[2420]: I0706 23:10:13.052850 2420 server.go:317] "Adding debug handlers to kubelet server" Jul 6 23:10:13.056016 kubelet[2420]: I0706 23:10:13.055977 2420 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:10:13.056191 kubelet[2420]: I0706 23:10:13.056145 2420 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:10:13.056334 kubelet[2420]: E0706 23:10:13.056301 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-1-6-eb6896cb23\" not found" Jul 6 23:10:13.056520 kubelet[2420]: I0706 23:10:13.056505 2420 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:10:13.056768 kubelet[2420]: I0706 23:10:13.056750 2420 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:10:13.058067 kubelet[2420]: E0706 23:10:13.058005 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.124.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-1-6-eb6896cb23?timeout=10s\": dial tcp 78.47.124.97:6443: connect: connection refused" interval="200ms" Jul 6 23:10:13.058514 kubelet[2420]: I0706 23:10:13.058481 2420 factory.go:223] Registration of the systemd container factory successfully Jul 6 23:10:13.058675 kubelet[2420]: 
I0706 23:10:13.058656 2420 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:10:13.059869 kubelet[2420]: I0706 23:10:13.059854 2420 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:10:13.060046 kubelet[2420]: I0706 23:10:13.060032 2420 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:10:13.060554 kubelet[2420]: I0706 23:10:13.060529 2420 factory.go:223] Registration of the containerd container factory successfully Jul 6 23:10:13.065562 kubelet[2420]: E0706 23:10:13.065115 2420 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:10:13.079872 kubelet[2420]: I0706 23:10:13.079819 2420 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 6 23:10:13.081097 kubelet[2420]: I0706 23:10:13.081073 2420 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 6 23:10:13.081479 kubelet[2420]: I0706 23:10:13.081168 2420 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 6 23:10:13.081479 kubelet[2420]: I0706 23:10:13.081195 2420 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 6 23:10:13.081479 kubelet[2420]: I0706 23:10:13.081203 2420 kubelet.go:2436] "Starting kubelet main sync loop" Jul 6 23:10:13.081479 kubelet[2420]: E0706 23:10:13.081243 2420 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:10:13.085674 kubelet[2420]: E0706 23:10:13.085632 2420 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://78.47.124.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 78.47.124.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 6 23:10:13.087494 kubelet[2420]: E0706 23:10:13.087454 2420 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://78.47.124.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 78.47.124.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 6 23:10:13.087850 kubelet[2420]: I0706 23:10:13.087829 2420 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:10:13.087850 kubelet[2420]: I0706 23:10:13.087847 2420 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:10:13.087924 kubelet[2420]: I0706 23:10:13.087866 2420 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:10:13.090671 kubelet[2420]: I0706 23:10:13.090635 2420 policy_none.go:49] "None policy: Start" Jul 6 23:10:13.090671 kubelet[2420]: I0706 23:10:13.090669 2420 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:10:13.090786 kubelet[2420]: I0706 23:10:13.090683 2420 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:10:13.097878 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 6 23:10:13.108733 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
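Every list/watch and the certificate signing request above fail with dial tcp 78.47.124.97:6443: connect: connection refused because the kube-apiserver static pod is not serving yet. A minimal probe of the same endpoint reproduces the error until the apiserver container comes up (the address is taken from the log; the probe itself is illustrative):

```go
// Minimal probe of the endpoint the reflectors above keep failing
// against; it returns the same "connect: connection refused" until the
// kube-apiserver static pod starts listening on 6443.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "78.47.124.97:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable yet:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```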
Jul 6 23:10:13.113353 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 6 23:10:13.124533 kubelet[2420]: E0706 23:10:13.123928 2420 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 6 23:10:13.124533 kubelet[2420]: I0706 23:10:13.124176 2420 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:10:13.124533 kubelet[2420]: I0706 23:10:13.124190 2420 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:10:13.124533 kubelet[2420]: I0706 23:10:13.124504 2420 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:10:13.128911 kubelet[2420]: E0706 23:10:13.127986 2420 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 6 23:10:13.128911 kubelet[2420]: E0706 23:10:13.128072 2420 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-2-1-6-eb6896cb23\" not found" Jul 6 23:10:13.197523 systemd[1]: Created slice kubepods-burstable-pod2fb31c15171711b288a04341e6885dd6.slice - libcontainer container kubepods-burstable-pod2fb31c15171711b288a04341e6885dd6.slice. Jul 6 23:10:13.208767 kubelet[2420]: E0706 23:10:13.208665 2420 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-1-6-eb6896cb23\" not found" node="ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:13.213601 systemd[1]: Created slice kubepods-burstable-podecc4cd2647827a979411b3ee5d2f4d9e.slice - libcontainer container kubepods-burstable-podecc4cd2647827a979411b3ee5d2f4d9e.slice. Jul 6 23:10:13.224094 kubelet[2420]: E0706 23:10:13.223700 2420 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-1-6-eb6896cb23\" not found" node="ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:13.226569 kubelet[2420]: I0706 23:10:13.226477 2420 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:13.228725 kubelet[2420]: E0706 23:10:13.228676 2420 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://78.47.124.97:6443/api/v1/nodes\": dial tcp 78.47.124.97:6443: connect: connection refused" node="ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:13.229071 systemd[1]: Created slice kubepods-burstable-pode79da8afb5eebc2ac610656a95a362eb.slice - libcontainer container kubepods-burstable-pode79da8afb5eebc2ac610656a95a362eb.slice. 
Jul 6 23:10:13.231739 kubelet[2420]: E0706 23:10:13.231653 2420 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-1-6-eb6896cb23\" not found" node="ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:13.259615 kubelet[2420]: E0706 23:10:13.259513 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.124.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-1-6-eb6896cb23?timeout=10s\": dial tcp 78.47.124.97:6443: connect: connection refused" interval="400ms" Jul 6 23:10:13.361590 kubelet[2420]: I0706 23:10:13.361511 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ecc4cd2647827a979411b3ee5d2f4d9e-ca-certs\") pod \"kube-controller-manager-ci-4230-2-1-6-eb6896cb23\" (UID: \"ecc4cd2647827a979411b3ee5d2f4d9e\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:13.361590 kubelet[2420]: I0706 23:10:13.361581 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ecc4cd2647827a979411b3ee5d2f4d9e-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-1-6-eb6896cb23\" (UID: \"ecc4cd2647827a979411b3ee5d2f4d9e\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:13.361837 kubelet[2420]: I0706 23:10:13.361618 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ecc4cd2647827a979411b3ee5d2f4d9e-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-1-6-eb6896cb23\" (UID: \"ecc4cd2647827a979411b3ee5d2f4d9e\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:13.361837 kubelet[2420]: I0706 23:10:13.361673 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2fb31c15171711b288a04341e6885dd6-ca-certs\") pod \"kube-apiserver-ci-4230-2-1-6-eb6896cb23\" (UID: \"2fb31c15171711b288a04341e6885dd6\") " pod="kube-system/kube-apiserver-ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:13.361837 kubelet[2420]: I0706 23:10:13.361702 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2fb31c15171711b288a04341e6885dd6-k8s-certs\") pod \"kube-apiserver-ci-4230-2-1-6-eb6896cb23\" (UID: \"2fb31c15171711b288a04341e6885dd6\") " pod="kube-system/kube-apiserver-ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:13.361837 kubelet[2420]: I0706 23:10:13.361733 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2fb31c15171711b288a04341e6885dd6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-1-6-eb6896cb23\" (UID: \"2fb31c15171711b288a04341e6885dd6\") " pod="kube-system/kube-apiserver-ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:13.361837 kubelet[2420]: I0706 23:10:13.361761 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ecc4cd2647827a979411b3ee5d2f4d9e-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-1-6-eb6896cb23\" (UID: \"ecc4cd2647827a979411b3ee5d2f4d9e\") " 
pod="kube-system/kube-controller-manager-ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:13.362085 kubelet[2420]: I0706 23:10:13.361793 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ecc4cd2647827a979411b3ee5d2f4d9e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-1-6-eb6896cb23\" (UID: \"ecc4cd2647827a979411b3ee5d2f4d9e\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:13.362085 kubelet[2420]: I0706 23:10:13.361825 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e79da8afb5eebc2ac610656a95a362eb-kubeconfig\") pod \"kube-scheduler-ci-4230-2-1-6-eb6896cb23\" (UID: \"e79da8afb5eebc2ac610656a95a362eb\") " pod="kube-system/kube-scheduler-ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:13.431576 kubelet[2420]: I0706 23:10:13.431519 2420 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:13.432106 kubelet[2420]: E0706 23:10:13.432058 2420 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://78.47.124.97:6443/api/v1/nodes\": dial tcp 78.47.124.97:6443: connect: connection refused" node="ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:13.510699 containerd[1487]: time="2025-07-06T23:10:13.510559513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-1-6-eb6896cb23,Uid:2fb31c15171711b288a04341e6885dd6,Namespace:kube-system,Attempt:0,}" Jul 6 23:10:13.525690 containerd[1487]: time="2025-07-06T23:10:13.525629372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-1-6-eb6896cb23,Uid:ecc4cd2647827a979411b3ee5d2f4d9e,Namespace:kube-system,Attempt:0,}" Jul 6 23:10:13.532798 containerd[1487]: time="2025-07-06T23:10:13.532724933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-1-6-eb6896cb23,Uid:e79da8afb5eebc2ac610656a95a362eb,Namespace:kube-system,Attempt:0,}" Jul 6 23:10:13.660969 kubelet[2420]: E0706 23:10:13.660862 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.124.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-1-6-eb6896cb23?timeout=10s\": dial tcp 78.47.124.97:6443: connect: connection refused" interval="800ms" Jul 6 23:10:13.836636 kubelet[2420]: I0706 23:10:13.835896 2420 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:13.836636 kubelet[2420]: E0706 23:10:13.836459 2420 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://78.47.124.97:6443/api/v1/nodes\": dial tcp 78.47.124.97:6443: connect: connection refused" node="ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:13.931243 kubelet[2420]: E0706 23:10:13.931107 2420 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://78.47.124.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 78.47.124.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 6 23:10:14.032041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount215262680.mount: Deactivated successfully. 
Jul 6 23:10:14.040517 containerd[1487]: time="2025-07-06T23:10:14.040387589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:10:14.043480 containerd[1487]: time="2025-07-06T23:10:14.043372824Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jul 6 23:10:14.045681 containerd[1487]: time="2025-07-06T23:10:14.045569051Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:10:14.047617 containerd[1487]: time="2025-07-06T23:10:14.047551690Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:10:14.048852 containerd[1487]: time="2025-07-06T23:10:14.048788349Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:10:14.051032 containerd[1487]: time="2025-07-06T23:10:14.050533286Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:10:14.053154 containerd[1487]: time="2025-07-06T23:10:14.051472018Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:10:14.053154 containerd[1487]: time="2025-07-06T23:10:14.052692041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:10:14.055803 containerd[1487]: time="2025-07-06T23:10:14.055721306Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 545.056139ms" Jul 6 23:10:14.057443 containerd[1487]: time="2025-07-06T23:10:14.057409416Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 524.590986ms" Jul 6 23:10:14.061101 containerd[1487]: time="2025-07-06T23:10:14.061024858Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 535.278515ms" Jul 6 23:10:14.206905 containerd[1487]: time="2025-07-06T23:10:14.206812621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:10:14.207235 containerd[1487]: time="2025-07-06T23:10:14.207055122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:10:14.207235 containerd[1487]: time="2025-07-06T23:10:14.207102391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:10:14.207494 containerd[1487]: time="2025-07-06T23:10:14.207418834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:10:14.209720 containerd[1487]: time="2025-07-06T23:10:14.209610302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:10:14.209720 containerd[1487]: time="2025-07-06T23:10:14.209682204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:10:14.210017 containerd[1487]: time="2025-07-06T23:10:14.209700000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:10:14.210017 containerd[1487]: time="2025-07-06T23:10:14.209856842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:10:14.217579 containerd[1487]: time="2025-07-06T23:10:14.217374137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:10:14.217579 containerd[1487]: time="2025-07-06T23:10:14.217429723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:10:14.217579 containerd[1487]: time="2025-07-06T23:10:14.217462595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:10:14.217918 containerd[1487]: time="2025-07-06T23:10:14.217555013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:10:14.232412 systemd[1]: Started cri-containerd-587eda1d02b922e74451f1dca26cc8937c0cb9784f06576fdfd8e8e8e95561e3.scope - libcontainer container 587eda1d02b922e74451f1dca26cc8937c0cb9784f06576fdfd8e8e8e95561e3. Jul 6 23:10:14.237940 systemd[1]: Started cri-containerd-3f8bca974b87e6248a1755a229319c548bb24920094a4f9fee34e6d3f5d7bade.scope - libcontainer container 3f8bca974b87e6248a1755a229319c548bb24920094a4f9fee34e6d3f5d7bade. Jul 6 23:10:14.254452 systemd[1]: Started cri-containerd-a1a607ee3fa8fe35d3036d3ca12d5e99cac13050d606ce2121bf4ce5d4e5b45c.scope - libcontainer container a1a607ee3fa8fe35d3036d3ca12d5e99cac13050d606ce2121bf4ce5d4e5b45c. 
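The cri-containerd-<id>.scope units started above correspond to the three control-plane pod sandboxes created through containerd's CRI endpoint. A hedged sketch of querying that endpoint with the CRI v1 Go API; the socket path is containerd's usual default, and the ListPodSandbox call is illustrative rather than something the kubelet logs here:

```go
// Hedged sketch: the cri-containerd-*.scope units above back pod
// sandboxes created over containerd's CRI endpoint. This queries the
// same endpoint with the CRI v1 API; the socket path is assumed, not
// read from this host's configuration.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	sandboxes, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, sb := range sandboxes.Items {
		// Expect the three control-plane sandboxes from the log once they are up.
		log.Printf("sandbox %s/%s state=%v", sb.Metadata.Namespace, sb.Metadata.Name, sb.State)
	}
}
```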
Jul 6 23:10:14.299497 kubelet[2420]: E0706 23:10:14.299434 2420 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://78.47.124.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 78.47.124.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 6 23:10:14.300565 containerd[1487]: time="2025-07-06T23:10:14.300315479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-1-6-eb6896cb23,Uid:ecc4cd2647827a979411b3ee5d2f4d9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"587eda1d02b922e74451f1dca26cc8937c0cb9784f06576fdfd8e8e8e95561e3\"" Jul 6 23:10:14.309257 containerd[1487]: time="2025-07-06T23:10:14.309198482Z" level=info msg="CreateContainer within sandbox \"587eda1d02b922e74451f1dca26cc8937c0cb9784f06576fdfd8e8e8e95561e3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:10:14.313499 containerd[1487]: time="2025-07-06T23:10:14.313455808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-1-6-eb6896cb23,Uid:2fb31c15171711b288a04341e6885dd6,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f8bca974b87e6248a1755a229319c548bb24920094a4f9fee34e6d3f5d7bade\"" Jul 6 23:10:14.320484 containerd[1487]: time="2025-07-06T23:10:14.319810905Z" level=info msg="CreateContainer within sandbox \"3f8bca974b87e6248a1755a229319c548bb24920094a4f9fee34e6d3f5d7bade\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:10:14.321903 containerd[1487]: time="2025-07-06T23:10:14.321578356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-1-6-eb6896cb23,Uid:e79da8afb5eebc2ac610656a95a362eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1a607ee3fa8fe35d3036d3ca12d5e99cac13050d606ce2121bf4ce5d4e5b45c\"" Jul 6 23:10:14.325773 containerd[1487]: time="2025-07-06T23:10:14.325597940Z" level=info msg="CreateContainer within sandbox \"a1a607ee3fa8fe35d3036d3ca12d5e99cac13050d606ce2121bf4ce5d4e5b45c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:10:14.336312 containerd[1487]: time="2025-07-06T23:10:14.336264231Z" level=info msg="CreateContainer within sandbox \"587eda1d02b922e74451f1dca26cc8937c0cb9784f06576fdfd8e8e8e95561e3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0bfd455c97dc34fd618ab0e70c75c7064caf7ddee2a12d44fe77ab50e3576d86\"" Jul 6 23:10:14.338162 containerd[1487]: time="2025-07-06T23:10:14.337962738Z" level=info msg="StartContainer for \"0bfd455c97dc34fd618ab0e70c75c7064caf7ddee2a12d44fe77ab50e3576d86\"" Jul 6 23:10:14.343839 containerd[1487]: time="2025-07-06T23:10:14.343772568Z" level=info msg="CreateContainer within sandbox \"3f8bca974b87e6248a1755a229319c548bb24920094a4f9fee34e6d3f5d7bade\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9f496b1b3cc599263b713eca046114a3b0e9f0670dfb3599124f2a354ec45f43\"" Jul 6 23:10:14.346012 containerd[1487]: time="2025-07-06T23:10:14.344538862Z" level=info msg="StartContainer for \"9f496b1b3cc599263b713eca046114a3b0e9f0670dfb3599124f2a354ec45f43\"" Jul 6 23:10:14.350062 containerd[1487]: time="2025-07-06T23:10:14.350002175Z" level=info msg="CreateContainer within sandbox \"a1a607ee3fa8fe35d3036d3ca12d5e99cac13050d606ce2121bf4ce5d4e5b45c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"ac29005872a43fe53616148fa28434e4aa413827fe35d36a9e3ef42b86777b83\"" Jul 6 23:10:14.351510 containerd[1487]: time="2025-07-06T23:10:14.351470818Z" level=info msg="StartContainer for \"ac29005872a43fe53616148fa28434e4aa413827fe35d36a9e3ef42b86777b83\"" Jul 6 23:10:14.379374 systemd[1]: Started cri-containerd-0bfd455c97dc34fd618ab0e70c75c7064caf7ddee2a12d44fe77ab50e3576d86.scope - libcontainer container 0bfd455c97dc34fd618ab0e70c75c7064caf7ddee2a12d44fe77ab50e3576d86. Jul 6 23:10:14.387351 systemd[1]: Started cri-containerd-9f496b1b3cc599263b713eca046114a3b0e9f0670dfb3599124f2a354ec45f43.scope - libcontainer container 9f496b1b3cc599263b713eca046114a3b0e9f0670dfb3599124f2a354ec45f43. Jul 6 23:10:14.395343 systemd[1]: Started cri-containerd-ac29005872a43fe53616148fa28434e4aa413827fe35d36a9e3ef42b86777b83.scope - libcontainer container ac29005872a43fe53616148fa28434e4aa413827fe35d36a9e3ef42b86777b83. Jul 6 23:10:14.453895 containerd[1487]: time="2025-07-06T23:10:14.453835165Z" level=info msg="StartContainer for \"0bfd455c97dc34fd618ab0e70c75c7064caf7ddee2a12d44fe77ab50e3576d86\" returns successfully" Jul 6 23:10:14.463541 containerd[1487]: time="2025-07-06T23:10:14.463404401Z" level=info msg="StartContainer for \"ac29005872a43fe53616148fa28434e4aa413827fe35d36a9e3ef42b86777b83\" returns successfully" Jul 6 23:10:14.466070 kubelet[2420]: E0706 23:10:14.465683 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.124.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-1-6-eb6896cb23?timeout=10s\": dial tcp 78.47.124.97:6443: connect: connection refused" interval="1.6s" Jul 6 23:10:14.469594 kubelet[2420]: E0706 23:10:14.469557 2420 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://78.47.124.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 78.47.124.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 6 23:10:14.471950 kubelet[2420]: E0706 23:10:14.471919 2420 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://78.47.124.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-1-6-eb6896cb23&limit=500&resourceVersion=0\": dial tcp 78.47.124.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 6 23:10:14.474203 containerd[1487]: time="2025-07-06T23:10:14.474117880Z" level=info msg="StartContainer for \"9f496b1b3cc599263b713eca046114a3b0e9f0670dfb3599124f2a354ec45f43\" returns successfully" Jul 6 23:10:14.639401 kubelet[2420]: I0706 23:10:14.638754 2420 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:14.639401 kubelet[2420]: E0706 23:10:14.639098 2420 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://78.47.124.97:6443/api/v1/nodes\": dial tcp 78.47.124.97:6443: connect: connection refused" node="ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:15.113845 kubelet[2420]: E0706 23:10:15.113800 2420 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-1-6-eb6896cb23\" not found" node="ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:15.121056 kubelet[2420]: E0706 23:10:15.120525 2420 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-1-6-eb6896cb23\" not found" 
node="ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:15.121056 kubelet[2420]: E0706 23:10:15.120902 2420 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-1-6-eb6896cb23\" not found" node="ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:16.123492 kubelet[2420]: E0706 23:10:16.123451 2420 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-1-6-eb6896cb23\" not found" node="ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:16.124097 kubelet[2420]: E0706 23:10:16.123980 2420 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-1-6-eb6896cb23\" not found" node="ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:16.241599 kubelet[2420]: I0706 23:10:16.241566 2420 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:17.125375 kubelet[2420]: E0706 23:10:17.125209 2420 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-1-6-eb6896cb23\" not found" node="ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:17.548167 kubelet[2420]: E0706 23:10:17.547874 2420 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-2-1-6-eb6896cb23\" not found" node="ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:17.719140 kubelet[2420]: I0706 23:10:17.717318 2420 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:17.758091 kubelet[2420]: I0706 23:10:17.757539 2420 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:17.780375 kubelet[2420]: E0706 23:10:17.780318 2420 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-1-6-eb6896cb23\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:17.780375 kubelet[2420]: I0706 23:10:17.780363 2420 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:17.789008 kubelet[2420]: E0706 23:10:17.788728 2420 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-2-1-6-eb6896cb23\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:17.789008 kubelet[2420]: I0706 23:10:17.788762 2420 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:17.791511 kubelet[2420]: E0706 23:10:17.791472 2420 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-1-6-eb6896cb23\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:18.041089 kubelet[2420]: I0706 23:10:18.041030 2420 apiserver.go:52] "Watching apiserver" Jul 6 23:10:18.060662 kubelet[2420]: I0706 23:10:18.060614 2420 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:10:19.563216 systemd[1]: Reload requested from client PID 2704 ('systemctl') (unit session-7.scope)... Jul 6 23:10:19.563235 systemd[1]: Reloading... Jul 6 23:10:19.687167 zram_generator::config[2750]: No configuration found. 
Jul 6 23:10:19.787040 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:10:19.896066 systemd[1]: Reloading finished in 332 ms. Jul 6 23:10:19.926241 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:10:19.941011 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:10:19.941562 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:10:19.941656 systemd[1]: kubelet.service: Consumed 890ms CPU time, 127.7M memory peak. Jul 6 23:10:19.951639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:10:20.093118 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:10:20.104468 (kubelet)[2793]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:10:20.174863 kubelet[2793]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:10:20.174863 kubelet[2793]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:10:20.174863 kubelet[2793]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:10:20.174863 kubelet[2793]: I0706 23:10:20.173204 2793 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:10:20.184858 kubelet[2793]: I0706 23:10:20.184821 2793 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 6 23:10:20.185016 kubelet[2793]: I0706 23:10:20.185005 2793 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:10:20.185361 kubelet[2793]: I0706 23:10:20.185335 2793 server.go:956] "Client rotation is on, will bootstrap in background" Jul 6 23:10:20.186985 kubelet[2793]: I0706 23:10:20.186960 2793 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 6 23:10:20.189910 kubelet[2793]: I0706 23:10:20.189880 2793 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:10:20.193991 kubelet[2793]: E0706 23:10:20.193956 2793 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:10:20.194171 kubelet[2793]: I0706 23:10:20.194146 2793 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:10:20.197270 kubelet[2793]: I0706 23:10:20.197222 2793 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:10:20.197596 kubelet[2793]: I0706 23:10:20.197556 2793 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:10:20.197967 kubelet[2793]: I0706 23:10:20.197604 2793 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-1-6-eb6896cb23","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:10:20.198060 kubelet[2793]: I0706 23:10:20.197977 2793 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:10:20.198060 kubelet[2793]: I0706 23:10:20.197990 2793 container_manager_linux.go:303] "Creating device plugin manager" Jul 6 23:10:20.198060 kubelet[2793]: I0706 23:10:20.198052 2793 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:10:20.198327 kubelet[2793]: I0706 23:10:20.198300 2793 kubelet.go:480] "Attempting to sync node with API server" Jul 6 23:10:20.198375 kubelet[2793]: I0706 23:10:20.198327 2793 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:10:20.198404 kubelet[2793]: I0706 23:10:20.198388 2793 kubelet.go:386] "Adding apiserver pod source" Jul 6 23:10:20.198461 kubelet[2793]: I0706 23:10:20.198407 2793 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:10:20.205303 kubelet[2793]: I0706 23:10:20.205270 2793 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 6 23:10:20.205952 kubelet[2793]: I0706 23:10:20.205914 2793 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 6 23:10:20.209616 kubelet[2793]: I0706 23:10:20.209589 2793 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:10:20.209703 kubelet[2793]: I0706 23:10:20.209657 2793 server.go:1289] "Started kubelet" Jul 6 23:10:20.211300 kubelet[2793]: I0706 23:10:20.211159 2793 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:10:20.211389 kubelet[2793]: I0706 
23:10:20.211372 2793 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:10:20.213254 kubelet[2793]: I0706 23:10:20.213225 2793 server.go:317] "Adding debug handlers to kubelet server" Jul 6 23:10:20.219097 kubelet[2793]: I0706 23:10:20.218529 2793 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 6 23:10:20.219592 kubelet[2793]: I0706 23:10:20.219541 2793 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:10:20.219761 kubelet[2793]: I0706 23:10:20.219746 2793 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:10:20.224925 kubelet[2793]: I0706 23:10:20.224875 2793 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:10:20.229297 kubelet[2793]: I0706 23:10:20.229250 2793 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:10:20.229650 kubelet[2793]: E0706 23:10:20.229566 2793 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-1-6-eb6896cb23\" not found" Jul 6 23:10:20.245245 kubelet[2793]: I0706 23:10:20.245207 2793 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:10:20.245385 kubelet[2793]: I0706 23:10:20.245339 2793 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:10:20.253618 kubelet[2793]: I0706 23:10:20.253577 2793 factory.go:223] Registration of the systemd container factory successfully Jul 6 23:10:20.253919 kubelet[2793]: I0706 23:10:20.253890 2793 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:10:20.254834 kubelet[2793]: I0706 23:10:20.254791 2793 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 6 23:10:20.254834 kubelet[2793]: I0706 23:10:20.254819 2793 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 6 23:10:20.254834 kubelet[2793]: I0706 23:10:20.254836 2793 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 6 23:10:20.254834 kubelet[2793]: I0706 23:10:20.254842 2793 kubelet.go:2436] "Starting kubelet main sync loop" Jul 6 23:10:20.255149 kubelet[2793]: E0706 23:10:20.255115 2793 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:10:20.265291 kubelet[2793]: I0706 23:10:20.265254 2793 factory.go:223] Registration of the containerd container factory successfully Jul 6 23:10:20.297638 kubelet[2793]: E0706 23:10:20.297580 2793 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:10:20.345501 kubelet[2793]: I0706 23:10:20.345461 2793 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:10:20.345501 kubelet[2793]: I0706 23:10:20.345481 2793 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:10:20.345501 kubelet[2793]: I0706 23:10:20.345502 2793 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:10:20.345796 kubelet[2793]: I0706 23:10:20.345643 2793 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:10:20.345796 kubelet[2793]: I0706 23:10:20.345653 2793 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:10:20.345796 kubelet[2793]: I0706 23:10:20.345671 2793 policy_none.go:49] "None policy: Start" Jul 6 23:10:20.345796 kubelet[2793]: I0706 23:10:20.345680 2793 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:10:20.345796 kubelet[2793]: I0706 23:10:20.345689 2793 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:10:20.345796 kubelet[2793]: I0706 23:10:20.345771 2793 state_mem.go:75] "Updated machine memory state" Jul 6 23:10:20.351186 kubelet[2793]: E0706 23:10:20.351107 2793 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 6 23:10:20.351362 kubelet[2793]: I0706 23:10:20.351334 2793 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:10:20.351433 kubelet[2793]: I0706 23:10:20.351350 2793 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:10:20.352231 kubelet[2793]: I0706 23:10:20.352199 2793 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:10:20.360988 kubelet[2793]: I0706 23:10:20.358942 2793 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:20.360988 kubelet[2793]: I0706 23:10:20.359616 2793 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:20.360988 kubelet[2793]: E0706 23:10:20.359725 2793 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 6 23:10:20.360988 kubelet[2793]: I0706 23:10:20.360336 2793 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:20.445880 kubelet[2793]: I0706 23:10:20.445618 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2fb31c15171711b288a04341e6885dd6-ca-certs\") pod \"kube-apiserver-ci-4230-2-1-6-eb6896cb23\" (UID: \"2fb31c15171711b288a04341e6885dd6\") " pod="kube-system/kube-apiserver-ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:20.461666 kubelet[2793]: I0706 23:10:20.461618 2793 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:20.472197 kubelet[2793]: I0706 23:10:20.472147 2793 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:20.472486 kubelet[2793]: I0706 23:10:20.472240 2793 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:20.545905 kubelet[2793]: I0706 23:10:20.545834 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e79da8afb5eebc2ac610656a95a362eb-kubeconfig\") pod \"kube-scheduler-ci-4230-2-1-6-eb6896cb23\" (UID: \"e79da8afb5eebc2ac610656a95a362eb\") " pod="kube-system/kube-scheduler-ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:20.546092 kubelet[2793]: I0706 23:10:20.545931 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2fb31c15171711b288a04341e6885dd6-k8s-certs\") pod \"kube-apiserver-ci-4230-2-1-6-eb6896cb23\" (UID: \"2fb31c15171711b288a04341e6885dd6\") " pod="kube-system/kube-apiserver-ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:20.546092 kubelet[2793]: I0706 23:10:20.545951 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ecc4cd2647827a979411b3ee5d2f4d9e-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-1-6-eb6896cb23\" (UID: \"ecc4cd2647827a979411b3ee5d2f4d9e\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:20.546092 kubelet[2793]: I0706 23:10:20.545978 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2fb31c15171711b288a04341e6885dd6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-1-6-eb6896cb23\" (UID: \"2fb31c15171711b288a04341e6885dd6\") " pod="kube-system/kube-apiserver-ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:20.546092 kubelet[2793]: I0706 23:10:20.545994 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ecc4cd2647827a979411b3ee5d2f4d9e-ca-certs\") pod \"kube-controller-manager-ci-4230-2-1-6-eb6896cb23\" (UID: \"ecc4cd2647827a979411b3ee5d2f4d9e\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:20.546092 kubelet[2793]: I0706 23:10:20.546010 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ecc4cd2647827a979411b3ee5d2f4d9e-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-1-6-eb6896cb23\" 
(UID: \"ecc4cd2647827a979411b3ee5d2f4d9e\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:20.546449 kubelet[2793]: I0706 23:10:20.546027 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ecc4cd2647827a979411b3ee5d2f4d9e-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-1-6-eb6896cb23\" (UID: \"ecc4cd2647827a979411b3ee5d2f4d9e\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:20.546449 kubelet[2793]: I0706 23:10:20.546051 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ecc4cd2647827a979411b3ee5d2f4d9e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-1-6-eb6896cb23\" (UID: \"ecc4cd2647827a979411b3ee5d2f4d9e\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:20.565777 sudo[2834]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 6 23:10:20.566706 sudo[2834]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 6 23:10:21.072017 sudo[2834]: pam_unix(sudo:session): session closed for user root Jul 6 23:10:21.201731 kubelet[2793]: I0706 23:10:21.201477 2793 apiserver.go:52] "Watching apiserver" Jul 6 23:10:21.246374 kubelet[2793]: I0706 23:10:21.246306 2793 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:10:21.318288 kubelet[2793]: I0706 23:10:21.317289 2793 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:21.334830 kubelet[2793]: E0706 23:10:21.334497 2793 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-2-1-6-eb6896cb23\" already exists" pod="kube-system/kube-controller-manager-ci-4230-2-1-6-eb6896cb23" Jul 6 23:10:21.346051 kubelet[2793]: I0706 23:10:21.345857 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-2-1-6-eb6896cb23" podStartSLOduration=1.345842076 podStartE2EDuration="1.345842076s" podCreationTimestamp="2025-07-06 23:10:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:10:21.345778168 +0000 UTC m=+1.234886178" watchObservedRunningTime="2025-07-06 23:10:21.345842076 +0000 UTC m=+1.234950086" Jul 6 23:10:21.360341 kubelet[2793]: I0706 23:10:21.359869 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-2-1-6-eb6896cb23" podStartSLOduration=1.359846726 podStartE2EDuration="1.359846726s" podCreationTimestamp="2025-07-06 23:10:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:10:21.35937097 +0000 UTC m=+1.248479060" watchObservedRunningTime="2025-07-06 23:10:21.359846726 +0000 UTC m=+1.248954776" Jul 6 23:10:21.395141 kubelet[2793]: I0706 23:10:21.393372 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-2-1-6-eb6896cb23" podStartSLOduration=1.393353575 podStartE2EDuration="1.393353575s" podCreationTimestamp="2025-07-06 23:10:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:10:21.373705241 +0000 UTC m=+1.262813291" watchObservedRunningTime="2025-07-06 23:10:21.393353575 +0000 UTC m=+1.282461585" Jul 6 23:10:23.395373 sudo[1881]: pam_unix(sudo:session): session closed for user root Jul 6 23:10:23.579590 sshd[1880]: Connection closed by 139.178.89.65 port 51380 Jul 6 23:10:23.579444 sshd-session[1878]: pam_unix(sshd:session): session closed for user core Jul 6 23:10:23.584976 systemd[1]: sshd@6-78.47.124.97:22-139.178.89.65:51380.service: Deactivated successfully. Jul 6 23:10:23.589555 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:10:23.589793 systemd[1]: session-7.scope: Consumed 8.098s CPU time, 265.4M memory peak. Jul 6 23:10:23.591757 systemd-logind[1477]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:10:23.593165 systemd-logind[1477]: Removed session 7. Jul 6 23:10:25.294200 kubelet[2793]: I0706 23:10:25.294160 2793 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:10:25.295018 containerd[1487]: time="2025-07-06T23:10:25.294974434Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:10:25.295383 kubelet[2793]: I0706 23:10:25.295358 2793 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:10:25.841915 systemd[1]: Created slice kubepods-besteffort-pode603690a_5f81_4816_9979_00a999668614.slice - libcontainer container kubepods-besteffort-pode603690a_5f81_4816_9979_00a999668614.slice. Jul 6 23:10:25.853260 systemd[1]: Created slice kubepods-burstable-poda377642c_ad77_4d5c_9fb3_88cc630d987d.slice - libcontainer container kubepods-burstable-poda377642c_ad77_4d5c_9fb3_88cc630d987d.slice. 
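
At 23:10:25 the kubelet is handed PodCIDR 192.168.0.0/24 and pushes it to the runtime. A small Python aside showing what that range covers, using only the ipaddress module; the CIDR is the one from the log, the sample address is made up:

    import ipaddress

    pod_cidr = ipaddress.ip_network("192.168.0.0/24")   # newPodCIDR reported by the kubelet above

    print("pod address range:", pod_cidr[1], "-", pod_cidr[-2])   # skipping network/broadcast
    print("number of addresses:", pod_cidr.num_addresses)
    print("contains 192.168.0.17:", ipaddress.ip_address("192.168.0.17") in pod_cidr)
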
Jul 6 23:10:25.884781 kubelet[2793]: I0706 23:10:25.884730 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e603690a-5f81-4816-9979-00a999668614-lib-modules\") pod \"kube-proxy-bzqkh\" (UID: \"e603690a-5f81-4816-9979-00a999668614\") " pod="kube-system/kube-proxy-bzqkh" Jul 6 23:10:25.884935 kubelet[2793]: I0706 23:10:25.884810 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-hostproc\") pod \"cilium-25j2r\" (UID: \"a377642c-ad77-4d5c-9fb3-88cc630d987d\") " pod="kube-system/cilium-25j2r" Jul 6 23:10:25.884935 kubelet[2793]: I0706 23:10:25.884831 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-cilium-cgroup\") pod \"cilium-25j2r\" (UID: \"a377642c-ad77-4d5c-9fb3-88cc630d987d\") " pod="kube-system/cilium-25j2r" Jul 6 23:10:25.884935 kubelet[2793]: I0706 23:10:25.884847 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a377642c-ad77-4d5c-9fb3-88cc630d987d-cilium-config-path\") pod \"cilium-25j2r\" (UID: \"a377642c-ad77-4d5c-9fb3-88cc630d987d\") " pod="kube-system/cilium-25j2r" Jul 6 23:10:25.884935 kubelet[2793]: I0706 23:10:25.884883 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-host-proc-sys-kernel\") pod \"cilium-25j2r\" (UID: \"a377642c-ad77-4d5c-9fb3-88cc630d987d\") " pod="kube-system/cilium-25j2r" Jul 6 23:10:25.884935 kubelet[2793]: I0706 23:10:25.884902 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-etc-cni-netd\") pod \"cilium-25j2r\" (UID: \"a377642c-ad77-4d5c-9fb3-88cc630d987d\") " pod="kube-system/cilium-25j2r" Jul 6 23:10:25.884935 kubelet[2793]: I0706 23:10:25.884916 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-lib-modules\") pod \"cilium-25j2r\" (UID: \"a377642c-ad77-4d5c-9fb3-88cc630d987d\") " pod="kube-system/cilium-25j2r" Jul 6 23:10:25.885081 kubelet[2793]: I0706 23:10:25.884935 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a377642c-ad77-4d5c-9fb3-88cc630d987d-hubble-tls\") pod \"cilium-25j2r\" (UID: \"a377642c-ad77-4d5c-9fb3-88cc630d987d\") " pod="kube-system/cilium-25j2r" Jul 6 23:10:25.885081 kubelet[2793]: I0706 23:10:25.884957 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e603690a-5f81-4816-9979-00a999668614-kube-proxy\") pod \"kube-proxy-bzqkh\" (UID: \"e603690a-5f81-4816-9979-00a999668614\") " pod="kube-system/kube-proxy-bzqkh" Jul 6 23:10:25.885081 kubelet[2793]: I0706 23:10:25.884971 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-cilium-run\") pod \"cilium-25j2r\" (UID: \"a377642c-ad77-4d5c-9fb3-88cc630d987d\") " pod="kube-system/cilium-25j2r" Jul 6 23:10:25.885081 kubelet[2793]: I0706 23:10:25.884985 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-bpf-maps\") pod \"cilium-25j2r\" (UID: \"a377642c-ad77-4d5c-9fb3-88cc630d987d\") " pod="kube-system/cilium-25j2r" Jul 6 23:10:25.885081 kubelet[2793]: I0706 23:10:25.885000 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-xtables-lock\") pod \"cilium-25j2r\" (UID: \"a377642c-ad77-4d5c-9fb3-88cc630d987d\") " pod="kube-system/cilium-25j2r" Jul 6 23:10:25.885081 kubelet[2793]: I0706 23:10:25.885015 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4jt4\" (UniqueName: \"kubernetes.io/projected/a377642c-ad77-4d5c-9fb3-88cc630d987d-kube-api-access-l4jt4\") pod \"cilium-25j2r\" (UID: \"a377642c-ad77-4d5c-9fb3-88cc630d987d\") " pod="kube-system/cilium-25j2r" Jul 6 23:10:25.885262 kubelet[2793]: I0706 23:10:25.885042 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e603690a-5f81-4816-9979-00a999668614-xtables-lock\") pod \"kube-proxy-bzqkh\" (UID: \"e603690a-5f81-4816-9979-00a999668614\") " pod="kube-system/kube-proxy-bzqkh" Jul 6 23:10:25.885262 kubelet[2793]: I0706 23:10:25.885062 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln2p2\" (UniqueName: \"kubernetes.io/projected/e603690a-5f81-4816-9979-00a999668614-kube-api-access-ln2p2\") pod \"kube-proxy-bzqkh\" (UID: \"e603690a-5f81-4816-9979-00a999668614\") " pod="kube-system/kube-proxy-bzqkh" Jul 6 23:10:25.885262 kubelet[2793]: I0706 23:10:25.885078 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-cni-path\") pod \"cilium-25j2r\" (UID: \"a377642c-ad77-4d5c-9fb3-88cc630d987d\") " pod="kube-system/cilium-25j2r" Jul 6 23:10:25.885262 kubelet[2793]: I0706 23:10:25.885095 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a377642c-ad77-4d5c-9fb3-88cc630d987d-clustermesh-secrets\") pod \"cilium-25j2r\" (UID: \"a377642c-ad77-4d5c-9fb3-88cc630d987d\") " pod="kube-system/cilium-25j2r" Jul 6 23:10:25.885262 kubelet[2793]: I0706 23:10:25.885113 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-host-proc-sys-net\") pod \"cilium-25j2r\" (UID: \"a377642c-ad77-4d5c-9fb3-88cc630d987d\") " pod="kube-system/cilium-25j2r" Jul 6 23:10:26.150626 containerd[1487]: time="2025-07-06T23:10:26.150561367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bzqkh,Uid:e603690a-5f81-4816-9979-00a999668614,Namespace:kube-system,Attempt:0,}" Jul 6 23:10:26.159211 containerd[1487]: time="2025-07-06T23:10:26.159055402Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-25j2r,Uid:a377642c-ad77-4d5c-9fb3-88cc630d987d,Namespace:kube-system,Attempt:0,}" Jul 6 23:10:26.182353 containerd[1487]: time="2025-07-06T23:10:26.182236620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:10:26.182353 containerd[1487]: time="2025-07-06T23:10:26.182304930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:10:26.182353 containerd[1487]: time="2025-07-06T23:10:26.182322568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:10:26.182931 containerd[1487]: time="2025-07-06T23:10:26.182450590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:10:26.207488 containerd[1487]: time="2025-07-06T23:10:26.207344893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:10:26.207628 containerd[1487]: time="2025-07-06T23:10:26.207452958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:10:26.207628 containerd[1487]: time="2025-07-06T23:10:26.207469996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:10:26.207628 containerd[1487]: time="2025-07-06T23:10:26.207556944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:10:26.220363 systemd[1]: Started cri-containerd-e8c461b3fc2b586a537dccaf70e96d897eb7712fc4c73d2f55d18c492ff3a922.scope - libcontainer container e8c461b3fc2b586a537dccaf70e96d897eb7712fc4c73d2f55d18c492ff3a922. Jul 6 23:10:26.238338 systemd[1]: Started cri-containerd-649db00ca1ba7f0079922e9af5290a330980f1309914e3a0793ee93b5b025bf2.scope - libcontainer container 649db00ca1ba7f0079922e9af5290a330980f1309914e3a0793ee93b5b025bf2. 
Jul 6 23:10:26.279065 containerd[1487]: time="2025-07-06T23:10:26.278734414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bzqkh,Uid:e603690a-5f81-4816-9979-00a999668614,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8c461b3fc2b586a537dccaf70e96d897eb7712fc4c73d2f55d18c492ff3a922\"" Jul 6 23:10:26.288523 containerd[1487]: time="2025-07-06T23:10:26.288361772Z" level=info msg="CreateContainer within sandbox \"e8c461b3fc2b586a537dccaf70e96d897eb7712fc4c73d2f55d18c492ff3a922\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:10:26.295558 containerd[1487]: time="2025-07-06T23:10:26.295435001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-25j2r,Uid:a377642c-ad77-4d5c-9fb3-88cc630d987d,Namespace:kube-system,Attempt:0,} returns sandbox id \"649db00ca1ba7f0079922e9af5290a330980f1309914e3a0793ee93b5b025bf2\"" Jul 6 23:10:26.301685 containerd[1487]: time="2025-07-06T23:10:26.301600715Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 6 23:10:26.318782 containerd[1487]: time="2025-07-06T23:10:26.318637856Z" level=info msg="CreateContainer within sandbox \"e8c461b3fc2b586a537dccaf70e96d897eb7712fc4c73d2f55d18c492ff3a922\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1c6a46c2e6558b101189922c411038039f40c0fa523876f40535b6353d49896d\"" Jul 6 23:10:26.319520 containerd[1487]: time="2025-07-06T23:10:26.319462903Z" level=info msg="StartContainer for \"1c6a46c2e6558b101189922c411038039f40c0fa523876f40535b6353d49896d\"" Jul 6 23:10:26.381041 systemd[1]: Started cri-containerd-1c6a46c2e6558b101189922c411038039f40c0fa523876f40535b6353d49896d.scope - libcontainer container 1c6a46c2e6558b101189922c411038039f40c0fa523876f40535b6353d49896d. Jul 6 23:10:26.430360 containerd[1487]: time="2025-07-06T23:10:26.430206302Z" level=info msg="StartContainer for \"1c6a46c2e6558b101189922c411038039f40c0fa523876f40535b6353d49896d\" returns successfully" Jul 6 23:10:26.573060 systemd[1]: Created slice kubepods-besteffort-pod8dff44f5_4a37_4dc5_bcde_484af4530e1f.slice - libcontainer container kubepods-besteffort-pod8dff44f5_4a37_4dc5_bcde_484af4530e1f.slice. Jul 6 23:10:26.593656 kubelet[2793]: I0706 23:10:26.593524 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8dff44f5-4a37-4dc5-bcde-484af4530e1f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-fw8bs\" (UID: \"8dff44f5-4a37-4dc5-bcde-484af4530e1f\") " pod="kube-system/cilium-operator-6c4d7847fc-fw8bs" Jul 6 23:10:26.593656 kubelet[2793]: I0706 23:10:26.593579 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph2w8\" (UniqueName: \"kubernetes.io/projected/8dff44f5-4a37-4dc5-bcde-484af4530e1f-kube-api-access-ph2w8\") pod \"cilium-operator-6c4d7847fc-fw8bs\" (UID: \"8dff44f5-4a37-4dc5-bcde-484af4530e1f\") " pod="kube-system/cilium-operator-6c4d7847fc-fw8bs" Jul 6 23:10:26.876232 containerd[1487]: time="2025-07-06T23:10:26.876144449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fw8bs,Uid:8dff44f5-4a37-4dc5-bcde-484af4530e1f,Namespace:kube-system,Attempt:0,}" Jul 6 23:10:26.915057 containerd[1487]: time="2025-07-06T23:10:26.914603730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:10:26.915057 containerd[1487]: time="2025-07-06T23:10:26.914664682Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:10:26.915057 containerd[1487]: time="2025-07-06T23:10:26.914679760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:10:26.915057 containerd[1487]: time="2025-07-06T23:10:26.914766308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:10:26.937783 systemd[1]: Started cri-containerd-c99567c3134a2c2719ec339028fd1c4b8b3bd3c6258c1a916a2fc6ff31eaeab6.scope - libcontainer container c99567c3134a2c2719ec339028fd1c4b8b3bd3c6258c1a916a2fc6ff31eaeab6. Jul 6 23:10:27.001216 containerd[1487]: time="2025-07-06T23:10:27.001066748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fw8bs,Uid:8dff44f5-4a37-4dc5-bcde-484af4530e1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"c99567c3134a2c2719ec339028fd1c4b8b3bd3c6258c1a916a2fc6ff31eaeab6\"" Jul 6 23:10:29.189150 kubelet[2793]: I0706 23:10:29.188493 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bzqkh" podStartSLOduration=4.188470231 podStartE2EDuration="4.188470231s" podCreationTimestamp="2025-07-06 23:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:10:27.355667435 +0000 UTC m=+7.244775485" watchObservedRunningTime="2025-07-06 23:10:29.188470231 +0000 UTC m=+9.077578281" Jul 6 23:10:38.060485 systemd[1]: Started sshd@7-78.47.124.97:22-185.156.73.234:54412.service - OpenSSH per-connection server daemon (185.156.73.234:54412). Jul 6 23:10:39.042887 sshd[3174]: Connection closed by authenticating user root 185.156.73.234 port 54412 [preauth] Jul 6 23:10:39.046541 systemd[1]: sshd@7-78.47.124.97:22-185.156.73.234:54412.service: Deactivated successfully. Jul 6 23:10:42.103696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3150970056.mount: Deactivated successfully. 
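
The podStartSLOduration=4.188470231 figure for kube-proxy can be reproduced from the podCreationTimestamp and watchObservedRunningTime printed in that same record. A quick check in Python, with both values copied from the log (nanoseconds truncated to microseconds):

    from datetime import datetime, timezone

    created = datetime(2025, 7, 6, 23, 10, 25, tzinfo=timezone.utc)       # podCreationTimestamp
    watched_raw = "2025-07-06 23:10:29.188470231"                         # watchObservedRunningTime
    watched = datetime.strptime(watched_raw[:26],
                                "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

    # prints ~4.18847 s, matching podStartSLOduration up to the truncated nanoseconds
    print("kube-proxy startup duration:", (watched - created).total_seconds(), "s")
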
Jul 6 23:10:43.695207 containerd[1487]: time="2025-07-06T23:10:43.693576826Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:10:43.695639 containerd[1487]: time="2025-07-06T23:10:43.695475986Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 6 23:10:43.696837 containerd[1487]: time="2025-07-06T23:10:43.696777970Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:10:43.699013 containerd[1487]: time="2025-07-06T23:10:43.698957038Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 17.397315688s" Jul 6 23:10:43.699013 containerd[1487]: time="2025-07-06T23:10:43.699011116Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 6 23:10:43.702171 containerd[1487]: time="2025-07-06T23:10:43.700907596Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 6 23:10:43.703881 containerd[1487]: time="2025-07-06T23:10:43.703834152Z" level=info msg="CreateContainer within sandbox \"649db00ca1ba7f0079922e9af5290a330980f1309914e3a0793ee93b5b025bf2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:10:43.720587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2500548195.mount: Deactivated successfully. Jul 6 23:10:43.722248 containerd[1487]: time="2025-07-06T23:10:43.722021661Z" level=info msg="CreateContainer within sandbox \"649db00ca1ba7f0079922e9af5290a330980f1309914e3a0793ee93b5b025bf2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1b9fdf03ccdcaa75f553eb85a9f4c0074772c18b5eb6ba4b41cfbbfdfa984eff\"" Jul 6 23:10:43.722885 containerd[1487]: time="2025-07-06T23:10:43.722857746Z" level=info msg="StartContainer for \"1b9fdf03ccdcaa75f553eb85a9f4c0074772c18b5eb6ba4b41cfbbfdfa984eff\"" Jul 6 23:10:43.756576 systemd[1]: run-containerd-runc-k8s.io-1b9fdf03ccdcaa75f553eb85a9f4c0074772c18b5eb6ba4b41cfbbfdfa984eff-runc.n4YnHr.mount: Deactivated successfully. Jul 6 23:10:43.764591 systemd[1]: Started cri-containerd-1b9fdf03ccdcaa75f553eb85a9f4c0074772c18b5eb6ba4b41cfbbfdfa984eff.scope - libcontainer container 1b9fdf03ccdcaa75f553eb85a9f4c0074772c18b5eb6ba4b41cfbbfdfa984eff. Jul 6 23:10:43.799529 containerd[1487]: time="2025-07-06T23:10:43.799298067Z" level=info msg="StartContainer for \"1b9fdf03ccdcaa75f553eb85a9f4c0074772c18b5eb6ba4b41cfbbfdfa984eff\" returns successfully" Jul 6 23:10:43.819320 systemd[1]: cri-containerd-1b9fdf03ccdcaa75f553eb85a9f4c0074772c18b5eb6ba4b41cfbbfdfa984eff.scope: Deactivated successfully. 
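
The cilium image pull above reports 157646710 bytes read over 17.397315688s. Back-of-the-envelope arithmetic on those two figures, both copied straight from the log:

    bytes_read = 157_646_710      # "bytes read" when the pull stopped
    duration_s = 17.397315688     # "in 17.397315688s" from the Pulled record
    print(f"average pull throughput: {bytes_read / duration_s / 1_000_000:.1f} MB/s")  # roughly 9.1 MB/s
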
Jul 6 23:10:44.020758 containerd[1487]: time="2025-07-06T23:10:44.020480377Z" level=info msg="shim disconnected" id=1b9fdf03ccdcaa75f553eb85a9f4c0074772c18b5eb6ba4b41cfbbfdfa984eff namespace=k8s.io Jul 6 23:10:44.020758 containerd[1487]: time="2025-07-06T23:10:44.020564613Z" level=warning msg="cleaning up after shim disconnected" id=1b9fdf03ccdcaa75f553eb85a9f4c0074772c18b5eb6ba4b41cfbbfdfa984eff namespace=k8s.io Jul 6 23:10:44.020758 containerd[1487]: time="2025-07-06T23:10:44.020577773Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:10:44.396490 containerd[1487]: time="2025-07-06T23:10:44.396427687Z" level=info msg="CreateContainer within sandbox \"649db00ca1ba7f0079922e9af5290a330980f1309914e3a0793ee93b5b025bf2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:10:44.411282 containerd[1487]: time="2025-07-06T23:10:44.411225402Z" level=info msg="CreateContainer within sandbox \"649db00ca1ba7f0079922e9af5290a330980f1309914e3a0793ee93b5b025bf2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ac817bb7926dd706f6cc16477fb39b6a4609d06ff6eff53a4e89480ca76700ff\"" Jul 6 23:10:44.412459 containerd[1487]: time="2025-07-06T23:10:44.412317440Z" level=info msg="StartContainer for \"ac817bb7926dd706f6cc16477fb39b6a4609d06ff6eff53a4e89480ca76700ff\"" Jul 6 23:10:44.447464 systemd[1]: Started cri-containerd-ac817bb7926dd706f6cc16477fb39b6a4609d06ff6eff53a4e89480ca76700ff.scope - libcontainer container ac817bb7926dd706f6cc16477fb39b6a4609d06ff6eff53a4e89480ca76700ff. Jul 6 23:10:44.484460 containerd[1487]: time="2025-07-06T23:10:44.484252411Z" level=info msg="StartContainer for \"ac817bb7926dd706f6cc16477fb39b6a4609d06ff6eff53a4e89480ca76700ff\" returns successfully" Jul 6 23:10:44.501272 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:10:44.501720 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:10:44.502275 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:10:44.509636 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:10:44.510471 systemd[1]: cri-containerd-ac817bb7926dd706f6cc16477fb39b6a4609d06ff6eff53a4e89480ca76700ff.scope: Deactivated successfully. Jul 6 23:10:44.531582 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:10:44.542225 containerd[1487]: time="2025-07-06T23:10:44.542118159Z" level=info msg="shim disconnected" id=ac817bb7926dd706f6cc16477fb39b6a4609d06ff6eff53a4e89480ca76700ff namespace=k8s.io Jul 6 23:10:44.542225 containerd[1487]: time="2025-07-06T23:10:44.542208156Z" level=warning msg="cleaning up after shim disconnected" id=ac817bb7926dd706f6cc16477fb39b6a4609d06ff6eff53a4e89480ca76700ff namespace=k8s.io Jul 6 23:10:44.542225 containerd[1487]: time="2025-07-06T23:10:44.542219715Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:10:44.717210 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b9fdf03ccdcaa75f553eb85a9f4c0074772c18b5eb6ba4b41cfbbfdfa984eff-rootfs.mount: Deactivated successfully. 
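
The apply-sysctl-overwrites init container, and the systemd-sysctl restart it triggers, both boil down to writing files under /proc/sys. A tiny illustration of reading a parameter the same way sysctl(8) does; the parameter name is only an example and nothing cilium-specific:

    from pathlib import Path

    def read_sysctl(name: str) -> str:
        """Read a kernel parameter by mapping the dotted name onto /proc/sys."""
        return Path("/proc/sys", *name.split(".")).read_text().strip()

    if __name__ == "__main__":
        # ip_forward is only an example; any dotted sysctl name works the same way
        print("net.ipv4.ip_forward =", read_sysctl("net.ipv4.ip_forward"))
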
Jul 6 23:10:45.403935 containerd[1487]: time="2025-07-06T23:10:45.403869798Z" level=info msg="CreateContainer within sandbox \"649db00ca1ba7f0079922e9af5290a330980f1309914e3a0793ee93b5b025bf2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 6 23:10:45.429634 containerd[1487]: time="2025-07-06T23:10:45.429505321Z" level=info msg="CreateContainer within sandbox \"649db00ca1ba7f0079922e9af5290a330980f1309914e3a0793ee93b5b025bf2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3a0819715d81d4cc6f3d674cc1fec721b6077f0a3c2af9622601bf9f5e6b4eec\"" Jul 6 23:10:45.430411 containerd[1487]: time="2025-07-06T23:10:45.430371172Z" level=info msg="StartContainer for \"3a0819715d81d4cc6f3d674cc1fec721b6077f0a3c2af9622601bf9f5e6b4eec\"" Jul 6 23:10:45.466492 systemd[1]: Started cri-containerd-3a0819715d81d4cc6f3d674cc1fec721b6077f0a3c2af9622601bf9f5e6b4eec.scope - libcontainer container 3a0819715d81d4cc6f3d674cc1fec721b6077f0a3c2af9622601bf9f5e6b4eec. Jul 6 23:10:45.503019 containerd[1487]: time="2025-07-06T23:10:45.502871732Z" level=info msg="StartContainer for \"3a0819715d81d4cc6f3d674cc1fec721b6077f0a3c2af9622601bf9f5e6b4eec\" returns successfully" Jul 6 23:10:45.507084 systemd[1]: cri-containerd-3a0819715d81d4cc6f3d674cc1fec721b6077f0a3c2af9622601bf9f5e6b4eec.scope: Deactivated successfully. Jul 6 23:10:45.537678 containerd[1487]: time="2025-07-06T23:10:45.537454949Z" level=info msg="shim disconnected" id=3a0819715d81d4cc6f3d674cc1fec721b6077f0a3c2af9622601bf9f5e6b4eec namespace=k8s.io Jul 6 23:10:45.537678 containerd[1487]: time="2025-07-06T23:10:45.537527466Z" level=warning msg="cleaning up after shim disconnected" id=3a0819715d81d4cc6f3d674cc1fec721b6077f0a3c2af9622601bf9f5e6b4eec namespace=k8s.io Jul 6 23:10:45.537678 containerd[1487]: time="2025-07-06T23:10:45.537545066Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:10:45.717544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a0819715d81d4cc6f3d674cc1fec721b6077f0a3c2af9622601bf9f5e6b4eec-rootfs.mount: Deactivated successfully. Jul 6 23:10:46.413753 containerd[1487]: time="2025-07-06T23:10:46.413670262Z" level=info msg="CreateContainer within sandbox \"649db00ca1ba7f0079922e9af5290a330980f1309914e3a0793ee93b5b025bf2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 6 23:10:46.432621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3060615828.mount: Deactivated successfully. Jul 6 23:10:46.453311 containerd[1487]: time="2025-07-06T23:10:46.453100147Z" level=info msg="CreateContainer within sandbox \"649db00ca1ba7f0079922e9af5290a330980f1309914e3a0793ee93b5b025bf2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b116731ae2ebb81bdb077af2bcf08abc38675fbee374269d403214978c385901\"" Jul 6 23:10:46.456156 containerd[1487]: time="2025-07-06T23:10:46.455269881Z" level=info msg="StartContainer for \"b116731ae2ebb81bdb077af2bcf08abc38675fbee374269d403214978c385901\"" Jul 6 23:10:46.492531 systemd[1]: Started cri-containerd-b116731ae2ebb81bdb077af2bcf08abc38675fbee374269d403214978c385901.scope - libcontainer container b116731ae2ebb81bdb077af2bcf08abc38675fbee374269d403214978c385901. Jul 6 23:10:46.527103 systemd[1]: cri-containerd-b116731ae2ebb81bdb077af2bcf08abc38675fbee374269d403214978c385901.scope: Deactivated successfully. 
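
The mount-bpf-fs init container started above exists to make sure a BPF filesystem is mounted, conventionally at /sys/fs/bpf. A small check one could run on the node, standard library only; the fstype string "bpf" is the kernel's, the rest is illustrative:

    def bpf_mountpoints() -> list[str]:
        """Mountpoints whose filesystem type is 'bpf', read from /proc/mounts."""
        with open("/proc/mounts") as mounts:
            return [fields[1] for fields in (line.split() for line in mounts)
                    if fields[2] == "bpf"]

    if __name__ == "__main__":
        found = bpf_mountpoints()
        print("bpf filesystem mounted at:", ", ".join(found) if found else "nowhere yet")
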
Jul 6 23:10:46.529813 containerd[1487]: time="2025-07-06T23:10:46.529524590Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda377642c_ad77_4d5c_9fb3_88cc630d987d.slice/cri-containerd-b116731ae2ebb81bdb077af2bcf08abc38675fbee374269d403214978c385901.scope/memory.events\": no such file or directory" Jul 6 23:10:46.532302 containerd[1487]: time="2025-07-06T23:10:46.532253027Z" level=info msg="StartContainer for \"b116731ae2ebb81bdb077af2bcf08abc38675fbee374269d403214978c385901\" returns successfully" Jul 6 23:10:46.560517 containerd[1487]: time="2025-07-06T23:10:46.560398374Z" level=info msg="shim disconnected" id=b116731ae2ebb81bdb077af2bcf08abc38675fbee374269d403214978c385901 namespace=k8s.io Jul 6 23:10:46.560734 containerd[1487]: time="2025-07-06T23:10:46.560522610Z" level=warning msg="cleaning up after shim disconnected" id=b116731ae2ebb81bdb077af2bcf08abc38675fbee374269d403214978c385901 namespace=k8s.io Jul 6 23:10:46.560734 containerd[1487]: time="2025-07-06T23:10:46.560540690Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:10:47.418729 containerd[1487]: time="2025-07-06T23:10:47.418432853Z" level=info msg="CreateContainer within sandbox \"649db00ca1ba7f0079922e9af5290a330980f1309914e3a0793ee93b5b025bf2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 6 23:10:47.443432 containerd[1487]: time="2025-07-06T23:10:47.443380430Z" level=info msg="CreateContainer within sandbox \"649db00ca1ba7f0079922e9af5290a330980f1309914e3a0793ee93b5b025bf2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2ca845d125e3af33db3977a56b8be359ccfff2ca630ed5820e93aaef00160ab1\"" Jul 6 23:10:47.445181 containerd[1487]: time="2025-07-06T23:10:47.444169249Z" level=info msg="StartContainer for \"2ca845d125e3af33db3977a56b8be359ccfff2ca630ed5820e93aaef00160ab1\"" Jul 6 23:10:47.483394 systemd[1]: Started cri-containerd-2ca845d125e3af33db3977a56b8be359ccfff2ca630ed5820e93aaef00160ab1.scope - libcontainer container 2ca845d125e3af33db3977a56b8be359ccfff2ca630ed5820e93aaef00160ab1. Jul 6 23:10:47.523891 containerd[1487]: time="2025-07-06T23:10:47.523723097Z" level=info msg="StartContainer for \"2ca845d125e3af33db3977a56b8be359ccfff2ca630ed5820e93aaef00160ab1\" returns successfully" Jul 6 23:10:47.632160 kubelet[2793]: I0706 23:10:47.629246 2793 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 6 23:10:47.676004 systemd[1]: Created slice kubepods-burstable-pod9fdb2ea6_20a3_44a6_8698_c083b1ad99e8.slice - libcontainer container kubepods-burstable-pod9fdb2ea6_20a3_44a6_8698_c083b1ad99e8.slice. Jul 6 23:10:47.690456 systemd[1]: Created slice kubepods-burstable-pod4eec098f_0751_4eb9_bc48_1229c4044cb9.slice - libcontainer container kubepods-burstable-pod4eec098f_0751_4eb9_bc48_1229c4044cb9.slice. Jul 6 23:10:47.717306 systemd[1]: run-containerd-runc-k8s.io-2ca845d125e3af33db3977a56b8be359ccfff2ca630ed5820e93aaef00160ab1-runc.If6XUj.mount: Deactivated successfully. 
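
The cgroupsv2 warning above is containerd failing to add an inotify watch on memory.events for a cgroup that was already gone by the time the short-lived clean-cilium-state container exited. For reference, that file is a flat key/value list; a minimal parser, with the cgroup path below chosen purely as an example (memory.events appears on non-root cgroups):

    from pathlib import Path

    def memory_events(cgroup: str) -> dict[str, int]:
        """Parse the flat key/value pairs in a cgroup v2 memory.events file."""
        text = Path(cgroup, "memory.events").read_text()
        return {key: int(value) for key, value in (line.split() for line in text.splitlines())}

    if __name__ == "__main__":
        # system.slice is just an example path; any existing non-root cgroup works
        print(memory_events("/sys/fs/cgroup/system.slice"))
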
Jul 6 23:10:47.750208 kubelet[2793]: I0706 23:10:47.750072 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4eec098f-0751-4eb9-bc48-1229c4044cb9-config-volume\") pod \"coredns-674b8bbfcf-7xchb\" (UID: \"4eec098f-0751-4eb9-bc48-1229c4044cb9\") " pod="kube-system/coredns-674b8bbfcf-7xchb" Jul 6 23:10:47.750542 kubelet[2793]: I0706 23:10:47.750114 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6zf2\" (UniqueName: \"kubernetes.io/projected/4eec098f-0751-4eb9-bc48-1229c4044cb9-kube-api-access-h6zf2\") pod \"coredns-674b8bbfcf-7xchb\" (UID: \"4eec098f-0751-4eb9-bc48-1229c4044cb9\") " pod="kube-system/coredns-674b8bbfcf-7xchb" Jul 6 23:10:47.750542 kubelet[2793]: I0706 23:10:47.750524 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9fdb2ea6-20a3-44a6-8698-c083b1ad99e8-config-volume\") pod \"coredns-674b8bbfcf-qdd9f\" (UID: \"9fdb2ea6-20a3-44a6-8698-c083b1ad99e8\") " pod="kube-system/coredns-674b8bbfcf-qdd9f" Jul 6 23:10:47.750744 kubelet[2793]: I0706 23:10:47.750548 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xk4b\" (UniqueName: \"kubernetes.io/projected/9fdb2ea6-20a3-44a6-8698-c083b1ad99e8-kube-api-access-4xk4b\") pod \"coredns-674b8bbfcf-qdd9f\" (UID: \"9fdb2ea6-20a3-44a6-8698-c083b1ad99e8\") " pod="kube-system/coredns-674b8bbfcf-qdd9f" Jul 6 23:10:47.987860 containerd[1487]: time="2025-07-06T23:10:47.987715498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qdd9f,Uid:9fdb2ea6-20a3-44a6-8698-c083b1ad99e8,Namespace:kube-system,Attempt:0,}" Jul 6 23:10:47.996545 containerd[1487]: time="2025-07-06T23:10:47.996201153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7xchb,Uid:4eec098f-0751-4eb9-bc48-1229c4044cb9,Namespace:kube-system,Attempt:0,}" Jul 6 23:10:48.442598 kubelet[2793]: I0706 23:10:48.442508 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-25j2r" podStartSLOduration=6.043464605 podStartE2EDuration="23.442488755s" podCreationTimestamp="2025-07-06 23:10:25 +0000 UTC" firstStartedPulling="2025-07-06 23:10:26.300933206 +0000 UTC m=+6.190041216" lastFinishedPulling="2025-07-06 23:10:43.699957356 +0000 UTC m=+23.589065366" observedRunningTime="2025-07-06 23:10:48.440210927 +0000 UTC m=+28.329318977" watchObservedRunningTime="2025-07-06 23:10:48.442488755 +0000 UTC m=+28.331596765" Jul 6 23:10:50.691239 containerd[1487]: time="2025-07-06T23:10:50.690177378Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:10:50.692321 containerd[1487]: time="2025-07-06T23:10:50.692230946Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 6 23:10:50.695525 containerd[1487]: time="2025-07-06T23:10:50.694109396Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:10:50.696044 containerd[1487]: 
time="2025-07-06T23:10:50.696006086Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 6.995059332s" Jul 6 23:10:50.696157 containerd[1487]: time="2025-07-06T23:10:50.696115964Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 6 23:10:50.701419 containerd[1487]: time="2025-07-06T23:10:50.701361360Z" level=info msg="CreateContainer within sandbox \"c99567c3134a2c2719ec339028fd1c4b8b3bd3c6258c1a916a2fc6ff31eaeab6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 6 23:10:50.724856 containerd[1487]: time="2025-07-06T23:10:50.724789707Z" level=info msg="CreateContainer within sandbox \"c99567c3134a2c2719ec339028fd1c4b8b3bd3c6258c1a916a2fc6ff31eaeab6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e56f2ee5941b0eeec62159b10cc23cdf0146f93a0ac7291d44b5e770a649dc28\"" Jul 6 23:10:50.725789 containerd[1487]: time="2025-07-06T23:10:50.725749171Z" level=info msg="StartContainer for \"e56f2ee5941b0eeec62159b10cc23cdf0146f93a0ac7291d44b5e770a649dc28\"" Jul 6 23:10:50.762412 systemd[1]: Started cri-containerd-e56f2ee5941b0eeec62159b10cc23cdf0146f93a0ac7291d44b5e770a649dc28.scope - libcontainer container e56f2ee5941b0eeec62159b10cc23cdf0146f93a0ac7291d44b5e770a649dc28. Jul 6 23:10:50.796539 containerd[1487]: time="2025-07-06T23:10:50.796480324Z" level=info msg="StartContainer for \"e56f2ee5941b0eeec62159b10cc23cdf0146f93a0ac7291d44b5e770a649dc28\" returns successfully" Jul 6 23:10:54.816678 systemd-networkd[1391]: cilium_host: Link UP Jul 6 23:10:54.817804 systemd-networkd[1391]: cilium_net: Link UP Jul 6 23:10:54.818392 systemd-networkd[1391]: cilium_net: Gained carrier Jul 6 23:10:54.818702 systemd-networkd[1391]: cilium_host: Gained carrier Jul 6 23:10:54.937609 systemd-networkd[1391]: cilium_vxlan: Link UP Jul 6 23:10:54.937618 systemd-networkd[1391]: cilium_vxlan: Gained carrier Jul 6 23:10:55.230563 kernel: NET: Registered PF_ALG protocol family Jul 6 23:10:55.242542 systemd-networkd[1391]: cilium_host: Gained IPv6LL Jul 6 23:10:55.786534 systemd-networkd[1391]: cilium_net: Gained IPv6LL Jul 6 23:10:56.015166 systemd-networkd[1391]: lxc_health: Link UP Jul 6 23:10:56.019329 systemd-networkd[1391]: lxc_health: Gained carrier Jul 6 23:10:56.201203 kubelet[2793]: I0706 23:10:56.200639 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-fw8bs" podStartSLOduration=6.509604152 podStartE2EDuration="30.20062321s" podCreationTimestamp="2025-07-06 23:10:26 +0000 UTC" firstStartedPulling="2025-07-06 23:10:27.006131809 +0000 UTC m=+6.895239819" lastFinishedPulling="2025-07-06 23:10:50.697150867 +0000 UTC m=+30.586258877" observedRunningTime="2025-07-06 23:10:51.461408253 +0000 UTC m=+31.350516263" watchObservedRunningTime="2025-07-06 23:10:56.20062321 +0000 UTC m=+36.089731220" Jul 6 23:10:56.592758 kernel: eth0: renamed from tmpde16c Jul 6 23:10:56.589436 systemd-networkd[1391]: lxc812b19b44c39: Link UP Jul 6 23:10:56.596293 kernel: eth0: renamed from tmpdd154 
Jul 6 23:10:56.593757 systemd-networkd[1391]: lxc032bfd4bc2a3: Link UP Jul 6 23:10:56.601501 systemd-networkd[1391]: lxc812b19b44c39: Gained carrier Jul 6 23:10:56.607310 systemd-networkd[1391]: lxc032bfd4bc2a3: Gained carrier Jul 6 23:10:56.626261 systemd-networkd[1391]: cilium_vxlan: Gained IPv6LL Jul 6 23:10:57.642442 systemd-networkd[1391]: lxc_health: Gained IPv6LL Jul 6 23:10:58.474437 systemd-networkd[1391]: lxc812b19b44c39: Gained IPv6LL Jul 6 23:10:58.538585 systemd-networkd[1391]: lxc032bfd4bc2a3: Gained IPv6LL Jul 6 23:11:00.634884 containerd[1487]: time="2025-07-06T23:11:00.634732863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:11:00.634884 containerd[1487]: time="2025-07-06T23:11:00.634793864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:11:00.634884 containerd[1487]: time="2025-07-06T23:11:00.634806344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:11:00.635643 containerd[1487]: time="2025-07-06T23:11:00.634891265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:11:00.641299 containerd[1487]: time="2025-07-06T23:11:00.638962878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:11:00.641299 containerd[1487]: time="2025-07-06T23:11:00.639071120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:11:00.641299 containerd[1487]: time="2025-07-06T23:11:00.639104320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:11:00.641299 containerd[1487]: time="2025-07-06T23:11:00.639252282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:11:00.686372 systemd[1]: Started cri-containerd-dd1549427413da88d792e1a5be5e9defda9f4cc1bab5241c9b9696feebc0b31e.scope - libcontainer container dd1549427413da88d792e1a5be5e9defda9f4cc1bab5241c9b9696feebc0b31e. Jul 6 23:11:00.690215 systemd[1]: Started cri-containerd-de16cfaa991e38168c907dc8e7ac3462b187988fd511c10a266cd853806ae9ce.scope - libcontainer container de16cfaa991e38168c907dc8e7ac3462b187988fd511c10a266cd853806ae9ce. 
Jul 6 23:11:00.780107 containerd[1487]: time="2025-07-06T23:11:00.778592966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7xchb,Uid:4eec098f-0751-4eb9-bc48-1229c4044cb9,Namespace:kube-system,Attempt:0,} returns sandbox id \"de16cfaa991e38168c907dc8e7ac3462b187988fd511c10a266cd853806ae9ce\"" Jul 6 23:11:00.791169 containerd[1487]: time="2025-07-06T23:11:00.791000726Z" level=info msg="CreateContainer within sandbox \"de16cfaa991e38168c907dc8e7ac3462b187988fd511c10a266cd853806ae9ce\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:11:00.792668 containerd[1487]: time="2025-07-06T23:11:00.792631787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qdd9f,Uid:9fdb2ea6-20a3-44a6-8698-c083b1ad99e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd1549427413da88d792e1a5be5e9defda9f4cc1bab5241c9b9696feebc0b31e\"" Jul 6 23:11:00.803205 containerd[1487]: time="2025-07-06T23:11:00.803158364Z" level=info msg="CreateContainer within sandbox \"dd1549427413da88d792e1a5be5e9defda9f4cc1bab5241c9b9696feebc0b31e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:11:00.817082 containerd[1487]: time="2025-07-06T23:11:00.817030063Z" level=info msg="CreateContainer within sandbox \"de16cfaa991e38168c907dc8e7ac3462b187988fd511c10a266cd853806ae9ce\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3d1fd3ba90ab13c6529f80076081b9a53d8f87569a97950f0f950ce85aa0bc17\"" Jul 6 23:11:00.819440 containerd[1487]: time="2025-07-06T23:11:00.819392214Z" level=info msg="StartContainer for \"3d1fd3ba90ab13c6529f80076081b9a53d8f87569a97950f0f950ce85aa0bc17\"" Jul 6 23:11:00.833569 containerd[1487]: time="2025-07-06T23:11:00.833438596Z" level=info msg="CreateContainer within sandbox \"dd1549427413da88d792e1a5be5e9defda9f4cc1bab5241c9b9696feebc0b31e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2e0986d528aa52335c959bc8751ea79c8106797566a19001f581148eb551d3ab\"" Jul 6 23:11:00.834667 containerd[1487]: time="2025-07-06T23:11:00.834474409Z" level=info msg="StartContainer for \"2e0986d528aa52335c959bc8751ea79c8106797566a19001f581148eb551d3ab\"" Jul 6 23:11:00.861371 systemd[1]: Started cri-containerd-3d1fd3ba90ab13c6529f80076081b9a53d8f87569a97950f0f950ce85aa0bc17.scope - libcontainer container 3d1fd3ba90ab13c6529f80076081b9a53d8f87569a97950f0f950ce85aa0bc17. Jul 6 23:11:00.878473 systemd[1]: Started cri-containerd-2e0986d528aa52335c959bc8751ea79c8106797566a19001f581148eb551d3ab.scope - libcontainer container 2e0986d528aa52335c959bc8751ea79c8106797566a19001f581148eb551d3ab. 
Jul 6 23:11:00.918187 containerd[1487]: time="2025-07-06T23:11:00.917624485Z" level=info msg="StartContainer for \"3d1fd3ba90ab13c6529f80076081b9a53d8f87569a97950f0f950ce85aa0bc17\" returns successfully" Jul 6 23:11:00.920266 containerd[1487]: time="2025-07-06T23:11:00.918311894Z" level=info msg="StartContainer for \"2e0986d528aa52335c959bc8751ea79c8106797566a19001f581148eb551d3ab\" returns successfully" Jul 6 23:11:01.480173 kubelet[2793]: I0706 23:11:01.478447 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7xchb" podStartSLOduration=35.478420537 podStartE2EDuration="35.478420537s" podCreationTimestamp="2025-07-06 23:10:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:11:01.475853858 +0000 UTC m=+41.364961908" watchObservedRunningTime="2025-07-06 23:11:01.478420537 +0000 UTC m=+41.367528627" Jul 6 23:12:53.187782 systemd[1]: Started sshd@8-78.47.124.97:22-139.178.89.65:53854.service - OpenSSH per-connection server daemon (139.178.89.65:53854). Jul 6 23:12:54.284674 sshd[4217]: Accepted publickey for core from 139.178.89.65 port 53854 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:12:54.286882 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:12:54.293116 systemd-logind[1477]: New session 8 of user core. Jul 6 23:12:54.302737 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 6 23:12:55.139163 sshd[4222]: Connection closed by 139.178.89.65 port 53854 Jul 6 23:12:55.138218 sshd-session[4217]: pam_unix(sshd:session): session closed for user core Jul 6 23:12:55.142457 systemd[1]: sshd@8-78.47.124.97:22-139.178.89.65:53854.service: Deactivated successfully. Jul 6 23:12:55.145428 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:12:55.147742 systemd-logind[1477]: Session 8 logged out. Waiting for processes to exit. Jul 6 23:12:55.149534 systemd-logind[1477]: Removed session 8. Jul 6 23:13:00.329550 systemd[1]: Started sshd@9-78.47.124.97:22-139.178.89.65:59076.service - OpenSSH per-connection server daemon (139.178.89.65:59076). Jul 6 23:13:01.448991 sshd[4237]: Accepted publickey for core from 139.178.89.65 port 59076 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:13:01.451510 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:13:01.459654 systemd-logind[1477]: New session 9 of user core. Jul 6 23:13:01.467540 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:13:02.291806 sshd[4239]: Connection closed by 139.178.89.65 port 59076 Jul 6 23:13:02.293453 sshd-session[4237]: pam_unix(sshd:session): session closed for user core Jul 6 23:13:02.298394 systemd[1]: sshd@9-78.47.124.97:22-139.178.89.65:59076.service: Deactivated successfully. Jul 6 23:13:02.301954 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:13:02.303188 systemd-logind[1477]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:13:02.304195 systemd-logind[1477]: Removed session 9. Jul 6 23:13:07.493640 systemd[1]: Started sshd@10-78.47.124.97:22-139.178.89.65:59086.service - OpenSSH per-connection server daemon (139.178.89.65:59086). 
Jul 6 23:13:08.590914 sshd[4252]: Accepted publickey for core from 139.178.89.65 port 59086 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:13:08.592800 sshd-session[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:13:08.598847 systemd-logind[1477]: New session 10 of user core. Jul 6 23:13:08.606447 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 6 23:13:09.432224 sshd[4254]: Connection closed by 139.178.89.65 port 59086 Jul 6 23:13:09.433293 sshd-session[4252]: pam_unix(sshd:session): session closed for user core Jul 6 23:13:09.438498 systemd[1]: sshd@10-78.47.124.97:22-139.178.89.65:59086.service: Deactivated successfully. Jul 6 23:13:09.441055 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:13:09.444470 systemd-logind[1477]: Session 10 logged out. Waiting for processes to exit. Jul 6 23:13:09.445629 systemd-logind[1477]: Removed session 10. Jul 6 23:13:14.632659 systemd[1]: Started sshd@11-78.47.124.97:22-139.178.89.65:35490.service - OpenSSH per-connection server daemon (139.178.89.65:35490). Jul 6 23:13:15.730809 sshd[4267]: Accepted publickey for core from 139.178.89.65 port 35490 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:13:15.732740 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:13:15.738934 systemd-logind[1477]: New session 11 of user core. Jul 6 23:13:15.744463 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 6 23:13:16.575023 sshd[4269]: Connection closed by 139.178.89.65 port 35490 Jul 6 23:13:16.575991 sshd-session[4267]: pam_unix(sshd:session): session closed for user core Jul 6 23:13:16.581292 systemd[1]: sshd@11-78.47.124.97:22-139.178.89.65:35490.service: Deactivated successfully. Jul 6 23:13:16.583735 systemd[1]: session-11.scope: Deactivated successfully. Jul 6 23:13:16.584975 systemd-logind[1477]: Session 11 logged out. Waiting for processes to exit. Jul 6 23:13:16.586522 systemd-logind[1477]: Removed session 11. Jul 6 23:13:16.773651 systemd[1]: Started sshd@12-78.47.124.97:22-139.178.89.65:35496.service - OpenSSH per-connection server daemon (139.178.89.65:35496). Jul 6 23:13:16.851896 update_engine[1479]: I20250706 23:13:16.851714 1479 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 6 23:13:16.851896 update_engine[1479]: I20250706 23:13:16.851790 1479 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 6 23:13:16.852631 update_engine[1479]: I20250706 23:13:16.852192 1479 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 6 23:13:16.853429 update_engine[1479]: I20250706 23:13:16.852825 1479 omaha_request_params.cc:62] Current group set to stable Jul 6 23:13:16.853429 update_engine[1479]: I20250706 23:13:16.852977 1479 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 6 23:13:16.853429 update_engine[1479]: I20250706 23:13:16.852993 1479 update_attempter.cc:643] Scheduling an action processor start. 
Jul 6 23:13:16.853429 update_engine[1479]: I20250706 23:13:16.853017 1479 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 6 23:13:16.853429 update_engine[1479]: I20250706 23:13:16.853070 1479 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 6 23:13:16.853429 update_engine[1479]: I20250706 23:13:16.853272 1479 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 6 23:13:16.853429 update_engine[1479]: I20250706 23:13:16.853295 1479 omaha_request_action.cc:272] Request: Jul 6 23:13:16.853429 update_engine[1479]: Jul 6 23:13:16.853429 update_engine[1479]: Jul 6 23:13:16.853429 update_engine[1479]: Jul 6 23:13:16.853429 update_engine[1479]: Jul 6 23:13:16.853429 update_engine[1479]: Jul 6 23:13:16.853429 update_engine[1479]: Jul 6 23:13:16.853429 update_engine[1479]: Jul 6 23:13:16.853429 update_engine[1479]: Jul 6 23:13:16.853429 update_engine[1479]: I20250706 23:13:16.853307 1479 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 6 23:13:16.854694 locksmithd[1523]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 6 23:13:16.855854 update_engine[1479]: I20250706 23:13:16.855806 1479 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 6 23:13:16.856275 update_engine[1479]: I20250706 23:13:16.856240 1479 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 6 23:13:16.856967 update_engine[1479]: E20250706 23:13:16.856922 1479 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 6 23:13:16.857028 update_engine[1479]: I20250706 23:13:16.856997 1479 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 6 23:13:17.858214 sshd[4282]: Accepted publickey for core from 139.178.89.65 port 35496 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:13:17.860498 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:13:17.868212 systemd-logind[1477]: New session 12 of user core. Jul 6 23:13:17.876470 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 6 23:13:18.714382 sshd[4284]: Connection closed by 139.178.89.65 port 35496 Jul 6 23:13:18.715631 sshd-session[4282]: pam_unix(sshd:session): session closed for user core Jul 6 23:13:18.720754 systemd[1]: sshd@12-78.47.124.97:22-139.178.89.65:35496.service: Deactivated successfully. Jul 6 23:13:18.724689 systemd[1]: session-12.scope: Deactivated successfully. Jul 6 23:13:18.727187 systemd-logind[1477]: Session 12 logged out. Waiting for processes to exit. Jul 6 23:13:18.728568 systemd-logind[1477]: Removed session 12. Jul 6 23:13:18.910001 systemd[1]: Started sshd@13-78.47.124.97:22-139.178.89.65:35500.service - OpenSSH per-connection server daemon (139.178.89.65:35500). Jul 6 23:13:19.993913 sshd[4293]: Accepted publickey for core from 139.178.89.65 port 35500 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:13:19.996221 sshd-session[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:13:20.001308 systemd-logind[1477]: New session 13 of user core. Jul 6 23:13:20.009442 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jul 6 23:13:20.818232 sshd[4295]: Connection closed by 139.178.89.65 port 35500 Jul 6 23:13:20.819237 sshd-session[4293]: pam_unix(sshd:session): session closed for user core Jul 6 23:13:20.823843 systemd[1]: sshd@13-78.47.124.97:22-139.178.89.65:35500.service: Deactivated successfully. Jul 6 23:13:20.827604 systemd[1]: session-13.scope: Deactivated successfully. Jul 6 23:13:20.830653 systemd-logind[1477]: Session 13 logged out. Waiting for processes to exit. Jul 6 23:13:20.832057 systemd-logind[1477]: Removed session 13. Jul 6 23:13:26.012205 systemd[1]: Started sshd@14-78.47.124.97:22-139.178.89.65:60250.service - OpenSSH per-connection server daemon (139.178.89.65:60250). Jul 6 23:13:26.855747 update_engine[1479]: I20250706 23:13:26.854696 1479 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 6 23:13:26.855747 update_engine[1479]: I20250706 23:13:26.855257 1479 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 6 23:13:26.856597 update_engine[1479]: I20250706 23:13:26.855672 1479 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 6 23:13:26.856896 update_engine[1479]: E20250706 23:13:26.856853 1479 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 6 23:13:26.857104 update_engine[1479]: I20250706 23:13:26.857048 1479 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 6 23:13:27.112293 sshd[4309]: Accepted publickey for core from 139.178.89.65 port 60250 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:13:27.113502 sshd-session[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:13:27.120575 systemd-logind[1477]: New session 14 of user core. Jul 6 23:13:27.132500 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 6 23:13:27.941278 sshd[4313]: Connection closed by 139.178.89.65 port 60250 Jul 6 23:13:27.942982 sshd-session[4309]: pam_unix(sshd:session): session closed for user core Jul 6 23:13:27.947731 systemd-logind[1477]: Session 14 logged out. Waiting for processes to exit. Jul 6 23:13:27.948897 systemd[1]: sshd@14-78.47.124.97:22-139.178.89.65:60250.service: Deactivated successfully. Jul 6 23:13:27.951585 systemd[1]: session-14.scope: Deactivated successfully. Jul 6 23:13:27.953525 systemd-logind[1477]: Removed session 14. Jul 6 23:13:28.137515 systemd[1]: Started sshd@15-78.47.124.97:22-139.178.89.65:60264.service - OpenSSH per-connection server daemon (139.178.89.65:60264). Jul 6 23:13:29.237996 sshd[4325]: Accepted publickey for core from 139.178.89.65 port 60264 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:13:29.239662 sshd-session[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:13:29.246806 systemd-logind[1477]: New session 15 of user core. Jul 6 23:13:29.249682 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 6 23:13:30.115831 sshd[4327]: Connection closed by 139.178.89.65 port 60264 Jul 6 23:13:30.116839 sshd-session[4325]: pam_unix(sshd:session): session closed for user core Jul 6 23:13:30.121637 systemd[1]: sshd@15-78.47.124.97:22-139.178.89.65:60264.service: Deactivated successfully. Jul 6 23:13:30.124625 systemd[1]: session-15.scope: Deactivated successfully. Jul 6 23:13:30.126055 systemd-logind[1477]: Session 15 logged out. Waiting for processes to exit. Jul 6 23:13:30.128016 systemd-logind[1477]: Removed session 15. 
Jul 6 23:13:30.311496 systemd[1]: Started sshd@16-78.47.124.97:22-139.178.89.65:54132.service - OpenSSH per-connection server daemon (139.178.89.65:54132). Jul 6 23:13:31.430944 sshd[4336]: Accepted publickey for core from 139.178.89.65 port 54132 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:13:31.433023 sshd-session[4336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:13:31.442503 systemd-logind[1477]: New session 16 of user core. Jul 6 23:13:31.448082 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 6 23:13:33.242694 sshd[4338]: Connection closed by 139.178.89.65 port 54132 Jul 6 23:13:33.243380 sshd-session[4336]: pam_unix(sshd:session): session closed for user core Jul 6 23:13:33.249971 systemd-logind[1477]: Session 16 logged out. Waiting for processes to exit. Jul 6 23:13:33.250581 systemd[1]: sshd@16-78.47.124.97:22-139.178.89.65:54132.service: Deactivated successfully. Jul 6 23:13:33.253106 systemd[1]: session-16.scope: Deactivated successfully. Jul 6 23:13:33.255093 systemd-logind[1477]: Removed session 16. Jul 6 23:13:33.442286 systemd[1]: Started sshd@17-78.47.124.97:22-139.178.89.65:54146.service - OpenSSH per-connection server daemon (139.178.89.65:54146). Jul 6 23:13:34.558560 sshd[4356]: Accepted publickey for core from 139.178.89.65 port 54146 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:13:34.561033 sshd-session[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:13:34.569293 systemd-logind[1477]: New session 17 of user core. Jul 6 23:13:34.573875 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 6 23:13:35.536359 sshd[4358]: Connection closed by 139.178.89.65 port 54146 Jul 6 23:13:35.536918 sshd-session[4356]: pam_unix(sshd:session): session closed for user core Jul 6 23:13:35.543082 systemd[1]: sshd@17-78.47.124.97:22-139.178.89.65:54146.service: Deactivated successfully. Jul 6 23:13:35.546578 systemd[1]: session-17.scope: Deactivated successfully. Jul 6 23:13:35.548074 systemd-logind[1477]: Session 17 logged out. Waiting for processes to exit. Jul 6 23:13:35.549514 systemd-logind[1477]: Removed session 17. Jul 6 23:13:35.729084 systemd[1]: Started sshd@18-78.47.124.97:22-139.178.89.65:54148.service - OpenSSH per-connection server daemon (139.178.89.65:54148). Jul 6 23:13:36.821001 sshd[4368]: Accepted publickey for core from 139.178.89.65 port 54148 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:13:36.824077 sshd-session[4368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:13:36.829621 systemd-logind[1477]: New session 18 of user core. Jul 6 23:13:36.838562 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 6 23:13:36.852189 update_engine[1479]: I20250706 23:13:36.851648 1479 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 6 23:13:36.852189 update_engine[1479]: I20250706 23:13:36.851877 1479 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 6 23:13:36.852189 update_engine[1479]: I20250706 23:13:36.852141 1479 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 6 23:13:36.852968 update_engine[1479]: E20250706 23:13:36.852929 1479 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 6 23:13:36.853170 update_engine[1479]: I20250706 23:13:36.853111 1479 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 6 23:13:37.647307 sshd[4370]: Connection closed by 139.178.89.65 port 54148 Jul 6 23:13:37.648269 sshd-session[4368]: pam_unix(sshd:session): session closed for user core Jul 6 23:13:37.653233 systemd[1]: sshd@18-78.47.124.97:22-139.178.89.65:54148.service: Deactivated successfully. Jul 6 23:13:37.655895 systemd[1]: session-18.scope: Deactivated successfully. Jul 6 23:13:37.658908 systemd-logind[1477]: Session 18 logged out. Waiting for processes to exit. Jul 6 23:13:37.660723 systemd-logind[1477]: Removed session 18. Jul 6 23:13:42.844334 systemd[1]: Started sshd@19-78.47.124.97:22-139.178.89.65:55704.service - OpenSSH per-connection server daemon (139.178.89.65:55704). Jul 6 23:13:43.954160 sshd[4384]: Accepted publickey for core from 139.178.89.65 port 55704 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:13:43.956421 sshd-session[4384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:13:43.962541 systemd-logind[1477]: New session 19 of user core. Jul 6 23:13:43.967450 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 6 23:13:44.788726 sshd[4386]: Connection closed by 139.178.89.65 port 55704 Jul 6 23:13:44.789861 sshd-session[4384]: pam_unix(sshd:session): session closed for user core Jul 6 23:13:44.794257 systemd[1]: sshd@19-78.47.124.97:22-139.178.89.65:55704.service: Deactivated successfully. Jul 6 23:13:44.797413 systemd[1]: session-19.scope: Deactivated successfully. Jul 6 23:13:44.800708 systemd-logind[1477]: Session 19 logged out. Waiting for processes to exit. Jul 6 23:13:44.801743 systemd-logind[1477]: Removed session 19. Jul 6 23:13:46.853917 update_engine[1479]: I20250706 23:13:46.853771 1479 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 6 23:13:46.854582 update_engine[1479]: I20250706 23:13:46.854216 1479 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 6 23:13:46.854582 update_engine[1479]: I20250706 23:13:46.854516 1479 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 6 23:13:46.855116 update_engine[1479]: E20250706 23:13:46.854980 1479 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 6 23:13:46.855243 update_engine[1479]: I20250706 23:13:46.855162 1479 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 6 23:13:46.855243 update_engine[1479]: I20250706 23:13:46.855180 1479 omaha_request_action.cc:617] Omaha request response: Jul 6 23:13:46.855346 update_engine[1479]: E20250706 23:13:46.855275 1479 omaha_request_action.cc:636] Omaha request network transfer failed. Jul 6 23:13:46.855346 update_engine[1479]: I20250706 23:13:46.855299 1479 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 6 23:13:46.855346 update_engine[1479]: I20250706 23:13:46.855309 1479 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 6 23:13:46.855346 update_engine[1479]: I20250706 23:13:46.855316 1479 update_attempter.cc:306] Processing Done. Jul 6 23:13:46.855346 update_engine[1479]: E20250706 23:13:46.855335 1479 update_attempter.cc:619] Update failed. 
Jul 6 23:13:46.855346 update_engine[1479]: I20250706 23:13:46.855345 1479 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 6 23:13:46.855630 update_engine[1479]: I20250706 23:13:46.855352 1479 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 6 23:13:46.855630 update_engine[1479]: I20250706 23:13:46.855360 1479 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jul 6 23:13:46.855830 locksmithd[1523]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 6 23:13:46.856339 update_engine[1479]: I20250706 23:13:46.855815 1479 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 6 23:13:46.856339 update_engine[1479]: I20250706 23:13:46.855861 1479 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 6 23:13:46.856339 update_engine[1479]: I20250706 23:13:46.855872 1479 omaha_request_action.cc:272] Request: Jul 6 23:13:46.856339 update_engine[1479]: Jul 6 23:13:46.856339 update_engine[1479]: Jul 6 23:13:46.856339 update_engine[1479]: Jul 6 23:13:46.856339 update_engine[1479]: Jul 6 23:13:46.856339 update_engine[1479]: Jul 6 23:13:46.856339 update_engine[1479]: Jul 6 23:13:46.856339 update_engine[1479]: I20250706 23:13:46.855880 1479 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 6 23:13:46.856339 update_engine[1479]: I20250706 23:13:46.856209 1479 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 6 23:13:46.856893 update_engine[1479]: I20250706 23:13:46.856462 1479 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 6 23:13:46.856984 update_engine[1479]: E20250706 23:13:46.856912 1479 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 6 23:13:46.857036 update_engine[1479]: I20250706 23:13:46.857004 1479 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 6 23:13:46.857036 update_engine[1479]: I20250706 23:13:46.857020 1479 omaha_request_action.cc:617] Omaha request response: Jul 6 23:13:46.857036 update_engine[1479]: I20250706 23:13:46.857031 1479 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 6 23:13:46.857185 update_engine[1479]: I20250706 23:13:46.857040 1479 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 6 23:13:46.857185 update_engine[1479]: I20250706 23:13:46.857049 1479 update_attempter.cc:306] Processing Done. Jul 6 23:13:46.857185 update_engine[1479]: I20250706 23:13:46.857059 1479 update_attempter.cc:310] Error event sent. Jul 6 23:13:46.857185 update_engine[1479]: I20250706 23:13:46.857091 1479 update_check_scheduler.cc:74] Next update check in 49m28s Jul 6 23:13:46.857595 locksmithd[1523]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 6 23:13:49.989943 systemd[1]: Started sshd@20-78.47.124.97:22-139.178.89.65:50512.service - OpenSSH per-connection server daemon (139.178.89.65:50512). Jul 6 23:13:51.089179 sshd[4398]: Accepted publickey for core from 139.178.89.65 port 50512 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:13:51.091272 sshd-session[4398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:13:51.097642 systemd-logind[1477]: New session 20 of user core. 
Jul 6 23:13:51.105530 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 6 23:13:51.918616 sshd[4400]: Connection closed by 139.178.89.65 port 50512 Jul 6 23:13:51.918446 sshd-session[4398]: pam_unix(sshd:session): session closed for user core Jul 6 23:13:51.923285 systemd[1]: sshd@20-78.47.124.97:22-139.178.89.65:50512.service: Deactivated successfully. Jul 6 23:13:51.925325 systemd[1]: session-20.scope: Deactivated successfully. Jul 6 23:13:51.929188 systemd-logind[1477]: Session 20 logged out. Waiting for processes to exit. Jul 6 23:13:51.930694 systemd-logind[1477]: Removed session 20. Jul 6 23:13:52.114494 systemd[1]: Started sshd@21-78.47.124.97:22-139.178.89.65:50520.service - OpenSSH per-connection server daemon (139.178.89.65:50520). Jul 6 23:13:53.230185 sshd[4411]: Accepted publickey for core from 139.178.89.65 port 50520 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:13:53.232313 sshd-session[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:13:53.237717 systemd-logind[1477]: New session 21 of user core. Jul 6 23:13:53.246521 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 6 23:13:55.595379 kubelet[2793]: I0706 23:13:55.595294 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qdd9f" podStartSLOduration=209.595275144 podStartE2EDuration="3m29.595275144s" podCreationTimestamp="2025-07-06 23:10:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:11:01.523861715 +0000 UTC m=+41.412969725" watchObservedRunningTime="2025-07-06 23:13:55.595275144 +0000 UTC m=+215.484383194" Jul 6 23:13:55.609751 containerd[1487]: time="2025-07-06T23:13:55.609518705Z" level=info msg="StopContainer for \"e56f2ee5941b0eeec62159b10cc23cdf0146f93a0ac7291d44b5e770a649dc28\" with timeout 30 (s)" Jul 6 23:13:55.612345 containerd[1487]: time="2025-07-06T23:13:55.611354870Z" level=info msg="Stop container \"e56f2ee5941b0eeec62159b10cc23cdf0146f93a0ac7291d44b5e770a649dc28\" with signal terminated" Jul 6 23:13:55.621574 systemd[1]: run-containerd-runc-k8s.io-2ca845d125e3af33db3977a56b8be359ccfff2ca630ed5820e93aaef00160ab1-runc.Vvctmc.mount: Deactivated successfully. Jul 6 23:13:55.635624 containerd[1487]: time="2025-07-06T23:13:55.635548686Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:13:55.639288 systemd[1]: cri-containerd-e56f2ee5941b0eeec62159b10cc23cdf0146f93a0ac7291d44b5e770a649dc28.scope: Deactivated successfully. Jul 6 23:13:55.650843 containerd[1487]: time="2025-07-06T23:13:55.650624042Z" level=info msg="StopContainer for \"2ca845d125e3af33db3977a56b8be359ccfff2ca630ed5820e93aaef00160ab1\" with timeout 2 (s)" Jul 6 23:13:55.651514 containerd[1487]: time="2025-07-06T23:13:55.651295102Z" level=info msg="Stop container \"2ca845d125e3af33db3977a56b8be359ccfff2ca630ed5820e93aaef00160ab1\" with signal terminated" Jul 6 23:13:55.660922 systemd-networkd[1391]: lxc_health: Link DOWN Jul 6 23:13:55.660928 systemd-networkd[1391]: lxc_health: Lost carrier Jul 6 23:13:55.679754 systemd[1]: cri-containerd-2ca845d125e3af33db3977a56b8be359ccfff2ca630ed5820e93aaef00160ab1.scope: Deactivated successfully. 
Jul 6 23:13:55.680617 systemd[1]: cri-containerd-2ca845d125e3af33db3977a56b8be359ccfff2ca630ed5820e93aaef00160ab1.scope: Consumed 7.690s CPU time, 124.3M memory peak, 136K read from disk, 12.9M written to disk. Jul 6 23:13:55.696973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e56f2ee5941b0eeec62159b10cc23cdf0146f93a0ac7291d44b5e770a649dc28-rootfs.mount: Deactivated successfully. Jul 6 23:13:55.707574 containerd[1487]: time="2025-07-06T23:13:55.707507437Z" level=info msg="shim disconnected" id=e56f2ee5941b0eeec62159b10cc23cdf0146f93a0ac7291d44b5e770a649dc28 namespace=k8s.io Jul 6 23:13:55.707574 containerd[1487]: time="2025-07-06T23:13:55.707563642Z" level=warning msg="cleaning up after shim disconnected" id=e56f2ee5941b0eeec62159b10cc23cdf0146f93a0ac7291d44b5e770a649dc28 namespace=k8s.io Jul 6 23:13:55.707574 containerd[1487]: time="2025-07-06T23:13:55.707571923Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:13:55.715220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ca845d125e3af33db3977a56b8be359ccfff2ca630ed5820e93aaef00160ab1-rootfs.mount: Deactivated successfully. Jul 6 23:13:55.724155 containerd[1487]: time="2025-07-06T23:13:55.723087879Z" level=info msg="shim disconnected" id=2ca845d125e3af33db3977a56b8be359ccfff2ca630ed5820e93aaef00160ab1 namespace=k8s.io Jul 6 23:13:55.724155 containerd[1487]: time="2025-07-06T23:13:55.723182087Z" level=warning msg="cleaning up after shim disconnected" id=2ca845d125e3af33db3977a56b8be359ccfff2ca630ed5820e93aaef00160ab1 namespace=k8s.io Jul 6 23:13:55.724155 containerd[1487]: time="2025-07-06T23:13:55.723192088Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:13:55.741288 containerd[1487]: time="2025-07-06T23:13:55.741229590Z" level=info msg="StopContainer for \"e56f2ee5941b0eeec62159b10cc23cdf0146f93a0ac7291d44b5e770a649dc28\" returns successfully" Jul 6 23:13:55.742752 containerd[1487]: time="2025-07-06T23:13:55.742698762Z" level=info msg="StopPodSandbox for \"c99567c3134a2c2719ec339028fd1c4b8b3bd3c6258c1a916a2fc6ff31eaeab6\"" Jul 6 23:13:55.742860 containerd[1487]: time="2025-07-06T23:13:55.742762728Z" level=info msg="Container to stop \"e56f2ee5941b0eeec62159b10cc23cdf0146f93a0ac7291d44b5e770a649dc28\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:13:55.745675 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c99567c3134a2c2719ec339028fd1c4b8b3bd3c6258c1a916a2fc6ff31eaeab6-shm.mount: Deactivated successfully. Jul 6 23:13:55.754752 systemd[1]: cri-containerd-c99567c3134a2c2719ec339028fd1c4b8b3bd3c6258c1a916a2fc6ff31eaeab6.scope: Deactivated successfully. 
Jul 6 23:13:55.756585 containerd[1487]: time="2025-07-06T23:13:55.756519085Z" level=info msg="StopContainer for \"2ca845d125e3af33db3977a56b8be359ccfff2ca630ed5820e93aaef00160ab1\" returns successfully" Jul 6 23:13:55.757514 containerd[1487]: time="2025-07-06T23:13:55.757267553Z" level=info msg="StopPodSandbox for \"649db00ca1ba7f0079922e9af5290a330980f1309914e3a0793ee93b5b025bf2\"" Jul 6 23:13:55.757514 containerd[1487]: time="2025-07-06T23:13:55.757303316Z" level=info msg="Container to stop \"ac817bb7926dd706f6cc16477fb39b6a4609d06ff6eff53a4e89480ca76700ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:13:55.757514 containerd[1487]: time="2025-07-06T23:13:55.757313917Z" level=info msg="Container to stop \"2ca845d125e3af33db3977a56b8be359ccfff2ca630ed5820e93aaef00160ab1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:13:55.757514 containerd[1487]: time="2025-07-06T23:13:55.757324838Z" level=info msg="Container to stop \"1b9fdf03ccdcaa75f553eb85a9f4c0074772c18b5eb6ba4b41cfbbfdfa984eff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:13:55.757514 containerd[1487]: time="2025-07-06T23:13:55.757335599Z" level=info msg="Container to stop \"b116731ae2ebb81bdb077af2bcf08abc38675fbee374269d403214978c385901\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:13:55.757514 containerd[1487]: time="2025-07-06T23:13:55.757343399Z" level=info msg="Container to stop \"3a0819715d81d4cc6f3d674cc1fec721b6077f0a3c2af9622601bf9f5e6b4eec\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:13:55.767892 systemd[1]: cri-containerd-649db00ca1ba7f0079922e9af5290a330980f1309914e3a0793ee93b5b025bf2.scope: Deactivated successfully. 
Jul 6 23:13:55.792451 containerd[1487]: time="2025-07-06T23:13:55.792090884Z" level=info msg="shim disconnected" id=c99567c3134a2c2719ec339028fd1c4b8b3bd3c6258c1a916a2fc6ff31eaeab6 namespace=k8s.io Jul 6 23:13:55.792451 containerd[1487]: time="2025-07-06T23:13:55.792379750Z" level=warning msg="cleaning up after shim disconnected" id=c99567c3134a2c2719ec339028fd1c4b8b3bd3c6258c1a916a2fc6ff31eaeab6 namespace=k8s.io Jul 6 23:13:55.792451 containerd[1487]: time="2025-07-06T23:13:55.792401872Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:13:55.806958 containerd[1487]: time="2025-07-06T23:13:55.806476738Z" level=info msg="shim disconnected" id=649db00ca1ba7f0079922e9af5290a330980f1309914e3a0793ee93b5b025bf2 namespace=k8s.io Jul 6 23:13:55.806958 containerd[1487]: time="2025-07-06T23:13:55.806666995Z" level=warning msg="cleaning up after shim disconnected" id=649db00ca1ba7f0079922e9af5290a330980f1309914e3a0793ee93b5b025bf2 namespace=k8s.io Jul 6 23:13:55.806958 containerd[1487]: time="2025-07-06T23:13:55.806676996Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:13:55.817440 containerd[1487]: time="2025-07-06T23:13:55.817283390Z" level=info msg="TearDown network for sandbox \"c99567c3134a2c2719ec339028fd1c4b8b3bd3c6258c1a916a2fc6ff31eaeab6\" successfully" Jul 6 23:13:55.817440 containerd[1487]: time="2025-07-06T23:13:55.817319793Z" level=info msg="StopPodSandbox for \"c99567c3134a2c2719ec339028fd1c4b8b3bd3c6258c1a916a2fc6ff31eaeab6\" returns successfully" Jul 6 23:13:55.834477 containerd[1487]: time="2025-07-06T23:13:55.834378447Z" level=info msg="TearDown network for sandbox \"649db00ca1ba7f0079922e9af5290a330980f1309914e3a0793ee93b5b025bf2\" successfully" Jul 6 23:13:55.834477 containerd[1487]: time="2025-07-06T23:13:55.834457815Z" level=info msg="StopPodSandbox for \"649db00ca1ba7f0079922e9af5290a330980f1309914e3a0793ee93b5b025bf2\" returns successfully" Jul 6 23:13:55.902634 kubelet[2793]: I0706 23:13:55.901290 2793 scope.go:117] "RemoveContainer" containerID="2ca845d125e3af33db3977a56b8be359ccfff2ca630ed5820e93aaef00160ab1" Jul 6 23:13:55.903650 containerd[1487]: time="2025-07-06T23:13:55.903597152Z" level=info msg="RemoveContainer for \"2ca845d125e3af33db3977a56b8be359ccfff2ca630ed5820e93aaef00160ab1\"" Jul 6 23:13:55.910684 containerd[1487]: time="2025-07-06T23:13:55.910614744Z" level=info msg="RemoveContainer for \"2ca845d125e3af33db3977a56b8be359ccfff2ca630ed5820e93aaef00160ab1\" returns successfully" Jul 6 23:13:55.911784 kubelet[2793]: I0706 23:13:55.911111 2793 scope.go:117] "RemoveContainer" containerID="b116731ae2ebb81bdb077af2bcf08abc38675fbee374269d403214978c385901" Jul 6 23:13:55.916036 containerd[1487]: time="2025-07-06T23:13:55.915971905Z" level=info msg="RemoveContainer for \"b116731ae2ebb81bdb077af2bcf08abc38675fbee374269d403214978c385901\"" Jul 6 23:13:55.921252 containerd[1487]: time="2025-07-06T23:13:55.921029000Z" level=info msg="RemoveContainer for \"b116731ae2ebb81bdb077af2bcf08abc38675fbee374269d403214978c385901\" returns successfully" Jul 6 23:13:55.922477 kubelet[2793]: I0706 23:13:55.921704 2793 scope.go:117] "RemoveContainer" containerID="3a0819715d81d4cc6f3d674cc1fec721b6077f0a3c2af9622601bf9f5e6b4eec" Jul 6 23:13:55.923927 containerd[1487]: time="2025-07-06T23:13:55.923878416Z" level=info msg="RemoveContainer for \"3a0819715d81d4cc6f3d674cc1fec721b6077f0a3c2af9622601bf9f5e6b4eec\"" Jul 6 23:13:55.929813 containerd[1487]: time="2025-07-06T23:13:55.929760905Z" level=info msg="RemoveContainer for 
\"3a0819715d81d4cc6f3d674cc1fec721b6077f0a3c2af9622601bf9f5e6b4eec\" returns successfully" Jul 6 23:13:55.930165 kubelet[2793]: I0706 23:13:55.930043 2793 scope.go:117] "RemoveContainer" containerID="ac817bb7926dd706f6cc16477fb39b6a4609d06ff6eff53a4e89480ca76700ff" Jul 6 23:13:55.931599 containerd[1487]: time="2025-07-06T23:13:55.931550586Z" level=info msg="RemoveContainer for \"ac817bb7926dd706f6cc16477fb39b6a4609d06ff6eff53a4e89480ca76700ff\"" Jul 6 23:13:55.936649 containerd[1487]: time="2025-07-06T23:13:55.936579399Z" level=info msg="RemoveContainer for \"ac817bb7926dd706f6cc16477fb39b6a4609d06ff6eff53a4e89480ca76700ff\" returns successfully" Jul 6 23:13:55.937064 kubelet[2793]: I0706 23:13:55.937032 2793 scope.go:117] "RemoveContainer" containerID="1b9fdf03ccdcaa75f553eb85a9f4c0074772c18b5eb6ba4b41cfbbfdfa984eff" Jul 6 23:13:55.938760 containerd[1487]: time="2025-07-06T23:13:55.938714111Z" level=info msg="RemoveContainer for \"1b9fdf03ccdcaa75f553eb85a9f4c0074772c18b5eb6ba4b41cfbbfdfa984eff\"" Jul 6 23:13:55.944823 containerd[1487]: time="2025-07-06T23:13:55.944760694Z" level=info msg="RemoveContainer for \"1b9fdf03ccdcaa75f553eb85a9f4c0074772c18b5eb6ba4b41cfbbfdfa984eff\" returns successfully" Jul 6 23:13:55.945239 kubelet[2793]: I0706 23:13:55.945199 2793 scope.go:117] "RemoveContainer" containerID="2ca845d125e3af33db3977a56b8be359ccfff2ca630ed5820e93aaef00160ab1" Jul 6 23:13:55.945694 containerd[1487]: time="2025-07-06T23:13:55.945581968Z" level=error msg="ContainerStatus for \"2ca845d125e3af33db3977a56b8be359ccfff2ca630ed5820e93aaef00160ab1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2ca845d125e3af33db3977a56b8be359ccfff2ca630ed5820e93aaef00160ab1\": not found" Jul 6 23:13:55.945903 kubelet[2793]: E0706 23:13:55.945798 2793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2ca845d125e3af33db3977a56b8be359ccfff2ca630ed5820e93aaef00160ab1\": not found" containerID="2ca845d125e3af33db3977a56b8be359ccfff2ca630ed5820e93aaef00160ab1" Jul 6 23:13:55.945972 kubelet[2793]: I0706 23:13:55.945896 2793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2ca845d125e3af33db3977a56b8be359ccfff2ca630ed5820e93aaef00160ab1"} err="failed to get container status \"2ca845d125e3af33db3977a56b8be359ccfff2ca630ed5820e93aaef00160ab1\": rpc error: code = NotFound desc = an error occurred when try to find container \"2ca845d125e3af33db3977a56b8be359ccfff2ca630ed5820e93aaef00160ab1\": not found" Jul 6 23:13:55.945972 kubelet[2793]: I0706 23:13:55.945956 2793 scope.go:117] "RemoveContainer" containerID="b116731ae2ebb81bdb077af2bcf08abc38675fbee374269d403214978c385901" Jul 6 23:13:55.946462 containerd[1487]: time="2025-07-06T23:13:55.946403282Z" level=error msg="ContainerStatus for \"b116731ae2ebb81bdb077af2bcf08abc38675fbee374269d403214978c385901\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b116731ae2ebb81bdb077af2bcf08abc38675fbee374269d403214978c385901\": not found" Jul 6 23:13:55.946680 kubelet[2793]: E0706 23:13:55.946635 2793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b116731ae2ebb81bdb077af2bcf08abc38675fbee374269d403214978c385901\": not found" containerID="b116731ae2ebb81bdb077af2bcf08abc38675fbee374269d403214978c385901" Jul 6 23:13:55.946727 kubelet[2793]: 
I0706 23:13:55.946698 2793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b116731ae2ebb81bdb077af2bcf08abc38675fbee374269d403214978c385901"} err="failed to get container status \"b116731ae2ebb81bdb077af2bcf08abc38675fbee374269d403214978c385901\": rpc error: code = NotFound desc = an error occurred when try to find container \"b116731ae2ebb81bdb077af2bcf08abc38675fbee374269d403214978c385901\": not found" Jul 6 23:13:55.946753 kubelet[2793]: I0706 23:13:55.946724 2793 scope.go:117] "RemoveContainer" containerID="3a0819715d81d4cc6f3d674cc1fec721b6077f0a3c2af9622601bf9f5e6b4eec" Jul 6 23:13:55.947064 containerd[1487]: time="2025-07-06T23:13:55.946986575Z" level=error msg="ContainerStatus for \"3a0819715d81d4cc6f3d674cc1fec721b6077f0a3c2af9622601bf9f5e6b4eec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3a0819715d81d4cc6f3d674cc1fec721b6077f0a3c2af9622601bf9f5e6b4eec\": not found" Jul 6 23:13:55.947316 kubelet[2793]: E0706 23:13:55.947115 2793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3a0819715d81d4cc6f3d674cc1fec721b6077f0a3c2af9622601bf9f5e6b4eec\": not found" containerID="3a0819715d81d4cc6f3d674cc1fec721b6077f0a3c2af9622601bf9f5e6b4eec" Jul 6 23:13:55.947316 kubelet[2793]: I0706 23:13:55.947159 2793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3a0819715d81d4cc6f3d674cc1fec721b6077f0a3c2af9622601bf9f5e6b4eec"} err="failed to get container status \"3a0819715d81d4cc6f3d674cc1fec721b6077f0a3c2af9622601bf9f5e6b4eec\": rpc error: code = NotFound desc = an error occurred when try to find container \"3a0819715d81d4cc6f3d674cc1fec721b6077f0a3c2af9622601bf9f5e6b4eec\": not found" Jul 6 23:13:55.947316 kubelet[2793]: I0706 23:13:55.947175 2793 scope.go:117] "RemoveContainer" containerID="ac817bb7926dd706f6cc16477fb39b6a4609d06ff6eff53a4e89480ca76700ff" Jul 6 23:13:55.947657 containerd[1487]: time="2025-07-06T23:13:55.947579108Z" level=error msg="ContainerStatus for \"ac817bb7926dd706f6cc16477fb39b6a4609d06ff6eff53a4e89480ca76700ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac817bb7926dd706f6cc16477fb39b6a4609d06ff6eff53a4e89480ca76700ff\": not found" Jul 6 23:13:55.947749 kubelet[2793]: E0706 23:13:55.947720 2793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac817bb7926dd706f6cc16477fb39b6a4609d06ff6eff53a4e89480ca76700ff\": not found" containerID="ac817bb7926dd706f6cc16477fb39b6a4609d06ff6eff53a4e89480ca76700ff" Jul 6 23:13:55.947790 kubelet[2793]: I0706 23:13:55.947744 2793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac817bb7926dd706f6cc16477fb39b6a4609d06ff6eff53a4e89480ca76700ff"} err="failed to get container status \"ac817bb7926dd706f6cc16477fb39b6a4609d06ff6eff53a4e89480ca76700ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac817bb7926dd706f6cc16477fb39b6a4609d06ff6eff53a4e89480ca76700ff\": not found" Jul 6 23:13:55.947790 kubelet[2793]: I0706 23:13:55.947776 2793 scope.go:117] "RemoveContainer" containerID="1b9fdf03ccdcaa75f553eb85a9f4c0074772c18b5eb6ba4b41cfbbfdfa984eff" Jul 6 23:13:55.948150 containerd[1487]: time="2025-07-06T23:13:55.948007066Z" level=error msg="ContainerStatus for 
\"1b9fdf03ccdcaa75f553eb85a9f4c0074772c18b5eb6ba4b41cfbbfdfa984eff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1b9fdf03ccdcaa75f553eb85a9f4c0074772c18b5eb6ba4b41cfbbfdfa984eff\": not found" Jul 6 23:13:55.948210 kubelet[2793]: E0706 23:13:55.948152 2793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1b9fdf03ccdcaa75f553eb85a9f4c0074772c18b5eb6ba4b41cfbbfdfa984eff\": not found" containerID="1b9fdf03ccdcaa75f553eb85a9f4c0074772c18b5eb6ba4b41cfbbfdfa984eff" Jul 6 23:13:55.948210 kubelet[2793]: I0706 23:13:55.948171 2793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1b9fdf03ccdcaa75f553eb85a9f4c0074772c18b5eb6ba4b41cfbbfdfa984eff"} err="failed to get container status \"1b9fdf03ccdcaa75f553eb85a9f4c0074772c18b5eb6ba4b41cfbbfdfa984eff\": rpc error: code = NotFound desc = an error occurred when try to find container \"1b9fdf03ccdcaa75f553eb85a9f4c0074772c18b5eb6ba4b41cfbbfdfa984eff\": not found" Jul 6 23:13:55.948210 kubelet[2793]: I0706 23:13:55.948183 2793 scope.go:117] "RemoveContainer" containerID="e56f2ee5941b0eeec62159b10cc23cdf0146f93a0ac7291d44b5e770a649dc28" Jul 6 23:13:55.949548 containerd[1487]: time="2025-07-06T23:13:55.949504841Z" level=info msg="RemoveContainer for \"e56f2ee5941b0eeec62159b10cc23cdf0146f93a0ac7291d44b5e770a649dc28\"" Jul 6 23:13:55.954221 containerd[1487]: time="2025-07-06T23:13:55.954163820Z" level=info msg="RemoveContainer for \"e56f2ee5941b0eeec62159b10cc23cdf0146f93a0ac7291d44b5e770a649dc28\" returns successfully" Jul 6 23:13:55.954568 kubelet[2793]: I0706 23:13:55.954464 2793 scope.go:117] "RemoveContainer" containerID="e56f2ee5941b0eeec62159b10cc23cdf0146f93a0ac7291d44b5e770a649dc28" Jul 6 23:13:55.954969 containerd[1487]: time="2025-07-06T23:13:55.954902327Z" level=error msg="ContainerStatus for \"e56f2ee5941b0eeec62159b10cc23cdf0146f93a0ac7291d44b5e770a649dc28\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e56f2ee5941b0eeec62159b10cc23cdf0146f93a0ac7291d44b5e770a649dc28\": not found" Jul 6 23:13:55.955075 kubelet[2793]: E0706 23:13:55.955043 2793 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e56f2ee5941b0eeec62159b10cc23cdf0146f93a0ac7291d44b5e770a649dc28\": not found" containerID="e56f2ee5941b0eeec62159b10cc23cdf0146f93a0ac7291d44b5e770a649dc28" Jul 6 23:13:55.955159 kubelet[2793]: I0706 23:13:55.955074 2793 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e56f2ee5941b0eeec62159b10cc23cdf0146f93a0ac7291d44b5e770a649dc28"} err="failed to get container status \"e56f2ee5941b0eeec62159b10cc23cdf0146f93a0ac7291d44b5e770a649dc28\": rpc error: code = NotFound desc = an error occurred when try to find container \"e56f2ee5941b0eeec62159b10cc23cdf0146f93a0ac7291d44b5e770a649dc28\": not found" Jul 6 23:13:55.969739 kubelet[2793]: I0706 23:13:55.969500 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-cilium-cgroup\") pod \"a377642c-ad77-4d5c-9fb3-88cc630d987d\" (UID: \"a377642c-ad77-4d5c-9fb3-88cc630d987d\") " Jul 6 23:13:55.969739 kubelet[2793]: I0706 23:13:55.969715 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-cilium-run\") pod \"a377642c-ad77-4d5c-9fb3-88cc630d987d\" (UID: \"a377642c-ad77-4d5c-9fb3-88cc630d987d\") " Jul 6 23:13:55.969998 kubelet[2793]: I0706 23:13:55.969775 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4jt4\" (UniqueName: \"kubernetes.io/projected/a377642c-ad77-4d5c-9fb3-88cc630d987d-kube-api-access-l4jt4\") pod \"a377642c-ad77-4d5c-9fb3-88cc630d987d\" (UID: \"a377642c-ad77-4d5c-9fb3-88cc630d987d\") " Jul 6 23:13:55.969998 kubelet[2793]: I0706 23:13:55.969633 2793 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a377642c-ad77-4d5c-9fb3-88cc630d987d" (UID: "a377642c-ad77-4d5c-9fb3-88cc630d987d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:13:55.969998 kubelet[2793]: I0706 23:13:55.969856 2793 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a377642c-ad77-4d5c-9fb3-88cc630d987d" (UID: "a377642c-ad77-4d5c-9fb3-88cc630d987d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:13:55.970608 kubelet[2793]: I0706 23:13:55.970434 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a377642c-ad77-4d5c-9fb3-88cc630d987d-clustermesh-secrets\") pod \"a377642c-ad77-4d5c-9fb3-88cc630d987d\" (UID: \"a377642c-ad77-4d5c-9fb3-88cc630d987d\") " Jul 6 23:13:55.970608 kubelet[2793]: I0706 23:13:55.970487 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-etc-cni-netd\") pod \"a377642c-ad77-4d5c-9fb3-88cc630d987d\" (UID: \"a377642c-ad77-4d5c-9fb3-88cc630d987d\") " Jul 6 23:13:55.970608 kubelet[2793]: I0706 23:13:55.970517 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-lib-modules\") pod \"a377642c-ad77-4d5c-9fb3-88cc630d987d\" (UID: \"a377642c-ad77-4d5c-9fb3-88cc630d987d\") " Jul 6 23:13:55.970608 kubelet[2793]: I0706 23:13:55.970556 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph2w8\" (UniqueName: \"kubernetes.io/projected/8dff44f5-4a37-4dc5-bcde-484af4530e1f-kube-api-access-ph2w8\") pod \"8dff44f5-4a37-4dc5-bcde-484af4530e1f\" (UID: \"8dff44f5-4a37-4dc5-bcde-484af4530e1f\") " Jul 6 23:13:55.970608 kubelet[2793]: I0706 23:13:55.970587 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-bpf-maps\") pod \"a377642c-ad77-4d5c-9fb3-88cc630d987d\" (UID: \"a377642c-ad77-4d5c-9fb3-88cc630d987d\") " Jul 6 23:13:55.970949 kubelet[2793]: I0706 23:13:55.970618 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-host-proc-sys-kernel\") pod \"a377642c-ad77-4d5c-9fb3-88cc630d987d\" (UID: \"a377642c-ad77-4d5c-9fb3-88cc630d987d\") " Jul 6 
23:13:55.970949 kubelet[2793]: I0706 23:13:55.970650 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-xtables-lock\") pod \"a377642c-ad77-4d5c-9fb3-88cc630d987d\" (UID: \"a377642c-ad77-4d5c-9fb3-88cc630d987d\") " Jul 6 23:13:55.970949 kubelet[2793]: I0706 23:13:55.970678 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-hostproc\") pod \"a377642c-ad77-4d5c-9fb3-88cc630d987d\" (UID: \"a377642c-ad77-4d5c-9fb3-88cc630d987d\") " Jul 6 23:13:55.970949 kubelet[2793]: I0706 23:13:55.970707 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-host-proc-sys-net\") pod \"a377642c-ad77-4d5c-9fb3-88cc630d987d\" (UID: \"a377642c-ad77-4d5c-9fb3-88cc630d987d\") " Jul 6 23:13:55.970949 kubelet[2793]: I0706 23:13:55.970738 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-cni-path\") pod \"a377642c-ad77-4d5c-9fb3-88cc630d987d\" (UID: \"a377642c-ad77-4d5c-9fb3-88cc630d987d\") " Jul 6 23:13:55.970949 kubelet[2793]: I0706 23:13:55.970777 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a377642c-ad77-4d5c-9fb3-88cc630d987d-cilium-config-path\") pod \"a377642c-ad77-4d5c-9fb3-88cc630d987d\" (UID: \"a377642c-ad77-4d5c-9fb3-88cc630d987d\") " Jul 6 23:13:55.971331 kubelet[2793]: I0706 23:13:55.970809 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8dff44f5-4a37-4dc5-bcde-484af4530e1f-cilium-config-path\") pod \"8dff44f5-4a37-4dc5-bcde-484af4530e1f\" (UID: \"8dff44f5-4a37-4dc5-bcde-484af4530e1f\") " Jul 6 23:13:55.971331 kubelet[2793]: I0706 23:13:55.970880 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a377642c-ad77-4d5c-9fb3-88cc630d987d-hubble-tls\") pod \"a377642c-ad77-4d5c-9fb3-88cc630d987d\" (UID: \"a377642c-ad77-4d5c-9fb3-88cc630d987d\") " Jul 6 23:13:55.971331 kubelet[2793]: I0706 23:13:55.970983 2793 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-cilium-cgroup\") on node \"ci-4230-2-1-6-eb6896cb23\" DevicePath \"\"" Jul 6 23:13:55.971331 kubelet[2793]: I0706 23:13:55.971006 2793 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-cilium-run\") on node \"ci-4230-2-1-6-eb6896cb23\" DevicePath \"\"" Jul 6 23:13:55.973622 kubelet[2793]: I0706 23:13:55.973514 2793 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a377642c-ad77-4d5c-9fb3-88cc630d987d" (UID: "a377642c-ad77-4d5c-9fb3-88cc630d987d"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:13:55.974523 kubelet[2793]: I0706 23:13:55.973914 2793 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a377642c-ad77-4d5c-9fb3-88cc630d987d" (UID: "a377642c-ad77-4d5c-9fb3-88cc630d987d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:13:55.974523 kubelet[2793]: I0706 23:13:55.973966 2793 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-hostproc" (OuterVolumeSpecName: "hostproc") pod "a377642c-ad77-4d5c-9fb3-88cc630d987d" (UID: "a377642c-ad77-4d5c-9fb3-88cc630d987d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:13:55.974523 kubelet[2793]: I0706 23:13:55.973988 2793 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a377642c-ad77-4d5c-9fb3-88cc630d987d" (UID: "a377642c-ad77-4d5c-9fb3-88cc630d987d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:13:55.974523 kubelet[2793]: I0706 23:13:55.974011 2793 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-cni-path" (OuterVolumeSpecName: "cni-path") pod "a377642c-ad77-4d5c-9fb3-88cc630d987d" (UID: "a377642c-ad77-4d5c-9fb3-88cc630d987d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:13:55.974711 kubelet[2793]: I0706 23:13:55.974551 2793 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a377642c-ad77-4d5c-9fb3-88cc630d987d" (UID: "a377642c-ad77-4d5c-9fb3-88cc630d987d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:13:55.974711 kubelet[2793]: I0706 23:13:55.974579 2793 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a377642c-ad77-4d5c-9fb3-88cc630d987d" (UID: "a377642c-ad77-4d5c-9fb3-88cc630d987d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:13:55.977370 kubelet[2793]: I0706 23:13:55.977276 2793 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a377642c-ad77-4d5c-9fb3-88cc630d987d" (UID: "a377642c-ad77-4d5c-9fb3-88cc630d987d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:13:55.977370 kubelet[2793]: I0706 23:13:55.977370 2793 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a377642c-ad77-4d5c-9fb3-88cc630d987d-kube-api-access-l4jt4" (OuterVolumeSpecName: "kube-api-access-l4jt4") pod "a377642c-ad77-4d5c-9fb3-88cc630d987d" (UID: "a377642c-ad77-4d5c-9fb3-88cc630d987d"). InnerVolumeSpecName "kube-api-access-l4jt4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:13:55.977516 kubelet[2793]: I0706 23:13:55.977421 2793 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a377642c-ad77-4d5c-9fb3-88cc630d987d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a377642c-ad77-4d5c-9fb3-88cc630d987d" (UID: "a377642c-ad77-4d5c-9fb3-88cc630d987d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:13:55.979774 kubelet[2793]: I0706 23:13:55.979654 2793 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a377642c-ad77-4d5c-9fb3-88cc630d987d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a377642c-ad77-4d5c-9fb3-88cc630d987d" (UID: "a377642c-ad77-4d5c-9fb3-88cc630d987d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 6 23:13:55.981516 kubelet[2793]: I0706 23:13:55.981449 2793 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dff44f5-4a37-4dc5-bcde-484af4530e1f-kube-api-access-ph2w8" (OuterVolumeSpecName: "kube-api-access-ph2w8") pod "8dff44f5-4a37-4dc5-bcde-484af4530e1f" (UID: "8dff44f5-4a37-4dc5-bcde-484af4530e1f"). InnerVolumeSpecName "kube-api-access-ph2w8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:13:55.982960 kubelet[2793]: I0706 23:13:55.982909 2793 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dff44f5-4a37-4dc5-bcde-484af4530e1f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8dff44f5-4a37-4dc5-bcde-484af4530e1f" (UID: "8dff44f5-4a37-4dc5-bcde-484af4530e1f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:13:55.983144 kubelet[2793]: I0706 23:13:55.983092 2793 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a377642c-ad77-4d5c-9fb3-88cc630d987d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a377642c-ad77-4d5c-9fb3-88cc630d987d" (UID: "a377642c-ad77-4d5c-9fb3-88cc630d987d"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:13:56.071786 kubelet[2793]: I0706 23:13:56.071710 2793 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ph2w8\" (UniqueName: \"kubernetes.io/projected/8dff44f5-4a37-4dc5-bcde-484af4530e1f-kube-api-access-ph2w8\") on node \"ci-4230-2-1-6-eb6896cb23\" DevicePath \"\"" Jul 6 23:13:56.071786 kubelet[2793]: I0706 23:13:56.071765 2793 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-bpf-maps\") on node \"ci-4230-2-1-6-eb6896cb23\" DevicePath \"\"" Jul 6 23:13:56.071786 kubelet[2793]: I0706 23:13:56.071784 2793 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-host-proc-sys-kernel\") on node \"ci-4230-2-1-6-eb6896cb23\" DevicePath \"\"" Jul 6 23:13:56.071786 kubelet[2793]: I0706 23:13:56.071802 2793 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-xtables-lock\") on node \"ci-4230-2-1-6-eb6896cb23\" DevicePath \"\"" Jul 6 23:13:56.072375 kubelet[2793]: I0706 23:13:56.071817 2793 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-hostproc\") on node \"ci-4230-2-1-6-eb6896cb23\" DevicePath \"\"" Jul 6 23:13:56.072375 kubelet[2793]: I0706 23:13:56.071857 2793 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-host-proc-sys-net\") on node \"ci-4230-2-1-6-eb6896cb23\" DevicePath \"\"" Jul 6 23:13:56.072375 kubelet[2793]: I0706 23:13:56.071875 2793 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-cni-path\") on node \"ci-4230-2-1-6-eb6896cb23\" DevicePath \"\"" Jul 6 23:13:56.072375 kubelet[2793]: I0706 23:13:56.071891 2793 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a377642c-ad77-4d5c-9fb3-88cc630d987d-cilium-config-path\") on node \"ci-4230-2-1-6-eb6896cb23\" DevicePath \"\"" Jul 6 23:13:56.072375 kubelet[2793]: I0706 23:13:56.071906 2793 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8dff44f5-4a37-4dc5-bcde-484af4530e1f-cilium-config-path\") on node \"ci-4230-2-1-6-eb6896cb23\" DevicePath \"\"" Jul 6 23:13:56.072375 kubelet[2793]: I0706 23:13:56.071927 2793 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a377642c-ad77-4d5c-9fb3-88cc630d987d-hubble-tls\") on node \"ci-4230-2-1-6-eb6896cb23\" DevicePath \"\"" Jul 6 23:13:56.072375 kubelet[2793]: I0706 23:13:56.071943 2793 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l4jt4\" (UniqueName: \"kubernetes.io/projected/a377642c-ad77-4d5c-9fb3-88cc630d987d-kube-api-access-l4jt4\") on node \"ci-4230-2-1-6-eb6896cb23\" DevicePath \"\"" Jul 6 23:13:56.072375 kubelet[2793]: I0706 23:13:56.071957 2793 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a377642c-ad77-4d5c-9fb3-88cc630d987d-clustermesh-secrets\") on node \"ci-4230-2-1-6-eb6896cb23\" DevicePath \"\"" Jul 6 23:13:56.072999 kubelet[2793]: I0706 
23:13:56.071972 2793 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-etc-cni-netd\") on node \"ci-4230-2-1-6-eb6896cb23\" DevicePath \"\"" Jul 6 23:13:56.072999 kubelet[2793]: I0706 23:13:56.071989 2793 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a377642c-ad77-4d5c-9fb3-88cc630d987d-lib-modules\") on node \"ci-4230-2-1-6-eb6896cb23\" DevicePath \"\"" Jul 6 23:13:56.212316 systemd[1]: Removed slice kubepods-burstable-poda377642c_ad77_4d5c_9fb3_88cc630d987d.slice - libcontainer container kubepods-burstable-poda377642c_ad77_4d5c_9fb3_88cc630d987d.slice. Jul 6 23:13:56.212902 systemd[1]: kubepods-burstable-poda377642c_ad77_4d5c_9fb3_88cc630d987d.slice: Consumed 7.794s CPU time, 124.7M memory peak, 136K read from disk, 12.9M written to disk. Jul 6 23:13:56.218698 systemd[1]: Removed slice kubepods-besteffort-pod8dff44f5_4a37_4dc5_bcde_484af4530e1f.slice - libcontainer container kubepods-besteffort-pod8dff44f5_4a37_4dc5_bcde_484af4530e1f.slice. Jul 6 23:13:56.262281 kubelet[2793]: I0706 23:13:56.262229 2793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a377642c-ad77-4d5c-9fb3-88cc630d987d" path="/var/lib/kubelet/pods/a377642c-ad77-4d5c-9fb3-88cc630d987d/volumes" Jul 6 23:13:56.616873 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c99567c3134a2c2719ec339028fd1c4b8b3bd3c6258c1a916a2fc6ff31eaeab6-rootfs.mount: Deactivated successfully. Jul 6 23:13:56.617020 systemd[1]: var-lib-kubelet-pods-8dff44f5\x2d4a37\x2d4dc5\x2dbcde\x2d484af4530e1f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dph2w8.mount: Deactivated successfully. Jul 6 23:13:56.617096 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-649db00ca1ba7f0079922e9af5290a330980f1309914e3a0793ee93b5b025bf2-rootfs.mount: Deactivated successfully. Jul 6 23:13:56.617202 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-649db00ca1ba7f0079922e9af5290a330980f1309914e3a0793ee93b5b025bf2-shm.mount: Deactivated successfully. Jul 6 23:13:56.617295 systemd[1]: var-lib-kubelet-pods-a377642c\x2dad77\x2d4d5c\x2d9fb3\x2d88cc630d987d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl4jt4.mount: Deactivated successfully. Jul 6 23:13:56.617368 systemd[1]: var-lib-kubelet-pods-a377642c\x2dad77\x2d4d5c\x2d9fb3\x2d88cc630d987d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 6 23:13:56.617432 systemd[1]: var-lib-kubelet-pods-a377642c\x2dad77\x2d4d5c\x2d9fb3\x2d88cc630d987d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 6 23:13:57.711497 sshd[4413]: Connection closed by 139.178.89.65 port 50520 Jul 6 23:13:57.712410 sshd-session[4411]: pam_unix(sshd:session): session closed for user core Jul 6 23:13:57.716690 systemd[1]: sshd@21-78.47.124.97:22-139.178.89.65:50520.service: Deactivated successfully. Jul 6 23:13:57.720570 systemd[1]: session-21.scope: Deactivated successfully. Jul 6 23:13:57.720847 systemd[1]: session-21.scope: Consumed 1.160s CPU time, 23.5M memory peak. Jul 6 23:13:57.721514 systemd-logind[1477]: Session 21 logged out. Waiting for processes to exit. Jul 6 23:13:57.723335 systemd-logind[1477]: Removed session 21. Jul 6 23:13:57.901536 systemd[1]: Started sshd@22-78.47.124.97:22-139.178.89.65:50528.service - OpenSSH per-connection server daemon (139.178.89.65:50528). 
Jul 6 23:13:58.260268 kubelet[2793]: I0706 23:13:58.259971 2793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dff44f5-4a37-4dc5-bcde-484af4530e1f" path="/var/lib/kubelet/pods/8dff44f5-4a37-4dc5-bcde-484af4530e1f/volumes" Jul 6 23:13:58.995196 sshd[4574]: Accepted publickey for core from 139.178.89.65 port 50528 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:13:58.996903 sshd-session[4574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:13:59.006590 systemd-logind[1477]: New session 22 of user core. Jul 6 23:13:59.015930 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 6 23:14:00.424282 kubelet[2793]: E0706 23:14:00.424115 2793 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 6 23:14:00.525871 systemd[1]: Created slice kubepods-burstable-pod09081dfc_e1d6_4aa2_a36e_65ebf57db400.slice - libcontainer container kubepods-burstable-pod09081dfc_e1d6_4aa2_a36e_65ebf57db400.slice. Jul 6 23:14:00.600117 kubelet[2793]: I0706 23:14:00.600001 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/09081dfc-e1d6-4aa2-a36e-65ebf57db400-hubble-tls\") pod \"cilium-82s8j\" (UID: \"09081dfc-e1d6-4aa2-a36e-65ebf57db400\") " pod="kube-system/cilium-82s8j" Jul 6 23:14:00.600858 kubelet[2793]: I0706 23:14:00.600536 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09081dfc-e1d6-4aa2-a36e-65ebf57db400-xtables-lock\") pod \"cilium-82s8j\" (UID: \"09081dfc-e1d6-4aa2-a36e-65ebf57db400\") " pod="kube-system/cilium-82s8j" Jul 6 23:14:00.600858 kubelet[2793]: I0706 23:14:00.600638 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09081dfc-e1d6-4aa2-a36e-65ebf57db400-cilium-config-path\") pod \"cilium-82s8j\" (UID: \"09081dfc-e1d6-4aa2-a36e-65ebf57db400\") " pod="kube-system/cilium-82s8j" Jul 6 23:14:00.600858 kubelet[2793]: I0706 23:14:00.600727 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bsj5\" (UniqueName: \"kubernetes.io/projected/09081dfc-e1d6-4aa2-a36e-65ebf57db400-kube-api-access-6bsj5\") pod \"cilium-82s8j\" (UID: \"09081dfc-e1d6-4aa2-a36e-65ebf57db400\") " pod="kube-system/cilium-82s8j" Jul 6 23:14:00.600858 kubelet[2793]: I0706 23:14:00.600799 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/09081dfc-e1d6-4aa2-a36e-65ebf57db400-bpf-maps\") pod \"cilium-82s8j\" (UID: \"09081dfc-e1d6-4aa2-a36e-65ebf57db400\") " pod="kube-system/cilium-82s8j" Jul 6 23:14:00.601478 kubelet[2793]: I0706 23:14:00.600913 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/09081dfc-e1d6-4aa2-a36e-65ebf57db400-host-proc-sys-kernel\") pod \"cilium-82s8j\" (UID: \"09081dfc-e1d6-4aa2-a36e-65ebf57db400\") " pod="kube-system/cilium-82s8j" Jul 6 23:14:00.601478 kubelet[2793]: I0706 23:14:00.601055 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/09081dfc-e1d6-4aa2-a36e-65ebf57db400-cilium-ipsec-secrets\") pod \"cilium-82s8j\" (UID: \"09081dfc-e1d6-4aa2-a36e-65ebf57db400\") " pod="kube-system/cilium-82s8j" Jul 6 23:14:00.601478 kubelet[2793]: I0706 23:14:00.601177 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/09081dfc-e1d6-4aa2-a36e-65ebf57db400-host-proc-sys-net\") pod \"cilium-82s8j\" (UID: \"09081dfc-e1d6-4aa2-a36e-65ebf57db400\") " pod="kube-system/cilium-82s8j" Jul 6 23:14:00.601478 kubelet[2793]: I0706 23:14:00.601288 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/09081dfc-e1d6-4aa2-a36e-65ebf57db400-cilium-cgroup\") pod \"cilium-82s8j\" (UID: \"09081dfc-e1d6-4aa2-a36e-65ebf57db400\") " pod="kube-system/cilium-82s8j" Jul 6 23:14:00.601478 kubelet[2793]: I0706 23:14:00.601377 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09081dfc-e1d6-4aa2-a36e-65ebf57db400-lib-modules\") pod \"cilium-82s8j\" (UID: \"09081dfc-e1d6-4aa2-a36e-65ebf57db400\") " pod="kube-system/cilium-82s8j" Jul 6 23:14:00.601478 kubelet[2793]: I0706 23:14:00.601442 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/09081dfc-e1d6-4aa2-a36e-65ebf57db400-cni-path\") pod \"cilium-82s8j\" (UID: \"09081dfc-e1d6-4aa2-a36e-65ebf57db400\") " pod="kube-system/cilium-82s8j" Jul 6 23:14:00.601764 kubelet[2793]: I0706 23:14:00.601499 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/09081dfc-e1d6-4aa2-a36e-65ebf57db400-etc-cni-netd\") pod \"cilium-82s8j\" (UID: \"09081dfc-e1d6-4aa2-a36e-65ebf57db400\") " pod="kube-system/cilium-82s8j" Jul 6 23:14:00.601764 kubelet[2793]: I0706 23:14:00.601531 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/09081dfc-e1d6-4aa2-a36e-65ebf57db400-cilium-run\") pod \"cilium-82s8j\" (UID: \"09081dfc-e1d6-4aa2-a36e-65ebf57db400\") " pod="kube-system/cilium-82s8j" Jul 6 23:14:00.601764 kubelet[2793]: I0706 23:14:00.601571 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/09081dfc-e1d6-4aa2-a36e-65ebf57db400-clustermesh-secrets\") pod \"cilium-82s8j\" (UID: \"09081dfc-e1d6-4aa2-a36e-65ebf57db400\") " pod="kube-system/cilium-82s8j" Jul 6 23:14:00.601764 kubelet[2793]: I0706 23:14:00.601603 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/09081dfc-e1d6-4aa2-a36e-65ebf57db400-hostproc\") pod \"cilium-82s8j\" (UID: \"09081dfc-e1d6-4aa2-a36e-65ebf57db400\") " pod="kube-system/cilium-82s8j" Jul 6 23:14:00.724799 sshd[4576]: Connection closed by 139.178.89.65 port 50528 Jul 6 23:14:00.729947 sshd-session[4574]: pam_unix(sshd:session): session closed for user core Jul 6 23:14:00.736769 systemd[1]: sshd@22-78.47.124.97:22-139.178.89.65:50528.service: Deactivated successfully. Jul 6 23:14:00.739814 systemd[1]: session-22.scope: Deactivated successfully. 
Jul 6 23:14:00.753464 systemd-logind[1477]: Session 22 logged out. Waiting for processes to exit. Jul 6 23:14:00.755188 systemd-logind[1477]: Removed session 22. Jul 6 23:14:00.833285 containerd[1487]: time="2025-07-06T23:14:00.833209818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-82s8j,Uid:09081dfc-e1d6-4aa2-a36e-65ebf57db400,Namespace:kube-system,Attempt:0,}" Jul 6 23:14:00.869574 containerd[1487]: time="2025-07-06T23:14:00.868512714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:14:00.869574 containerd[1487]: time="2025-07-06T23:14:00.868582521Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:14:00.869574 containerd[1487]: time="2025-07-06T23:14:00.868602562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:14:00.869850 containerd[1487]: time="2025-07-06T23:14:00.869507684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:14:00.892554 systemd[1]: Started cri-containerd-7bb026b2ec1476ed9e7535574df87efd132071c6392c21c4ec753637e07c84ef.scope - libcontainer container 7bb026b2ec1476ed9e7535574df87efd132071c6392c21c4ec753637e07c84ef. Jul 6 23:14:00.910838 systemd[1]: Started sshd@23-78.47.124.97:22-139.178.89.65:41728.service - OpenSSH per-connection server daemon (139.178.89.65:41728). Jul 6 23:14:00.937825 containerd[1487]: time="2025-07-06T23:14:00.937553686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-82s8j,Uid:09081dfc-e1d6-4aa2-a36e-65ebf57db400,Namespace:kube-system,Attempt:0,} returns sandbox id \"7bb026b2ec1476ed9e7535574df87efd132071c6392c21c4ec753637e07c84ef\"" Jul 6 23:14:00.946423 containerd[1487]: time="2025-07-06T23:14:00.946383041Z" level=info msg="CreateContainer within sandbox \"7bb026b2ec1476ed9e7535574df87efd132071c6392c21c4ec753637e07c84ef\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:14:00.958894 containerd[1487]: time="2025-07-06T23:14:00.958824280Z" level=info msg="CreateContainer within sandbox \"7bb026b2ec1476ed9e7535574df87efd132071c6392c21c4ec753637e07c84ef\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"548ca9514229d997d2e2f04b903bf9634babc6a76beb019cec8da7fa11f55618\"" Jul 6 23:14:00.961387 containerd[1487]: time="2025-07-06T23:14:00.961352428Z" level=info msg="StartContainer for \"548ca9514229d997d2e2f04b903bf9634babc6a76beb019cec8da7fa11f55618\"" Jul 6 23:14:00.990424 systemd[1]: Started cri-containerd-548ca9514229d997d2e2f04b903bf9634babc6a76beb019cec8da7fa11f55618.scope - libcontainer container 548ca9514229d997d2e2f04b903bf9634babc6a76beb019cec8da7fa11f55618. Jul 6 23:14:01.021897 containerd[1487]: time="2025-07-06T23:14:01.021826309Z" level=info msg="StartContainer for \"548ca9514229d997d2e2f04b903bf9634babc6a76beb019cec8da7fa11f55618\" returns successfully" Jul 6 23:14:01.036526 systemd[1]: cri-containerd-548ca9514229d997d2e2f04b903bf9634babc6a76beb019cec8da7fa11f55618.scope: Deactivated successfully. 
Jul 6 23:14:01.075154 containerd[1487]: time="2025-07-06T23:14:01.075031617Z" level=info msg="shim disconnected" id=548ca9514229d997d2e2f04b903bf9634babc6a76beb019cec8da7fa11f55618 namespace=k8s.io Jul 6 23:14:01.075725 containerd[1487]: time="2025-07-06T23:14:01.075117625Z" level=warning msg="cleaning up after shim disconnected" id=548ca9514229d997d2e2f04b903bf9634babc6a76beb019cec8da7fa11f55618 namespace=k8s.io Jul 6 23:14:01.075725 containerd[1487]: time="2025-07-06T23:14:01.075246076Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:14:01.937954 containerd[1487]: time="2025-07-06T23:14:01.937852738Z" level=info msg="CreateContainer within sandbox \"7bb026b2ec1476ed9e7535574df87efd132071c6392c21c4ec753637e07c84ef\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:14:01.962749 containerd[1487]: time="2025-07-06T23:14:01.962697134Z" level=info msg="CreateContainer within sandbox \"7bb026b2ec1476ed9e7535574df87efd132071c6392c21c4ec753637e07c84ef\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d96674f1aedf927c5798ff13c7021eb00edf1c92a040558345df1e750f8aedad\"" Jul 6 23:14:01.963797 containerd[1487]: time="2025-07-06T23:14:01.963754549Z" level=info msg="StartContainer for \"d96674f1aedf927c5798ff13c7021eb00edf1c92a040558345df1e750f8aedad\"" Jul 6 23:14:02.001855 sshd[4624]: Accepted publickey for core from 139.178.89.65 port 41728 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:14:02.006470 systemd[1]: Started cri-containerd-d96674f1aedf927c5798ff13c7021eb00edf1c92a040558345df1e750f8aedad.scope - libcontainer container d96674f1aedf927c5798ff13c7021eb00edf1c92a040558345df1e750f8aedad. Jul 6 23:14:02.006472 sshd-session[4624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:14:02.017494 systemd-logind[1477]: New session 23 of user core. Jul 6 23:14:02.022270 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 6 23:14:02.051991 containerd[1487]: time="2025-07-06T23:14:02.051901121Z" level=info msg="StartContainer for \"d96674f1aedf927c5798ff13c7021eb00edf1c92a040558345df1e750f8aedad\" returns successfully" Jul 6 23:14:02.064043 systemd[1]: cri-containerd-d96674f1aedf927c5798ff13c7021eb00edf1c92a040558345df1e750f8aedad.scope: Deactivated successfully. Jul 6 23:14:02.093957 containerd[1487]: time="2025-07-06T23:14:02.093733446Z" level=info msg="shim disconnected" id=d96674f1aedf927c5798ff13c7021eb00edf1c92a040558345df1e750f8aedad namespace=k8s.io Jul 6 23:14:02.093957 containerd[1487]: time="2025-07-06T23:14:02.093820534Z" level=warning msg="cleaning up after shim disconnected" id=d96674f1aedf927c5798ff13c7021eb00edf1c92a040558345df1e750f8aedad namespace=k8s.io Jul 6 23:14:02.093957 containerd[1487]: time="2025-07-06T23:14:02.093842056Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:14:02.708793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d96674f1aedf927c5798ff13c7021eb00edf1c92a040558345df1e750f8aedad-rootfs.mount: Deactivated successfully. Jul 6 23:14:02.744676 sshd[4718]: Connection closed by 139.178.89.65 port 41728 Jul 6 23:14:02.745606 sshd-session[4624]: pam_unix(sshd:session): session closed for user core Jul 6 23:14:02.751323 systemd-logind[1477]: Session 23 logged out. Waiting for processes to exit. Jul 6 23:14:02.751873 systemd[1]: sshd@23-78.47.124.97:22-139.178.89.65:41728.service: Deactivated successfully. 
Jul 6 23:14:02.756512 systemd[1]: session-23.scope: Deactivated successfully. Jul 6 23:14:02.759460 systemd-logind[1477]: Removed session 23. Jul 6 23:14:02.943523 systemd[1]: Started sshd@24-78.47.124.97:22-139.178.89.65:41734.service - OpenSSH per-connection server daemon (139.178.89.65:41734). Jul 6 23:14:02.952447 containerd[1487]: time="2025-07-06T23:14:02.951602208Z" level=info msg="CreateContainer within sandbox \"7bb026b2ec1476ed9e7535574df87efd132071c6392c21c4ec753637e07c84ef\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 6 23:14:02.981240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount601877366.mount: Deactivated successfully. Jul 6 23:14:02.993146 containerd[1487]: time="2025-07-06T23:14:02.992882563Z" level=info msg="CreateContainer within sandbox \"7bb026b2ec1476ed9e7535574df87efd132071c6392c21c4ec753637e07c84ef\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"36f41664a7c2c770669d0af2c56f2632423e675486cd3ce8c8b6e7cc43b1ef28\"" Jul 6 23:14:02.996249 containerd[1487]: time="2025-07-06T23:14:02.994337494Z" level=info msg="StartContainer for \"36f41664a7c2c770669d0af2c56f2632423e675486cd3ce8c8b6e7cc43b1ef28\"" Jul 6 23:14:03.048819 systemd[1]: Started cri-containerd-36f41664a7c2c770669d0af2c56f2632423e675486cd3ce8c8b6e7cc43b1ef28.scope - libcontainer container 36f41664a7c2c770669d0af2c56f2632423e675486cd3ce8c8b6e7cc43b1ef28. Jul 6 23:14:03.093522 containerd[1487]: time="2025-07-06T23:14:03.093474097Z" level=info msg="StartContainer for \"36f41664a7c2c770669d0af2c56f2632423e675486cd3ce8c8b6e7cc43b1ef28\" returns successfully" Jul 6 23:14:03.097987 systemd[1]: cri-containerd-36f41664a7c2c770669d0af2c56f2632423e675486cd3ce8c8b6e7cc43b1ef28.scope: Deactivated successfully. Jul 6 23:14:03.127089 containerd[1487]: time="2025-07-06T23:14:03.126989953Z" level=info msg="shim disconnected" id=36f41664a7c2c770669d0af2c56f2632423e675486cd3ce8c8b6e7cc43b1ef28 namespace=k8s.io Jul 6 23:14:03.127089 containerd[1487]: time="2025-07-06T23:14:03.127060560Z" level=warning msg="cleaning up after shim disconnected" id=36f41664a7c2c770669d0af2c56f2632423e675486cd3ce8c8b6e7cc43b1ef28 namespace=k8s.io Jul 6 23:14:03.127089 containerd[1487]: time="2025-07-06T23:14:03.127072681Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:14:03.709278 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36f41664a7c2c770669d0af2c56f2632423e675486cd3ce8c8b6e7cc43b1ef28-rootfs.mount: Deactivated successfully. Jul 6 23:14:03.956560 containerd[1487]: time="2025-07-06T23:14:03.956364237Z" level=info msg="CreateContainer within sandbox \"7bb026b2ec1476ed9e7535574df87efd132071c6392c21c4ec753637e07c84ef\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 6 23:14:03.979614 containerd[1487]: time="2025-07-06T23:14:03.979389750Z" level=info msg="CreateContainer within sandbox \"7bb026b2ec1476ed9e7535574df87efd132071c6392c21c4ec753637e07c84ef\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"341c18ccc5fb2858cabb8f9b1bb2c21320ca8ea1dc797798198fc9ea298f1af2\"" Jul 6 23:14:03.981190 containerd[1487]: time="2025-07-06T23:14:03.980414122Z" level=info msg="StartContainer for \"341c18ccc5fb2858cabb8f9b1bb2c21320ca8ea1dc797798198fc9ea298f1af2\"" Jul 6 23:14:04.016338 systemd[1]: Started cri-containerd-341c18ccc5fb2858cabb8f9b1bb2c21320ca8ea1dc797798198fc9ea298f1af2.scope - libcontainer container 341c18ccc5fb2858cabb8f9b1bb2c21320ca8ea1dc797798198fc9ea298f1af2. 
Jul 6 23:14:04.033963 sshd[4762]: Accepted publickey for core from 139.178.89.65 port 41734 ssh2: RSA SHA256:3q3uKGA7TZmlUpdAn9FattmwR+Ld0dURBV17/HvM010 Jul 6 23:14:04.035232 sshd-session[4762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:14:04.047537 systemd-logind[1477]: New session 24 of user core. Jul 6 23:14:04.052722 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 6 23:14:04.055422 systemd[1]: cri-containerd-341c18ccc5fb2858cabb8f9b1bb2c21320ca8ea1dc797798198fc9ea298f1af2.scope: Deactivated successfully. Jul 6 23:14:04.058857 containerd[1487]: time="2025-07-06T23:14:04.058738972Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09081dfc_e1d6_4aa2_a36e_65ebf57db400.slice/cri-containerd-341c18ccc5fb2858cabb8f9b1bb2c21320ca8ea1dc797798198fc9ea298f1af2.scope/memory.events\": no such file or directory" Jul 6 23:14:04.061318 containerd[1487]: time="2025-07-06T23:14:04.061243997Z" level=info msg="StartContainer for \"341c18ccc5fb2858cabb8f9b1bb2c21320ca8ea1dc797798198fc9ea298f1af2\" returns successfully" Jul 6 23:14:04.089728 containerd[1487]: time="2025-07-06T23:14:04.089625832Z" level=info msg="shim disconnected" id=341c18ccc5fb2858cabb8f9b1bb2c21320ca8ea1dc797798198fc9ea298f1af2 namespace=k8s.io Jul 6 23:14:04.090069 containerd[1487]: time="2025-07-06T23:14:04.090037829Z" level=warning msg="cleaning up after shim disconnected" id=341c18ccc5fb2858cabb8f9b1bb2c21320ca8ea1dc797798198fc9ea298f1af2 namespace=k8s.io Jul 6 23:14:04.090212 containerd[1487]: time="2025-07-06T23:14:04.090185562Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:14:04.710173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-341c18ccc5fb2858cabb8f9b1bb2c21320ca8ea1dc797798198fc9ea298f1af2-rootfs.mount: Deactivated successfully. Jul 6 23:14:04.959885 containerd[1487]: time="2025-07-06T23:14:04.959724828Z" level=info msg="CreateContainer within sandbox \"7bb026b2ec1476ed9e7535574df87efd132071c6392c21c4ec753637e07c84ef\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 6 23:14:04.982199 containerd[1487]: time="2025-07-06T23:14:04.980249435Z" level=info msg="CreateContainer within sandbox \"7bb026b2ec1476ed9e7535574df87efd132071c6392c21c4ec753637e07c84ef\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b4f0baefd1f1d27742ba775885df083c4105efa8aa09c07df52a3b9cafadb817\"" Jul 6 23:14:04.982199 containerd[1487]: time="2025-07-06T23:14:04.980882572Z" level=info msg="StartContainer for \"b4f0baefd1f1d27742ba775885df083c4105efa8aa09c07df52a3b9cafadb817\"" Jul 6 23:14:04.982327 kubelet[2793]: I0706 23:14:04.980492 2793 setters.go:618] "Node became not ready" node="ci-4230-2-1-6-eb6896cb23" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-06T23:14:04Z","lastTransitionTime":"2025-07-06T23:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 6 23:14:05.038627 systemd[1]: Started cri-containerd-b4f0baefd1f1d27742ba775885df083c4105efa8aa09c07df52a3b9cafadb817.scope - libcontainer container b4f0baefd1f1d27742ba775885df083c4105efa8aa09c07df52a3b9cafadb817. 
Jul 6 23:14:05.079073 containerd[1487]: time="2025-07-06T23:14:05.078950719Z" level=info msg="StartContainer for \"b4f0baefd1f1d27742ba775885df083c4105efa8aa09c07df52a3b9cafadb817\" returns successfully" Jul 6 23:14:05.407166 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jul 6 23:14:05.983518 kubelet[2793]: I0706 23:14:05.983435 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-82s8j" podStartSLOduration=5.983415655 podStartE2EDuration="5.983415655s" podCreationTimestamp="2025-07-06 23:14:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:14:05.983020699 +0000 UTC m=+225.872128709" watchObservedRunningTime="2025-07-06 23:14:05.983415655 +0000 UTC m=+225.872523665" Jul 6 23:14:08.399681 systemd-networkd[1391]: lxc_health: Link UP Jul 6 23:14:08.409822 systemd-networkd[1391]: lxc_health: Gained carrier Jul 6 23:14:10.026378 systemd-networkd[1391]: lxc_health: Gained IPv6LL Jul 6 23:14:15.485818 systemd[1]: run-containerd-runc-k8s.io-b4f0baefd1f1d27742ba775885df083c4105efa8aa09c07df52a3b9cafadb817-runc.KUqPPt.mount: Deactivated successfully. Jul 6 23:14:15.731241 sshd[4850]: Connection closed by 139.178.89.65 port 41734 Jul 6 23:14:15.732345 sshd-session[4762]: pam_unix(sshd:session): session closed for user core Jul 6 23:14:15.739198 systemd[1]: sshd@24-78.47.124.97:22-139.178.89.65:41734.service: Deactivated successfully. Jul 6 23:14:15.743063 systemd[1]: session-24.scope: Deactivated successfully. Jul 6 23:14:15.746225 systemd-logind[1477]: Session 24 logged out. Waiting for processes to exit. Jul 6 23:14:15.747724 systemd-logind[1477]: Removed session 24. Jul 6 23:14:20.306503 containerd[1487]: time="2025-07-06T23:14:20.306271136Z" level=info msg="StopPodSandbox for \"649db00ca1ba7f0079922e9af5290a330980f1309914e3a0793ee93b5b025bf2\"" Jul 6 23:14:20.306503 containerd[1487]: time="2025-07-06T23:14:20.306382226Z" level=info msg="TearDown network for sandbox \"649db00ca1ba7f0079922e9af5290a330980f1309914e3a0793ee93b5b025bf2\" successfully" Jul 6 23:14:20.306503 containerd[1487]: time="2025-07-06T23:14:20.306394787Z" level=info msg="StopPodSandbox for \"649db00ca1ba7f0079922e9af5290a330980f1309914e3a0793ee93b5b025bf2\" returns successfully" Jul 6 23:14:20.309193 containerd[1487]: time="2025-07-06T23:14:20.307333112Z" level=info msg="RemovePodSandbox for \"649db00ca1ba7f0079922e9af5290a330980f1309914e3a0793ee93b5b025bf2\"" Jul 6 23:14:20.309193 containerd[1487]: time="2025-07-06T23:14:20.307395558Z" level=info msg="Forcibly stopping sandbox \"649db00ca1ba7f0079922e9af5290a330980f1309914e3a0793ee93b5b025bf2\"" Jul 6 23:14:20.309193 containerd[1487]: time="2025-07-06T23:14:20.307461564Z" level=info msg="TearDown network for sandbox \"649db00ca1ba7f0079922e9af5290a330980f1309914e3a0793ee93b5b025bf2\" successfully" Jul 6 23:14:20.312978 containerd[1487]: time="2025-07-06T23:14:20.312720957Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"649db00ca1ba7f0079922e9af5290a330980f1309914e3a0793ee93b5b025bf2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 6 23:14:20.312978 containerd[1487]: time="2025-07-06T23:14:20.312798284Z" level=info msg="RemovePodSandbox \"649db00ca1ba7f0079922e9af5290a330980f1309914e3a0793ee93b5b025bf2\" returns successfully" Jul 6 23:14:20.313957 containerd[1487]: time="2025-07-06T23:14:20.313617238Z" level=info msg="StopPodSandbox for \"c99567c3134a2c2719ec339028fd1c4b8b3bd3c6258c1a916a2fc6ff31eaeab6\"" Jul 6 23:14:20.313957 containerd[1487]: time="2025-07-06T23:14:20.313710167Z" level=info msg="TearDown network for sandbox \"c99567c3134a2c2719ec339028fd1c4b8b3bd3c6258c1a916a2fc6ff31eaeab6\" successfully" Jul 6 23:14:20.313957 containerd[1487]: time="2025-07-06T23:14:20.313721648Z" level=info msg="StopPodSandbox for \"c99567c3134a2c2719ec339028fd1c4b8b3bd3c6258c1a916a2fc6ff31eaeab6\" returns successfully" Jul 6 23:14:20.314483 containerd[1487]: time="2025-07-06T23:14:20.314315141Z" level=info msg="RemovePodSandbox for \"c99567c3134a2c2719ec339028fd1c4b8b3bd3c6258c1a916a2fc6ff31eaeab6\"" Jul 6 23:14:20.314483 containerd[1487]: time="2025-07-06T23:14:20.314342624Z" level=info msg="Forcibly stopping sandbox \"c99567c3134a2c2719ec339028fd1c4b8b3bd3c6258c1a916a2fc6ff31eaeab6\"" Jul 6 23:14:20.314483 containerd[1487]: time="2025-07-06T23:14:20.314422351Z" level=info msg="TearDown network for sandbox \"c99567c3134a2c2719ec339028fd1c4b8b3bd3c6258c1a916a2fc6ff31eaeab6\" successfully" Jul 6 23:14:20.319477 containerd[1487]: time="2025-07-06T23:14:20.319299910Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c99567c3134a2c2719ec339028fd1c4b8b3bd3c6258c1a916a2fc6ff31eaeab6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:14:20.319477 containerd[1487]: time="2025-07-06T23:14:20.319376797Z" level=info msg="RemovePodSandbox \"c99567c3134a2c2719ec339028fd1c4b8b3bd3c6258c1a916a2fc6ff31eaeab6\" returns successfully" Jul 6 23:14:31.150763 kubelet[2793]: E0706 23:14:31.150169 2793 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:49880->10.0.0.2:2379: read: connection timed out" Jul 6 23:14:31.157658 systemd[1]: cri-containerd-ac29005872a43fe53616148fa28434e4aa413827fe35d36a9e3ef42b86777b83.scope: Deactivated successfully. Jul 6 23:14:31.159210 systemd[1]: cri-containerd-ac29005872a43fe53616148fa28434e4aa413827fe35d36a9e3ef42b86777b83.scope: Consumed 4.496s CPU time, 22.3M memory peak. Jul 6 23:14:31.193228 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac29005872a43fe53616148fa28434e4aa413827fe35d36a9e3ef42b86777b83-rootfs.mount: Deactivated successfully. Jul 6 23:14:31.198968 containerd[1487]: time="2025-07-06T23:14:31.198765380Z" level=info msg="shim disconnected" id=ac29005872a43fe53616148fa28434e4aa413827fe35d36a9e3ef42b86777b83 namespace=k8s.io Jul 6 23:14:31.198968 containerd[1487]: time="2025-07-06T23:14:31.198929555Z" level=warning msg="cleaning up after shim disconnected" id=ac29005872a43fe53616148fa28434e4aa413827fe35d36a9e3ef42b86777b83 namespace=k8s.io Jul 6 23:14:31.198968 containerd[1487]: time="2025-07-06T23:14:31.198938556Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:14:31.376725 systemd[1]: cri-containerd-0bfd455c97dc34fd618ab0e70c75c7064caf7ddee2a12d44fe77ab50e3576d86.scope: Deactivated successfully. Jul 6 23:14:31.377602 systemd[1]: cri-containerd-0bfd455c97dc34fd618ab0e70c75c7064caf7ddee2a12d44fe77ab50e3576d86.scope: Consumed 5.122s CPU time, 55M memory peak. 
Jul 6 23:14:31.402306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bfd455c97dc34fd618ab0e70c75c7064caf7ddee2a12d44fe77ab50e3576d86-rootfs.mount: Deactivated successfully. Jul 6 23:14:31.410040 containerd[1487]: time="2025-07-06T23:14:31.409900092Z" level=info msg="shim disconnected" id=0bfd455c97dc34fd618ab0e70c75c7064caf7ddee2a12d44fe77ab50e3576d86 namespace=k8s.io Jul 6 23:14:31.410040 containerd[1487]: time="2025-07-06T23:14:31.409983099Z" level=warning msg="cleaning up after shim disconnected" id=0bfd455c97dc34fd618ab0e70c75c7064caf7ddee2a12d44fe77ab50e3576d86 namespace=k8s.io Jul 6 23:14:31.410040 containerd[1487]: time="2025-07-06T23:14:31.409999620Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:14:32.037351 kubelet[2793]: I0706 23:14:32.036683 2793 scope.go:117] "RemoveContainer" containerID="0bfd455c97dc34fd618ab0e70c75c7064caf7ddee2a12d44fe77ab50e3576d86" Jul 6 23:14:32.039561 containerd[1487]: time="2025-07-06T23:14:32.039369430Z" level=info msg="CreateContainer within sandbox \"587eda1d02b922e74451f1dca26cc8937c0cb9784f06576fdfd8e8e8e95561e3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 6 23:14:32.041794 kubelet[2793]: I0706 23:14:32.041748 2793 scope.go:117] "RemoveContainer" containerID="ac29005872a43fe53616148fa28434e4aa413827fe35d36a9e3ef42b86777b83" Jul 6 23:14:32.045564 containerd[1487]: time="2025-07-06T23:14:32.045435537Z" level=info msg="CreateContainer within sandbox \"a1a607ee3fa8fe35d3036d3ca12d5e99cac13050d606ce2121bf4ce5d4e5b45c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jul 6 23:14:32.060435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3618546078.mount: Deactivated successfully. Jul 6 23:14:32.069154 containerd[1487]: time="2025-07-06T23:14:32.067431960Z" level=info msg="CreateContainer within sandbox \"587eda1d02b922e74451f1dca26cc8937c0cb9784f06576fdfd8e8e8e95561e3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"c2eb290e0a5b4fd841b1216375fe1b7b0a13ed14b569a62e659ed063971f7f1b\"" Jul 6 23:14:32.069154 containerd[1487]: time="2025-07-06T23:14:32.069010582Z" level=info msg="StartContainer for \"c2eb290e0a5b4fd841b1216375fe1b7b0a13ed14b569a62e659ed063971f7f1b\"" Jul 6 23:14:32.077950 containerd[1487]: time="2025-07-06T23:14:32.077839898Z" level=info msg="CreateContainer within sandbox \"a1a607ee3fa8fe35d3036d3ca12d5e99cac13050d606ce2121bf4ce5d4e5b45c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8b2c9298f7b42b95d38c5dee0788b222cecd955bc8ddb936eeb7a202256fe1b9\"" Jul 6 23:14:32.078827 containerd[1487]: time="2025-07-06T23:14:32.078677093Z" level=info msg="StartContainer for \"8b2c9298f7b42b95d38c5dee0788b222cecd955bc8ddb936eeb7a202256fe1b9\"" Jul 6 23:14:32.107455 systemd[1]: Started cri-containerd-c2eb290e0a5b4fd841b1216375fe1b7b0a13ed14b569a62e659ed063971f7f1b.scope - libcontainer container c2eb290e0a5b4fd841b1216375fe1b7b0a13ed14b569a62e659ed063971f7f1b. Jul 6 23:14:32.117712 systemd[1]: Started cri-containerd-8b2c9298f7b42b95d38c5dee0788b222cecd955bc8ddb936eeb7a202256fe1b9.scope - libcontainer container 8b2c9298f7b42b95d38c5dee0788b222cecd955bc8ddb936eeb7a202256fe1b9. 
Jul 6 23:14:32.168683 containerd[1487]: time="2025-07-06T23:14:32.168162880Z" level=info msg="StartContainer for \"c2eb290e0a5b4fd841b1216375fe1b7b0a13ed14b569a62e659ed063971f7f1b\" returns successfully" Jul 6 23:14:32.178762 containerd[1487]: time="2025-07-06T23:14:32.178591900Z" level=info msg="StartContainer for \"8b2c9298f7b42b95d38c5dee0788b222cecd955bc8ddb936eeb7a202256fe1b9\" returns successfully" Jul 6 23:14:34.801421 kubelet[2793]: E0706 23:14:34.801258 2793 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:49690->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4230-2-1-6-eb6896cb23.184fcc87f46a0afe kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4230-2-1-6-eb6896cb23,UID:2fb31c15171711b288a04341e6885dd6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4230-2-1-6-eb6896cb23,},FirstTimestamp:2025-07-06 23:14:24.363653886 +0000 UTC m=+244.252761936,LastTimestamp:2025-07-06 23:14:24.363653886 +0000 UTC m=+244.252761936,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-1-6-eb6896cb23,}"